--- abstract: 'We propose a scenario which can explain a large lepton asymmetry and a small baryon asymmetry simultaneously. The large lepton asymmetry is generated through the Affleck-Dine (AD) mechanism, and almost all of the produced lepton number is absorbed into Q-balls (L-balls). If the L-balls survive beyond the electroweak phase transition but decay before the epoch of big bang nucleosynthesis (BBN), the large lepton asymmetry in the L-balls is protected from sphaleron effects. On the other hand, a small (negative) lepton number evaporates from the L-balls due to thermal effects and is converted into the observed small baryon asymmetry by virtue of sphaleron effects. A large and positive lepton asymmetry of the electron type is often required by BBN. In our scenario, by choosing an appropriate flat direction in the minimal supersymmetric standard model, we can produce a positive lepton asymmetry of the electron type together with a totally negative lepton asymmetry.' author: - 'M. Kawasaki, Fuminobu Takahashi, and Masahide Yamaguchi' title: 'Large lepton asymmetry from Q-balls' --- Introduction {#sec:introduction} ============ The success of big bang nucleosynthesis (BBN) is one of the most powerful pieces of evidence for standard big bang cosmology [@BBN]. Roughly speaking, the predicted primordial abundances of the light elements (D, $^3$He, $^4$He, and $^7$Li) coincide with those inferred from observations for a baryon-to-photon ratio $\eta \sim 5 \times 10^{-10}$. However, as observations improve and their errors shrink, a small discrepancy may appear [@BBN2]. Furthermore, $\eta$ is also determined by observations of small-scale anisotropies of the cosmic microwave background radiation (CMB) [@BOOMERANG; @MAXIMA; @DASI; @BM], which may likewise give rise to a small discrepancy. Of course, the discordance may be completely removed as observations are further improved. 
However, it is also possible that such small discrepancies are genuine and point to additional physics in BBN. These discrepancies are often eliminated if the predicted primordial abundance of $^{4}$He is decreased. Such a decrease is realized if there exists a large and positive lepton asymmetry of the electron type [@KS]. This is mainly because the excess of electron neutrinos shifts the chemical equilibrium between protons and neutrons toward protons, which reduces the predicted primordial abundance of $^{4}$He. Note that this effect is much stronger than the corresponding speed-up effect, that is, the increase of the Hubble expansion rate due to the presence of the chemical potential, which would increase the predicted primordial abundance of $^{4}$He. However, a large and positive lepton asymmetry of the electron type is incompatible with the small baryon asymmetry if we take account of sphaleron effects, which convert lepton asymmetry into baryon asymmetry of the same order with the opposite sign [@sphaleron]. This problem is evaded if one of the following three conditions is satisfied: (a) the lepton asymmetry is generated after the electroweak phase transition but before BBN, (b) sphaleron processes do not operate, (c) a positive lepton asymmetry of the electron type is generated but no total lepton asymmetry is generated. The first condition was discussed in the context of neutrino oscillations [@oscillation]. In this case, a large lepton asymmetry is generated through oscillations between active neutrinos and sterile neutrinos. Concerning the second condition, one should note that the presence of a large chemical potential prevents restoration of the electroweak symmetry [@nonrestoration]. Based on this fact, a large lepton asymmetry compatible with the small baryon asymmetry was discussed [@second]. The third condition was discussed by March-Russell [*et al.*]{} [@MRM]. 
There, the Affleck-Dine mechanism produces a positive lepton asymmetry of the electron type but no total lepton asymmetry, that is, $L_{e} = - L_{\mu} > 0$ and $L_{\tau} = 0$ for some flat direction, which generates a small baryon asymmetry due to thermal mass effects in the sphaleron processes. In this paper we consider another possibility, which combines features of (a) and (b). The Affleck-Dine (AD) mechanism produces a positive lepton asymmetry of the electron type but a totally negative lepton asymmetry, by choosing an appropriate flat direction in the minimal supersymmetric standard model (MSSM) [@AD]. As an example, we identify the “$e^{c}_{1}L_{2}L_{3}$” flat direction with the AD field and consider Affleck-Dine leptogenesis. Here the subscripts denote the generations. Then $L_{e} = - L_{\mu} = - L_{\tau} = - L_{\rm total} > 0$ is realized. The shift of the chemical equilibrium between neutrons and protons due to the positive chemical potential of electron neutrinos dominates the effect on BBN, while the speed-up effect caused by all species of neutrinos is relatively negligible. After the Affleck-Dine leptogenesis, the AD field experiences spatial instabilities and deforms into nontopological solitons, Q-balls (L-balls) [@Kusenko; @Enqvist; @Kasuya1]. Then almost all of the produced lepton number is absorbed into the L-balls [@Kasuya1; @Kasuya3]. If such L-balls survive beyond the electroweak phase transition but decay before the epoch of BBN, the large lepton asymmetry is protected from sphaleron effects and is later released into the universe by the decay of the L-balls. On the other hand, a small (negative) lepton number evaporates from the L-balls due to thermal effects before the electroweak phase transition and is transformed into the small baryon asymmetry through the sphaleron effect. 
In our scenario we consider the Affleck-Dine mechanism and the subsequent Q-ball formation in the gauge-mediated supersymmetry (SUSY) breaking model. This is mainly because, in the gravity-mediated SUSY breaking model, the energy per unit charge of the Q-balls is large enough to produce the lightest supersymmetric particles (LSPs), which would overclose the universe. Therefore, we do not consider the gravity-mediated SUSY breaking model. Since we assume that the AD field starts oscillating at the gravitational scale in order to produce a large lepton asymmetry, the produced Q-balls are of the “new” [@Kasuya2] or “delayed” type [@Kasuya3], depending on the sign of the coefficient of the one-loop correction to the effective potential. However, since the decay of new-type Q-balls is not completed before BBN, our scenario does not apply to them. Thus, we concentrate on delayed-type Q-balls in gauge-mediated SUSY breaking models. The rest of the paper is organized as follows. In Sec. \[sec:ADmechanism\], we briefly review the Affleck-Dine mechanism and the properties of Q-balls. In Sec. \[sec:lepton\], we discuss our mechanism for generating a large lepton asymmetry compatible with the small baryon asymmetry. Section \[sec:con\] is devoted to discussion and conclusions. Affleck-Dine mechanism and Q-ball formation {#sec:ADmechanism} =========================================== In this section we briefly review the Affleck-Dine mechanism and the properties of Q-balls. In the MSSM, there exist flat directions along which the classical potential vanishes in the supersymmetric limit. Since flat directions consist of squarks and/or sleptons, they carry baryon and/or lepton numbers and can be identified with the Affleck-Dine (AD) field. In the following discussion, we adopt the “$e^{c}LL$” direction as the AD field. In this case the AD field carries only lepton number. These flat directions are lifted by supersymmetry (SUSY) breaking effects. 
In the gauge-mediated SUSY breaking model, the potential of a flat direction is parabolic near the origin and almost flat beyond the messenger scale [@Kusenko; @Kasuya3; @Gouvea], $$V_{gauge} \sim \left\{ \begin{array}{ll} m_{\phi}^2|\Phi|^2 & \quad (|\Phi| \ll M_S), \\ {\displaystyle}{M_F^4 \left(\log \frac{|\Phi|^2}{M_S^2} \right)^2} & \quad (|\Phi| \gg M_S), \\ \end{array} \right.$$ where $m_{\phi}$ is a soft breaking mass $\sim$ O(1 TeV), $M_F$ is the SUSY breaking scale, and $M_{S}$ is the messenger mass scale. Since gravity always exists, flat directions are also lifted by gravity-mediated SUSY breaking effects [@EnqvistMcDonald98], $$V_{grav} \simeq m_{3/2}^2 \left[ 1+K \log \left(\frac{|\Phi|^2}{M^2} \right)\right] |\Phi|^2,$$ where $K$ is the numerical coefficient of the one-loop corrections and $M$ is the gravitational scale ($\simeq 2.4 \times 10^{18}$ GeV). This term can dominate only at high energy scales because of the small gravitino mass ${ \mathop{}_{\textstyle \sim}^{\textstyle <} }O(1\mbox{ GeV})$. There is also a thermal effect on the potential, which appears at two-loop order as pointed out in Ref. [@AnisimovDine]. This effect comes from the fact that the running of the gauge coupling $g(T)$ is modified by integrating out heavy particles which couple directly to the AD field. This contribution to the effective potential is given by $$V_T^{(2)} \sim c~ \alpha_{\rm w}^2 T^4 \log\frac{|\Phi|^2}{T^2},$$ where $|c| \sim 1$, and $\alpha_{\rm w} \equiv g_{\rm w}^2/4 \pi$ is the gauge coupling constant of the weak interaction, since we consider the $e^c LL$ direction. Though the sign of $c$ depends on the flat direction, it is irrelevant to our discussion since we assume that the zero-temperature potential dominates over the thermal effects. Note that $\alpha_{\rm w}$ should be replaced with $\alpha_{s}$ for those flat directions which contain squarks. 
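As a rough numerical illustration (our own sketch, not part of the original analysis), one can check where the two zero-temperature contributions exchange dominance: equating $m_{3/2}^2|\Phi|^2$ with the gauge-mediated plateau $M_F^4$ (dropping the logarithmic factors) gives the crossover amplitude $\phi_{eq} \sim M_F^2/m_{3/2}$ used below. The values of $m_{3/2}$ and $M_F$ here are representative choices, not fixed by the text.

```python
# Sketch of the crossover between the gravity- and gauge-mediated
# contributions to the flat-direction potential (all masses in GeV).
# m32 and MF are illustrative values, not fixed by the text.
m32 = 1.0      # gravitino mass m_{3/2} (assumed ~ 1 GeV)
MF = 1.0e6     # SUSY breaking scale M_F (assumed ~ 10^6 GeV)

phi_eq = MF**2 / m32                  # crossover amplitude, here 1e12 GeV

def V_grav(phi):                      # gravity-mediated term, K log dropped
    return m32**2 * phi**2

V_gauge = MF**4                       # gauge-mediated plateau, log^2 dropped

print(phi_eq)                         # 1e12
print(V_grav(phi_eq) / V_gauge)       # 1.0 at the crossover
print(V_grav(10 * phi_eq) > V_gauge)  # gravity term dominates above phi_eq
```

Above $\phi_{eq}$ the gravity-mediated term dominates, below it the gauge-mediated plateau does, which is why the sign of $K$ matters only during the early stage of the oscillation.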
The lepton number is usually created just after the AD field starts coherent rotation in the potential, and its number density $n_L$ is estimated as $$n_L(t_{osc}) \simeq \varepsilon \omega \phi_{osc}^2,$$ where $\varepsilon({ \mathop{}_{\textstyle \sim}^{\textstyle <} }1)$ is the ellipticity parameter, which represents the strength of the A term, and $\omega$ and $\phi_{osc}$ are the angular velocity and the amplitude of the AD field at the beginning of the oscillation (rotation) in its effective potential. Actually, however, the AD field experiences spatial instabilities during its coherent oscillation and deforms into nontopological solitons called Q-balls [@Kusenko; @Enqvist; @Kasuya1]. When the zero-temperature potential $V_{gauge}$ dominates at the onset of the coherent oscillation of the AD field, gauge-mediation type Q-balls are formed. Their mass $M_{Q}$ and size $R_Q$ are given by [@Dvali] $$\label{eq:mass} M_Q \sim M_F Q^{3/4}, \qquad R_Q \sim M_F^{-1} Q^{1/4}.$$ According to numerical simulations [@Kasuya1; @Kasuya3], the produced Q-balls absorb almost all of the charge carried by the AD field, and the typical charge is estimated as [@Kasuya3] $$Q \simeq \beta \left(\frac{\phi_{osc}}{M_F}\right)^4$$ with $\beta \approx 6 \times 10^{-4}$. There are also cases where $V_{grav}$ dominates the potential at the onset of the coherent oscillation of the AD field. If the coefficient of the one-loop correction $K$ is negative, gravity-mediation type Q-balls (“new” type) are produced [@Kasuya2]. On the other hand, if $K$ is positive, Q-balls do not form until the AD field leaves the $V_{grav}$ dominated region. Later it enters the $V_{gauge}$ dominated region and experiences instabilities, so that gauge-mediation type Q-balls are produced (delayed-type Q-balls) [@Kasuya3]. 
In our scenario, described in the next section, the AD field starts to oscillate at the gravitational scale, i.e., $\phi_{osc} = M$, which leads to the formation of new or delayed-type Q-balls. However, our scenario does not work for new-type Q-balls because the produced Q-balls are so large that they do not decay before BBN. Hence, we concentrate on delayed-type Q-balls below. Since the sign of $K$ is in general indefinite and depends on the model of the messenger sector in gauge-mediated SUSY breaking models, we assume that $K$ is positive so that delayed-type Q-balls are formed. When the AD field starts to oscillate in the $V_{grav}$ dominated region, where $H_{osc} \sim \omega \sim m_{3/2}$, the lepton number is produced as $n_L \simeq \varepsilon m_{3/2}\phi_{osc}^2$. Since the delayed-type Q-balls are formed only after the AD field enters the $V_{gauge}$ dominated region for positive $K$, the charge of a Q-ball is given by $$\label{eq:delayq} Q \sim \beta \left(\frac{\phi_{eq}}{M_F}\right)^4 \sim \beta \left(\frac{M_F}{m_{3/2}}\right)^4$$ with $\phi_{eq} \sim M_F^2/m_{3/2}$. Here the subscript “eq” denotes the value at which the gauge- and gravity-mediation potentials become equal. Thus the delayed-type Q-balls are formed at $H_{eq} \sim M_F^2/M$. As mentioned above, Q-balls absorb almost all of the charge carried by the AD field. If we adopt the $e^{c}LL$ direction, all of the lepton charge is confined in the Q-balls, namely, L-balls. Consequently, we must extract lepton charge from the L-balls through evaporation, diffusion, and their decay. Part of the evaporated lepton charge is transformed into baryon charge by the sphaleron process, which accounts for the present baryon asymmetry. The L-balls themselves decay into leptons such as neutrinos via gaugino exchange. 
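To get a feel for the numbers, the sketch below evaluates Eq. (\[eq:delayq\]) for illustrative parameter values (not results of the paper): the delayed-type charge is enormous, and the corresponding energy per unit charge, $M_Q/Q \sim M_F Q^{-1/4}$ from Eq. (\[eq:mass\]), is then far below the soft mass $m_\phi$.

```python
# Charge and energy per unit charge of a delayed-type L-ball
# (all masses in GeV; m32 and MF are illustrative choices).
beta = 6e-4    # numerical factor from the simulations quoted in the text
m32 = 1.0      # gravitino mass (assumed ~ 1 GeV)
MF = 1.0e6     # SUSY breaking scale (assumed ~ 10^6 GeV)

Q = beta * (MF / m32)**4      # Eq. (delayq): 6e20 for these values
omega = MF * Q**(-0.25)       # energy per unit charge, M_Q/Q ~ M_F Q^{-1/4}

print(Q)        # 6e20
print(omega)    # a few GeV, far below m_phi ~ 1 TeV
```

The small energy per unit charge is what makes gauge-mediation type L-balls safe with respect to LSP overproduction, in contrast to the gravity-mediation type discussed above.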
The decay rate of Q-balls is bounded as [@Coleman] $$\label{eq:qdecay} \left|\frac{dQ}{dt}\right| { \mathop{}_{\textstyle \sim}^{\textstyle <} }\frac{\omega^{3} A}{192 \pi^{2}},$$ where $A$ is the surface area of the Q-ball. For L-balls, the decay rate is estimated to be of the order of this upper limit. Following Refs. [@Kasuya3; @Laine; @Banerjee], we evaluate the evaporation rate of L-balls, which is given by [@Laine] $$\begin{aligned} \label{eq:evap} \zeta_{\rm evap} \equiv \frac{dQ}{dt}&=&-\kappa (\mu_{Q}-\mu_{\rm plasma}) T^2 4 \pi R_{Q}^{2},\nonumber\\ &\simeq& - 4 \pi \kappa \mu_{Q} T^2 R_{Q}^{2} ~~~{\rm for~~}\mu_{Q} \gg \mu_{\rm plasma} , \end{aligned}$$ where $\mu_{Q}$ and $\mu_{\rm plasma}$ are the chemical potentials of the Q-ball and the plasma, and the coefficient $\kappa { \mathop{}_{\textstyle \sim}^{\textstyle <} }1$ includes statistical and other numerical factors. The chemical potential of the Q-ball is given by $\mu_{Q} \simeq \omega$, since the energy of a $\phi$ particle inside the Q-ball is $\omega$. At $T { \mathop{}_{\textstyle \sim}^{\textstyle >} }m_{\phi}$, the scalar particles that build up the Q-balls are abundant in the plasma, which implies $\kappa \sim 1$. On the other hand, at $T { \mathop{}_{\textstyle \sim}^{\textstyle <} }m_{\phi}$, the evaporation from Q-balls is suppressed by the Boltzmann factor. In the case of L-balls, the main evaporation process is $\phi \phi \rightarrow ll$ through wino or bino exchange, which yields $\kappa \sim \alpha_{\rm w}^{2} T^2 /m_{\phi}^2$ at $T { \mathop{}_{\textstyle \sim}^{\textstyle <} }m_{\phi}$. However, if the charge transport is not efficient enough, the lepton charges evaporated into the “atmosphere” of the L-ball establish chemical equilibrium there. In this case, the dissipation of the charge is determined by diffusion. 
The diffusion rate is estimated as [@Banerjee] $$\begin{aligned} \label{eq:diff} \zeta_{\rm diff} \equiv \frac{dQ}{dt} &=& -4 \pi D R_{Q} \mu_{Q} T^2 \nonumber\\ &\simeq& -4 \pi D T^2,\end{aligned}$$ where the diffusion constant $D$ of relativistic sleptons and leptons in a hot plasma is given by $D \simeq a/T$ with $a \sim 20$ [@Weldon; @Nelson]. In short, the time scale of the charge transport is determined by the evaporation rate when $|\zeta_{\rm evap}| < |\zeta_{\rm diff}|$, and by the diffusion rate when $|\zeta_{\rm evap}| > |\zeta_{\rm diff}|$. The amount of evaporated charge can be estimated by integrating Eqs. (\[eq:evap\]) and (\[eq:diff\]) over the course of the evolution of the universe. When the AD field starts to oscillate at the gravitational scale, its oscillation energy is comparable to the total energy of the universe. Therefore, the energy of the universe is dominated by the AD condensate or the Q-balls soon after reheating, and the universe remains matter dominated. The thermal history of the universe is rather involved because radiation comes from the decays of both the inflaton and the Q-balls. In fact, however, we have only to consider two cases in which the cosmic temperature decreases monotonically. Large Lepton Asymmetry from L-ball {#sec:lepton} ================================== In this section we give a detailed explanation of our scenario. Our goal is to generate a small baryon asymmetry and a large positive lepton asymmetry of the electron type simultaneously. In general, however, this is difficult to accomplish because the chemical equilibrium induced by the sphaleron transition forces the baryon and lepton asymmetries to be of the same order with opposite signs [@sphaleron]. Hence we must overcome two problems: (i) how to protect the large lepton asymmetry from being converted into baryon asymmetry by the sphaleron process, and (ii) how to reconcile the opposite signs of the baryon and lepton asymmetries. 
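To see which transport channel bottlenecks the charge loss, one can compare $|\zeta_{\rm evap}|$ and $|\zeta_{\rm diff}|$ directly. The sketch below is our own illustration, taking $\mu_Q \sim \omega \sim M_F Q^{-1/4}$ and $R_Q \sim M_F^{-1} Q^{1/4}$ as above, $\kappa \sim 1$ for $T > m_\phi$, and representative parameter values; for the large charges of interest the evaporation rate exceeds the diffusion rate at high temperature, so diffusion is the rate-limiting step there.

```python
import math

# Compare evaporation and diffusion rates for an L-ball at T > m_phi
# (all masses in GeV; parameter values are illustrative, not fixed
# by the text).
beta, m32, MF, a = 6e-4, 1.0, 1.0e6, 20.0
Q = beta * (MF / m32)**4
R_Q = Q**0.25 / MF        # L-ball radius, Eq. (mass)
mu_Q = MF * Q**(-0.25)    # chemical potential ~ omega
kappa = 1.0               # valid for T > m_phi
T = 1.0e4                 # a temperature above m_phi ~ 1 TeV (assumed)

evap = 4 * math.pi * kappa * mu_Q * T**2 * R_Q**2   # |zeta_evap|
diff = 4 * math.pi * (a / T) * T**2                 # |zeta_diff| = 4 pi a T

print(evap / diff)   # > 1, so diffusion limits the charge transport
```

The ratio scales as $\kappa T Q^{1/4}/(a M_F)$, so for large $Q$ the diffusion-limited expressions used below are the relevant ones at high temperature.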
We show that these two obstacles can be evaded by considering Affleck-Dine leptogenesis and the subsequent L-ball formation using the $e^{c}LL$ direction. First, we outline our scenario and give the solution to problem (i). A large lepton asymmetry can be generated if the A terms, which make the AD field rotate in the effective potential, originate from some Kähler potential with a vanishing superpotential. Then the AD field starts to oscillate with a large initial amplitude $\phi_{osc} \simeq M$ and ellipticity $\varepsilon \simeq 1$. As spatial instabilities grow, delayed-type L-balls are formed and absorb almost all of the charge carried by the AD field. It is essential to our scenario that the lepton asymmetry confined in the L-balls is kept from the sphaleron process. However, a small fraction of the lepton charge confined in the L-balls evaporates due to thermal effects. The lepton charge $\Delta Q_{ew}$ evaporated before the electroweak phase transition ($T { \mathop{}_{\textstyle \sim}^{\textstyle >} }T_{C} \sim 300$ GeV) is thus partly converted into baryon asymmetry through the sphaleron process, which explains the present small baryon asymmetry. On the other hand, the large lepton asymmetry comes out through the decay of the remnant L-balls after the electroweak phase transition, which must be completed before BBN. Thus the small ratio $\Delta Q_{ew}/Q$ is the source of the hierarchy between the baryon and lepton asymmetries. Next we give a solution to problem (ii), that is, the sign of the lepton asymmetry. What we want to generate is a positive baryon asymmetry and a positive lepton asymmetry of the electron type. However, the sphaleron process converts a positive lepton asymmetry into a negative baryon asymmetry. To surmount this problem, we adopt the $e^{c}_{1}L_{2}L_{3}$ direction as the AD field, which leads to $L_{e} = - L_{\mu} = - L_{\tau} = - L_{\rm total} > 0$. 
Thus a positive lepton asymmetry of the electron type is generated, whilst the total lepton asymmetry is necessarily negative in order to obtain a positive baryon asymmetry through the sphaleron transition. At the epoch of BBN, charged leptons other than electrons have already disappeared through decay and annihilation processes. Moreover, because of the charge neutrality of the universe, the lepton asymmetry stored in electrons is comparable to the baryon asymmetry, which is rather small. Thus, a large lepton asymmetry can exist only in the neutrino sector. For later use, we define the degeneracy parameter $\xi_{l}$ as the ratio of the chemical potential to the neutrino temperature. The presence of chemical potentials speeds up the expansion of the universe, which leads to an increase in the $n/p$ ratio. However, this effect is negligible in comparison with the shift of the chemical equilibrium between protons and neutrons due to the chemical potential of the electron neutrino in the case $|\xi_{\nu_e}| = |\xi_{\nu_\mu}| = |\xi_{\nu_\tau}|$. Now we give a quantitative estimate for our scenario. We assume that the zero-temperature potential dominates, i.e., $V_{gauge} \gg V_{T}^{(2)}$, at the formation of the delayed-type L-balls with $H \sim H_{eq}$: $$\label{eq:potential} \alpha_{\rm w}^2 T_{eq}^4 < M_F^4,$$ where $T_{eq}$ is the temperature of the universe just before the delayed-type L-balls are formed. As shown below, this constraint is automatically satisfied for the cases we consider. The delayed-type L-balls must decay before BBN, $$\tau_{Q}=\left(\frac{1}{Q} \left|\frac{dQ}{dt}\right|\right)^{-1} { \mathop{}_{\textstyle \sim}^{\textstyle <} }1 {\rm sec},$$ which leads to the constraint $$\label{eq:bbndecay} \frac{m_{3/2}}{10 {\rm MeV}} { \mathop{}_{\textstyle \sim}^{\textstyle >} }\left(\frac{M_{F}}{10 {\rm TeV}}\right)^{4/5}\,.$$ Here Eq. (\[eq:qdecay\]) is used. 
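The constraint (\[eq:bbndecay\]) can be checked numerically. Using the saturated decay rate (\[eq:qdecay\]) with $\omega \sim M_F Q^{-1/4}$ and $A = 4\pi R_Q^2$, the lifetime is $\tau_Q \sim 48\pi Q^{5/4}/M_F$, and at the reference point $m_{3/2}=10$ MeV, $M_F=10$ TeV it comes out close to 1 sec. The sketch below is our own numerical check, with $\hbar$ used to convert GeV$^{-1}$ to seconds.

```python
import math

# Lifetime of a delayed-type L-ball from the saturated decay rate
# |dQ/dt| ~ omega^3 A / (192 pi^2), with omega ~ M_F Q^{-1/4},
# A = 4 pi R_Q^2, R_Q ~ Q^{1/4}/M_F  =>  tau_Q ~ 48 pi Q^{5/4}/M_F.
hbar = 6.582e-25          # GeV * s
beta = 6e-4
m32 = 1.0e-2              # 10 MeV, reference value of Eq. (bbndecay)
MF = 1.0e4                # 10 TeV, reference value of Eq. (bbndecay)

Q = beta * (MF / m32)**4
tau_GeV = 48 * math.pi * Q**1.25 / MF   # lifetime in GeV^{-1}
tau_sec = tau_GeV * hbar

print(tau_sec)   # close to 1 sec, saturating the BBN bound
```

Since $\tau_Q \propto M_F^4/m_{3/2}^5$, demanding $\tau_Q \lesssim 1$ sec reproduces the $4/5$ power in Eq. (\[eq:bbndecay\]).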
In order to estimate the baryon and lepton to entropy ratios, it is necessary to evaluate the entropy production by the decay of the L-balls. The decay temperature of the L-balls, $T_d$, is given by $$\begin{aligned} \label{eq:dectemp} T_{d} &=& \left(\frac{90}{\pi^2 g_{*}}\right)^{1/4} \sqrt{M \frac{M_{F} Q^{-5/4}}{48 \pi}}\nonumber \\ & \simeq & 1.3 {\rm MeV} \left(\frac{M_{F}}{10 {\rm TeV}}\right)^{-2} \left(\frac{m_{3/2}}{10 {\rm MeV}}\right)^{5/2},\end{aligned}$$ where $g_{*}=10.75$ counts the total number of effectively massless degrees of freedom. Now we estimate the total evaporated charge, $\Delta Q$, and the charge evaporated at temperatures above the electroweak phase transition, $\Delta Q_{ew}$. In fact, we have only to consider the following two cases. In the other cases, the temperature while the L-balls are present does not exceed $T_{C}$, so the evaporated lepton number is not converted into baryon number. First we consider the case in which the delayed-type L-balls are formed before reheating and decay after it (case A). This is realized if the following two conditions are satisfied, $$\begin{aligned} \label{eq:case1} M_F &>& T_{RH},\\ T_{RH} &>& T_d.\end{aligned}$$ Then the temperature at the L-ball formation is given by $T_{eq}=\sqrt{M_F T_{RH}}$, which automatically satisfies the requirement (\[eq:potential\]). 
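As a consistency check (our own numerical sketch, not an additional result), evaluating the first line of Eq. (\[eq:dectemp\]) with the charge (\[eq:delayq\]) at the reference point $M_F = 10$ TeV, $m_{3/2} = 10$ MeV reproduces the quoted $T_d \simeq 1.3$ MeV:

```python
import math

# Decay temperature of the L-balls, Eq. (dectemp); all masses in GeV.
g_star = 10.75            # effectively massless degrees of freedom
M = 2.4e18                # gravitational scale
beta = 6e-4
MF = 1.0e4                # 10 TeV (reference value)
m32 = 1.0e-2              # 10 MeV (reference value)

Q = beta * (MF / m32)**4
Td = (90 / (math.pi**2 * g_star))**0.25 \
     * math.sqrt(M * MF * Q**(-1.25) / (48 * math.pi))

print(Td)   # roughly 1.3e-3 GeV, i.e. ~ 1.3 MeV
```

The scaling $T_d \propto M_F^{-2} m_{3/2}^{5/2}$ follows from $T_d \propto \sqrt{M_F}\,Q^{-5/8}$ with $Q \propto (M_F/m_{3/2})^4$.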
The temperature of the universe is approximately given as $$\label{eq:temperature1} T \simeq \left\{ \begin{array}{ll} {\displaystyle}{\left(T_{RH}^2 M H \right)^{1/4}} & {\rm for~} {\displaystyle}{T_{RH} < T }, \\ {\displaystyle}{\left(T_{RH}^{-1} M^2 H^2 \right)^{1/3}} & {\rm for~} {\displaystyle}{T_p < T < T_{RH}}, \\ {\displaystyle}{\left(T_{d}^2 M H \right)^{1/4}} & {\rm for~} {\displaystyle}{T_{d} < T < T_{p}}, \\ \sqrt{H M} & {\rm for~} {\displaystyle}{T < T_{d}}, \end{array} \right.$$ where $T_p \equiv \left(T_{RH} T_d^4 \right)^{1/5}$ denotes the temperature at which the radiation derived from the decay of the L-balls dominates over that derived from the inflaton. Note that the cosmic temperature decreases monotonically in this case. The condition that the chemical equilibrium induced by the sphaleron transition is well established is given by $$T_{eq}=\sqrt{M_F T_{RH}} > T_C.$$ With the use of Eq. (\[eq:temperature1\]), the evaporation rate with respect to the temperature is estimated as $$\left(\frac{dQ}{dT}\right)_{evap} \simeq \left\{ \begin{array}{ll} {\displaystyle}{4 \pi \frac{T_{RH}^2 M}{M_F T^{3}} Q^{1/4}} & {\rm for~} m_\phi,\,T_{RH}<T<T_{eq} ,\\ {\displaystyle}{4 \pi \alpha_{\rm w}^2 \frac{T_{RH}^2 M}{m_\phi^2 M_F T } Q^{1/4}} & {\rm for~}T_{RH}<T< m_\phi,\,T_{eq} ,\\ {\displaystyle}{4 \pi \frac{M}{ M_F \sqrt{T T_{RH}}} Q^{1/4}} & {\rm for~} m_\phi,\,T_p<T<T_{RH} ,\\ {\displaystyle}{4 \pi \alpha_{\rm w}^2 \frac{M T^{3/2}}{m_\phi^2 M_F T_{RH}^{1/2}} Q^{1/4}} & {\rm for~} T_p<T<m_\phi,\,T_{RH} ,\\ {\displaystyle}{4 \pi \frac{T_{d}^2 M}{M_F T^{3}} Q^{1/4}} & {\rm for~} m_\phi,\,T_{d}<T<T_{p} ,\\ {\displaystyle}{4 \pi \alpha_{\rm w}^2 \frac{T_{d}^2 M}{m_\phi^2 M_F T } Q^{1/4}} & {\rm for~}T_{d}<T< m_\phi,\,T_{p}. \end{array} \right. 
\label{eq:gmevap}$$ On the other hand, the diffusion rate with respect to the temperature is given by $$\left(\frac{dQ}{dT}\right)_{diff} \simeq \left\{ \begin{array}{ll} {\displaystyle}{4 \pi a \frac{T_{RH}^2 M}{T^4}} & {\rm for~} T_{RH} < T < T_{eq},\\ {\displaystyle}{4 \pi a \frac{M}{T_{RH}^{1/2}T^{3/2}}} & {\rm for~}T_p < T < T_{RH},\\ {\displaystyle}{4 \pi a \frac{T_{d}^2 M}{T^4}} & {\rm for~} T_{d} < T < T_{p}. \end{array} \right. \label{eq:gmdiff}$$ By integrating Eqs. (\[eq:gmevap\]) and (\[eq:gmdiff\]), the evaporated charges $\Delta Q$ and $\Delta Q_{ew}$ are found to be of the same order, and are given by $$\label{eq:delqew} \Delta Q \simeq \Delta Q_{ew} \simeq \left\{ \begin{array}{ll} {\displaystyle}{\frac{4 \pi a}{3} \frac{T_{RH}^2 M}{ m_{\phi}^3} }& {\rm for~} T_{RH} < m_{\phi} < T_{eq}, \\ {\displaystyle}{8 \pi a \frac{M}{\sqrt{m_{\phi} T_{RH}}}} & {\rm for~}T_{p} < m_{\phi} < T_{RH}, \\ {\displaystyle}{\frac{4 \pi a}{3} \frac{T_{d}^2 M}{ m_{\phi}^3} }& {\rm for~} T_{d} < m_{\phi} < T_{p}, \end{array} \right.$$ where we have used $T_C \sim m_{\phi}$. Next we consider the case where the delayed-type L-balls are formed after reheating and the temperature decreases monotonically (case B). This is realized if the following conditions are satisfied, $$\begin{aligned} \label{eq:case2} T_{RH} &>& M_{F},\\ M_{F} &>& \left(T_{RH}^2 T_d^3 \right)^{\frac{1}{5}}.\end{aligned}$$ Then the temperature at the Q-ball formation is given by $T_{eq}=\left(M_F^4/T_{RH} \right)^{1/3}$, which again satisfies the requirement (\[eq:potential\]). 
Though the time evolution of the cosmic temperature is the same as in case A, the requirement for the sphaleron process to work now reads $$T_{eq}=\left(M_F^4/T_{RH} \right)^{1/3} > T_C.$$ The evaporated charges $\Delta Q$ and $\Delta Q_{ew}$ can be estimated similarly and are given by $$\label{eq:delqew2} \Delta Q \simeq \Delta Q_{ew} \simeq \left\{ \begin{array}{ll} {\displaystyle}{8 \pi a \frac{M}{\sqrt{m_{\phi} T_{RH}}}} & {\rm for~}T_{p} < m_{\phi} < T_{eq}, \\ {\displaystyle}{\frac{4 \pi a}{3} \frac{T_{d}^2 M}{ m_{\phi}^3} }& {\rm for~} T_{d} < m_{\phi} < T_{p} \end{array} \right..$$ Finally we estimate the baryon (lepton) to entropy ratio, using the results derived above. The baryon to entropy ratio is then given by $$\begin{aligned} \label{eq:nbs} &&\frac{n_{B}}{s} = \frac{8}{23} \frac{m_{3/2} M^2 }{ \frac{2 \pi^2}{45} g_{*} T_{d}^3} \frac{\frac{\pi^2}{90}g_* T_d^4}{ m_{3/2}^2} \frac{\Delta Q_{ew}}{Q},\nonumber \\ &&~~~~ =\frac{2}{23} \frac{T_d}{m_{3/2}} \frac{\Delta Q_{ew}}{Q},\nonumber \\ &\sim& \left\{ \begin{array}{ll} {\displaystyle}{4 \times 10^{-11} \left(\frac{m_{\phi}}{1{\rm TeV}}\right)^{-3} \left(\frac{m_{3/2}}{1{\rm GeV}}\right)^{\frac{11}{2}} \left(\frac{T_{RH}}{10{\rm GeV}}\right)^{2} \left(\frac{M_F}{10^6{\rm GeV}}\right)^{-6}} & \\ ~~~~~~~~~~~~~~~~~~~~~~~~\qquad\qquad\qquad\qquad\qquad\qquad\qquad {\rm for~} {\displaystyle}{T_{RH} < m_{\phi} < T_{eq}} & ({\rm case A})\\ {\displaystyle}{ 3 \times 10^{-11} \left(\frac{m_{\phi}}{1{\rm TeV}}\right)^{-\frac{1}{2}} \left(\frac{m_{3/2}}{1{\rm GeV}}\right)^{\frac{11}{2}} \left(\frac{T_{RH}}{10^7{\rm GeV}}\right)^{-\frac{1}{2}} \left(\frac{M_F}{3 \times 10^6{\rm GeV}}\right)^{-6} } & \\ ~~~~~~~~~~~~~~~~~~~~~~~~\qquad\qquad\qquad\qquad\qquad\qquad\qquad {\rm for~} {\displaystyle}{T_{p} < m_{\phi} <T_{eq}}& ({\rm case B}) \end{array} \right. ,\nonumber \\ &&\end{aligned}$$ where we have used Eqs. (\[eq:delayq\]), (\[eq:dectemp\]), (\[eq:delqew\]), and (\[eq:delqew2\]). 
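The first branch of Eq. (\[eq:nbs\]) can be reproduced numerically. With $\Delta Q_{ew} \simeq (4\pi a/3)\,T_{RH}^2 M/m_\phi^3$ from Eq. (\[eq:delqew\]), $Q$ from Eq. (\[eq:delayq\]) and $T_d$ from Eq. (\[eq:dectemp\]), the reference values below (case A, $T_{RH} < m_\phi < T_{eq}$) give $n_B/s \approx 4\times 10^{-11}$. This is our own numerical sketch of the quoted estimate:

```python
import math

# Baryon to entropy ratio, case A, first branch of Eq. (nbs).
# All masses in GeV; values are the reference point of the text.
g_star, M, beta, a = 10.75, 2.4e18, 6e-4, 20.0
m_phi = 1.0e3      # soft mass ~ 1 TeV
m32 = 1.0          # gravitino mass 1 GeV
T_RH = 10.0        # reheating temperature 10 GeV
MF = 1.0e6         # SUSY breaking scale 10^6 GeV

Q = beta * (MF / m32)**4
dQ_ew = (4 * math.pi * a / 3) * T_RH**2 * M / m_phi**3   # Eq. (delqew)
Td = (90 / (math.pi**2 * g_star))**0.25 \
     * math.sqrt(M * MF * Q**(-1.25) / (48 * math.pi))   # Eq. (dectemp)

nbs = (2.0 / 23.0) * (Td / m32) * (dQ_ew / Q)            # Eq. (nbs)
print(nbs)   # ~ 4e-11
```

Note that the tiny ratio $\Delta Q_{ew}/Q \sim 10^{-8}$ here is precisely the source of the baryon-lepton hierarchy discussed above.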
Also, we have assumed maximal CP violation. In the same way, the lepton number to entropy ratio is given by $$\begin{aligned} \label{eq:nls} \frac{n_{L}}{s} &=&- \frac{T_d}{4 m_{3/2}} \nonumber \\ &\sim& -0.01 \times \left(\frac{m_{3/2}}{1{\rm GeV}}\right)^{\frac{3}{2}} \left(\frac{M_F}{5 \times 10^5 {\rm GeV}}\right)^{-2},\end{aligned}$$ which yields [@KS] $$\begin{aligned} \xi_{\nu_e} &\simeq& - 10 \times \frac{n_{L}}{s}\nonumber \\ &\sim& 0.1 \times \left(\frac{m_{3/2}}{1{\rm GeV}}\right)^{\frac{3}{2}} \left(\frac{M_F}{5 \times 10^5{\rm GeV}}\right)^{-2}.\end{aligned}$$ The allowed regions for $m_{3/2}$, $M_{F}$, and $T_{RH}$ are shown in Fig. \[fig:region\], where the baryon to entropy ratio takes the value required by BBN, $$\label{eq:nbscon} 10^{-11} { \mathop{}_{\textstyle \sim}^{\textstyle <} }\frac{n_B}{s} { \mathop{}_{\textstyle \sim}^{\textstyle <} }10^{-10}.$$ Here we adopt a rather loose constraint because of the uncertain CP phase. As can be seen from Fig. \[fig:region\], there are two allowed regions: (i) $m_{3/2} \sim 0.1 - 1$ GeV, $M_{F}\sim 10^5 - 10^6$ GeV, and $T_{RH}\sim 1 - 10^3$ GeV, and (ii) $m_{3/2} \sim 0.1 - 1$ GeV, $M_{F}\sim 10^6$ GeV, and $T_{RH}\sim 10^6 - 10^9$ GeV. Roughly speaking, the regions (i) and (ii) correspond to cases A and B, respectively. We also plot the contours of the electron neutrino degeneracy in Fig. \[fig:contour\], which shows that a large and positive lepton asymmetry of the electron type can be generated in our scenario. For reference, the present constraint on $\xi_{\nu_e}$ from the analyses of BBN and CMB data is given by [@Hansen] $$\begin{aligned} \label{eq:xi} -0.01 { \mathop{}_{\textstyle \sim}^{\textstyle <} }&\xi_{\nu_e}& { \mathop{}_{\textstyle \sim}^{\textstyle <} }0.22.\end{aligned}$$ Thus, our scenario can generate both a small baryon asymmetry and a large positive lepton asymmetry of the electron type at the same time, by virtue of AD leptogenesis and the subsequently formed L-balls. 
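Similarly, Eq. (\[eq:nls\]) and the resulting degeneracy parameter can be checked numerically. The sketch below is our own; $m_{3/2} = 1$ GeV and $M_F = 5\times 10^5$ GeV are the reference values appearing in the equation, and the result lands inside the BBN/CMB window (\[eq:xi\]):

```python
import math

# Lepton number to entropy ratio, Eq. (nls), and the electron neutrino
# degeneracy xi_e ~ -10 n_L/s. All masses in GeV; reference values used.
g_star, M, beta = 10.75, 2.4e18, 6e-4
m32 = 1.0          # gravitino mass 1 GeV
MF = 5.0e5         # SUSY breaking scale 5e5 GeV

Q = beta * (MF / m32)**4
Td = (90 / (math.pi**2 * g_star))**0.25 \
     * math.sqrt(M * MF * Q**(-1.25) / (48 * math.pi))

nls = -Td / (4 * m32)      # ~ -0.01
xi_e = -10 * nls           # ~ 0.1, inside the window xi < 0.22

print(nls)
print(xi_e)
```

The scaling $n_L/s \propto m_{3/2}^{3/2} M_F^{-2}$ follows directly from $T_d \propto M_F^{-2} m_{3/2}^{5/2}$ in Eq. (\[eq:dectemp\]).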
Discussion and conclusions {#sec:con} ========================== In this paper we have proposed a scenario that accommodates a small baryon asymmetry and a large lepton asymmetry simultaneously. The large lepton asymmetry is generated through the Affleck-Dine mechanism, and almost all of the produced lepton charge is absorbed into the subsequently formed L-balls. Thus, most of the produced lepton number does not suffer from the sphaleron process. Only the small fraction evaporated from the L-balls due to thermal effects is converted into baryon asymmetry, which is responsible for the present baryon asymmetry. As a concrete example, we consider a large positive lepton asymmetry of the electron type. The excess of electron neutrinos shifts the chemical equilibrium between protons and neutrons toward protons, so that the predicted primordial abundance of $^{4}$He is decreased, which often resolves the discrepancy within BBN itself or that between BBN and CMB. However, the sphaleron process converts lepton asymmetry into baryon asymmetry with the opposite sign. To circumvent this problem, we identify the $e^{c}_{1}L_{2}L_{3}$ flat direction with the AD field. Then Affleck-Dine leptogenesis can generate a positive lepton asymmetry of the electron type together with a totally negative lepton asymmetry, which is converted into a positive baryon asymmetry. Of course, one should notice that by using another flat direction such as $e^{c}_{2}L_{1}L_{3}$, we would instead obtain a negative lepton asymmetry of the electron type together with a totally negative lepton asymmetry. Recently, it was pointed out that complete or partial equilibrium among all active neutrinos may be achieved through neutrino oscillations in the presence of neutrino chemical potentials, depending on the neutrino oscillation parameters [@equilibrium]. In the case of partial equilibrium, our scenario needs no change. Only complete equilibrium can spoil our scenario. 
Even if the neutrino oscillation parameters lead to complete equilibrium, our scenario may still work, since it is possible that the L-balls decay just before BBN so that complete equilibration cannot be attained; this needs further investigation. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} --------------- M.Y. was partially supported by the Japanese Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science, and Technology. [9]{} [BBN]{} See, for example, E. W. Kolb and M. S. Turner, [*The Early Universe*]{} (Addison-Wesley, New York, 1990). [BBN2]{} See, for example, K. A. Olive, G. Steigman, and T. P. Walker, . [BOOMERANG]{} P. de Bernardis [*et al.*]{}, ;\ A. E. Lange [*et al.*]{}, ;\ C. B. Netterfield [*et al.*]{}, . [MAXIMA]{} S. Hanany [*et al.*]{}, ;\ A. Balbi [*et al.*]{}, . [DASI]{} C. Pryke [*et al.*]{}, . [BM]{} M. Tegmark and M. Zaldarriaga, ;\ A. H. Jaffe [*et al.*]{}, . [KS]{} H. S. Kang and G. Steigman, ;\ N. Terasawa and K. Sato, . [sphaleron]{} V. A. Kuzmin, V. A. Rubakov, and M. E. Shaposhnikov, ;\ S. Y. Khlebnikov and M. E. Shaposhnikov, ;\ J. A. Harvey and M. S. Turner, . [oscillation]{} K. Enqvist, K. Kainulainen, and J. Maalampi, ;\ R. Foot, M. J. Thomson, and R. R. Volkas, ;\ X. Shi, . [nonrestoration]{} A. D. Linde, ; ;\ J. Liu and G. Segre, ;\ S. Davidson, H. Murayama, and K. Olive, . [second]{} J. A. Harvey and E. W. Kolb, ;\ A. Casas, W. Y. Cheng, and G. Gelmini, ;\ J. McDonald, . [MRM]{} J. March-Russell, A. Riotto, and H. Murayama, . [AD]{} I. Affleck and M. Dine, . [Kusenko]{} A. Kusenko and M. Shaposhnikov, . K. Enqvist and J. McDonald, . S. Kasuya and M. Kawasaki, . S. Kasuya and M. Kawasaki, . S. Kasuya and M. Kawasaki, . A. de Gouvêa, T. Moroi, and H. Murayama, . K. Enqvist and J. McDonald, . A. Anisimov and M. Dine, . G. Dvali, A. Kusenko, and M. Shaposhnikov, . A. Cohen, S. Coleman, H. Georgi, and A. Manohar, . M. Laine and M. Shaposhnikov, . R. Banerjee and K. Jedamzik, . H. A. Weldon, . A. E. 
Nelson, D. B. Kaplan, and A. G. Cohen, .

[MMY]{} T. Moroi, H. Murayama, and M. Yamaguchi, .

S. H. Hansen [*et al.*]{}, .

[equilibrium]{} A. D. Dolgov [*et al.*]{}, ;\
Y. Y. Y. Wong, ;\
K. N. Abazajian, J. F. Beacom, and N. F. Bell, .

![\[fig:region\] The allowed region for $m_{3/2}$, $M_{F}$, and $T_{RH}$, where our scenario succeeds and the baryon-to-entropy ratio satisfies the bounds $10^{-11} { \mathop{}_{\textstyle \sim}^{\textstyle <} }n_B/s { \mathop{}_{\textstyle \sim}^{\textstyle <} }10^{-10}$. Note that there is no upper bound on $T_{RH}$ from the gravitino problem [@MMY; @Gouvea], since the L-balls dominate the universe and their decay temperature is rather low. The two separate allowed regions roughly correspond to cases A and B discussed in the text. ](mt.ps "fig:"){width="10cm"} ![\[fig:region\] The allowed region for $m_{3/2}$, $M_{F}$, and $T_{RH}$, where our scenario succeeds and the baryon-to-entropy ratio satisfies the bounds $10^{-11} { \mathop{}_{\textstyle \sim}^{\textstyle <} }n_B/s { \mathop{}_{\textstyle \sim}^{\textstyle <} }10^{-10}$. Note that there is no upper bound on $T_{RH}$ from the gravitino problem [@MMY; @Gouvea], since the L-balls dominate the universe and their decay temperature is rather low. The two separate allowed regions roughly correspond to cases A and B discussed in the text. ](mft.ps "fig:"){width="10cm"}

![\[fig:contour\] The contours of the electron neutrino degeneracy are shown. The trapeziform area between the two solid lines represents the allowed region where our scenario works and the baryon-to-entropy ratio satisfies the bounds $10^{-11} { \mathop{}_{\textstyle \sim}^{\textstyle <} }n_B/s { \mathop{}_{\textstyle \sim}^{\textstyle <} }10^{-10}$. The contours represent $\xi_{\nu_e} = 0.005,~0.01,~0.02,~0.06,~0.1$ from top to bottom. ](xi.ps){width="13cm"}
--- abstract: 'In the present paper we study the existence of solutions for some nonlocal problems involving Orlicz-Sobolev spaces. The approach is based on sub-supersolutions.' author: - 'Giovany J. M. Figueiredo' - Abdelkrim Moussaoui - 'Gelson C.G. dos Santos' - 'Leandro S. Tavares' title: 'A sub-supersolution approach for some classes of nonlocal problems involving Orlicz spaces' --- Universidade de Brasília, Departamento de Matemática, CEP: 70910-900,\ Brasília-DF, Brazil A. Mira Bejaia University, Biology Department, Targa Ouzemour, 06000,\ Bejaia-Algeria Universidade Federal do Pará, Faculdade de Matemática, CEP: 66075-110, Belém-PA, Brazil Universidade Federal do Cariri, Centro de Ciências e Tecnologia, CEP: 63048-080, Juazeiro do Norte-CE, Brazil Introduction ============ Let $\Omega $ be a bounded domain in $\mathbb{R}^{N}$ $(N\geq 3)$ with $C^{2}$ boundary $\partial \Omega $. In the present paper we focus on the following quasilinear elliptic nonlocal problems $$\left\{ \begin{array}{rcl} \label{problema-(P)}-\Delta _{\Phi }u & = & f(u)|u|_{{L^{\Psi }}}^{\alpha }+g(u)|u|_{L^{\Lambda }}^{\gamma }\;\;\mbox{in}\;\;\Omega , \\ u & = & 0\;\;\mbox{on}\;\;\partial \Omega \end{array}\right. \eqno{(P_1)}$$and$$\left\{ \begin{array}{rcl} -\Delta _{\Phi _{1}}u & = & f_{1}(v)|v|_{L^{\Psi _{1}}}^{\alpha _{1}}+g_{1}(v)|v|_{L^{\Lambda _{1}}}^{\gamma _{1}}\;\;\mbox{in}\;\;\Omega , \\ -\Delta _{\Phi _{2}}v & = & f_{2}(u)|u|_{L^{\Psi _{2}}}^{\alpha _{2}}+g_{2}(u)|u|_{L^{\Lambda _{2}}}^{\gamma _{2}}\;\;\mbox{in}\;\;\Omega , \\ u=v & = & 0\;\;\mbox{on}\;\;\partial \Omega ,\end{array}\right. \eqno{(P_2)}$$where $\alpha _{i},\gamma _{i},$ $i=0,1,2,$ with $\alpha _{0}:=\alpha $ and $\gamma _{0}:=\gamma ,$ are positive constants, $|\cdot |_{L^{\Psi }}$ (resp. $|\cdot |_{L^{\Lambda }}$) denotes the norm in the Orlicz space $L^{\Psi }(\Omega )$ (resp.
$L^{\Lambda }(\Omega )$) and the nonlinearities $f_{i},g_{i}:[0,+\infty )\rightarrow \lbrack 0,+\infty ),$ $i=0,1,2,$ with $f_{0}:=f$ and $g_{0}:=g$, are continuous and nondecreasing functions. Here, $\Delta _{\Phi _{i}}$ stands for the $\Phi _{i}-$Laplacian operator, that is, $\Delta _{\Phi _{i}}w=\mathrm{div}\,(\phi _{i}(|\nabla w|)\nabla w),$ for $i=0,1,2$, where $\Phi _{i}:\mathbb{R}\rightarrow \mathbb{R}$ are $N$-functions of the form $$\Phi _{i}(t):=\int_{0}^{|t|}\phi _{i}(s)sds, \label{phi}$$with $\phi _{i}:[0,+\infty )\rightarrow \lbrack 0,+\infty )$ being $C^{1}$ functions satisfying $$(t\phi _{i}(t))^{\prime }>0,\quad \forall t>0\leqno{(\phi_1)}$$$$\lim_{t\rightarrow 0^{+}}t\phi _{i}(t)=0,\quad \lim_{t\rightarrow +\infty }t\phi _{i}(t)=+\infty \leqno{(\phi_2)}$$and that there exist $l_{i},m_{i}\in (1,N),$ $i=0,1,2$ such that$$l_{i}-1\leq \frac{(\phi _{i}(t)t)^{^{\prime }}}{\phi _{i}(t)}\leq m_{i}-1,\text{ \ }\forall t>0,\leqno{(\phi_3)}$$where $\phi _{0}:=\phi ,$ $l_{0}:=l$ and $m_{0}:=m.$ Note that the condition $(\phi _{3})$ implies that $$l_{i}\leq \frac{\phi _{i}(t)t^{2}}{\Phi _{i}(t)}\leq m_{i},\text{ \ }\forall t>0,\leqno{(\phi_3)'}$$for $i=0,1,2.$ In addition, $\Psi _{i}$ and $\Lambda _{i},$ for $i=0,1,2,$ with $\Psi _{0}:=\Psi $ and $\Lambda _{0}:=\Lambda ,$ are $N$-functions satisfying the $\Delta _{2}$ condition. According to hypotheses $(\phi _{1})-(\phi _{3})$, a wide class of operators can be incorporated in problems $(P_{1})$ and $(P_{2})$, for instance: - $\phi (t)=p|t|^{p-2},$ $t>0,$ with $p>1.$ The operator $\Delta _{\Phi } $ is the $p$-Laplacian operator. - $\phi (t)=p|t|^{p-2}+q|t|^{q-2},$ $t>0,1<p<q.$ Here $\Delta _{\Phi }$ is the $(p,q)$-Laplacian operator applied in quantum physics (see [@Benci-Fortunato]). - $\phi (t)=2\gamma (1+t^{2})^{\gamma -1},$ $t>0$ and $\gamma >1$. $\Delta _{\Phi }$ appears in nonlinear elasticity problems [@FN2].
- $\phi (t)=\gamma \frac{(\sqrt{1+t^{2}}-1)^{\gamma -1}}{\sqrt{1+t^{2}}}, $ $t>0$ and $\gamma \geq 1$. The operator $\Delta _{\Phi }$ arises in minimal surface theory for $\gamma =1$ (see [@Daco page 128]) and in nonlinear elasticity for $\gamma >1$ (see [@FN0]). - $\phi (t)=\frac{pt^{p-2}(1+t)\ln (1+t)+t^{p-1}}{1+t},$ $t>0$. The operator $\Delta _{\Phi }$ appears in plasticity problems (see [@FN2]). However, the lack of properties such as homogeneity complicates the handling of the nonlinear $\Phi _{i}-$Laplacian operator, which therefore constitutes a serious obstacle in the study of problems $(P_{1})$ and $(P_{2})$. It thereby requires relevant tools of nonlinear functional analysis, especially the theory of Orlicz and Orlicz-Sobolev spaces (see, e.g., [@AF; @RR] and the abundant references therein). Another mathematical difficulty comes from the nonlocal character of $(P_{1})$ and $(P_{2})$. It is due to the presence of the terms $|\cdot |_{{L^{\Psi _{i}}}}$ and $|\cdot |_{L^{\Lambda _{i}}}$ that make the equations in $(P_{1})$ and $(P_{2})$ no longer pointwise identities. For more on nonlocal problems we refer to [@CL-; @Chen-Gao], where systems of elliptic equations are examined. With regard to the scalar case, we quote the papers [@AC; @ACM; @CF; @kirchhoff; @Ma; @PZ; @ZP]. Such problems are important for applications in view of the significant number of physical phenomena formulated as nonlocal mathematical models. For instance, they appear in the study of the flow of a fluid through a homogeneous isotropic rigid porous medium, as well as in the study of population dynamics (see, e.g., [@DDX; @S]). Relevant contributions regarding nonlocal problems fit the setting of $(P_{1})$ and $(P_{2})$.
In particular, Alves & Covei [@Alves-Covei] applied the sub-supersolution method to show existence results for a problem involving a Kirchhoff-type operator $$\left\{ \begin{array}{rcl} -a\left( \int_{\Omega }u\right) \Delta u & = & h_{1}(x,u)f\left( \int_{\Omega }|u|^{p}\right) +h_{2}(x,u)g\left( \int_{\Omega }|u|^{r}\right) \;\;\mbox{in}\;\;\Omega , \\ u & = & 0\;\;\mbox{on}\;\;\partial \Omega ,\end{array}\right.$$where $a,f,g,$ and $h_{i}$ ($i=1,2$) are given functions. The case of nonlocal problems driven by the $p$-Laplacian differential operator is investigated in [@CFL]. Combining the sub-supersolution method with a classical theorem due to Rabinowitz [@rabinowitz], the authors proved the existence of solutions for the quasilinear problem $$\left\{ \begin{array}{rcl} -\Delta _{p}u & = & |u|_{L^{q}}^{\alpha (x)}\;\;\mbox{in}\;\;\Omega , \\ u & = & 0\;\;\mbox{on}\;\;\partial \Omega ,\end{array}\right. \eqno{(P)}$$where $\alpha $ is a nonnegative function defined in $\overline{\Omega }.$ Then they extended the results to the nonlocal quasilinear elliptic system $$\left\{ \begin{array}{rcl} -\Delta _{p_{1}}u & = & |v|_{L^{q_{1}}}^{\alpha _{1}(x)}\;\;\mbox{in}\;\;\Omega , \\ -\Delta _{p_{2}}v & = & |u|_{L^{q_{2}}}^{\alpha _{2}(x)}\;\;\mbox{in}\;\;\Omega , \\ u=v & = & 0\;\;\mbox{on}\;\;\partial \Omega .\end{array}\right. \eqno{(S)}$$The semilinear case, that is, when $p_{1}=p_{2}=2$, is investigated by Corrêa-Lopes [@CL-] and Chen-Gao [@Chen-Gao] for systems of the form $$\left\{ \begin{array}{rcl} -\Delta u^{m} & = & f(x,u)|v|_{L^{p}}^{\alpha }\ \text{in}\ \Omega , \\ -\Delta v^{n} & = & g(x,v)|u|_{L^{q}}^{\beta }\ \text{in}\ \Omega , \\ u & = & v=0\ \text{on}\ \partial \Omega \end{array}\right. \label{stat}$$with $m,n\geq 1$ and $\alpha ,\beta >0$. The existence of solutions is obtained by means of topological methods, namely, the Galerkin method, fixed point theory, as well as sub-supersolution techniques.
Motivated by the aforementioned papers, our goal is to establish the existence of (positive) solutions for problems $(P_{1})$ and $(P_{2})$ involving sublinear and concave-convex terms. The approach relies on the sub-supersolution method. However, besides the nonlocal nature of the problems, this method cannot be easily implemented due to the presence of the $\Phi _{i}$-Laplacian operator in the principal part of the equations. At this point, to the best of our knowledge, this is the first time that nonlocal problems involving the $\Phi _{i}$-Laplacian operator are studied. A significant feature of our result lies in obtaining the sub- and supersolutions in the Orlicz-Sobolev setting, involving nonlocal terms. This is achieved by choosing suitable explicit functions and adjusting adequate constants. The rest of the paper is organized as follows: Section 2 is devoted to the needed properties of Orlicz and Orlicz-Sobolev spaces. Section 3 (resp. Section 4) contains existence results for problem $(P_{1})$ (resp. $(P_{2})$) involving sublinear and concave-convex structures. Preliminaries ============= In this section we recall some results on Orlicz-Sobolev spaces. We say that a continuous function $\Phi : \mathbb{R} \rightarrow [0,+\infty)$ is an $N$-function if: - $\Phi$ is convex, - $\Phi(t) = 0 \Leftrightarrow t = 0 $, - $\displaystyle\lim_{t\rightarrow0}\frac{\Phi(t)}{t}=0$ and $\displaystyle\lim_{t\rightarrow+\infty}\frac{\Phi(t)}{t}= +\infty$, - $\Phi$ is even. We say that an $N$-function $\Phi$ verifies the $\Delta_{2}$-condition, and we denote it by $\Phi \in \Delta_2$, if $$\Phi(2t) \leq K\Phi(t),\quad \forall t\geq t_0,$$ for some constants $K,t_0 > 0$. Regarding the $\Delta_2$-condition, it is important to note that this property is satisfied under condition $(\phi_3)^{^{\prime }}$ when the $N$-function is given by \eqref{phi}. We fix an open set $\Omega \subset \mathbb{R}^{N}$ and an $N$-function $\Phi$.
We define the Orlicz space associated with $\Phi$ as follows $$L^{\Phi}(\Omega) = \left\{ u \in L^{1}(\Omega) \colon \ \int_{\Omega} \Phi\Big(\frac{|u|}{\lambda}\Big)dx < + \infty \ \ \mbox{for some}\ \ \lambda >0 \right\}.$$ The space $L^{\Phi}(\Omega)$ is a Banach space endowed with the Luxemburg norm given by $$| u |_{L^\Phi} = \inf\left\{ \lambda > 0 : \int_{\Omega}\Phi\Big(\frac{|u|}{\lambda}\Big)dx \leq1\right\}.$$ [@FN]\[min-max\] Let $\Phi$ be an $N$-function of the form \eqref{phi} satisfying $(\phi_1), (\phi_2)$ and $(\phi_3)$. Set $$\zeta_0(t)=\min\{t^l,t^m\}~~\mbox{and}~~ \zeta_1(t)=\max\{t^l,t^m\},~~ t\geq 0.$$ Then $\Phi$ satisfies $$\zeta_0(t)\Phi(\rho)\leq\Phi(\rho t)\leq \zeta_1(t)\Phi(\rho),~~ \rho, t> 0,$$ $$\zeta_0(|u|_{L^\Phi})\leq\int_\Omega\Phi(u)dx\leq \zeta_1(|u|_{L^\Phi}),~ u\in L^{\Phi}(\Omega).$$ For an $N$-function $\Phi$, the corresponding Orlicz-Sobolev space is defined as the Banach space $$W^{1, \Phi}(\Omega) = \Big\{ u \in L^{\Phi}(\Omega) \ :\ \frac{\partial u}{\partial x_{i}} \in L^{\Phi}(\Omega), \quad i = 1, ..., N\Big\},$$ endowed with the norm $$\Vert u \Vert_{1,\Phi} = |\nabla u|_{L^\Phi} + |u|_{L^\Phi}.$$ The $\Delta_2$-condition also implies that $$u_n \to u \,\,\, \mbox{in} \,\,\, L^{\Phi}(\Omega) \Longleftrightarrow \int_{\Omega}\Phi(|u_n-u|) \to 0$$ and $$u_n \to u \,\,\, \mbox{in} \,\,\, W^{1,\Phi}(\Omega) \Longleftrightarrow \int_{\Omega}\Phi(|u_n-u|) \to 0 \,\,\, \mbox{and} \,\,\, \int_{\Omega}\Phi(|\nabla u_n- \nabla u|) \to 0.$$ Given $u,v\in W^{1,\Phi}(\Omega)$, we say that $-\Delta_{\Phi}u\leq -\Delta_{\Phi}v$ in $\Omega$ if $$\int_{\Omega}\phi(|\nabla u|)\nabla u \nabla\varphi\leq\int_{\Omega}\phi(|\nabla v|)\nabla v \nabla\varphi,$$ for all $\varphi\in W_{0}^{1,\Phi}(\Omega)$ with $\varphi\geq0.$ The following results will often be used.
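To fix ideas, here are two quick model computations (our addition, not part of the original text). First, for the $p$-Laplacian example $\phi(t)=pt^{p-2}$ one has $t\phi(t)=pt^{p-1}$ and $\Phi(t)=|t|^{p}$, so the structural quotients in $(\phi_{3})$ and $(\phi_{3})'$ reduce to constants and $l=m=p$ in Lemma \[min-max\]. Second, the Luxemburg norm of a positive constant, the kind of quantity ($|K|_{L^{\Psi}}$, $|K|_{L^{\Lambda}}$) that enters the sub-supersolution estimates later, can be computed explicitly:

```latex
% Model check of ($\phi_3$) and ($\phi_3$)' for $\phi(t)=pt^{p-2}$, $p>1$:
\frac{(\phi (t)t)^{\prime }}{\phi (t)}
  = \frac{p(p-1)t^{p-2}}{pt^{p-2}} = p-1,
\qquad
\frac{\phi (t)t^{2}}{\Phi (t)} = \frac{pt^{p}}{t^{p}} = p,
\qquad \text{so } l=m=p.
% Luxemburg norm of the constant function $u\equiv c>0$ on bounded $\Omega$:
\int_{\Omega }\Phi \Big(\frac{c}{\lambda }\Big)dx
  = |\Omega |\,\Phi \Big(\frac{c}{\lambda }\Big)\leq 1
\iff \lambda \geq \frac{c}{\Phi ^{-1}(1/|\Omega |)},
\qquad \text{hence } |c|_{L^{\Phi }}=\frac{c}{\Phi ^{-1}(1/|\Omega |)}.
```

In particular, the Luxemburg norm of constants is linear in $c$, which is what allows factors such as $\lambda^{\alpha/(l-1)}$ to be pulled out of norms like $|K\lambda^{1/(l-1)}|_{L^{\Psi}}^{\alpha}$ below.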
[@Tan-Fang Lemma 4.1]\[comparison\] Let $u,v \in W^{1,\Phi}(\Omega)$ with $-\Delta_{\Phi}u\leq -\Delta_{\Phi}v$ in $\Omega$ and $u \leq v$ on $\partial \Omega$ (i.e., $(u-v)^{+} \in W^{1,\Phi}_{0}(\Omega)$). Then $u(x) \leq v(x)$ a.e. in $\Omega.$ [@Tan-Fang Lemma 4.5]\[Tan-Fang\] Let $\lambda >0$ and consider $z_{\lambda }$ the unique solution of the problem $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi}z_{\lambda} &=&\lambda\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} z_{\lambda}&=&0\;\;\mbox{on}\;\;\partial\Omega, \end{array} \right. \end{aligned} \label{probl-linear-lambda}$$where $\Phi $ is given by \eqref{phi} and $\Omega \subset \mathbb{R}^{N}$ is an admissible domain. Define $\rho _{0}=\frac{1}{2|\Omega |^{\frac{1}{N}}C_{0}}.$ If $\lambda \geq \rho _{0},$ then $|z_{\lambda }|_{L^{\infty }}\leq C^{\ast }\lambda ^{\frac{1}{l-1}}$, and $|z_{\lambda }|_{L^{\infty }}\leq C_{\ast }\lambda ^{\frac{1}{m-1}}$ if $\lambda <\rho _{0}.$ Here $C^{\ast }$ and $C_{\ast }$ are positive constants depending only on $l,m,N $ and $\Omega $.
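For orientation (an illustration we add here, not taken from [@Tan-Fang]), the scaling in Lemma \[Tan-Fang\] can be checked by hand in the model case $\phi(t)=pt^{p-2}$, so that $l=m=p$ and $\Delta_{\Phi}=p\,\Delta_{p}$ with the normalization used in this paper, on a ball $\Omega=B_{R}(0)$, where the solution of \eqref{probl-linear-lambda} is radial and explicit:

```latex
% Radial solution of $-p\,\Delta_{p}z_{\lambda}=\lambda$ in $B_{R}(0)$,
% $z_{\lambda}=0$ on $\partial B_{R}(0)$: integrating
% $(r^{N-1}|z_{\lambda}'|^{p-2}z_{\lambda}')'=-\tfrac{\lambda}{p}\,r^{N-1}$
% twice gives
z_{\lambda }(r)=\frac{p-1}{p}\Big(\frac{\lambda }{pN}\Big)^{\frac{1}{p-1}}
  \Big(R^{\frac{p}{p-1}}-r^{\frac{p}{p-1}}\Big),
% so that
|z_{\lambda }|_{L^{\infty }}=z_{\lambda }(0)
  =\frac{p-1}{p}\,(pN)^{-\frac{1}{p-1}}R^{\frac{p}{p-1}}\,
   \lambda ^{\frac{1}{l-1}},
% in agreement with the $\lambda^{1/(l-1)}$ growth stated in the lemma.
```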
Regarding the function $z_{\lambda}$ of the previous result, it follows from [@Lieberman page 320] and [@Tan-Fang Lemma 4.2] that $z_{\lambda} \in C^{1}(\overline{\Omega}) $ with $z_\lambda >0$ in $\Omega.$ The scalar case =============== We say that $u\in W_{0}^{1,\Phi }(\Omega )\cap L^{\infty }(\Omega )$ is a (weak) solution of $(P_{1})$ if $$\int_{\Omega }\phi (|\nabla u|)\nabla u\nabla \varphi =\int_{\Omega }(f(u)|u|_{L^{\Psi }}^{\alpha }+g(u)|u|_{L^{\Lambda }}^{\gamma })\varphi ,$$for all $\varphi \in W_{0}^{1,\Phi }(\Omega ).$ Given $u,v\in \mathcal{S}(\Omega):= \{u: \Omega \rightarrow \mathbb{R}: u \ \text{is measurable}\},$ we write $u \leq v$ if $u(x) \leq v(x)$ a.e. in $\Omega.$ We denote by $[u,v]$ the set $$[u,v]:=\bigl\{w\in \mathcal{S}(\Omega): u(x)\leq w(x)\leq v(x)\;\text{a.e. in}\;\Omega\bigr\}.$$ We say that $(\underline{u},\overline{u})$ is a sub-supersolution pair for $(P_{1})$ if $\underline{u},\overline{u}\in $ $W_{0}^{1,\Phi }(\Omega )\cap L^{\infty }(\Omega )$ are nonnegative functions that satisfy $0<\underline{u}\leq \overline{u}$ in $\Omega $ and if for all $\varphi \in W_{0}^{1,\Phi }(\Omega )$ with $\varphi \geq 0$ the following inequalities hold $$\int_{\Omega}\phi(|\nabla \underline{u}|)\nabla\underline{u}\nabla\varphi\leq \int_{\Omega}(f(\underline{u})|\underline{u}|_{L^{\Psi}}^{\alpha} + g(\underline{u})| \underline{u}|^{\gamma}_{L^{\Lambda}})\varphi$$ and $$\int_{\Omega}\phi(|\nabla \overline{u}|)\nabla\overline{u}\nabla\varphi\geq \int_{\Omega}(f(\overline{u})|\overline{u}|_{L^{\Psi}}^{\alpha} + g(\overline{u}) | \overline{u}|^{\gamma}_{L^{\Lambda}})\varphi .$$ The following result will play an important role in our arguments. \[sub-supermethod\] Suppose that $f,g:[0, +\infty) \rightarrow \mathbb{R} $ are nondecreasing, continuous and nonnegative functions. Consider also that $\alpha, \gamma \geq 0$ and that there exists a sub-supersolution pair $(\underline{u},\overline{u})$ for problem $(P_1).
$ Then there exists a nontrivial solution $u$ for $(P_1)$ with $u \in [\underline{u},\overline{u}]. $ We have that $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi} \underline{u} &\leq &f(\underline{u})| \underline{u}|^{\alpha}_{L^\Psi} + g(\underline{u})| \underline{u}|^{\gamma}_{L^\Lambda}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} \underline{u}&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ and $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi} \overline{u} &\geq&f(\overline{u})| \overline{u}|^{\alpha}_{L^\Psi} + g(\overline{u})|\overline{u} |^{\gamma}_{L^{\Lambda}}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} \overline{u}&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ Denote by $u_1$ the unique solution of the problem $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi} u_1 &=& f(\underline{u})| \underline{u}|^{\alpha}_{L^\Psi} +g(\underline{u})| \underline{u}|^{\gamma}_{L^\Lambda}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} u_1&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ Note that this solution exists because the right-hand side $f(\underline{u})|\underline{u}|^{\alpha}_{L^\Psi} + g(\underline{u})|\underline{u}|^{\gamma}_{L^\Lambda}$ is bounded. Since $\underline{u} \leq \overline{u}$ in $\Omega$, $f$ and $g$ are nondecreasing and $\alpha,\gamma \geq0$, we have that $f(\underline{u})| \underline{u}|^{\alpha}_{L^\Psi}\leq f(\overline{u})| \overline{u}|^{\alpha}_{L^\Psi}$ and $g(\underline{u})| \underline{u}|^{\gamma}_{L^\Lambda}\leq g(\overline{u})| \overline{u}|^{\gamma}_{L^\Lambda}.$ Then it follows from Lemma \[comparison\] that $\underline{u} \leq u_1 \leq \overline{u}$ in $\Omega.$ Let $u_{2}$ be the solution of the problem $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi} u_2 &=&f(u_1)| u_1|^{\alpha}_{L^\Psi} +g(u_1)| u_1|^{\gamma}_{L^\Lambda}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} u_2&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right.
\end{aligned}$$Since $\underline{u}\leq u_{1}\leq \overline{u}$ in $\Omega $, we have that $$f(\underline{u})|\underline{u}|_{L^{\Psi }}^{\alpha }+g(\underline{u})|\underline{u}|_{L^{\Lambda }}^{\gamma }\leq f(u_{1})|u_{1}|_{L^{\Psi }}^{\alpha }+g(u_{1})|u_{1}|_{L^{\Lambda }}^{\gamma }\leq f(\overline{u})|\overline{u}|_{L^{\Psi }}^{\alpha }+g(\overline{u})|\overline{u}|_{L^{\Lambda }}^{\gamma }\ \text{in}\ \Omega .$$Thus from Lemma \[comparison\] we get $$\underline{u}\leq u_{1}\leq u_{2}\leq \overline{u}\ \text{in}\ \Omega .$$Note also that $-\Delta _{\Phi }u_{i}\leq f(\overline{u})|\overline{u}|_{L^{\Psi }}^{\alpha }+g(\overline{u})|\overline{u}|_{L^{\Lambda }}^{\gamma },i=1,2.$ Thus we can construct a sequence $u_{n}$ such that $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi} u_n &=&f(u_{n-1})| u_{n-1}|^{\alpha}_{L^\Psi} + g(u_{n-1})| u_{n-1}|^{\gamma}_{L^\Lambda}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} u_n&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}\eqno{(P_n)}$$with $-\Delta _{\Phi }u_{n}\leq f(\overline{u})|\overline{u}|_{L^{\Psi }}^{\alpha }+g(\overline{u})|\overline{u}|_{L^{\Lambda }}^{\gamma }$ in $\Omega $ and $\underline{u}\leq u_{n}\leq \overline{u}$ in $\Omega $ for all $n\in \mathbb{N}.$ Using the $C^{1,\alpha }$ estimates up to the boundary (see [@Lieberman]), we have that $u_{n}$ is a bounded sequence in $C^{1,\theta }(\overline{\Omega })$ for some $\theta \in (0,1].$ Since the embedding $C^{1,\theta }(\overline{\Omega })\hookrightarrow C^{1}(\overline{\Omega })$ is compact, we can extract a subsequence with $u_{n}\rightarrow u$ in $C^{1}(\overline{\Omega })$ for some $u\in C^{1}(\overline{\Omega })$. Passing to the limit in $(P_{n})$, we conclude that $u$ is a solution of problem $(P_{1})$; since $u\geq \underline{u}>0$ in $\Omega$, it is nontrivial.
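The monotone iteration in the proof above can be illustrated numerically. The following sketch (our addition; all parameter choices are illustrative, not from the paper) treats the semilinear model case $\Phi(t)=t^{2}/2$, so that $-\Delta_{\Phi}=-\Delta$, in one dimension on $(0,1)$, with the nonlocal right-hand side $u^{\beta}|u|_{L^{2}}^{\alpha}$ and $\alpha+\beta<l-1=1$. Starting from a small positive subsolution, the iterates increase monotonically toward a solution, exactly as in Lemma \[sub-supermethod\]:

```python
import numpy as np

# 1D model problem -u'' = u^beta * ||u||_{L^2}^alpha on (0,1), u(0)=u(1)=0,
# solved by the monotone iteration u_n = (-Delta)^{-1} F(u_{n-1}).
# Parameters are illustrative only (sublinear regime: alpha + beta < 1).
alpha, beta = 0.2, 0.3
N = 99                           # interior grid points
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)

# Second-order finite-difference Laplacian.  It is an M-matrix, so its
# inverse is entrywise nonnegative: the discrete analogue of the weak
# comparison principle, which preserves the ordering of the iterates.
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

def F(u):
    """Nonlocal right-hand side u^beta * ||u||_{L^2}^alpha (u > 0)."""
    norm = (h * np.sum(np.abs(u) ** 2)) ** 0.5
    return u ** beta * norm ** alpha

# Small positive subsolution: -u0'' = eps*pi^2*sin(pi x) is dominated
# pointwise by F(u0) when eps is small, because alpha + beta < 1.
u = 1e-3 * np.sin(np.pi * x)

monotone = True
converged = False
for _ in range(500):
    u_new = np.linalg.solve(A, F(u))
    if np.min(u_new - u) < -1e-10:   # iterates should stay ordered
        monotone = False
    step = np.max(np.abs(u_new - u))
    u = u_new
    if step < 1e-10:
        converged = True
        break

residual = np.max(np.abs(A @ u - F(u)))
```

The ordering $u_{n}\leq u_{n+1}$ is preserved because $F$ is nondecreasing and the discrete operator satisfies a comparison principle, mirroring the roles of the monotonicity of $f,g$ and Lemma \[comparison\] in the proof.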
A sublinear scalar problem -------------------------- In this section we use Lemma \[sub-supermethod\] and a suitable sub-supersolution pair to prove the existence of a solution for a nonlocal problem of the type $$\left\{ \begin{array}{rcl} -\Delta _{\Phi }u & = & u^{\beta }|u|_{L^{\Psi }}^{\alpha }\ \mbox{in}\ \Omega , \\ u & = & 0\ \mbox{on}\ \partial \Omega ,\end{array}\right. \eqno{(P_S)}$$where $\alpha ,\beta \geq 0$ are constants satisfying certain conditions. The above problem is considered in [@CFL] for the $p-$Laplacian case and with $\beta =0$. We complete the study done in [@CFL Theorem 4.1] by considering constant exponents and a more general operator. \[teo-sublinear\] Suppose that $\alpha, \beta \geq 0$ with $0 < \alpha + \beta < l-1,$ where $l$ is given in $(\phi_3).$ Then $(P_S)$ has a positive solution. We will start by constructing $\overline{u}$. Let $\lambda>0$ and consider $z_{\lambda}\in W_{0}^{1,\Phi}(\Omega)\cap L^{\infty}(\Omega)$ the unique solution of \eqref{probl-linear-lambda}, where $\lambda$ will be chosen later. For $\lambda >0$ large, by Lemma \[Tan-Fang\] there is a constant $K>1$ that does not depend on $\lambda$ such that $$\label{desig1-p-supsol} 0<z_{\lambda}(x)\leq K\lambda^{\frac{1}{l-1}}\;\text{in}\;\Omega.$$ Since $0<\alpha+\beta<l-1$ we can choose $\lambda>1$ such that \eqref{desig1-p-supsol} holds and $$\label{desig2-p-supsol} K^{\beta}\lambda^{\frac{\alpha+\beta}{l-1}} |K|^{\alpha}_{L^\Psi}\leq\lambda.$$ By \eqref{desig1-p-supsol} and \eqref{desig2-p-supsol} we get $$z_{\lambda}^{\beta}|z_{\lambda}|_{L^{\Psi}}^{\alpha}\leq \lambda.$$ Therefore $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi}z_{\lambda}&\geq&z_{\lambda}^{\beta}|z_{\lambda}|_{L^{\Psi}}^{\alpha}\;\;\mbox{in}\;\;\Omega,\\ \vspace{.2cm} z_{\lambda}&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right.
\end{aligned}$$ Now we will construct $\underline{u}.$ Since $\partial \Omega$ is $C^2$, there is a constant $\delta >0$ such that $d \in C^{2}(\overline{ \Omega_{3 \delta}})$ and $|\nabla d(x)| \equiv 1$, where $d(x):= dist(x,\partial \Omega) $ and $\overline{ \Omega_{3 \delta}}:=\{x \in \overline{\Omega}; d(x) \leq 3 \delta\}$ (see [@Gilbarg-Trudinger Lemma 14.16] and its proof). Let $\sigma \in (0, \delta)$. A direct computation shows that the function $\eta=\eta(k,\sigma)$ defined by $$\eta(x)=\left\{\begin{array}{lcl} e^{kd(x)}-1 & \text{ if } & d(x)<\sigma, \\ e^{k\sigma}-1+\int_{\sigma}^{d(x)}ke^{k\sigma}\Big(\frac{2\delta-t}{2\delta-\sigma}\Big)^{\frac{m}{l-1}}dt & \text{ if } & \sigma\leq d(x)<2\delta, \\ e^{k\sigma}-1+\int_{\sigma}^{2\delta}ke^{k\sigma}\Big(\frac{2\delta-t}{2\delta-\sigma}\Big)^{\frac{m}{l-1}}dt & \text{ if } & 2\delta \leq d(x)\end{array} \right.$$ belongs to $C^{1}_{0}(\overline{\Omega})$, where $k>0$ is an arbitrary number. Direct computations imply that $$-\Delta_{\Phi}(\mu\eta)=\begin{cases} - \mu k^2 e^{kd(x)} \frac{d}{dt} \left( \phi(t)t\right)\bigg\vert_{t= \mu k e^{kd(x)}} - \phi(\mu k e^{kd(x)})\mu k e^{kd(x)}\Delta d \;\; \mbox{ if}\quad d(x)<\sigma, \\ \mu k e^{k \sigma}\left( \frac{m}{l-1}\right)\left(\frac{2\delta -d(x)}{2\delta- \sigma} \right)^{\frac{m}{l-1}-1}\left( \frac{1}{2\delta - \sigma}\right) \frac{d}{dt}\left( \phi(t)t\right)\bigg\vert_{t = \mu k e^{k \sigma} \left( \frac{2\delta -d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}}} \\ -\phi\left( \mu k e^{k\sigma} \left( \frac{2\delta -d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}}\right) \mu k e^{k\sigma} \left( \frac{2\delta-d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}}\Delta d \;\; \mbox{ if}\quad \sigma < d(x)<2\delta, \\ 0\;\; \mbox{ if}\quad 2\delta<d(x)\end{cases}$$ for all $\mu >0$.
If $k$ is large and $d(x)<\sigma,$ we have that $-\Delta_\Phi(\mu \eta)\leq 0.$ In fact, note that by $(\phi_3)$ we have for $k$ large that $$\label{negative} \begin{aligned} -\Delta_{\Phi} (\mu \eta) =& - \mu k^2 e^{kd(x)} \frac{d}{dt}\left( \phi(t)t\right)\bigg\vert_{t= \mu k e^{kd(x)}} - \phi(\mu k e^{kd(x)})\mu k e^{kd(x)}\Delta d \\ \leq &- k^2 \mu e^{kd(x)}(l-1)\phi(\mu k e^{kd(x)}) - \phi(\mu k e^{kd(x)})\mu k e^{kd(x)}\Delta d \\ =& \mu ke^{kd(x)}\phi (\mu k e^{kd(x)})(-k(l-1) - \Delta d)\\ \leq & 0, \end{aligned}$$ because $\Delta d$ is bounded near the boundary and $l>1.$ Now we will estimate $-\Delta_{\Phi} (\mu \eta)$ in the case $\sigma < d(x) < 2\delta.$ Note that from $(\phi_3)$ and Lemma \[min-max\] we get [ $$\label{est1} \begin{aligned} \mu & k e^{k \sigma}\left( \frac{m}{l-1}\right)\left(\frac{2\delta -d(x)}{2\delta- \sigma} \right)^{\frac{m}{l-1}-1}\left( \frac{1}{2\delta - \sigma}\right) \frac{d}{dt}\left( \phi(t)t\right)\bigg\vert_{t = \mu k e^{k \sigma} \left( \frac{2\delta -d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}}} \\ \leq &\mu k e^{k \sigma}\left( \frac{m}{l-1}\right)\left(\frac{2\delta -d(x)}{2\delta- \sigma} \right)^{\frac{m}{l-1}-1}\left( \frac{m-1}{2\delta - \sigma}\right) \phi \left( \mu k e^{k \sigma} \left( \frac{2\delta -d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}}\right)\\ \leq & \left(\frac{m-1}{2\delta-\sigma}\right) \left(\frac{m}{l-1} \right) \frac{\Phi \left( \mu k e^{k\sigma} \left( \frac{2\delta -d(x)}{2\delta-\sigma}\right)^{\frac{m}{l-1}}\right)}{\mu k e^{k\sigma} \left( \frac{2\delta -d(x)}{2\delta -\sigma}\right)^{\frac{m}{l-1}}} \frac{1}{\left(\frac{2\delta -d(x)}{2\delta - \sigma}\right)}\\ \leq & \max \left\{ (\mu k e^{k\sigma})^{m-1} \left( \frac{2\delta - d(x)}{2\delta -\sigma}\right)^{m \left( \frac{m}{l-1}\right) - \left(\frac{m}{l-1} +1 \right)}, (\mu k e^{k\sigma})^{l-1} \left( \frac{2\delta - d(x)}{2\delta -\sigma}\right)^{l \left( \frac{m}{l-1}\right) - \left(\frac{m}{l-1} +1 \right)}\right\}\\ \times & \left(
\frac{m-1}{2\delta -\sigma}\right) \left( \frac{m}{l-1}\right). \end{aligned}$$ ]{} Since $m\geq l >1$, we get $l \left(\frac{m}{l-1} \right) - \left(\frac{m}{l-1} +1\right)>0$ and $m \left(\frac{m}{l-1} \right) - \left(\frac{m}{l-1} +1\right)> 0. $ Note that $0 \leq \left( \frac{2\delta - d(x)}{2\delta - \sigma}\right) \leq 1.$ Thus by \eqref{est1} we get $$\label{est2} \begin{aligned} \mu & k e^{k \sigma}\left( \frac{m}{l-1}\right)\left(\frac{2\delta -d(x)}{2\delta- \sigma} \right)^{\frac{m}{l-1}-1}\left( \frac{1}{2\delta - \sigma}\right) \frac{d}{dt}\left( \phi(t)t\right)\bigg\vert_{t = \mu k e^{k \sigma} \left( \frac{2\delta -d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}}}\\ \leq & \left( \frac{m-1}{2\delta-\sigma}\right) \left( \frac{m}{l-1}\right) \max \{(\mu k e^{k\sigma})^{m-1}, (\mu k e^{k\sigma})^{l-1}\} \\ =& C_1 \left( \frac{1}{2\delta -\sigma}\right) \max \{(\mu k e^{k\sigma})^{m-1}, (\mu k e^{k\sigma})^{l-1}\}, \end{aligned}$$ where $C_1$ is a constant that does not depend on $\mu $ and $k.$ On the other hand, we have by Lemma \[min-max\] that [ $$\label{est3} \begin{aligned} &\left|\phi \left(\mu k e^{k\sigma} \left( \frac{2\delta -d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}} \right) \mu k e^{k\sigma} \left( \frac{2\delta - d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}} \Delta d\right|\\ \leq & \phi \left(\mu k e^{k\sigma} \left( \frac{2\delta -d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}} \right) \mu k e^{k\sigma} \left( \frac{2\delta - d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}} \displaystyle\sup_{\overline{\Omega_{3\delta}} } |\Delta d| \\ \leq & C \frac{\Phi \left( \mu k e^{k\sigma} \left(\frac{2\delta - d(x)}{2\delta - \sigma} \right)^{\frac{m}{l-1}}\right)}{\mu k e^{k\sigma} \left( \frac{2\delta - d(x)}{2\delta - \sigma}\right)^{\frac{m}{l-1}}}\\ \leq & C \max \left\{ (\mu k e^{k\sigma})^{m-1} \left( \frac{2\delta - d(x)}{2\delta -\sigma}\right)^{m \left( \frac{m}{l-1}\right) - \left(\frac{m}{l-1} +1 \right)}, (\mu k e^{k\sigma})^{l-1} \left( \frac{2\delta - d(x)}{2\delta 
-\sigma}\right)^{l \left( \frac{m}{l-1}\right) - \left(\frac{m}{l-1} +1 \right)}\right\}\\ \leq & C_2 \max\{(\mu k e^{k\sigma})^{m-1}, (\mu k e^{k\sigma})^{l-1}\}, \end{aligned}$$ ]{} where $C_2$ is a constant that does not depend on $\sigma, k$ and $\mu.$ Thus from \eqref{est2} and \eqref{est3} we have that $$\label{midle} - \Delta_{\Phi} (\mu \eta) \leq \max\left\{ \frac{C_1 }{2\delta-\sigma}, C_2 \right\} \max\{(\mu k e^{k\sigma})^{m-1}, (\mu k e^{k\sigma})^{l-1}\},$$ if $\sigma < d(x) < 2\delta.$ Consider the function $\eta$ and the numbers $\mu, \sigma$ and $k>0$ described before. Let $\sigma = \frac{\ln 2}{k}$ and $\mu = e^{-k}.$ Then $e^{k\sigma} =2 $. If $k>0$ is large, we have from \eqref{negative} that $$\label{f1} -\Delta_{\Phi}(\mu \eta) \leq 0 \leq (\mu \eta)^{\beta}| \mu \eta|^{\alpha}_{L^\Psi}$$ in the case $d(x) < \sigma.$ For any $k>0$ we have $\eta (x)\geq e^{k\sigma }-1=2-1=1$ whenever $d(x)\geq \sigma .$ Thus there is a constant $C_{3}>0$ that does not depend on $k>0$ such that $$(\mu \eta )^{\beta }|\mu \eta |_{L^{\Psi }}^{\alpha }\geq \mu ^{\alpha +\beta }C_{3}\ \text{whenever}\ d(x)\geq \sigma .$$Since $0<\alpha +\beta <l-1$, L'Hôpital's rule implies that $$\lim_{k\rightarrow +\infty }\frac{k^{l-1}}{e^{k(l-1-(\alpha +\beta ))}}=0.$$Thus, it is possible to consider a large $k_{0}>0$ such that $$C_{3}\geq \max \left\{ C_{1}\frac{1}{2\delta -\frac{\ln 2}{k}},C_{2}\right\} \max \{2^{m-1},2^{l-1}\}\frac{k^{l-1}}{e^{k(l-1-(\alpha +\beta ))}},$$for all $k\geq k_{0}.$ From $\eqref{midle},$ we have that $$-\Delta _{\Phi }(\mu \eta )\leq (\mu \eta )^{\beta }|\mu \eta |_{L^{\Psi }}^{\alpha } \label{f2}$$in the region $\sigma <d(x)<2\delta $ for $k>0$ large enough. If $d(x)> 2\delta$ we have $$\label{f3} - \Delta_{\Phi}(\mu \eta) = 0 \leq (\mu \eta)^{\beta} |\mu \eta |^{\alpha}_{L^\Psi}.$$ Thus from \eqref{f1}, \eqref{f2} and \eqref{f3} we have that $\mu \eta $ is a subsolution for $(P_S).$ Note that from $\eqref{midle}, \eqref{f1}$ and $\eqref{f3}$ we have for $k,\lambda>0 $ large enough that $-\Delta_{\Phi} (\mu \eta) \leq -\Delta_{\Phi} z_{\lambda}$.
Thus from Lemma \[comparison\] we have $\mu \eta \leq z_{\lambda}$ in $\Omega.$ From Lemma \[sub-supermethod\] we have the result. An interesting question for problem $(P_S)$ is the existence of a solution in the case $l-1 < \alpha + \beta.$ A concave-convex scalar problem ------------------------------- In this section we will consider a concave-convex problem of the type $$\left\{ \begin{array}{ll} -\Delta _{\Phi }u=\lambda u^{\beta }|u|_{L^{\Psi }}^{\alpha }+\theta u^{\xi }|u|_{L^{\Lambda }}^{\gamma } & \mbox{in}\;\;\Omega , \\ u=0 & \mbox{on}\;\;\partial \Omega ,\end{array}\right. \eqno{(P)_{\lambda,\theta}}$$ where $\alpha ,\beta ,\xi ,\gamma \geq 0$ are constants satisfying certain conditions and $\lambda ,\theta >0$ are positive numbers. The local version of $(P)_{\lambda ,\theta }$ for the Laplacian operator was considered in the famous paper by Ambrosetti-Brezis-Cerami [@ABC], in which a sub-supersolution argument is used. Our result is the following. Suppose that $\alpha,\beta,\xi,\gamma \geq 0$ and consider also that $0 < \alpha +\beta < l-1 $. The following assertions hold. **$(i)$** If $m-1<\xi+\gamma,$ then given $\theta>0$ there exists $\lambda_{0}>0$ such that for each $\lambda\in(0,\lambda_{0})$ the problem $(P)_{\lambda,\theta}$ has a positive solution $u_{\lambda,\theta}.$ **$(ii)$** If $l-1<\xi+\gamma$, then given $\lambda>0$ there exists $\theta_{0}>0$ such that for each $\theta\in(0,\theta_{0})$ the problem $(P)_{\lambda,\theta}$ has a positive solution $u_{\lambda,\theta}.$ Suppose that $(i)$ occurs and fix $\theta >0.$ Let $z_{\lambda }\in W_{0}^{1,\Phi }({\Omega })\cap L^{\infty }(\Omega )$ be the unique solution of \eqref{probl-linear-lambda}, where $\lambda \in (0,1)$ will be chosen later.
Lemma \[Tan-Fang\] implies that for $\lambda>0$ small enough there exists a constant $K>1$ that does not depend on $\lambda$ such that $$\label{desig1-p-supsol-concavo} 0<z_{\lambda}(x)\leq K\lambda^{\frac{1}{m-1}}\;\text{in}\;\Omega.$$ Let $\overline{K}:=\max \big\{K^{\beta }|K|_{L^{\Psi }}^{\alpha },K^{\xi }|K|_{L^{\Lambda }}^{\gamma }\big\}.$ For each $\theta >0$ we can choose $0<\lambda _{0}<1$ small enough, depending on $\theta ,$ such that the inequalities $$\lambda \geq \left( \lambda ^{\frac{\alpha +\beta +m-1}{m-1}}\overline{K}+\theta \overline{K}\lambda ^{\frac{\xi +\gamma }{m-1}}\right) ,\ \text{for all}\ \lambda \in (0,\lambda _{0})$$and \eqref{desig1-p-supsol-concavo} hold because $\alpha +\beta >0$ and $m-1<\xi +\gamma .$ Thus, there is a small $\lambda _{0}>0$ such that $$\begin{aligned} \lambda z_{\lambda }^{\beta }|z_{\lambda }|_{L^{\Psi }}^{\alpha }+\theta z_{\lambda }^{\xi }|z_{\lambda }|_{L^{\Lambda }}^{\gamma }& \leq \lambda (K\lambda ^{\frac{1}{m-1}})^{\beta }|K\lambda ^{\frac{1}{m-1}}|_{L^{\Psi }}^{\alpha } \\ & +\theta (K\lambda ^{\frac{1}{m-1}})^{\xi }|K\lambda ^{\frac{1}{m-1}}|_{L^{\Lambda }}^{\gamma } \\ & \leq \lambda \end{aligned}$$for all $\lambda \in (0,\lambda _{0}).$ Thus for $\lambda \in (0,\lambda _{0})$ we get$$\lambda z_{\lambda }^{\beta }|z_{\lambda }|_{L^{\Psi }}^{\alpha }+\theta z_{\lambda }^{\xi }|z_{\lambda }|_{L^{\Lambda }}^{\gamma }\leq \lambda .$$ Now consider $\eta, \delta, \sigma$ and $\mu$ as in the proof of Theorem \[teo-sublinear\]. Fix $\lambda\in(0,\lambda_{0})$.
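For clarity (our addition), dividing the displayed inequality by $\lambda>0$ shows at a glance why it holds for $\lambda$ small:

```latex
% Dividing $\lambda \geq \lambda^{\frac{\alpha+\beta+m-1}{m-1}}\overline{K}
%   +\theta\overline{K}\lambda^{\frac{\xi+\gamma}{m-1}}$ by $\lambda$ gives
1 \;\geq\; \overline{K}\,\lambda ^{\frac{\alpha +\beta }{m-1}}
      +\theta \overline{K}\,\lambda ^{\frac{\xi +\gamma }{m-1}-1},
% and both exponents are positive, since $\alpha+\beta>0$ and
% $\xi+\gamma>m-1$, so both terms tend to $0$ as $\lambda\to 0^{+}$.
```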
Since $\alpha+\beta<l-1$, the arguments of the proof of Theorem \[teo-sublinear\] imply that if $\mu=\mu(\lambda)>0$ is small enough then $$-\Delta_{\Phi}(\mu \eta)\leq\lambda \ \text{in} \ \Omega$$ and $$\begin{aligned} -\Delta_{\Phi}(\mu\eta)&\leq \lambda(\mu\eta)^{\beta}|\mu\eta|_{L^{\Psi}}^{\alpha} \\ &\leq \lambda(\mu\eta)^{\beta}|\mu\eta|_{L^{\Psi}}^{\alpha} + \theta(\mu\eta)^{\xi}|\mu\eta|_{L^{\Lambda}}^{\gamma}.\end{aligned}$$ The weak comparison principle implies that $\mu \eta \leq z_{\lambda}$ for $\mu = \mu(\lambda) >0$ small enough. Therefore $(\mu \eta, z_{\lambda})$ is a sub-super solution pair for $(P)_{\lambda, \theta}.$ Now we will prove the theorem in the second case. Consider again $\eta ,\delta ,\sigma $ and $\mu $ as in the proof of Theorem \[teo-sublinear\]. Let $\lambda \in (0,\infty )$. Since $\alpha +\beta <l-1$ we can repeat the arguments of Theorem \[teo-sublinear\] to obtain $\mu =\mu (\lambda )>0$ small, depending only on $\lambda $, such that$$-\Delta _{\Phi }(\mu \eta )\leq 1\;\;\text{and}\;\;-\Delta _{\Phi }(\mu \eta )\leq \lambda (\mu \eta )^{\beta }|\mu \eta |_{L^{\Psi }}^{\alpha }\;\;\text{in}\;\Omega .$$ Let $z_{M}\in W_{0}^{1,\Phi}(\Omega)\cap L^{\infty}(\Omega)$ be the unique solution of $\eqref{probl-linear-lambda}$ with right-hand side $M$, where $M>0$ will be chosen later. For $M\geq 1$ large enough there is a constant $K>1$ that does not depend on $M$ such that $$\label{desig2-p-supsol-concavo} 0<z_{M}(x)\leq KM^{\frac{1}{l-1}}\;\text{in}\;\Omega.$$ We want to obtain $M>1$ such that $$\label{eqnova} M\geq\left(\lambda z_{M}^{\beta}|z_{M}|_{L^{\Psi}}^{\alpha}+\theta z_{M}^{\xi}|z_{M}|_{L^{\Lambda}}^{\gamma}\right)\;\mbox{in}\;\Omega$$ holds. 
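In the special case where $-\Delta_{\Phi}$ is the Laplacian (so $l=m=2$), the sup bound $0<z_{M}\leq KM^{\frac{1}{l-1}}$ becomes linear in $M$ with a constant independent of $M$; on $\Omega=(0,1)$ the solution of $-z''=M$ with zero boundary data is $Mx(1-x)/2$, whose supremum is $M/8$. The finite-difference sketch below is only a one-dimensional illustration of this scaling, not the general Orlicz setting of the text (the grid size is an arbitrary choice):

```python
import numpy as np

def solve_poisson_const(M, n=200):
    """Solve -z'' = M on (0,1) with z(0) = z(1) = 0 by finite differences."""
    h = 1.0 / (n + 1)
    # Tridiagonal stiffness matrix of the 1D Laplacian.
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, M * np.ones(n))

# For the Laplacian l = 2, so K M^{1/(l-1)} is linear in M; the exact
# solution is z(x) = M x(1-x)/2 with sup M/8, so any fixed K >= 1/8
# (independent of M) realizes the bound.
for M in (1.0, 10.0, 100.0):
    z = solve_poisson_const(M)
    assert abs(z.max() - M / 8) < 1e-3 * M
```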
Denoting by $I$ the right-hand side of \eqref{eqnova}, we have from \eqref{desig2-p-supsol-concavo} that $I\leq M$ if$$1\geq \lambda \overline{K}M^{\frac{\alpha +\beta }{l-1}-1}+\theta \overline{K}M^{\frac{\xi +\gamma }{l-1}-1}, \label{equiv-rel2}$$where $\overline{K}:=\max \{K^{\beta }|K|_{L^{\Psi }}^{\alpha },K^{\xi }|K|_{L^{\Lambda }}^{\gamma }\}.$ Since $0<\alpha +\beta <l-1<\xi +\gamma ,$ the function $$\Psi (t)=\lambda \overline{K}t^{\rho -1}+\theta \overline{K}t^{\tau -1},\quad t>0,$$where $\rho :=\frac{\alpha +\beta }{l-1}$ and $\tau :=\frac{\xi +\gamma }{l-1},$ belongs to $C^{1}\big((0,\infty ),\mathbb{R}\big)$ and attains a global minimum at$$M_{\lambda ,\theta }:=M(\lambda ,\theta )=L\Biggl(\dfrac{\lambda }{\theta }\Biggr)^{\frac{1}{\tau -\rho }} \label{minimum}$$where $L:=\left(\frac{1-\rho }{\tau -1}\right)^{\frac{1}{\tau -\rho }}.$ The inequality \eqref{equiv-rel2} is equivalent to finding $M_{\lambda ,\theta }>0$ such that $\Psi (M_{\lambda ,\theta })\leq 1.$ By \eqref{minimum} we have $\Psi (M_{\lambda ,\theta })\leq 1$ if and only if $$\lambda \overline{K}\left(\frac{1-\rho}{\tau-1}\right)^{\frac{\rho -1}{\tau -\rho }}\left( \frac{\lambda }{\theta }\right) ^{\frac{\rho -1}{\tau -\rho }}+\theta \overline{K}\left(\frac{1-\rho}{\tau-1}\right)^{\frac{\tau -1}{\tau -\rho }}\left( \frac{\lambda }{\theta }\right) ^{\frac{\tau -1}{\tau -\rho }}\leq 1.$$Notice that the above inequality holds if $\theta >0$ is small enough because $\alpha +\beta <l-1<\xi +\gamma $. Thus for $\lambda >0$ fixed there exists $\theta _{0}=\theta _{0}(\lambda )$ such that for each $\theta \in (0,\theta _{0})$ there is a number $M=M_{\lambda ,\theta }>0$ such that \eqref{equiv-rel2} occurs. Consequently we have \eqref{eqnova}. Therefore $$-\Delta _{\Phi }z_{M}\geq \lambda z_{M}^{\beta }|z_{M}|_{L^{\Psi }}^{\alpha }+\theta z_{M}^{\xi }|z_{M}|_{L^{\Lambda }}^{\gamma }\ \text{in}\ \Omega .$$Considering if necessary a smaller $\theta _{0}>0,$ we get $M\geq 1.$ Therefore $-\Delta _{\Phi }(\mu \eta )\leq -\Delta _{\Phi }z_{M}$ in $\Omega .$ The weak comparison principle implies that $\mu \eta \leq z_{M}$. 
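The closed-form minimizer above follows from solving $\Psi'(t)=0$, which gives $t^{\tau-\rho}=\frac{\lambda(1-\rho)}{\theta(\tau-1)}$, i.e. $t=L(\lambda/\theta)^{\frac{1}{\tau-\rho}}$. This can be checked numerically; in the sketch below the sample values of $\rho,\tau,\lambda,\theta,\overline{K}$ are arbitrary illustrations satisfying $0<\rho<1<\tau$:

```python
import numpy as np

def Psi(t, lam, theta, Kbar, rho, tau):
    # Psi(t) = lam*Kbar*t^(rho-1) + theta*Kbar*t^(tau-1), t > 0
    return lam * Kbar * t**(rho - 1) + theta * Kbar * t**(tau - 1)

def M_opt(lam, theta, rho, tau):
    # Stationary point of Psi: t^(tau-rho) = lam*(1-rho) / (theta*(tau-1))
    L = ((1 - rho) / (tau - 1)) ** (1.0 / (tau - rho))
    return L * (lam / theta) ** (1.0 / (tau - rho))

lam, theta, Kbar, rho, tau = 0.3, 0.05, 2.0, 0.4, 1.7   # 0 < rho < 1 < tau
M = M_opt(lam, theta, rho, tau)
t = np.linspace(0.01, 50.0, 200000)
vals = Psi(t, lam, theta, Kbar, rho, tau)
# The sampled minimum agrees with the closed-form minimizer.
assert abs(t[np.argmin(vals)] - M) < 1e-2
assert Psi(M, lam, theta, Kbar, rho, tau) <= vals.min() + 1e-12
```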
Then $(\mu \eta ,z_{M})$ is a sub-supersolution pair for $(P)_{\lambda ,\theta }.$ The proof is finished. The system case =============== We say that $(u_{1},u_{2})\in (W_{0}^{1,\Phi _{1}}(\Omega )\cap L^{\infty }(\Omega ))\times (W_{0}^{1,\Phi _{2}}(\Omega )\cap L^{\infty }(\Omega ))$ is a (weak) solution of $(P_{2})$ if $$\int_{\Omega }\phi _{i}(|\nabla u_{i}|)\nabla u_{i}\nabla \varphi _{i}=\int_{\Omega }(f_{i}(u_{j})|u_{j}|_{L^{\Psi _{i}}}^{\alpha _{i}}+g_{i}(u_{j})|u_{j}|_{L^{\Lambda _{i}}}^{\gamma _{i}})\varphi _{i},$$for all $\varphi _{i}\in W_{0}^{1,\Phi _{i}}(\Omega )$ with $i,j=1,2$ and $i\neq j.$ We say that the pairs $(\underline{u}_{i},\overline{u}_{i}),i=1,2$ are sub-supersolution pairs for $(P_{2})$ if $\underline{u}_{i},\overline{u}_{i}\in W_{0}^{1,\Phi _{i}}(\Omega )\cap L^{\infty }(\Omega )$ are nonnegative functions with $0<\underline{u}_{i}\leq \overline{u}_{i}$ in $\Omega $ and if for all $\varphi _{i}\in W_{0}^{1,\Phi _{i}}(\Omega )$ with $\varphi _{i}\geq 0$ the following inequalities are verified $$\left\{ \begin{array}{r} \displaystyle\int_{\Omega }\phi _{i}(|\nabla \underline{u}_{i}|)\nabla \underline{u}_{i}\nabla \varphi _{i}\leq \displaystyle\int_{\Omega }\left( f_{i}(\underline{u}_{j})|\underline{u}_{j}|_{L^{\Psi _{i}}}^{\alpha _{i}}+g_{i}(\underline{u}_{j})|\underline{u}_{j}|_{L^{\Lambda _{i}}}^{\gamma _{i}}\right) \varphi _{i}, \\ \displaystyle\int_{\Omega }\phi _{i}(|\nabla \overline{u}_{i}|)\nabla \overline{u}_{i}\nabla \varphi _{i}\geq \displaystyle\int_{\Omega }\left( f_{i}(\overline{u}_{j})|\overline{u}_{j}|_{L^{\Psi _{i}}}^{\alpha _{i}}+g_{i}(\overline{u}_{j})|\overline{u}_{j}|_{L^{\Lambda _{i}}}^{\gamma _{i}}\right) \varphi _{i}, \end{array} \right. 
\label{eq2.1}$$for all $\varphi _{i}\in W_{0}^{1,\Phi _{i}}(\Omega )$ with $i,j=1,2$ and $i\neq j.$ The following lemma is needed to obtain a solution for system $(P_2).$ \[sub-supermethod-sys\] Suppose that $f_i,g_i:[0, +\infty) \rightarrow \mathbb{R}, i=1,2$ are nondecreasing, continuous and nonnegative functions. Consider also that $\alpha_i, \gamma_i \geq 0, i =1,2$ and that there exist sub-supersolution pairs $(\underline{u}_i,\overline{u}_i), i=1,2$ for $(P_2).$ Then there exists a solution $(u,\widetilde{u})$ for $(P_2)$ with $u \in [\underline{u}_1,\overline{u}_1]$ and $\widetilde{u} \in [\underline{u}_2,\overline{u}_2].$ Consider $u_1$ the solution of the problem $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_1} u_1 &=& f_1(\underline{u}_2)| \underline{u}_2|^{\alpha_1}_{L^{\Psi_1}} +g_1(\underline{u}_2)| \underline{u}_2|^{\gamma_1}_{L^{\Lambda_1}}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} u_1&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ Using the monotonicity of $f_1, g_1$ and the fact that $\underline{u}_2 \leq \overline{u}_2$ a.e. in $\Omega$ we get $$- \Delta_{\Phi_1} \overline{u}_1 \geq f_1(\overline{u}_2)|\overline{u}_2|^{\alpha_1}_{L^{\Psi_1}} + g_1(\overline{u}_2) |\overline{u}_2|^{\gamma_1}_{L^{\Lambda_1}} \geq -\Delta_{\Phi_1}u_1 \ \text{in} \ \Omega,$$ therefore $u_1 \leq \overline{u}_1.$ Note also that $$-\Delta_{\Phi_1} u_1 = f_1(\underline{u}_2)|\underline{u}_2|^{\alpha_1}_{L^{\Psi_1}} + g_1(\underline{u}_2) |\underline{u}_2|^{\gamma_1}_{L^{\Lambda_1}} \geq -\Delta_{\Phi_1} \underline{u}_1 \ \text{in} \ \Omega.$$ Therefore $\underline{u}_1 \leq u_1 \leq \overline{u}_1$ a.e. in $\Omega.$ Denote by $\widetilde{u}_1$ the weak solution of the problem $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_2} \widetilde{u}_1 &=& f_2(\underline{u}_1)| \underline{u}_1|^{\alpha_2}_{L^{\Psi_2}} +g_2(\underline{u}_1)| \underline{u}_1|^{\gamma_2}_{L^{\Lambda_2}}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} 
\widetilde{u}_1&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ From the definition of $\underline{u}_2$ and $\overline{u}_2$ we have that $- \Delta_{\Phi_2} \underline{u}_2 \leq-\Delta_{\Phi_2} \widetilde{u}_1 \leq - \Delta_{\Phi_2} \overline{u}_2$ in $\Omega.$ Therefore $\underline{u}_2 \leq \widetilde{u}_1 \leq \overline{u}_2$ in $\Omega.$ Consider $u_2$ the solution of the problem $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_1} u_2 &=& f_1(\widetilde{u}_1)| \widetilde{u}_1|^{\alpha_1}_{L^{\Psi_1}} +g_1(\widetilde{u}_1)| \widetilde{u}_1|^{\gamma_1}_{L^{\Lambda_1}}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} u_2&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ Using the fact that $\underline{u}_2 \leq \widetilde{u}_1 \leq \overline{u}_2 $ in $\Omega$ and the monotonicity of the functions $f_1$ and $g_1,$ we have $- \Delta_{\Phi_1} u_1 \leq - \Delta_{\Phi_1} u_2 \leq - \Delta_{\Phi_1} \overline{u}_1$ in $\Omega.$ Therefore $\underline{u}_1 \leq u_1 \leq u_2 \leq \overline{u}_1$ in $\Omega.$ Consider $\widetilde{u}_2$ the solution of the problem $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_2} \widetilde{u}_2 &=& f_2(u_1)| u_1|^{\alpha_2}_{L^{\Psi_2}} +g_2(u_1)| u_1|^{\gamma_2}_{L^{\Lambda_2}}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} \widetilde{u}_2&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ A direct computation implies that $\underline{u}_2 \leq\widetilde{u}_1 \leq \widetilde{u}_2 \leq \overline{u}_2$ in $\Omega.$ Proceeding with the previous reasoning we construct sequences $u_n$ and $\widetilde{u}_n$ satisfying $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_1} u_n &=& f_1(\widetilde{u}_{n-1})| \widetilde{u}_{n-1}|^{\alpha_1}_{L^{\Psi_1}} +g_1(\widetilde{u}_{n-1})| \widetilde{u}_{n-1}|^{\gamma_1}_{L^{\Lambda_1}}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} u_n&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. 
\end{aligned}$$ and $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_2} \widetilde{u}_n &=& f_2(u_{n-1})| u_{n-1}|^{\alpha_2}_{L^{\Psi_2}} +g_2(u_{n-1})| u_{n-1}|^{\gamma_2}_{L^{\Lambda_2}}\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} \widetilde{u}_n&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ where $\widetilde{u}_0 : = \overline{u}_2$ and $u_0 := \overline{u}_1.$ Arguing as in Lemma \[sub-supermethod\] we obtain the result. A sublinear system ------------------ In this section we use Lemma \[sub-supermethod-sys\] and suitable sub-supersolution pairs to prove the existence of a solution for the nonlocal system $$\left\{ \begin{array}{rcl} -\Delta _{\Phi _{1}}u & = & v^{\beta _{1}}|v|_{\Psi _{1}}^{\alpha _{1}}\ \mbox{in}\ \Omega , \\ -\Delta _{\Phi _{2}}v & = & u^{\beta _{2}}|u|_{\Psi _{2}}^{\alpha _{2}}\ \mbox{in}\ \Omega , \\ u=v & = & 0\ \mbox{on}\ \partial \Omega , \end{array} \right. \eqno{(P^{'}_S)}$$where $\alpha _{i}$ and $\beta _{i},i=1,2$ are constants satisfying certain conditions. It is interesting to note that the set of hypotheses of the next result is different from that of the system version of $(P)$ considered in [@CFL Theorem 5.2] in the constant exponent case. \[teo-sublinear-sys\] Suppose that $\alpha_i, \beta_i \geq 0$ with $0 < \alpha_1 + \beta_1 < l_i -1,0< \alpha_2 + \beta_2 < l_i -1, i =1,2$. Then $(P^{^{\prime }}_S)$ has a positive solution. Let $\lambda>0$ and consider $z_{\lambda}\in W_{0}^{1,\Phi_1}(\Omega)\cap L^{\infty}(\Omega)$ and $y_{\lambda}\in W_{0}^{1,\Phi_2}(\Omega)\cap L^{\infty}(\Omega)$ the unique solutions of $-\Delta_{\Phi_1}z_{\lambda}=\lambda$ and $-\Delta_{\Phi_2}y_{\lambda}=\lambda$ in $\Omega$ with zero boundary data, where $\lambda$ will be chosen later. 
For $\lambda >0$ sufficiently large, by Lemma \[Tan-Fang\] there is a constant $K>0$ that does not depend on $\lambda$ such that $$\label{desig1-p-supsol-sis} 0<z_{\lambda}(x)\leq K\lambda^{\frac{1}{l_{1}-1}}\;\text{in}\;\Omega,$$ and $$\label{ddesig1-p-supsol} 0<y_{\lambda}(x)\leq K\lambda^{\frac{1}{l_{2}-1}}\;\text{in}\;\Omega.$$ Since $0< \alpha_1 + \beta_1 < l_2 -1,$ we can choose $\lambda >0$ large enough satisfying $K^{\beta_1} |K|^{\alpha_1}_{L^{\Psi_1}}\lambda^{\frac{ \alpha_1 + \beta_1}{l_2 -1}} \leq \lambda.$ Thus from \eqref{ddesig1-p-supsol} we have $y_{\lambda}^{\beta_1} |y_{\lambda}|^{\alpha_1}_{L^{\Psi_1}} \leq \lambda \ \text{in} \ \Omega. $ Therefore $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_1}z_{\lambda}&\geq& y^{\beta_1}_{\lambda} |y_{\lambda}|^{\alpha_1}_{L^{\Psi_1}}\;\;\mbox{in}\;\;\Omega,\\ \vspace{.2cm} z_{\lambda}&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ From \eqref{desig1-p-supsol-sis} and the fact that $0< \alpha_2 + \beta_2< l_1 -1$ we also have that $$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_2}y_{\lambda}&\geq& z^{\beta_2}_{\lambda} |z_{\lambda}|^{\alpha_2}_{L^{\Psi_2}}\;\;\mbox{in}\;\;\Omega,\\ \vspace{.2cm} y_{\lambda}&=&0\;\;\mbox{on}\;\;\partial\Omega, \end{array} \right. \end{aligned}$$ for $\lambda >0 $ large enough. Since $\partial \Omega$ is $C^2,$ there is a constant $\delta >0$ such that $d \in C^{2}(\overline{\Omega_{3 \delta}})$ and $|\nabla d(x)| \equiv 1,$ where $d(x):= \operatorname{dist}(x,\partial \Omega)$ and $\overline{ \Omega_{3 \delta}}:=\{x \in \overline{\Omega};\ d(x) \leq 3 \delta\}$. 
For $\sigma \in (0,\delta)$ the function $\eta_i = \eta_i(k,\sigma),i=1,2,$ defined by $$\eta_i(x)=\left\{ \begin{array}{lcl} e^{kd(x)}-1 & \text{ if } & d(x)<\sigma, \\ e^{k\sigma}-1+\int_{\sigma}^{d(x)}ke^{k\sigma}\Big(\frac{2\delta-t}{2\delta-\sigma}\Big)^{\frac{m_i}{l_i-1}}dt & \text{ if } & \sigma\leq d(x)<2\delta, \\ e^{k\sigma}-1+\int_{\sigma}^{2\delta}ke^{k\sigma}\Big(\frac{2\delta-t}{2\delta-\sigma}\Big)^{\frac{m_i}{l_i-1}}dt & \text{ if } & 2\delta \leq d(x) \end{array} \right.$$ belongs to $C^{1}_{0}(\overline{\Omega})$ for $i=1,2,$ where $k>0$ is an arbitrary constant. Note that $$-\Delta_{\Phi_i}(\mu\eta_i)= \begin{cases} - \mu k^2 e^{kd(x)} \frac{d}{dt} \left( \phi_i(t)t\right)\bigg\vert_{t= \mu k e^{kd(x)}} - \phi_i(\mu k e^{kd(x)})\mu k e^{kd(x)}\Delta d \;\; \mbox{ if}\quad d(x)<\sigma, \\ \mu k e^{k \sigma}\left( \frac{m_i}{l_i-1}\right)\left(\frac{2\delta -d(x)}{2\delta- \sigma} \right)^{\frac{m_i}{l_i-1}-1}\left( \frac{1}{2\delta - \sigma}\right) \frac{d}{dt}\left( \phi_i(t)t\right)\bigg\vert_{t = \mu k e^{k \sigma} \left( \frac{2\delta -d(x)}{2\delta - \sigma}\right)^{\frac{m_i}{l_i-1}}} \\ - \phi_i\left( \mu k e^{k\sigma} \left( \frac{2\delta -d(x)}{2\delta - \sigma}\right)^{\frac{m_i}{l_i-1}}\right) \mu k e^{k\sigma} \left( \frac{2\delta-d(x)}{2\delta - \sigma}\right)^{\frac{m_i}{l_i-1}}\Delta d \;\; \mbox{ if}\quad \sigma < d(x)<2\delta, \\ 0\;\; \mbox{ if}\quad 2\delta<d(x) \end{cases}$$ for all $\mu >0$ and $i=1,2.$ Arguing as in the proof of Theorem \[teo-sublinear\] we have $-\Delta_{\Phi_i} (\mu \eta_i) \leq 0, i=1,2,$ for $k>0$ large enough when $0< d(x) < \sigma$. 
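The claim that $\eta_i\in C^{1}_{0}(\overline{\Omega})$ rests on the one-sided derivatives of the three branches matching: the middle-branch integrand equals $ke^{k\sigma}$ at $t=\sigma$ and vanishes at $t=2\delta$. This can be verified numerically for the profile of $\eta_i$ as a function of the distance $d=d(x)$; the parameter values below ($k$, $\sigma$, $\delta$ and the exponent $p=m_i/(l_i-1)$) are arbitrary illustrative choices:

```python
import math

# Illustrative parameters consistent with 0 < sigma < delta.
k, sigma, delta, p = 3.0, 0.1, 0.3, 1.5

def eta(d):
    """Profile of eta_i as a function of the distance d = d(x)."""
    if d < sigma:
        return math.exp(k * d) - 1.0
    dd = min(d, 2 * delta)
    # Closed form of int_sigma^dd k e^{k sigma} ((2 delta - t)/(2 delta - sigma))^p dt.
    integral = (k * math.exp(k * sigma) * (2 * delta - sigma) / (p + 1)
                * (1.0 - ((2 * delta - dd) / (2 * delta - sigma)) ** (p + 1)))
    return math.exp(k * sigma) - 1.0 + integral

def deriv(d, h=1e-6):
    return (eta(d + h) - eta(d - h)) / (2 * h)

# eta is C^1: one-sided derivatives match at d = sigma and at d = 2*delta.
assert abs(deriv(sigma - 1e-4) - deriv(sigma + 1e-4)) < 1e-2
assert abs(deriv(2 * delta - 1e-4)) < 1e-2 and deriv(2 * delta + 1e-4) == 0.0
assert eta(0.0) == 0.0 and eta(3 * delta) == eta(2 * delta)  # constant past 2*delta
```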
Reasoning as in the proof of Theorem \[teo-sublinear\] we get $$\label{midle2} - \Delta_{\Phi_1} (\mu \eta_1) \leq \max\left\{ \frac{K_1 }{2\delta-\sigma}, K_2 \right\} \max\{(\mu k e^{k\sigma})^{m_1-1}, (\mu k e^{k\sigma})^{l_1-1}\},$$ and $$\label{midle3} - \Delta_{\Phi_2} (\mu \eta_2) \leq \max\left\{ \frac{K_3 }{2\delta-\sigma}, K_4 \right\} \max\{(\mu k e^{k\sigma})^{m_2-1}, (\mu k e^{k\sigma})^{l_2-1}\},$$ for $\sigma < d(x) < 2\delta,$ where $K_i , i=1,2,3,4,$ are positive constants that do not depend on $k>0.$ Consider $\sigma = \frac{\ln 2}{k}$ and $\mu = e^{-k}.$ We have $\eta_i(x) \geq e^{k\sigma} -1 \geq 1$ for all $x \in \Omega$ with $d(x)\geq\sigma$ and $i=1,2.$ Thus there is a constant $K_5 >0$ such that $$(\mu \eta_j)^{\beta_i} |\mu \eta_j|^{\alpha_i}_{L^{\Psi_i}} \geq \mu^{\alpha_i + \beta_i} K_5, \quad i,j=1,2,\ i \neq j,$$ for $\sigma < d(x) < 2\delta.$ Since $0 < \alpha_i + \beta_i < l_i-1$, L'Hôpital's rule implies that $$\lim_{k \rightarrow +\infty} \frac{k^{l_i-1}}{e^{k(l_i-1 - (\alpha_i + \beta_i))}} = 0, \quad i=1,2.$$ Thus, it is possible to consider $k_0>0$ large enough such that $$K_5 \geq \max \left\{ K_1 \frac{1}{2\delta-\frac{\ln 2}{k}}, K_2 \right\}\max\{2^{m_1-1}, 2^{l_1-1}\} \frac{k^{l_1-1}}{e^{k(l_1-1 -(\alpha_1 + \beta_1))}}$$ and $$K_5 \geq \max \left\{ K_3 \frac{1}{2\delta-\frac{\ln 2}{k}}, K_4 \right\}\max\{2^{m_2-1}, 2^{l_2-1}\} \frac{k^{l_2-1}}{e^{k(l_2-1 -(\alpha_2 + \beta_2))}},$$ for all $k \geq k_0.$ Thus for $k>0$ large enough we have $-\Delta_{\Phi_i}(\mu \eta_i) \leq (\mu \eta_j)^{\beta_i} |\mu \eta_j|^{\alpha_i}_{L^{\Psi_i}},i,j=1,2,$ $i \neq j,$ for $\sigma < d(x) < 2\delta. $ If $d(x) > 2\delta$ we have $-\Delta_{\Phi_i} (\mu \eta_i) = 0 \leq (\mu \eta_j)^{\beta_i} |\mu \eta_j|^{\alpha_i}_{L^{\Psi_i}}, i,j=1,2,$ with $i \neq j.$ For $k >0$ large enough we also have that $-\Delta_{\Phi_1}(\mu \eta_1) \leq - \Delta_{\Phi_1} z_\lambda , -\Delta_{\Phi_2} (\mu \eta_2) \leq - \Delta_{\Phi_2} y_\lambda$ in $\Omega$. 
Therefore $\mu \eta_1 \leq z_{\lambda}, \mu \eta_2 \leq y_{\lambda}$ in $\Omega.$ The result follows. A concave-convex system ----------------------- In this section we prove the existence of a solution for a concave-convex system of the type $$\left\{ \begin{array}{rcl} -\Delta _{\Phi _{1}}u & = & \lambda v^{\beta _{1}}|v|_{\Psi _{1}}^{\alpha _{1}}+\theta v^{\xi _{1}}|v|_{L^{\Lambda _{1}}}^{\gamma _{1}}\ \mbox{in}\ \Omega , \\ -\Delta _{\Phi _{2}}v & = & \lambda u^{\beta _{2}}|u|_{\Psi _{2}}^{\alpha _{2}}+\theta u^{\xi _{2}}|u|_{L^{\Lambda _{2}}}^{\gamma _{2}}\ \mbox{in}\ \Omega , \\ u=v & = & 0\ \mbox{on}\ \partial \Omega , \end{array} \right. \eqno{(P^{'})_{\lambda,\theta}}$$where $\alpha _{i},\beta _{i},\gamma _{i},\xi _{i},i=1,2$ are constants satisfying certain conditions. Suppose that $\alpha_i,\beta_i, \gamma_i,\xi_i , i=1,2$ are nonnegative constants and suppose that $0<\alpha_i+\beta_i<l_i-1,i=1,2$. The following assertions hold. **$(i)$** If $m_2-1<\xi_1+\gamma_1$ and $m_1-1<\xi_2+\gamma_2 , $ then for each $\theta>0$ there exists $\lambda_{0}>0$ such that for each $\lambda\in(0,\lambda_{0})$ the problem $(P^{^{\prime }})_{\lambda,\theta}$ has a positive solution $u_{\lambda,\theta}.$ **$(ii)$** If $0< \alpha_1 + \beta_1 < l_2 -1, 0<\alpha_2 + \beta_2 < l_1 -1 < \xi_1 + \gamma_1 < l_2 -1$ and $\xi_2 + \gamma_2 < l_1 -1$ then for each $\lambda >0$ there exists $\theta_0 >0$ such that for each $\theta \in (0,\theta_0)$ the problem $(P^{^{\prime }})_{\lambda,\theta}$ has a positive solution $u_{\lambda,\theta}.$ Suppose that $(i)$ occurs. Consider $z_{\lambda }\in W_{0}^{1,\Phi _{1}}({\Omega })\cap L^{\infty }(\Omega )$ and $y_{\lambda }\in W_{0}^{1,\Phi _{2}}({\Omega })\cap L^{\infty }(\Omega )$ the unique solutions of $-\Delta_{\Phi_1}z_{\lambda}=\lambda$ and $-\Delta_{\Phi_2}y_{\lambda}=\lambda$ in $\Omega$ with zero boundary data, where $\lambda \in (0,1)$ will be chosen later. 
Lemma \[Tan-Fang\] implies that for $\lambda >0$ small enough there exists a constant $K>0$ that does not depend on $\lambda $ such that $$0<z_{\lambda }(x)\leq K\lambda ^{\frac{1}{m_{1}-1}}\;\text{in}\;\Omega , \label{desig1-p-supsol-concavo-sys}$$$$0<y_{\lambda }(x)\leq K\lambda ^{\frac{1}{m_{2}-1}}\;\text{in}\;\Omega . \label{desig2-p-supsol-concavo-sys}$$We will prove, for each $\theta >0,$ that there exists $\lambda _{0}>0$ such that$$\lambda y_{\lambda }^{\beta _{1}}|y_{\lambda }|_{L^{\Psi _{1}}}^{\alpha _{1}}+\theta y_{\lambda }^{\xi _{1}}|y_{\lambda }|_{L^{\Lambda _{1}}}^{\gamma _{1}}\leq \lambda \label{c_1}$$and $$\lambda z_{\lambda }^{\beta _{2}}|z_{\lambda }|_{L^{\Psi _{2}}}^{\alpha _{2}}+\theta z_{\lambda }^{\xi _{2}}|z_{\lambda }|_{L^{\Lambda _{2}}}^{\gamma _{2}}\leq \lambda \label{c_2}$$in $\Omega .$ Since $0<\alpha _{i}+\beta _{i},i=1,2,$ $m_{2}-1<\xi _{1}+\gamma _{1}$ and $m_{1}-1<\xi _{2}+\gamma _{2}$ there exists $\lambda _{0}>0$ such that $$\lambda ^{\frac{m_{2}-1+\alpha _{1}+\beta _{1}}{m_{2}-1}}K^{\beta _{1}}|K|_{L^{\Psi _{1}}}^{\alpha _{1}}+\theta \lambda ^{\frac{\xi _{1}+\gamma _{1}}{m_{2}-1}}K^{\xi _{1}}|K|_{L^{\Lambda _{1}}}^{\gamma _{1}}\leq \lambda \label{c_3}$$and $$\lambda ^{\frac{m_{1}-1+\alpha _{2}+\beta _{2}}{m_{1}-1}}K^{\beta _{2}}|K|_{L^{\Psi _{2}}}^{\alpha _{2}}+\theta \lambda ^{\frac{\xi _{2}+\gamma _{2}}{m_{1}-1}}K^{\xi _{2}}|K|_{L^{\Lambda _{2}}}^{\gamma _{2}}\leq \lambda \label{c_4}$$for all $\lambda \in (0,\lambda _{0}).$ From \eqref{desig1-p-supsol-concavo-sys}, \eqref{desig2-p-supsol-concavo-sys}, \eqref{c_3} and \eqref{c_4} we obtain \eqref{c_1} and \eqref{c_2}. 
Therefore $$-\Delta _{\Phi _{1}}z_{\lambda }\geq \lambda y_{\lambda }^{\beta _{1}}|y_{\lambda }|_{L^{\Psi _{1}}}^{\alpha _{1}}+\theta y_{\lambda }^{\xi _{1}}|y_{\lambda }|_{L^{\Lambda _{1}}}^{\gamma _{1}}$$and $$-\Delta _{\Phi _{2}}y_{\lambda }\geq \lambda z_{\lambda }^{\beta _{2}}|z_{\lambda }|_{L^{\Psi _{2}}}^{\alpha _{2}}+\theta z_{\lambda }^{\xi _{2}}|z_{\lambda }|_{L^{\Lambda _{2}}}^{\gamma _{2}}$$in $\Omega $ for all $\lambda \in (0,\lambda _{0}).$ Consider $\eta_i, \delta, \sigma$ and $\mu$ as in the proof of Theorem \[teo-sublinear-sys\]. Since $0< \alpha_i + \beta_i < l_i -1, i=1,2,$ there exists $\mu >0$ with $\mu \eta_1 \leq z_{\lambda},$ $\mu \eta_2 \leq y_{\lambda}$ and such that the inequalities $$-\Delta_{\Phi_1}(\mu \eta_1) \leq \lambda, \quad -\Delta_{\Phi_1}(\mu \eta_1) \leq \lambda (\mu \eta_2)^{\beta_1} |\mu \eta_2|^{\alpha_1}_{L^{\Psi_1}} + \theta (\mu \eta_2)^{\xi_1}|\mu \eta_2|^{\gamma_1}_{L^{\Lambda_1}}$$ and $$-\Delta_{\Phi_2}(\mu \eta_2) \leq \lambda, \quad -\Delta_{\Phi_2}(\mu \eta_2) \leq \lambda (\mu \eta_1)^{\beta_2} |\mu \eta_1|^{\alpha_2}_{L^{\Psi_2}} + \theta (\mu \eta_1)^{\xi_2}|\mu \eta_1|^{\gamma_2}_{L^{\Lambda_2}}$$ hold in $\Omega.$ Thus by Lemma \[sub-supermethod-sys\] we have the first part of the result. In order to prove the second part of the result consider $\eta_i,\delta$ and $\sigma$ as in the first part and let $\lambda >0$ be fixed. 
Since $0< \alpha_i + \beta_{i}< l_{i}-1,i=1,2,$ there exists $\mu>0$ depending only on $\lambda $ such that $$- \Delta_{\Phi_i}(\mu \eta_i) \leq 1 \ \text{and} \ -\Delta_{\Phi_i}(\mu \eta_i) \leq \lambda (\mu \eta_j)^{\beta_i} |\mu \eta_j|^{\alpha_i}_{L^{\Psi_i}}$$ in $\Omega $ with $i,j=1,2$ and $i \neq j.$ Let $M>0$ be a number which will be chosen later, and consider $z_{M}\in W_{0}^{1,\Phi _{1}}(\Omega )\cap L^{\infty }(\Omega )$ and $y_{M}\in W_{0}^{1,\Phi _{2}}(\Omega )\cap L^{\infty }(\Omega )$ solutions of$$\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_1}z_{M} &=&M\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} z_M&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}\hspace{2cm}\begin{aligned} \left\{\begin{array}{rcl} -\Delta_{\Phi_2}y_{M} &=&M\;\;\mbox{in} \;\;\Omega,\\ \vspace{.2cm} y_M&=&0\;\;\mbox{on}\;\;\partial\Omega. \end{array} \right. \end{aligned}$$ If $M >0$ is large enough, then by Lemma \[Tan-Fang\] there exists a constant $K>0$ that does not depend on $M$ such that $$\label{desig3-p-supsol-concavo} 0<z_{M}(x)\leq KM^{\frac{1}{l_{1}-1}}\;\text{in}\;\Omega,$$ $$\label{desig4-p-supsol-concavo} 0<y_{M}(x)\leq KM^{\frac{1}{l_{2}-1}}\;\text{in}\;\Omega.$$ In order to construct $\underline{u}_i ,\overline{u}_i,i=1,2,$ we will show that there exists $\theta_0 >0$ depending on $\lambda$ with the following property: if we consider $\theta \in (0,\theta_0)$ then there will be a constant $M$ depending only on $\lambda$ and $\theta$ satisfying $$\label{c_5} M \geq \lambda {y_M}^{\beta_1} |y_M|^{\alpha_1}_{L^{\Psi_1}} + \theta {y_M}^{\xi_1} |y_M|^{\gamma_1}_{L^{\Lambda_1}}$$ and $$\label{c_6} M \geq \lambda {z_M}^{\beta_2} |z_M|^{\alpha_2}_{L^{\Psi_2}} + \theta {z_M}^{\xi_2} |z_M|^{\gamma_2}_{L^{\Lambda_2}}$$ in $\Omega.$ From \eqref{desig3-p-supsol-concavo} and \eqref{desig4-p-supsol-concavo} we have that \eqref{c_5} and \eqref{c_6} occur if $M \geq 1$ and $$\label{equiv-f2} \lambda \overline{K} M^{\rho-1} + \theta \overline{K}M^{\tau-1} \leq 1$$ where $\overline{K}:= \max \{ K^{\beta_1} |K|^{\alpha_1}_{L^{\Psi_1}}, K^{\beta_2} 
|K|^{\alpha_2}_{L^{\Psi_2}}, K^{\xi_1} |K|^{\gamma_1}_{L^{\Lambda_1}}, K^{\xi_2}|K|^{\gamma_2}_{L^{\Lambda_2}} \},$ $$\rho:= \max\left\{\frac{\alpha_1 + \beta_1}{l_2 -1}, \frac{\alpha_2 + \beta_2}{l_1 -1}\right\} \ \text{and} \ \tau:= \max \left\{ \frac{\gamma_1 + \xi_1}{ l_2 -1}, \frac{\gamma_2 + \xi_2}{l_1 -1}\right\}.$$ Since $0<\rho <1$ and $\tau >1$ the function $$\Psi (t)=\lambda \overline{K}t^{\rho -1}+\theta \overline{K}t^{\tau -1},\quad t>0,$$belongs to $C^{1}\big((0,\infty ),\mathbb{R}\big)$ and attains a global minimum at$$M_{\lambda ,\theta }:=M(\lambda ,\theta )=L\Biggl(\dfrac{\lambda }{\theta }\Biggr)^{\frac{1}{\tau -\rho }} \label{minimum2}$$where $L:=\left(\frac{1-\rho }{\tau -1}\right)^{\frac{1}{\tau -\rho }}.$ The inequality \eqref{equiv-f2} is equivalent to finding $M_{\lambda ,\theta }>0$ such that $\Psi (M_{\lambda ,\theta })\leq 1.$ By \eqref{minimum2} we have $\Psi (M_{\lambda ,\theta })\leq 1$ if and only if $$\lambda \overline{K}\left(\frac{1-\rho}{\tau-1}\right)^{\frac{\rho -1}{\tau -\rho }}\left( \frac{\lambda }{\theta }\right) ^{\frac{\rho -1}{\tau -\rho }}+\theta \overline{K}\left(\frac{1-\rho}{\tau-1}\right)^{\frac{\tau -1}{\tau -\rho }}\left( \frac{\lambda }{\theta }\right) ^{\frac{\tau -1}{\tau -\rho }}\leq 1.$$Notice that the above inequality holds if $\theta >0$ is small enough because $0<\rho <1$ and $\tau >1.$ Thus for $\lambda >0$ fixed there exists $\theta _{0}=\theta _{0}(\lambda )$ such that for each $\theta \in (0,\theta _{0})$ there is a number $M=M_{\lambda ,\theta }>0$ such that \eqref{equiv-f2} occurs. Thus we can consider $M_{\lambda ,\theta }$ large enough such that \eqref{c_5} and \eqref{c_6} occur. 
Therefore $$-\Delta _{\Phi _{1}}z_{M}\geq \lambda y_{M}^{\beta _{1}}|y_{M}|_{L^{\Psi _{1}}}^{\alpha _{1}}+\theta y_{M}^{\xi _{1}}|y_{M}|_{L^{\Lambda _{1}}}^{\gamma _{1}}$$and $$-\Delta _{\Phi _{2}}y_{M}\geq \lambda z_{M}^{\beta _{2}}|z_{M}|_{L^{\Psi _{2}}}^{\alpha _{2}}+\theta z_{M}^{\xi _{2}}|z_{M}|_{L^{\Lambda _{2}}}^{\gamma _{2}}$$in $\Omega .$ Considering if necessary a smaller $\theta _{0}>0,$ we get $$-\Delta _{\Phi _{1}}(\mu \eta _{1})\leq 1\leq M_{\lambda ,\theta _{0}}\leq M_{\lambda ,\theta }$$and $$-\Delta _{\Phi _{2}}(\mu \eta _{2})\leq 1\leq M_{\lambda ,\theta _{0}}\leq M_{\lambda ,\theta }$$in $\Omega $ for all $\theta \in (0,\theta _{0})$ because $M_{\lambda ,\theta }\rightarrow +\infty $ as $\theta \rightarrow 0^{+}$ and $\theta \longmapsto M_{\lambda ,\theta }$ is nonincreasing. Therefore $-\Delta _{\Phi _{1}}(\mu \eta _{1})\leq -\Delta _{\Phi _{1}}z_{M},$ $-\Delta _{\Phi _{2}}(\mu \eta _{2})\leq -\Delta _{\Phi _{2}}y_{M}$ in $\Omega .$ The weak comparison principle implies that $\mu \eta _{1}\leq z_{M}$ and $\mu \eta _{2}\leq y_{M}$ in $\Omega $. The proof is finished. Final comments ============== A slight modification in the arguments of Lemma \[sub-supermethod-sys\] allows us to study a more general class of systems given by$$\left\{ \begin{array}{rcl} \label{problema-(P)}-\Delta _{\Phi _{1}}u & = & f_{1}(u,v)|v|_{L^{\Psi _{1}}}^{\alpha _{1}}+g_{1}(u,v)|v|_{L^{\Lambda _{1}}}^{\gamma _{1}}\;\;\mbox{in}\;\;\Omega , \\ -\Delta _{\Phi _{2}}v & = & f_{2}(u,v)|u|_{L^{\Psi _{2}}}^{\alpha _{2}}+g_{2}(u,v)|u|_{L^{\Lambda _{2}}}^{\gamma _{2}}\;\;\mbox{in}\;\;\Omega , \\ u=v & = & 0\;\;\mbox{on}\;\;\partial \Omega , \end{array} \right. 
\eqno{(\widetilde{P})}$$with $f_{i},g_{i}:[0,+\infty )\times [0,+\infty )\rightarrow [0,+\infty ),i=1,2,$ nondecreasing continuous functions in the variables $u$ and $v.$ The arguments used in this work also allow us to obtain results when, for example, the functions $f_{i}$ and $g_{i}$ are power functions with suitable exponents. In order to avoid a more technical exposition, we chose not to prove results related to the case mentioned before, that is, systems involving the variables $u$ and $v$ in the local terms of each equation of $(\widetilde{P}).$ [99]{} Adams, R. A. & Fournier, J. F., Sobolev Spaces, Academic Press, New York, (2003). A. Ambrosetti, H. Brezis and G. Cerami, Combined effects of concave and convex nonlinearities in some elliptic problems, J. Funct. Anal. 122 (2) (1994), 519-543. C.O. Alves and F.J.S.A. Corrêa, On existence of solutions for a class of problem involving a nonlinear operator, Comm. Appl. Nonlinear Anal., 8 (2001), 43-56. C.O. Alves, D. P. Covei, Existence of solutions for a class of nonlocal elliptic problem via sub-supersolution, Nonlinear Anal., Real World Appl. 23 (2015), 1-8. C.O. Alves, F.J.S.A. Corrêa, and T.F. Ma, Positive solutions for a quasilinear elliptic equation of Kirchhoff type, Comput. Math. Appl., 49 (2005), 85-93. Y. Chen, H. Gao, Existence of positive solutions for nonlocal and nonvariational elliptic system, Bull. Austral. Math. Soc., Vol. 72 (2005), 271-281. V. Benci, D. Fortunato, and L. Pisani, Solitons like solutions of a Lorentz invariant equation in dimension 3, Rev. Math. Phys. 10 (1998), 315-344. M. Chipot and B. Lovat, Some remarks on non local elliptic and parabolic problems, Nonlinear Anal., 30 (1997), 4619-4627. M. Chipot and B. Lovat, On the asymptotic behaviour of some nonlocal problems, Positivity (1999), 65-81. M. Chipot and J.F. Rodrigues, On a class of nonlinear elliptic problems, Mathematical Modelling and Numerical Analysis, 26, (1992), 447-468. F.J.S.A. Corrêa and G.M. 
Figueiredo, On the existence of positive solution for an elliptic equation of Kirchhoff-type via Moser iteration method, Boundary Value Problems, Vol. 2006 (2006), Article ID 79679, 1-10. F.J.S.A. Corrêa, G.M. Figueiredo, F.P.M. Lopes, On the existence of positive solutions for a nonlocal elliptic problem involving the p-Laplacian and the generalized Lebesgue space $L^{p(x)}(\Omega)$, Differ. Integral Equ. 21(3-4) (2008), 305-324. F.J.S.A. Corrêa and F.P.M. Lopes, Positive solutions for a class of nonlocal elliptic systems, Comm. Appl. Nonlinear Anal., 14 (2007), 67-77. F.J.S.A. Corrêa and S.D.B. Menezes, Positive solutions for a class of nonlocal problems, Progress in Nonlinear Differential Equations and Their Applications, Volume in honor of Djairo G. de Figueiredo, 66 (2005), 195-206. W. Deng, Z. Duan, and C. Xie, The blow-up rate for a degenerate parabolic equation with a nonlocal source, J. Math. Anal. Appl., 264 (2001), 577-597. W. Deng, Y. Li, and C. Xie, Existence and nonexistence of global solutions of some nonlocal degenerate parabolic equations, Appl. Math. Lett., 16 (2003), 803-808. W. Deng, Y. Li, and C. Xie, Blow-up and global existence for a nonlocal degenerate parabolic system, J. Math. Anal. Appl., 227 (2003), 199-217. G. Dong and X. Fang, Differential equations of divergence form in separable Musielak–Orlicz–Sobolev spaces, Bound. Value Probl., 2016 (106) (2016), Article 19. B. Dacorogna, Introduction to the Calculus of Variations, ICP, London (2004). D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer Verlag, Berlin, 2001. X. Fan, Differential equations of divergence form in Musielak–Sobolev spaces and a sub-supersolution method, J. Math. Anal. Appl., 386 (2) (2012), 593-604. N. Fukagai and K. Narukawa, Nonlinear eigenvalue problem for a model equation of an elastic surface, Hiroshima Math. J. 25 (1995), 19–41. N. Fukagai and K. 
Narukawa, *On the existence of multiple positive solutions of quasilinear elliptic eigenvalue problems,* Ann. Mat. Pura Appl. [ 186]{} (2007), 539-564. N. Fukagai, M. Ito and K. Narukawa, *Positive solutions of quasilinear elliptic equations with critical Orlicz-Sobolev nonlinearity on $\mathbb{R}^{N}$,* Funkcial. Ekvac. [ 49]{} (2006), 235-267. G. Kirchhoff, Mechanik, Teubner, Leipzig, 1883. G. M. Lieberman, The natural generalization of the natural conditions of Ladyzhenskaya and Ural'tseva for elliptic equations, Comm. Partial Differential Equations 16 (1991), no. 2-3, 311–361. T.F. Ma, Remarks on an elliptic equation of Kirchhoff type, Nonlinear Anal., 63 (2005), 1967-1977. K. Perera and Z. Zhang, Nontrivial solutions of Kirchhoff type problems via the Yang index, J. Differential Equations, 221 (2006), 246-255. P.H. Rabinowitz, Some global results for nonlinear eigenvalue problems, J. Funct. Anal. 7 (1971), 487–513. M. N. Rao and Z. D. Ren, Theory of Orlicz Spaces, Marcel Dekker, New York, (1985). G.C.G. dos Santos and G. M. Figueiredo, Positive solutions for a class of nonlocal problems involving Lebesgue generalized spaces: scalar and system cases, J. Elliptic Parabolic Equ. 2(1-2) (2016), 235-266. G.C.G. dos Santos, G. M. Figueiredo and L. S. Tavares, A sub-supersolution method for a class of nonlocal problems involving the $p(x)$-Laplacian operator and applications, Acta Appl. Math. 153 (2018), 171-187. P. Souplet, Uniform blow-up profiles and boundary behavior for diffusion equations with nonlocal nonlinear source, J. Differential Equations, 153 (1999), 374-406. Z. Tan and F. Fang, *Orlicz-Sobolev versus Hölder local minimizer and multiplicity results for quasilinear elliptic equations*, J. Math. Anal. Appl., [ 402]{} (2013), 348-370. B. Yan, D. Wang, The multiplicity of positive solutions for a class of nonlocal elliptic problem, J. Math. Anal. Appl. 442(1) (2016), 72-102. Z. Zhang and K. 
Perera, Sign changing solutions of Kirchhoff type problems via invariant sets of descent flows, J. Math. Anal. Appl., 317 (2006), 456-463.
--- abstract: 'Taking advantage of an extended Lugiato–Lefever equation with third-order dispersion, we numerically show that dark cavity solitons formed in the normal dispersion regime of microresonators are capable of emitting dispersive waves in both normal and anomalous dispersion regions, resembling the behavior of the commonly encountered bright cavity solitons. The generated dispersive waves can be accurately predicted by the dissipative radiation theory. In addition, we demonstrate the stability enhancement of Kerr frequency combs in the normal dispersion regime when the dispersive wave is emitted by dark solitons in the presence of third-order dispersion.' author: - Shaofei Wang - Xianglong Zeng title: 'Dispersive wave emission from dark solitons in microresonator-based Kerr frequency combs' --- Kerr frequency combs generated in high-Q whispering gallery mode microresonators thanks to the cascaded four-wave mixing (FWM) effect have attracted substantial research interest over the past few years [@Kippenberg2011]. In particular, temporal cavity solitons (TCSs) were demonstrated both theoretically [@Matsko2011; @Coen20131; @Chembo2013] and experimentally [@Herr20141; @Saha2013] in microresonators, pushing research on versatile Kerr frequency combs one step further. TCSs are localized dissipative pulses which are able to excite phase-locked, highly coherent Kerr frequency combs [@Lamont2013; @Herr20142; @Pel'Haye2015]. In fact, cavity solitons were first studied in the spatial domain and in passive fiber cavity structures [@Haelterman19921; @Haelterman19922; @Leo2010], taking advantage of the well-known mean-field Lugiato-Lefever (LL) equation [@Lugiato1987]. The LL equation indeed builds a direct bridge from earlier research on cavity solitons towards current Kerr frequency combs, offering an unprecedented approach to explore more details in both the time and frequency domains of Kerr frequency combs [@Godey2014]. 
Therefore, considerable effort has recently been dedicated to investigating microresonator-based TCS dynamics in the presence of various perturbations, among which the effect of high-order dispersions (HODs) is an important aspect. Reminiscent of behaviors in conservative fiber geometries [@Dudley2006], HODs enable TCSs formed in microresonators to emit dispersive waves (DWs) as well [@Milian2014; @Brasch2014; @Jang2014; @Wang2014; @Parra-Rivas20141; @Milian2015]. Typically, TCSs exist in the anomalous group velocity dispersion (GVD) region in microresonators [@Herr20141]. Remarkably, however, it was recently found that mode-locking of Kerr frequency combs can also be realized in the normal GVD regime [@Matsko20121; @Coillet2013; @Liang2014; @Xue2015; @Huang2015], namely by so-called dark TCSs in the temporal domain. In some extreme cases, bright (dark) TCSs can even be supported in normal (anomalous) GVD [@Tlidi2010; @Tlidi2013; @Huang2015]. Dark TCSs are drawing growing attention in the frame of Kerr frequency combs [@Liu2014; @Lobanov2015]. Yet, unlike their bright TCS counterparts, studies of dark TCSs in the normal GVD region in the presence of perturbations are rather sporadic. Motivated by earlier results on the influence of third-order dispersion (TOD) on solitons in nonlinear fibers [@Afanasjev1996; @Milian2009], we present some specific cases to illustrate the influence of the TOD term on dark microresonator-based TCSs in the normal GVD regime, by means of the LL equation including the TOD term. We find that dark TCSs in the normal GVD region fully exhibit the features of bright TCSs in the anomalous GVD region, including DW generation as well as comb stabilization. Due to the periodic boundary nature of resonators, the generated DWs can be located in both anomalous and normal GVD regimes. 
The mean-field LL equation is a damped, driven nonlinear Schrödinger equation which is widely utilized to describe Kerr frequency comb generation in microresonators [@Matsko2011; @Coen20131; @Chembo2013]. The normalized LL equation with the TOD term reads $$\begin{aligned} \frac{{\partial E\left( {t,\tau } \right)}}{{\partial t}} = - \left( {1 + i\Delta } \right)E + i{\left| E \right|^2}E - i\frac{{{\beta _2}}}{2}\frac{{{\partial ^2}E}}{{\partial {\tau ^2}}} + \frac{{{\beta _3}}}{6}\frac{{{\partial ^3}E}}{{\partial {\tau ^3}}} + F, \label{eq1}\end{aligned}$$ where $E$ is the intracavity field, $\tau$ represents the time within a single roundtrip, termed the *fast time*, and $t$ denotes the time over successive roundtrips, called the *slow time*. $\Delta$ is the phase detuning with respect to the continuous wave (cw) pump frequency. $F$ is the external cw driving field. $\beta_2$ and $\beta_3$ are the GVD and TOD coefficients, respectively. Normally, the HODs become crucial and cannot be ignored when an octave-spanning Kerr frequency comb is excited in a microresonator with a low GVD value over a wide frequency range [@Del'Haye2011; @Okawachi2011; @Brasch2014], in which case the waveguide dispersion is able to reshape the overall dispersion profile, giving rise to a zero dispersion point (ZDP) at short wavelengths. Alternatively, when the cw pump frequency is in the vicinity of the ZDP, the role of the HODs is significant and DW emission becomes possible [@Grudinin2013]. For clarity, here we consider only the dominant TOD term in the LL equation. According to the dissipative radiation theory proposed in Refs. 
[@Milian2014; @Malaguti2014], it is easy to obtain the following relation for the complex frequency $\Omega$ of the generated DW: $$V\Omega = \frac{{{\beta _3}}}{6}{{\Omega}^3} + i \pm \sqrt {{{\left( {2{P_0} - \Delta + \frac{{{\beta _2}}}{2}{{\Omega}^2}} \right)}^2} - P_0^2}, \label{eq2}$$ where $V$ is the soliton temporal drift velocity within a single roundtrip due to the TOD term [@Wang2014; @Parra-Rivas20141]. $P_0$ is the TCS background power (corresponding to the cw power) rather than the peak power [@Milian2014; @Jang2014]. For a dark cavity TCS, $P_0$ corresponds to the power of the flat top. Considering further the real-part solutions of Eq. \[eq2\], we obtain the final expression for the resonant relation, referred to as $R\left(\Omega\right)$, namely, $$R\left( \Omega \right) = \frac{{{\beta _3}}}{6}{{\Omega}^3} - V\Omega \pm \sqrt {{{\left( {2{P_0} - \Delta + \frac{{{\beta _2}}}{2}{{\Omega}^2}} \right)}^2} - P_0^2}. \label{eq3}$$ Obviously, the real roots of $R\left( \Omega \right)= 0$ are the frequency locations of the generated DWs. Equation \[eq3\] is odd with respect to the center frequency $\Omega = 0$, in the sense that the '+' branch at $-\Omega$ is the negative of the '$-$' branch at $\Omega$. For this reason, it differs from the classical resonant radiation in fibers, since it admits at least a pair of symmetric DWs even in the case of a single ZDP [@Milian2014; @Jang2014]. Now we begin to address the first case. Specifically, we employ the parameters $\Delta = 2.5$, $F^2 = 2.6$, which lie within the normal-dispersion TCS area according to the nonlinear bifurcation diagram of the LL equation; see Ref. [@Godey2014] for details. In fact, dark TCSs in the normal GVD region are more difficult to excite, since their parameter space is fairly narrow compared with that of bright TCSs. 
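In practice, Eq. \[eq1\] is commonly integrated with a split-step Fourier scheme, advancing the linear part (losses, detuning, dispersion) in frequency space and the nonlinearity and pump in the *fast time* domain. The following is a minimal sketch of one such step; the grid, step size, and the simple explicit pump treatment are illustrative, not the exact scheme behind the figures.

```python
import numpy as np

def ll_step(E, dt, omega, delta, beta2, beta3, F):
    """Advance Eq. (1) by one slow-time step dt with a symmetric split-step scheme.

    With d/dtau -> i*omega under the FFT, the linear operator reads
    L(omega) = -(1 + i*delta) + i*(beta2/2)*omega**2 - i*(beta3/6)*omega**3.
    """
    L = -(1 + 1j * delta) + 1j * (beta2 / 2) * omega**2 - 1j * (beta3 / 6) * omega**3
    # linear half step: losses, detuning, and dispersion in frequency space
    E = np.fft.ifft(np.exp(L * dt / 2) * np.fft.fft(E))
    # nonlinear step (exact phase rotation) plus a crude explicit pump term
    E = E * np.exp(1j * np.abs(E) ** 2 * dt) + F * dt
    # second linear half step
    E = np.fft.ifft(np.exp(L * dt / 2) * np.fft.fft(E))
    return E
```

With $F = 0$ the cavity loss term $-E$ makes the field norm decay as $e^{-t}$ (the Kerr term only rotates the phase and the dispersion terms are unitary), which provides a quick sanity check of the implementation.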
Since TCSs cannot be built directly from initial noise (the modulation instability area does not connect with the TCS region in the normal dispersion regime), we utilize an exponential-shape gap pulse instead of random noise as the initial pump state, following commonly employed approaches, for instance Refs. [@Godey2014; @Coillet2013], namely, $${E_0}\left( {0,\tau } \right) = a - b\exp \left[ { - {{\left( {{\tau \mathord{\left/ {\vphantom {\tau c}} \right. \kern-\nulldelimiterspace} c}} \right)}^2}} \right], \label{eq4}$$ where we employ $a = 1.7$, $b = 1.0$, and $c = 0.9$ for the first case. Note that the dispersion scaling is not important for exciting a resonator [@Coen20132]; we therefore use relatively small dispersion values, i.e., $\beta_2 = 0.005$, $\beta_3 = 0.05\beta_2$, to obtain a broad spectrum enabling considerable DW emission. These dispersion parameters yield a ZDP at $\Omega_z = -20$. Simulations of the dimensionless Eq. \[eq1\] are performed employing the above parameters. The *fast time* window is chosen much larger than the initial pulse duration, ensuring that boundary conditions do not influence the TCSs. The *slow time* step size is set to $0.001$ in simulations, which is small enough to avoid any possible artifacts. Simulation results are depicted in Fig. \[fig1\]. Therein, the output spectrum in the presence of TOD (blue solid line in the bottom of Fig. \[fig1\](a)) clearly suggests that a DW is emitted around $\Omega = -71$, in excellent agreement with the corresponding $R\left( \Omega \right)$ (blue solid line in the top of Fig. \[fig1\](a)). In contrast, without the TOD term, the final steady comb exhibits a completely symmetric spectral profile without any DW emission (yellow dashed line in the bottom of Fig. \[fig1\](a)), as expected from the corresponding resonant relation (yellow dashed line in the top of Fig. \[fig1\](a)). Although the theoretical analysis of Eq. 
\[eq3\] reveals that double symmetric DWs should be created in this case, we find in simulations that the other DW is too weak to be observed, so we present only the '+' branch of $R\left(\Omega\right)$, corresponding to the powerful DW observed in Fig. \[fig1\]. We will illustrate this issue in the following case. On the other hand, from the final dark TCSs formed in the temporal domain, as shown in Fig. \[fig1\](b), one can easily identify that a standard dark TCS is formed in the center of the *fast time* window in the absence of the TOD term, plotted as the yellow dotted line in Fig. \[fig1\](b). Nevertheless, the presence of the TOD gives rise to DW emission associated with a considerable time drift as well as soliton tail oscillation, exactly the same behaviors as for bright TCSs [@Parra-Rivas20141] and even for solitons in classical fibers [@Afanasjev1996]. In this regard, they share the same nature, since they are all governed by the same master Schrödinger-type equation. Interestingly, the final phase profiles given in Fig. \[fig1\](b) mirror the corresponding TCS profiles. Further focusing on the temporal and spectral evolutions along the *slow time*, as shown in Figs. \[fig1\](c) and \[fig1\](d), we find the dark TCS is formed from $t = 96$, which is estimated by monitoring the intracavity power (white dashed line in Fig. \[fig1\](c)). The fine fringes in the vicinity of the soliton edge denote the DW emission process, which is highlighted more intuitively in the spectral domain, as seen in Fig. \[fig1\](d). In addition, we find that DW-like emission emerges from $t = 10$, far prior to the final TCS formation, implying that the TOD disturbs the field $E$ from the beginning of the propagation. We now proceed to validate the DW emission from dark TCSs in microresonators. 
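These DW locations can also be validated directly by finding the real roots of Eq. \[eq3\]. In the sketch below, the drift velocity $V$ and background power $P_0$ are illustrative placeholders, since their simulated values are not quoted above; only the dispersion and detuning values follow the first case.

```python
import numpy as np
from scipy.optimize import brentq

def R(omega, sign, beta2, beta3, V, delta, P0):
    """Resonant relation of Eq. (3); its real roots give the DW frequencies."""
    rad = (2 * P0 - delta + (beta2 / 2) * omega**2) ** 2 - P0**2
    return (beta3 / 6) * omega**3 - V * omega + sign * np.sqrt(rad)

def zdp(beta2, beta3):
    """Zero-dispersion point, where the local GVD beta2 + beta3*Omega vanishes."""
    return -beta2 / beta3

# Case-1 dispersion and detuning with placeholder V and P0 (assumed values):
root = brentq(lambda w: R(w, +1, 0.005, 0.00025, 0.01, 2.5, 1.2), -90.0, -40.0)
```

With the case-1 dispersion values, `zdp(0.005, 0.05 * 0.005)` recovers the quoted $\Omega_z = -20$, and the two branches satisfy $R(-\Omega)|_{+} = -R(\Omega)|_{-}$, the symmetry behind the DW pairs.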
In the second case, we change the detuning and pump parameters to $\Delta = 5.0$, $F^2 = 6.5$, and we increase the dispersion values to $\beta_2 = 0.0125$, $\beta_3 = 0.06\beta_2$ to further boost the DW emission efficiency. The resulting ZDP is $\Omega_z = -16.67$. Reasonably, the corresponding initial pump condition should be modified as well, so we set $a = 2.0$, $b = 1.5$, and $c = 0.1$ for Eq. \[eq4\]. Simulation results are given in Fig. \[fig2\], clearly showing a pair of DWs located at $\Omega = \pm 61$, respectively. The generated DWs are accurately predicted by Eq. \[eq3\] for both branches. The left-hand DW is, however, more efficient than the other, which accounts for the single DW emission in Fig. \[fig1\] and in an earlier experiment on bright TCSs [@Jang2014]. Moreover, we find that the final TCS is accompanied by a negligible temporal drift (i.e. offset from $\tau = 0$) but a strong oscillating tail (i.e. a strongly stretched but localized branch near the main dark TCS), suggesting that powerful DW emission is achievable. The detailed evolution dynamics of the TCS along successive roundtrips are shown in Figs. \[fig2\](c) and \[fig2\](d), in which a pair of DWs is emitted around $t = 2$ while the TCS is formed from $t = 5$. Although the double DWs are symmetrically located on either side of $\Omega = 0$, they appear on the same side in the temporal domain, as can be identified from Fig. \[fig2\](b). We attribute this feature to the periodic boundary conditions imposed by the resonator geometry. It has been demonstrated that Kerr frequency combs formed by bright TCSs in the anomalous GVD region can be stabilized in the presence of TOD [@Parra-Rivas20141]. Next we investigate this issue in terms of dark TCSs in normal dispersion. To do this, we increase the detuning and pump values, i.e., $\Delta = 6.0$, $F^2 = 8.5$, enabling so-called *breather* solitons to exist in the resonator. 
In fact, the *breather* is a special type of TCS, existing above the Hopf threshold in the soliton branch of bifurcation diagrams [@Matsko20122; @Leo2013]. In the context of *breather* TCSs, the spectrum permanently oscillates along propagation, as in our case shown in Figs. \[fig3\](a) and \[fig3\](c) (see the white dashed line) in the absence of the TOD term. The same behavior occurs for bright TCSs in the case of anomalous GVD [@Matsko20122; @Leo2013]. Obviously, the *breather* TCSs would degrade the coherence of the final comb profile [@Matsko20122; @Erkintalo2014]. However, if we incorporate the TOD term in Eq. \[eq1\], one can see clearly that the *breather* TCS vanishes, replaced by a stable TCS with constant power along propagation, as shown in Fig. \[fig3\](d). In the spectral domain, as depicted in Fig. \[fig3\](b), the periodic comb feature is also replaced by a constant spectrum accompanied by the double-band DW emission. Theoretically speaking, the TOD is able to stabilize Kerr frequency combs since it modifies the Hopf threshold and shrinks the existence area of the *breathers*, thereby giving rise to a substantially enlarged parameter space for stable TCSs [@Parra-Rivas20142]. In other words, DW emission from dark TCSs is capable of stabilizing Kerr frequency combs in the normal GVD regime. In conclusion, we have numerically demonstrated DW emission from dark TCSs in the normal GVD regime of microresonators by incorporating the TOD term in an extended LL equation. We show that the generated DWs can be located in both normal and anomalous GVD regimes in the case of a single ZDP, as expected from the dissipative radiation theory. Furthermore, stabilization of Kerr frequency combs in the normal GVD regime is confirmed in the presence of DW emission from dark TCSs. The results obtained here with this simplified model will benefit future exploration of more realistic scenarios of Kerr frequency combs in the normal dispersion regime. [41]{} T. J. 
Kippenberg, R. Holzwarth, and S. A. Diddams, “Microresonator-based optical frequency combs,” Science **332**, 555 (2011). A. B. Matsko, A. A. Savchenkov, W. Liang, V. S. Ilchenko, D. Seidel, and L. Maleki, “Mode-locked Kerr frequency combs,” Opt. Lett. **36**, 2845 (2011). S. Coen, H. G. Randle, T. Sylvestre, and M. Erkintalo, “Modeling of octave-spanning Kerr frequency combs using a generalized mean-field Lugiato-Lefever model,” Opt. Lett. **38**, 37 (2013). Y. K. Chembo and C. R. Menyuk, “Spatiotemporal Lugiato-Lefever formalism for Kerr-comb generation in whispering-gallery-mode resonators,” Phys. Rev. A **87**, 053852 (2013). T. Herr, V. Brasch, J. D. Jost, C. Y. Wang, N. M. Kondratiev, M. L. Gorodetsky, and T. J. Kippenberg, “Temporal solitons in optical microresonators,” Nat. Photon. **8**, 145 (2014). K. Saha, Y. Okawachi, B. Shim, J. S. Levy, R. Salem, A. R. Johnson, M. A. Foster, M. R. Lamont, M. Lipson, and A. L. Gaeta, “Modelocking and femtosecond pulse generation in chip-based frequency combs,” Opt. Express **21**, 1335 (2013). M. R. E. Lamont, Y. Okawachi, and A. L. Gaeta, “Route to stabilized ultrabroadband microresonator-based frequency combs,” Opt. Lett. **38**, 3478 (2013). T. Herr, V. Brasch, J. D. Jost, I. Mirgorodskiy, G. Lihachev, M. L. Gorodetsky, and T. J. Kippenberg, “Mode spectrum and temporal soliton formation in optical microresonators,” Phys. Rev. Lett. **113**, 123901 (2014). P. Del’Haye, A. Coillet, W. Loh, K. Beha, S. B. Papp, and S. A. Diddams, “Phase steps and resonator detuning measurements in microresonator frequency combs,” Nat. Commun. **6**, 5668 (2015). M. Haelterman, S. Trillo, and S. Wabnitz, “Additive-modulation-instability ring laser in the normal dispersion regime of a fiber,” Opt. Lett. **17**, 745 (1992). M. Haelterman, S. Trillo, and S. Wabnitz, “Dissipative modulation instability in a nonlinear dispersive ring cavity,” Opt. Commun. **91**, 401 (1992). F. Leo, S. Coen, P. Kockaert, S.-P. Gorza, P. Emplit, and M. 
Haelterman, “Temporal cavity solitons in one-dimensional Kerr media as bits in an all-optical buffer,” Nat. Photon. **4**, 471 (2010). L. A. Lugiato and R. Lefever, “Spatial dissipative structures in passive optical systems,” Phys. Rev. Lett. **58**, 2209 (1987). C. Godey, I. V. Balakireva, A. Coillet, and Y. K. Chembo, “Stability analysis of the spatiotemporal Lugiato-Lefever model for Kerr optical frequency combs in the anomalous and normal dispersion regimes,” Phys. Rev. A **89**, 063814 (2014). J. M. Dudley, G. Genty, and S. Coen, “Supercontinuum generation in photonic crystal fiber,” Rev. Mod. Phys. **78**, 1135 (2006). C. Milián and D. V. Skryabin, “Soliton families and resonant radiation in a micro-ring resonator near zero group-velocity dispersion,” Opt. Express **22**, 3732 (2014). V. Brasch, T. Herr, M. Geiselmann, G. Lihachev, M. H. Pfeiffer, M. L. Gorodetsky, and T. J. Kippenberg, “Photonic chip based optical frequency comb using soliton induced Cherenkov radiation,” arXiv:1410.8598 (2014). J. K. Jang, M. Erkintalo, S. G. Murdoch, and S. Coen, “Observation of dispersive wave emission by temporal cavity solitons,” Opt. Lett. **39**, 5503 (2014). S. F. Wang, H. R. Guo, X. K. Bai, and X. L. Zeng, “Broadband Kerr frequency combs and intracavity soliton dynamics influenced by high-order cavity dispersion,” Opt. Lett. **39**, 2880 (2014). P. Parra-Rivas, D. Gomila, F. Leo, S. Coen, and L. Gelens, “Third-order chromatic dispersion stabilizes Kerr frequency combs,” Opt. Lett. **39**, 2971 (2014). C. Milián, A. V. Gorbach, M. Taki, A. V. Yulin, and D. V. Skryabin, “Solitons and frequency combs in silica microring resonators: Interplay of the Raman and higher-order dispersion effects,” Phys. Rev. A **92**, 033851 (2015). A. B. Matsko, A. A. Savchenkov, and L. Maleki, “Normal group-velocity dispersion Kerr frequency comb,” Opt. Lett. **37**, 43 (2012). A. Coillet, I. Balakireva, R. Henriet, K. Saleh, L. Larger, J. M. Dudley, C. R. Menyuk, and Y. K. 
Chembo, “Azimuthal Turing patterns, bright and dark cavity solitons in Kerr combs generated with whispering-gallery-mode resonators,” IEEE Photon. J. **5**, 6100409 (2013). W. Liang, A. A. Savchenkov, V. S. Ilchenko, D. Eliyahu, D. Seidel, A. B. Matsko, and L. Maleki, “Generation of a coherent near-infrared Kerr frequency comb in a monolithic microresonator with normal GVD,” Opt. Lett. **39**, 2920 (2014). X. X. Xue, Y. Xuan, Y. Liu, P.-H. Wang, S. Chen, J. Wang, D. E. Leaird, M. H. Qi, and A. M. Weiner, “Mode-locked dark pulse Kerr combs in normal-dispersion microresonators,” Nat. Photon. **9**, 594 (2015). S. W. Huang, H. Zhou, J. Yang, J. F. McMillan, A. Matsko, M. Yu, D. L. Kwong, L. Maleki, and C. W. Wong, “Mode-locked ultrashort pulse generation from on-chip normal dispersion microresonators,” Phys. Rev. Lett. **114**, 053901 (2015). M. Tlidi and L. Gelens, “High-order dispersion stabilizes dark dissipative solitons in all-fiber cavities,” Opt. Lett. **35**, 306 (2010). M. Tlidi, L. Bahloul, L. Cherbi, A. Hariz, and S. Coulibaly, “Drift of dark cavity solitons in a photonic-crystal fiber resonator,” Phys. Rev. A **88**, 035802 (2013). Y. Liu, Y. Xuan, X. X. Xue, P.-H. Wang, S. Chen, A. J. Metcalf, J. Wang, D. E. Leaird, M. H. Qi, and A. M. Weiner, “Investigation of mode coupling in normal-dispersion silicon nitride microresonators for Kerr frequency comb generation,” Optica **1**, 137 (2014). V. E. Lobanov, G. Lihachev, T. J. Kippenberg, and M. L. Gorodetsky, “Frequency combs and platicons in optical microresonators with normal GVD,” Opt. Express **23**, 7713 (2015). V. V. Afanasjev, Y. S. Kivshar, and C. R. Menyuk, “Effect of third-order dispersion on dark solitons,” Opt. Lett. **21**, 1975 (1996). C. Milián, D. V. Skryabin, and A. Ferrando, “Continuum generation by dark solitons,” Opt. Lett. **34**, 2096 (2009). P. Del’Haye, T. Herr, E. Gavartin, M. L. Gorodetsky, R. Holzwarth, and T. J. 
Kippenberg, “Octave spanning tunable frequency comb from a microresonator,” Phys. Rev. Lett. **107**, 063901 (2011). Y. Okawachi, K. Saha, J. S. Levy, Y. H. Wen, M. Lipson, and A. L. Gaeta, “Octave-spanning frequency comb generation in a silicon nitride chip,” Opt. Lett. **36**, 3398 (2011). I. S. Grudinin, L. Baumgartel, and N. Yu, “Impact of cavity spectrum on span in microresonator frequency combs,” Opt. Express **21**, 26929 (2013). S. Malaguti, M. Conforti, and S. Trillo, “Dispersive radiation induced by shock waves in passive resonators,” Opt. Lett. **39**, 5626 (2014). S. Coen and M. Erkintalo, “Universal scaling laws of Kerr frequency combs,” Opt. Lett. **38**, 1790 (2013). A. B. Matsko, A. A. Savchenkov, and L. Maleki, “On excitation of breather solitons in an optical microresonator,” Opt. Lett. **37**, 4856 (2012). F. Leo, L. Gelens, Ph. Emplit, M. Haelterman, and S. Coen, “Dynamics of one-dimensional Kerr cavity solitons,” Opt. Express **21**, 9180 (2013). M. Erkintalo and S. Coen, “Coherence properties of Kerr frequency combs,” Opt. Lett. **39**, 283 (2014). P. Parra-Rivas, D. Gomila, M. A. Matias, S. Coen, L. Gelens, “Dynamics of localized and patterned structures in the Lugiato-Lefever equation determine the stability and shape of optical frequency combs,” Phys. Rev. A **89**, 043813 (2014).
--- abstract: 'The nuclear magnetic relaxation times $T_2$ and $T_1$ of $^{55}$Mn in the molecular cluster magnet Mn$_{12}$ Ac have been measured, using the spin-echo method on an oriented powder sample, at low temperatures from 2.5K down to 200mK in fields up to 9T applied along the $c$-axis. Above about 1.5K both relaxation rates $T_{2}^{-1}$ and $T_{1}^{-1}$ exhibit remarkable decreases with decreasing temperature in zero field, with the relative relation $T_2^{-1}/T_1^{-1}\approx200$. At lower temperatures, $T_{2}^{-1}$ tends to become constant with a value of about $10^2$s$^{-1}$, while $T_{1}^{-1}$ still exhibits an appreciable decrease down to around 0.5K. The analysis of the experimental results was made on the basis of the concept that the fluctuating local field responsible for the nuclear magnetic relaxation is caused by thermal fluctuations of the Zeeman levels of the cluster spin of $S=10$ due to the spin-phonon interactions. The problem was then simplified by considering only the thermal excitation from the ground state to the first excited state, that is, a step-wise fluctuation with respective average lifetimes $\tau_0$ and $\tau_1$. By applying nonlinear theory to such a fluctuating local field, a general expression for $T_{2}$ was obtained. It turned out that the experimental results for $T_{2}^{-1}$ are explained in terms of the equation $T_2^{-1}=\tau_0^{-1}$, which corresponds to the strong collision regime under the condition $\tau_0\gg\tau_1$. On the other hand, the results for $T_1$ have been well understood, on the basis of the standard perturbation method, by the high-frequency-limit equation $T_1^{-1}\sim 1/(\tau_0 \omega_N^2)$, where $\omega_N$ is the $^{55}$Mn Larmor frequency. The experimental results for the field dependence of $T_2^{-1}$ and $T_1^{-1}$ were also interpreted reasonably in terms of the above theoretical treatment. 
The quantitative comparison between the experimental results and the theoretical equations was made using hyperfine interaction tensors for each of the three manganese ions, determined from the analysis of the NMR spectra in zero field.' author: - Takao Goto - Takeshi Koshiba - Takeji Kubo - Kunio Awaga title: ' Studies of transverse and longitudinal relaxations of $^{55}$Mn in molecular cluster magnet Mn$_{12}$Ac' --- Introduction\[sec:Intro\] ========================= Recently there has been great interest in nano-scale molecular cluster magnets in view of the microscopic quantum nature appearing in the macroscopic properties of the system. The synthesis of molecular cluster magnets has solved the most serious problem of particle size, because every cluster has exactly the same, known size. As a typical candidate compound, the molecular cluster magnet Mn$_{12}$O$_{12}$(CH$_3$COO)$_{16}$(H$_2$O)$_4$ (abbreviated as Mn$_{12}$Ac), which was synthesized and whose crystal structure was studied using X-ray diffraction by Lis, [@L] has so far been studied most extensively. The magnetic properties of Mn$_{12}$Ac have been explained satisfactorily by treating the strongly-coupled cluster spins as a single quantum spin of $S$=10. The prominent features are the very long relaxation time of the magnetization at low temperatures, [@SGCN; @VBSP] and the step-wise recovery of the magnetization in an external field at intervals of about 0.45T, which is associated with the quantum tunneling that occurs when the Zeeman levels of oppositely directed magnetizations of the cluster spins coincide. [@TLBGSB; @FSTZ] Such a tunneling phenomenon was interpreted as thermally-assisted and/or field-tuned processes associated with the spin-phonon interaction. 
[@PRHV] Another molecular compound \[(tacn)$_6$Fe$_8$O$_2$(OH)$_{12}$\]$^{8+}$ (abbreviated as Fe$_8$) with a large cluster spin of $S$=10 has been the subject of investigations from the same viewpoint. [@SOPSG; @WSG] In contrast to Mn$_{12}$Ac and Fe$_8$, in which the tunneling is due to the “spin-bath”, the molecular cluster magnet K$_8$\[V$_{15}^{{\rm IV}}$As$_6$O$_{42}$(H$_2$O)\]$\cdot$8H$_{2}$O (so called V$_{15}$) has been studied as a system with the lowest cluster spin of $S$=1/2, which shows a hysteresis curve associated with the “phonon-bath”, although there is no energy barrier against spin reversals. [@CWMBB] A great deal of experimental and theoretical work related to Mn$_{12}$Ac has been reviewed in books and review papers. [@GB; @CT; @HPV; @TPS; @TB] In order to understand the magnetic properties of these molecular cluster magnets more thoroughly, it is worthwhile to examine the dynamical behavior as well as the statistical nature of each magnetic ion constituting the cluster spin. One of the most promising experimental procedures for this purpose is to use NMR on the relevant magnetic ions. In view of this, we have been studying $^{55}$Mn NMR in Mn$_{12}$Ac. As reported in our preliminary paper, we first succeeded, using a powder sample, in observing all of the $^{55}$Mn NMR signals belonging to Mn${^{4+}}$ and the two inequivalent Mn${^{3+}}$ ions in zero field, and measured the temperature dependence of the transverse relaxation time $T_2$ at liquid-helium temperatures. [@GKKFOATA] In a recent paper, we have determined the hyperfine interaction tensors of $^{55}$Mn nuclei in Mn$_{12}$Ac by analyzing the more detailed $^{55}$Mn NMR spectra on the basis of the ground-state spin configuration. [@KGKTA] In the present work, we have measured, using an oriented powder sample, the transverse relaxation time $T_2$ and the spin-lattice relaxation time $T_1$ of $^{55}$Mn in Mn$_{12}$Ac. 
The measurements have been done over the wide temperature range from 2.5K down to 200mK in zero field, and at liquid-helium temperatures with applied fields up to 9T. As for the nuclear spin-lattice relaxation in Mn$_{12}$Ac, the proton spin- and muon spin-lattice relaxation times have been measured by Lascialfari [[*et al*]{}]{}. in the temperature range of 4.2-400K and the field range of 0-9.4T, [@LGBSJC] and subsequently at lower temperatures down to around 2K below 1.6T. [@LJBCG] In Ref.  the field dependence of the proton relaxation rate $T_{1}^{-1}$ and the temperature dependence of the muon relaxation rate have been analyzed on the basis of the standard perturbation method, that is, the weak collision model. There the fluctuating local fields at the proton or muon site have been taken to originate from random changes between adjacent Zeeman-energy levels of the ground-state configuration of the total spin of $S$=10, which are caused by the spin-phonon interactions. Quite recently Furukawa [[*et al*]{}]{}. have reported experimental results for the temperature and field dependence of $T_{1}^{-1}$ of $^{55}$Mn in Mn$_{12}$Ac above 1.2K and below 1T, [@FWKBG] and the analysis has been made on the basis of essentially the same treatment as given in Ref. . The analysis of our experimental results is made on the basis of the concept presented in Ref. , namely that the fluctuating local field at each of the $^{55}$Mn sites in the Mn${^{4+}}$ and Mn${^{3+}}$ ions originates from the thermal excitations in the Zeeman-energy levels due to the spin-phonon interaction. Then, in order to proceed with the analytical treatment, we simplify the problem by taking into account only the lowest two levels of the cluster spin $S$=10, the ground state and the first excited state, which lies about 12K above it. 
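This two-level picture can be made concrete with a schematic numerical sketch. Purely for illustration, suppose the local field at the nucleus is a random telegraph signal following the two-level cluster dynamics with mean lifetimes $\tau_0$ and $\tau_1$; the weak-collision (perturbative) $T_1$ rate then follows from the telegraph spectral density, while in the strong-collision picture $T_2^{-1}$ is simply the escape rate from the ground state. For $\tau_0\gg\tau_1$ and $\omega_N\tau_c\gg1$ the perturbative rate reduces to $\Delta\omega^2/(\tau_0\omega_N^2)$, the high-frequency form quoted in the abstract. All symbols and numerical values here are illustrative; this is not the authors' full nonlinear treatment.

```python
def telegraph_t1(dw, tau0, tau1, omega_n):
    """Weak-collision T1 rate for a two-level (random telegraph) local field.

    dw      : amplitude of the fluctuating local field (angular-frequency units)
    tau0/1  : mean lifetimes of the ground / first excited cluster level
    omega_n : 55Mn Larmor angular frequency
    """
    p0 = tau0 / (tau0 + tau1)            # ground-state occupation
    p1 = tau1 / (tau0 + tau1)            # excited-state occupation
    tau_c = tau0 * tau1 / (tau0 + tau1)  # correlation time of the telegraph
    return dw**2 * p0 * p1 * tau_c / (1 + (omega_n * tau_c) ** 2)

def t2_rate_strong(tau0):
    """Strong-collision T2 rate: each excitation out of the ground state
    dephases the precession completely, so T2^{-1} = 1/tau0."""
    return 1.0 / tau0
```

For example, with $\tau_0 = 10^3\,\tau_1$ the telegraph rate already agrees with the high-frequency limit $\Delta\omega^2/(\tau_0\omega_N^2)$ to better than one percent.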
This simplification will be reasonable, since the data for the $^{55}$Mn NMR are available only below about 2.5K, so that the statistical weight of higher excited states is extremely small. For the interpretation of $T_2$, which was measured by observing the spin-echo decay, we employ a general treatment based on nonlinear theory. As we shall explain, it turns out that the transverse relaxation rate $T_{2}^{-1}$ is reasonably understood in terms of the strong collision regime instead of the weak collision regime. On the other hand, the results for $T_1$ are interpreted satisfactorily by the standard perturbation formalism. However, our standpoint for the interpretation of $T_1$ is somewhat different from that presented in Ref. , apart from the simplification of the problem. The essential part has already been reported in our recent brief publication. [@KGKA] This article is organized as follows. In the next section, we briefly explain the magnetic structure and hyperfine coupling tensors of $^{55}$Mn in Mn$_{12}$Ac determined by analysis of the $^{55}$Mn NMR spectra. The experimental results are presented in Sec. \[sec:exp\]. Section \[sec:An\] is devoted to the derivation of the theoretical equations for the interpretation of the experimental results. Subsequently we show that the experimental results for $T_2$ and $T_1$ are interpreted reasonably in terms of the present theoretical treatment. In Sec. \[sec:discussions\] we discuss quantitative considerations. The final section gives a summary of this article. Properties of $\mbox{Mn}_{12}\mbox{Ac}$\[sec:system\] ===================================================== Crystal Structure and magnetic properties\[subsec:crystal\] ----------------------------------------------------------- The crystal and magnetic structures of the cluster of Mn$_{12}$Ac are shown in Fig. \[fig:crystal\](a). 
Each cluster, which has tetragonal symmetry with lattice constants $a=b=17.3$Å and $c=12.4$Å, is constructed from four Mn${^{4+}}$ ions ($S=3/2$) in a central tetrahedron (denoted by Mn(1)) and eight surrounding Mn${^{3+}}$ ions ($S=2$) occupying two inequivalent sites (denoted by Mn(2) and Mn(3)) located alternately. The Mn atoms are linked by triply bridged oxo oxygens and by carboxylate bridges from acetate anions. Mn(1) and Mn(2) have distorted octahedral coordination of oxygens due to the above links. In Mn(3), one water molecule completes the octahedral environment of oxygens. [@L] There exist four kinds of exchange interactions among these manganese ions, as shown schematically in Fig. \[fig:crystal\](b). Quite recently, a set of the magnitudes and signs of these exchange couplings has been determined, using exact diagonalization of the spin Hamiltonian of the cluster spin by a Lanczos algorithm, so as to explain the high-field magnetization and the realization of the ground state of $S=10$. These values are $J_1/k_B=-119$K, $J_2/k_B=-118$K (antiferromagnetic), $J_3/k_B=8$K (ferromagnetic), and $J_4/k_B=-23$K, [@RJSGV] and, except for the value of $J_1/k_B$, they differ largely from the previously available values. [@STSWVFGCH] As a combined effect of the above exchange interactions, the ground-state configuration of the total spin of $S=10$ is established [@CGSBBG] in such a way that the assemblies of the outer eight Mn${^{3+}}$ ions and the inner four Mn${^{4+}}$ ions, which have resultant ferromagnetic spins of $S=16$ ($8\times2$) and $S=6$ ($4\times3/2$), respectively, are coupled antiferromagnetically to each other at low temperatures. [@CGSBBG] Because of the anisotropy due to the Jahn-Teller distortion of the Mn${^{3+}}$ ions, there appears a single-ion type anisotropy $D$ along the $\pm c$-axis. Thus the total magnetic moments of each cluster are either parallel or antiparallel to the $c$-axis. 
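The spin counting above, and the zero-field level scheme of the anisotropy Hamiltonian introduced below in Eq. \[eq:one\], can be checked in a few lines. The anisotropy and $g$ values are those quoted in the text; the choice of a mid-range $B$ and the $\mu_B/k_B$ conversion factor are our assumptions.

```python
import numpy as np

# Resultant ground-state spin: eight Mn3+ (S = 2) align to S = 16, four Mn4+
# (S = 3/2) to S = 6, and the two groups couple antiferromagnetically.
s_total = 8 * 2 - 4 * 1.5        # -> 10

# Zeeman levels of the (+) cluster, E(m) = -D*m^2 - B*m^4 + g*mu_B*H0*m, in K.
D = 0.67        # K
B = 1.15e-3     # K, mid-range of the values quoted below
G = 1.93
MU_B = 0.6717   # Bohr magneton over k_B, in K/T

def levels(H0):
    """Energies (K) of the |S=10, m> levels at a field H0 (T) along +c."""
    m = np.arange(-10, 11)
    return m, -D * m**2 - B * m**4 + G * MU_B * H0 * m
```

In zero field the levels come in degenerate pairs $\pm m$ with the ground pair at $m=\pm10$, and the barrier up to $m=0$ comes out at $DS^2+BS^4\approx79$K with these numbers, of the same order as the roughly 60K barrier quoted below; a field along $+c$ lifts the $\pm m$ degeneracy.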
Henceforth these clusters are referred to as the [($+$)]{} and [($-$)]{} clusters, respectively, with the $c$-axis taken as the $z$-axis. The inter-cluster interaction is of dipolar origin only, of the order of 0.5K. In the presence of an external field $H_0$ applied along the $c$-axis, the effective spin Hamiltonian for each cluster is given by $${\cal H}=-DS_z^2-BS_z^4\pm g_{\parallel}\mu_B H_0S_z, \label{eq:one}$$ where the anisotropy parameters have been evaluated from recent high-field ESR [@BGS] and neutron spectroscopy [@MHCAGIC] as $g_{\parallel}=1.93$, [@BGS] $D/k_B=0.67$K, [@BGS] and 0.66K, [@MHCAGIC] and $B/k_B=1.1$-$1.2\times10^{-3}$K, [@BGS] and the signs $\pm$ correspond to the [($+$)]{} and [($-$)]{} clusters, respectively. According to this Hamiltonian, the discrete energy levels $|S, m\>$ are well defined along the $z$-axis in the ground-state spin configuration of $S=10$. This gives a satisfactory understanding of most of the magnetic properties of Mn$_{12}$Ac, such as the energy barrier of about 60K in zero field from the ground level of $m=\pm10$ up to the highest level of $m=0$, and the step-wise recovery of the magnetization at steps of about 0.45T, which occurs when the energy levels of $S_z= m (<0)$ in the [($+$)]{} cluster and $S_z=m (>0)$ in the [($-$)]{} cluster coincide. In addition to the above main Hamiltonian, there exists the perturbing Hamiltonian ${\cal H^{\prime}}$, including terms such as higher-order transverse anisotropy and a transverse external field. These terms do not commute with $S_z$, thus playing a crucial role in the mechanism of the tunneling. In particular, the transverse external field gives rise to a drastic change in the tunnel splitting at the level-crossing fields, which may promote the tunneling appreciably. 
[@PS] Hyperfine interaction tensor\[subsec:HIT\] ------------------------------------------ Next we review the $^{55}$Mn hyperfine interaction tensor determined from the NMR spectra in zero field in Ref.  for the numerical evaluation of the present experimental results. Figure \[fig:spec\] shows the NMR spectra of $^{55}$Mn ($I=5/2$) in Mn$_{12}$Ac obtained at 1.45K in zero field. The three completely separated lines, with central frequencies $\nu_N=$230, 279, and 364MHz, were identified as due to the Mn${^{4+}}$ ion (Mn(1)) and the Mn$^{3+}$ ions (Mn(2) and Mn(3)), respectively. The corresponding internal fields are $H_{{\rm int}}=21.8, 26.5, 34.5$T. From now on, these three lines are referred to as L1, L2, and L3, respectively. The resonance lines L1, L2, and L3 involve a five-fold quadrupole splitting with $\Delta\nu_q = 0.72, 4.3, 2.9$MHz, respectively. [@KGKTA] The nuclear hyperfine Hamiltonian, which consists of Fermi-contact, dipolar, and orbital terms, is obtained by taking the expectation values of the corresponding Hamiltonians with respect to the ground-state wave function of the magnetic ion. This Hamiltonian is expressed as $${\cal H}_N={\bf I}\cdot A\cdot{\bf S} =-\gamma_N\hbar{\bf I}\cdot({\bf H}_F+{\bf H}_d+{\bf H}_{l}), \label{eq:two}$$ where $A$ is the hyperfine coupling tensor between the nuclear spin ${\bf I}$ and the electronic spin ${\bf S}$, and ${\bf H}_F$, ${\bf H}_d$, ${\bf H}_{l}$ are the corresponding hyperfine fields. Each of the manganese ions in Mn$_{12}$Ac is subject to a crystalline field of dominantly cubic symmetry due to the surrounding distorted octahedral coordination of oxygens. The ground state of the Mn$^{4+}$ ion (3d$^3$, $^4$F) is an orbital singlet, so the dipolar and orbital terms in Eq. (\[eq:two\]) vanish to first order, and only the isotropic Fermi-contact term is important. 
The corresponding Fermi-contact field ${\bf H}_F$ is given, using the conversion factor of 1 atomic unit ($a.u.$) = 4.17T, as $${\bf H}_F=-\frac{A_f{\bf S}}{\gamma_N\hbar}=-2\times4.17\chi {\bf S}, \label{eq:three}$$ where $A_f$ is the Fermi-contact component of the tensor $A$, and $\chi$ is the effective field per unpaired electron in atomic units. According to Freeman and Watson, [@FW] the value of $\chi$ is calculated to be 2.34 for the free Mn${^{4+}}$ ion, taking into account the contributions from the three inner $s$-electron shells (1$s$, 2$s$, and 3$s$). The minus sign in ${\bf H}_F$ indicates that the direction of ${\bf H}_F$ is opposite to the magnetic moment. Thus for the free Mn${^{4+}}$ ion, the value of $H_F$ is estimated to be 29.3T. The effects of the admixture of the higher triplet state into the ground state and of the distorted crystalline field should be extremely small; in fact, for a dilute ion in trigonal symmetry, the hyperfine anisotropy is of the order of 0.1%. [@FW] It is noted that the experimental internal field of 21.8T is smaller by 26% than the calculated value of 29.3T for the free ion. This is understood as reflecting a large reduction of the magnetic moment. A recent polarized-neutron-diffraction measurement reveals a 22% reduction of the Mn${^{4+}}$ magnetic moment from the full value of 3$\mu_B$. [@RBAHA] Since the reduction of the magnetic moment due to the 3$d$ electrons is reflected in the contact term, the present result is consistent with the neutron results. Such a large reduction may be ascribed to covalency and strong exchange interactions, as usually observed in condensed matter. On the other hand, the ground state of the Mn${^{3+}}$ ion (3d$^4$, $^5$D) is an orbital doublet, denoted by $E_g$, in the cubic crystalline field. 
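As a numerical check (a sketch, not part of the original analysis), the free-ion Fermi-contact fields and the quoted 26% reduction follow directly from Eq. (\[eq:three\]) and the Freeman-Watson values of $\chi$:

```python
AU_TO_TESLA = 4.17   # 1 atomic unit of hyperfine field, in tesla

def fermi_contact_field(chi, S):
    """|H_F| = 2 * 4.17 * chi * S (tesla), from Eq. (3)."""
    return 2.0 * AU_TO_TESLA * chi * S

HF_mn4 = fermi_contact_field(2.34, 1.5)   # free Mn4+, S = 3/2 -> ~29.3 T
HF_mn3 = fermi_contact_field(2.91, 2.0)   # free Mn3+, S = 2   -> ~48.5 T

# Observed internal field of line L1 compared with the free-ion value:
reduction = 1.0 - 21.8/HF_mn4             # ~0.26, i.e. the quoted 26 %
```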
As in the case of the Mn$^{4+}$ ion, the Fermi-contact field $H_F$ for the free Mn$^{3+}$ ion is obtained to be 48.5T using the calculated value of $\chi=2.91$. [@FW] Because of the additional elongated tetragonal symmetry of the crystalline field caused by the Jahn-Teller effect and the low-symmetry carboxylate ligands, the orbital degeneracy is lifted into lower and higher states with wave functions expressed as $|\Psi_1\>=|X^2-Y^2\>$ and $|\Psi_2\>=|3Z^2-r^2\>$, respectively. Here we define the rectangular coordinate system ($XYZ$) with the tetragonal $Z$-axis and the $X$-axis along one of the principal axes in the tetragonal plane. Further, the orthorhombic distortion of the crystalline field gives rise to an admixture of $|\Psi_2\>$ into $|\Psi_1\>$, so that the ground-state wave function is expressed as $|\Psi_g\>= \cos\phi|\Psi_1\> +\sin\phi|\Psi_2\>$, where $\cos\phi$ and $\sin\phi$ are the mixing coefficients. In Mn(2) and Mn(3), the $Z$-axis tilts from the $c(z)$-axis by the angles $\theta=11.7^\circ$ and $36.2^\circ$, respectively. In the case of the Mn${^{3+}}$ ion, the dipolar term in Eq. (\[eq:two\]) contributes appreciably to the hyperfine tensor in addition to the dominant isotropic Fermi-contact term. By using the wave function $|\Psi_g\>$ in the standard operator-equivalent method, the dipolar Hamiltonian ${\cal H}_d$=$-\gamma_N\hbar{\bf I}$$\cdot$${\bf H}_d$, expressed in terms of the principal terms with respect to the ($XYZ$) coordinate system, is given as $${\cal H}_d=h_d\gamma_N\hbar\cos(2\phi) \left[S_ZI_Z-\frac{1}{2}(S_XI_X+S_YI_Y)\right] \label{eq:four}$$ with $$h_d=\frac{4}{7}\mu_B\<r^{-3}\>_d,$$ where $\<r^{-3}\>_d$ is the average of $r^{-3}$ over the 3$d$ shell. It is useful to define the ($xyz$) rectangular coordinate system with the $z$-axis along the magnetic moment and the $x$-axis in the $Zz$-plane. (See Fig. 4 in Ref. .) 
Then the above equation is expressed as [@KGKTA] $${\cal H}_d={\bf I}\cdot D\cdot{\bf S}$$ with $$D=\frac{h_d}{4}\gamma_N\hbar\cos(2\phi) \begin{pmatrix} 2(2-3\cos^2\theta)&0&-3\sin2\theta\\ 0&-2&0\\ -3\sin2\theta&0&2(3\cos^2\theta-1) \end{pmatrix}. \label{eq:five}$$ The principal terms are anisotropic, and off-diagonal terms appear. For the free Mn${^{3+}}$ ion, which has $\<r^{-3}\>_d=4.8\, a.u.$, [@FW] $h_d$ is evaluated to be $+$17.1T. The orbital contribution is in general evaluated from the equation $H_{l}=-2\mu_B\<r^{-3}\>_{d}\Delta g$, where $\Delta g$ represents the deviation of the $g$-value from $g_s=2.0023$ due to admixture of higher excited orbital states. Applying the values $g_z =1.95$ and $g_{\perp} =1.97$ for the Mn${^{3+}}$ ion evaluated from the high-field EPR measurement, [@BGS] the value of $H_l$ is calculated, using $\<r^{-3}\>_d=4.8\,a.u.$ for the free ion, to be about 2T. So we may neglect the orbital contribution compared with the other terms. By taking into account the dipolar contribution to the internal field, which is given by the $D_{zz}$ component of the dipolar tensor $D$ (Eq. ), the total internal field is given by $$H_{{\rm int}}=|{\bf H}_F|-D_{zz}/\gamma_N\hbar. \label{eq:six}$$ The identification of the L2 and L3 lines was made in view of Eq. (\[eq:six\]). By considering that an increase in $\theta$ acts to reduce the dipolar correction as far as $\theta < 53^\circ$, it turns out that the line L2 with lower $\omega_N$ should be due to Mn(2), and thus the L3 line due to Mn(3). As for the quadrupole splitting, the dominant term is expressed, for the axially symmetric case, as $ \Delta\nu_q=\frac{1}{2}(3\cos^2\theta-1)\nu_q, $ where $\nu_q$ is the quadrupole-splitting parameter. A larger value of $\theta$ thus yields a smaller splitting as far as $\theta < 53^\circ$. The above identification is then consistent with the observed difference in quadrupole splitting between the L2 and L3 lines. 
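The line assignment above can be illustrated with the axial angular factor $(3\cos^2\theta-1)/2$ that enters both the $D_{zz}$ element of the dipolar tensor and the quadrupole splitting; the sketch below uses the tilt angles quoted in the text:

```python
import math

def axial_factor(theta_deg):
    """(3 cos^2(theta) - 1)/2; decreases with theta up to the magic angle ~53 deg."""
    t = math.radians(theta_deg)
    return 0.5*(3.0*math.cos(t)**2 - 1.0)

f_mn2 = axial_factor(11.7)   # Mn(2), smaller tilt angle
f_mn3 = axial_factor(36.2)   # Mn(3), larger tilt angle

# f_mn2 > f_mn3 > 0: the site with the larger tilt has the smaller
# dipolar correction and the smaller quadrupole splitting, supporting
# the assignment L2 -> Mn(2), L3 -> Mn(3).
```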
The determination of the components of the hyperfine tensors $A$ for the Mn${^{3+}}$ sites Mn(2) and Mn(3) was made in the following way. There are three unknown factors: the reduction factor for the contact term, the value of $\<r^{-3}\>_d$, and the amount of mixing of the two wave functions in the ground state $E_g$, which is represented by the factor $\cos2\phi$ in Eq. (\[eq:five\]). According to Ref. , the reductions of the magnetic moments for the Mn(2) and Mn(3) ions are 8% and 6%, respectively. First, by applying these reduction factors to the value of $\<r^{-3}\>_d$ in Eq.  for each ion, we evaluated $h_d$=$+$15.7 and $+$16.0T, respectively. Secondly, referring to the crystal parameters given in Ref. , we assumed tetragonal and orthorhombic symmetries of the crystalline field for Mn(2) and Mn(3), respectively. That is, we put $\cos2\phi=1$ for Mn(2), while the coefficient $\cos2\phi$ for Mn(3) was left as an unknown factor. Then, using $\theta =11^\circ$ for Mn(2), we obtain the dipolar contribution $D_{zz}/\gamma_N\hbar=+1.6$T. Using the experimental value of 26.5T for $H_{{\rm int}}$ in Eq. , we find $H_F$(Mn(2))=24.8T. This corresponds to 85% of $H_{{\rm int}}$ for the free ion, the reduction factor for the contact term thus being evaluated to be 15%. Next we adopted the same value of the contact field for Mn(3). Then, using $\theta=36^\circ$, the mixing parameter for Mn(3) was estimated to be $\cos2\phi=0.89$. 
From the above considerations, we finally determined the following numerical values of the components of the $^{55}$Mn hyperfine-interaction tensors for each of the three manganese ions, expressed in units of MHz with respect to the ($xyz$) coordinate frame with the $z$-axis along the $c$-axis: \[eq:Atensor\] $$\begin{aligned} A({\rm Mn(1) })&={\rm diag}(153,153,153),\\ A({\rm Mn(2)})&=\begin{pmatrix} 254&0&-24.7\\ 0&176&0\\ -24.7&0&140 \end{pmatrix},\\ A({\rm Mn(3)})&=\begin{pmatrix} 221&0&-53.0\\ 0&181&0\\ -53.0&0&182 \end{pmatrix}.\end{aligned}$$ Experimental results for the relaxation rates\[sec:exp\] ======================================================== The transverse and longitudinal relaxation times $T_2$ and $T_1$ of $^{55}$Mn were measured for the three resonance lines at liquid-helium temperatures with external fields $H_0$ up to 9T applied along the $c$-axis; for the L1 line the measurements were extended down to 200mK in zero field. Figure \[fig:specfield\] shows the field dependence of the resonance frequencies of the central peaks of the three resonance lines, obtained at 1.65K by applying the external field $H_0$ along the $c$-axis ($z$-axis) after zero-field cooling. Within the experimental error, the slopes of these lines coincide with the gyromagnetic ratio $\gamma_N$ of the free manganese nucleus. As explained in Sec. \[subsec:HIT\], the internal field $H_{{\rm int}}$ at the $^{55}$Mn site, which is mainly due to the Fermi-contact term, lies along the spin direction, that is, opposite to the magnetic moment. So when $H_{{\rm int}}\gg H_0$, as in the present case, the resonance conditions for the Mn${^{4+}}$ ion belonging to the [($+$)]{} and [($-$)]{} clusters are given by $\omega_N=\gamma_N(H_{{\rm int}}\pm H_0)$, and these are referred to as the upper and lower branches, respectively. In the case of the Mn$^{3+}$ ions, the correspondence is reversed. 
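The branch structure can be sketched numerically; the $^{55}$Mn gyromagnetic ratio of about 10.55 MHz/T used below is an assumed value, chosen to be consistent with the 230 MHz line at $H_{\rm int}=21.8$T, and it reproduces all three zero-field frequencies:

```python
GAMMA_N = 10.55   # 55Mn gyromagnetic ratio gamma_N/2pi in MHz/T (assumed)

def branch_freqs(H_int, H0):
    """Upper/lower branch frequencies (MHz): nu = gamma_N (H_int +/- H0)."""
    return GAMMA_N*(H_int + H0), GAMMA_N*(H_int - H0)

# Zero-field check against the L1, L2, and L3 lines
nu_L1, _ = branch_freqs(21.8, 0.0)   # ~230 MHz
nu_L2, _ = branch_freqs(26.5, 0.0)   # ~280 MHz
nu_L3, _ = branch_freqs(34.5, 0.0)   # ~364 MHz
```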
At temperatures lower than the blocking temperature $T_B\approx3$K, we can observe the NMR signals and measure $T_2$ and $T_1$ for the [($-$)]{} cluster in addition to those for the [($+$)]{} cluster, as long as the reorientation time of the [($-$)]{} clusters toward the $z$-direction is sufficiently longer than $T_1$. The transverse relaxation time $T_2$ was obtained by measuring the decay of the spin-echo amplitude as a function of the time interval between two $rf$-pulses. The decay was of single-exponential type. The longitudinal relaxation time $T_1$ is in general obtained by measuring the recovery of the nuclear magnetization after saturation of the central line. Under the ideal condition of complete saturation, the magnetization recovery for a nucleus of $I=5/2$ is given by [@AT] $$\begin{aligned} \label{eq:T1recovery} m(t)&=1-\frac{M(t)}{M_0}\nonumber\\ &=a\exp(-t/T_1)+b\exp(-6t/T_1)+c\exp(-15t/T_1)\end{aligned}$$ with the condition $a+b+c=1$. In the present case, however, it was difficult to attain complete saturation of the NMR signal because of the broadness of each quadrupole-split resonance line. The relaxation time $T_1$ was therefore determined by best-fitting the experimental recovery curve to Eq. (\[eq:T1recovery\]). The value of $T_1$ obtained in this way was almost the same as the value determined by fitting the slowest recovery region to a single exponential $\exp(-t/T_1)$. Figure \[fig:T1fitting\] shows a typical example of the best-fit recovery curve of the nuclear magnetization for the Mn${^{4+}}$ ion. Figure \[fig:Tdep\] shows the temperature dependence of the transverse relaxation rate $T_{2}^{-1}$ and the longitudinal relaxation rate $T_{1}^{-1}$ measured in zero field for each central peak of the three resonance lines. 
As is seen, both rates exhibit qualitatively the same remarkable decrease with decreasing temperature above about 1.4K, and the values of $T_{1}^{-1}$ are smaller than those of $T_{2}^{-1}$ by almost two orders of magnitude. Below 1.4K the decrease of $T_{2}^{-1}$ becomes rather moderate, and $T_{2}^{-1}$ is almost constant at around 100s$^{-1}$ below about 0.5K. The values of $T_{2}^{-1}$ for the Mn${^{4+}}$ and Mn${^{3+}}$ ions are almost the same, though the former is somewhat smaller only at temperatures above about 2K. On the other hand, the value of $T_{1}^{-1}$ continues to decrease remarkably down to around 0.5K. Thus, at very low temperatures there appears, between $T_{2}^{-1}$ and $T_{1}^{-1}$, a difference extending over four orders of magnitude. The values of $T_{1}^{-1}$ for the Mn$^{3+}$ ions are about twice as large as those for the Mn$^{4+}$ ion. The field dependence of $T_{2}^{-1}$ was measured for Mn(1), Mn(2), and Mn(3). For the Mn${^{4+}}$ ion it was measured at 1.65K for the [($+$)]{} cluster (upper branch) up to 9T and for the [($-$)]{} cluster (lower branch) up to 1.2T. The data were also taken at 1.45K only for the [($+$)]{} cluster up to 9T. The field dependence of $T_{1}^{-1}$ for the Mn$^{4+}$ ion was measured at 1.65K for the [($+$)]{} cluster (upper branch) up to 5T, and for the [($-$)]{} cluster (lower branch) up to 1.2T. The data were also taken at 1.45K for the [($+$)]{} cluster up to 3T. The experimental results are shown in Figs. \[fig:Fdep\] and  \[fig:branch\]. As is seen, $T_{2}^{-1}$ for the [($+$)]{} cluster (upper branch) decreases monotonically with increasing field down to the field at which $T_{2}^{-1}$ reaches a value of around 150 s$^{-1}$. This value is close to the constant value obtained at very low temperatures in the temperature dependence of $T_{2}^{-1}$. The field dependence at 1.45K is slightly more pronounced than that at 1.65K. The anomalous peak in $T_{2}^{-1}$ around $H_0=6.8$T may be due to cross-relaxation with $^1$H NMR. 
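The $T_1$ extraction described above, fitting the slowest part of the recovery of Eq. (\[eq:T1recovery\]) to a single exponential, can be sketched on synthetic data; the coefficients and the $T_1$ value below are hypothetical:

```python
import numpy as np

def recovery(t, T1, a, b):
    """I = 5/2 magnetization recovery, Eq. (eq:T1recovery), with c = 1 - a - b."""
    c = 1.0 - a - b
    return a*np.exp(-t/T1) + b*np.exp(-6.0*t/T1) + c*np.exp(-15.0*t/T1)

T1_true = 0.010                       # s, hypothetical value
t = np.linspace(0.005, 0.05, 30)      # slowest-recovery region only
m = recovery(t, T1_true, 0.5, 0.3)

# In the tail the 6/T1 and 15/T1 components have died out, so
# ln m(t) is linear in t with slope -1/T1:
slope, _ = np.polyfit(t, np.log(m), 1)
T1_est = -1.0/slope
```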
It should be noted that the field dependence of $T_{1}^{-1}$ is slightly more pronounced than that of $T_{2}^{-1}$. For the [($-$)]{} cluster (lower branch), on the other hand, the values of $T_{2}^{-1}$ increase with increasing field. The change is, however, rather monotonic, as in the case of the [($+$)]{} cluster. No appreciable change was observed at the level-crossing fields around $H_0$=0, 0.45, and 0.9T. Figure \[fig:compFdep\] represents the field dependence of $T_{2}^{-1}$ for the Mn${^{4+}}$ ion (Mn(1)) and the Mn${^{3+}}$ ion (Mn(2)) of the [($+$)]{} cluster obtained at 1.45K. No appreciable difference was found. Analysis\[sec:An\] ================== In this section we analyze the experimental results. The nuclear magnetic relaxation in Mn$_{12}$Ac should be primarily caused by the fluctuating component $\delta {\bf S}(t)$ of the on-site manganese spin via the hyperfine interaction $\delta{\cal H}_N(t)={\bf I}\cdot A\cdot\delta {\bf S}(t)$, where $A$ is the hyperfine interaction tensor given by Eqs. (\[eq:Atensor\]). In view of the fact that the assembly of strongly coupled manganese spins in a cluster forms the cluster spin of $S=10$, we may assume that each manganese spin is subject to the same fluctuation, corresponding to the thermal fluctuation of the cluster spin along the $z$-axis, which is caused by the spin-phonon interaction. Then the effective perturbing hyperfine interaction is expressed as $$\delta{\cal H}_N(t)=(I_xA_{xz}+I_yA_{yz}+I_zA_{zz})\delta S_z(t).$$ Lascialfari [[*et al*]{}]{}. treated the fluctuating local field due to the spin-phonon interaction in analyzing the proton and muon spin-lattice relaxation rates in Mn$_{12}$Ac by considering all of the Zeeman levels with their statistical weights. 
[@LJBCG] Here we simplify the problem by taking into account only the lowest two energy levels of $S=10$ within each well of the double-well potential, that is, the ground state $S_z=m=-10$ and the first excited state $m=-9$ for the (+) cluster, and $m=+10$ and $m=+9$ for the (-) cluster. Such a simplification is reasonable since the present $^{55}$Mn NMR is available only at low temperatures below about 2.5K, where the statistical weights of the higher excited states are quite small. Then the average lifetimes $\tau_0$ and $\tau_1$ of the ground state and the excited state are given as follows within the framework of the lowest two levels: \[eq:ratio\] $$\frac{1}{\tau_0}=\frac{C\Delta^{3}}{\exp (\Delta/T)-1}$$ and $$\frac{1}{\tau_1}=\frac{C\Delta^{3}}{1-\exp (-\Delta/T)},$$ where $C$ is the coupling constant for the spin-phonon interaction, [@VBSR] and $\Delta$ is the energy difference between the ground state and the first excited state, which is given by $$\Delta=19D/k_B \pm g_{\parallel}\mu_BH_0/k_B.$$ Here the signs $\pm$ correspond to the upper and lower branches for the Mn${^{4+}}$ ion, respectively, and vice versa for the Mn${^{3+}}$ ions. By using the values $D/k_B=0.67$K [@BGS] and $g_{\parallel}=1.93$, [@BGS] $\Delta$ is given as $(12.7\pm 1.30\times H_0)$K, with $H_0$ expressed in T. Under the present experimental conditions for the applied field, the attained value of $\Delta$ is at least about 11K. So, for the low temperatures below 2.5K, it turns out from the expressions in Eqs. (\[eq:ratio\]) that $\tau_0$ exhibits a very remarkable temperature dependence, whereas $\tau_1$ remains almost constant, with the relation $\tau_0\gg\tau_1$. Then the effective fluctuation at each of the $^{55}$Mn sites is regarded as step-wise, characterized by random sudden jumps between the ground state and the excited state, as shown schematically in Fig. \[fig:fluc\]. 
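Under these assumptions the two lifetimes can be evaluated directly; the sketch below uses $C=5\times10^3\,$s$^{-1}$K$^{-3}$, the value extracted from $T_2$ later in the Discussion (an assumption at this point):

```python
import math

C = 5.0e3               # spin-phonon coupling constant, s^-1 K^-3 (assumed)
D = 0.67                # K
G_PAR = 1.93
MUB_OVER_KB = 0.6717    # Bohr magneton / Boltzmann constant, K/T

def lifetimes(T, H0=0.0, branch=+1):
    """tau0, tau1 (s) from Eqs. (eq:ratio), with Delta = 19 D +/- g muB H0 in K."""
    delta = 19.0*D + branch*G_PAR*MUB_OVER_KB*H0
    rate = C*delta**3
    tau0 = (math.exp(delta/T) - 1.0)/rate
    tau1 = (1.0 - math.exp(-delta/T))/rate
    return tau0, tau1

tau0, tau1 = lifetimes(1.45)   # zero field
# tau1 ~ 1e-7 s and nearly temperature independent, while tau0 >> tau1
```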
Here $h_{\alpha}$ ($\alpha=z$ or $\perp$) is the average magnitude of the effective fluctuating field along or perpendicular to the $z$-axis. These effective fluctuating fields are related to the components of the hyperfine interaction tensor $A$ by $h_z=-A_{zz}/\gamma_N \hbar$ and $h_{\perp}=-A_{xz}/\gamma_N \hbar$. In the following we derive expressions for the nuclear magnetic relaxation rates on the basis of the above model and compare them with the experimental results. Transverse relaxation rate\[subsec:Analsys T2\] ----------------------------------------------- First we consider the transverse relaxation rate. In our experiment, the relaxation time $T_2$ was determined by measuring the time constant of the decay of the spin-echo amplitude $E(2\tau)$, that is, of the macroscopic transverse nuclear magnetic moment, as a function of the time interval $\tau$ between the two *rf*-pulses. This decay, which corresponds to the phase disturbance of the Larmor precessions caused by the longitudinal fluctuating local field, is represented as [@KA] $$E(2\tau)=E_0\left< \exp \bigl[i\int_0^\tau \delta\omega(t)dt-i\int _\tau^{2\tau} \delta\omega(t)dt\bigr]\right>. \label{eq:AAA}$$ The spin-echo amplitude can be calculated from Eq.  by considering all possible pulse sequences of the fluctuations, the phase deviations, and the statistical weights of the pulse sequences. This problem has been treated by Kohmoto [[*et al*]{}]{}. for the interpretation of the $^{133}$Cs relaxation times $T_2$ and $T_1$ in the $S=1/2$ Ising-like linear-chain antiferromagnet CsCoCl$_3$.[@KGMFFKM] According to the procedure presented in Ref. 
, we obtain, for $\tau_0\gg\tau_1$, the final expression $$E(2\tau)=E_0\exp(-\frac{2\tau}{T_2})$$ with $$\frac{1}{T_2}=\frac{1}{\tau_0}\cdot \frac{(\gamma_Nh_z\tau_1)^2}{1+(\gamma_Nh_z\tau_1)^2}.$$ Thus the relaxation rate depends on the number of fluctuation pulses per second, $\tau_0^{-1}$, and on the average phase change $\gamma_N h_z\tau_1$ acquired during one fluctuation pulse. The above equation yields, for $\gamma_Nh_z\tau_1\ll1$, \[eq:T2\] $$\frac{1}{T_2}=\frac{1}{\tau_0}(\gamma_Nh_z\tau_1)^2\sim \frac{\tau_1^2}{\tau_0}, \label{eq:T2-A}$$ and for $\gamma_Nh_z\tau_1\gg1$ $$\frac{1}{T_2}=\frac{1}{\tau_0}.\label{eq:T2-B}$$ As we see in the following section, Eq.  corresponds to the expression obtained by the standard perturbation method, that is, the weak-collision regime. On the other hand, Eq.  means that the relaxation rate is determined solely by the average number of thermal excitations per second, and does not depend on the magnitude of the fluctuating field. This is the strong-collision regime. As evaluated above, under the present experimental conditions the temperature dependence of $T_{2}^{-1}$ results predominantly from the term $\tau_0$. Accordingly, as far as the temperature dependence is concerned, there is no appreciable difference between the two regimes for $T_{2}^{-1}$. On the other hand, since the term $\Delta^3$ in $\tau_0$ and $\tau_1$ contributes to the field dependence of $T_{2}^{-1}$, the qualitative difference between Eqs.  and   is expected to appear in the field dependence of $T_{2}^{-1}$. Thus it is worthwhile to look at the qualitative field dependence of $T_2$ to find the plausible regime. The field dependence of the relevant terms $\tau_0^{-1}$ and $\tau_1^2 \tau_0^{-1}$ calculated for $T$=1.45K is shown in Fig. \[fig:rates-Hdep\] by the solid and dashed lines, respectively. Clearly the solid line in Fig. 
\[fig:rates-Hdep\], which represents $\tau_0^{-1}$, explains well the corresponding experimental curve given in Fig. \[fig:Fdep\]. This means that the strong-collision regime is valid. The solid line in Fig. \[fig:Tdep\] represents a qualitative fit of the calculated curve of Eq.  for zero field to the experimental temperature dependence of $T_{2}^{-1}$. It should be noted that there appears no definite dependence of $T_{2}^{-1}$ on the site of the manganese nuclei, although there exists a rather large difference in the $A_{zz}$ component of the hyperfine tensor as estimated in the previous section (see Eqs. (\[eq:Atensor\])). This is well understood if we adopt Eq. . The results of similar fits for the field dependence of $T_{2}^{-1}$ are shown in Fig. \[fig:Fdep\] by the solid and dotted lines. The solid line in Fig. \[fig:compFdep\] represents the fitted curve of Eq. . For both the temperature and field dependences of $T_{2}^{-1}$, the agreement is satisfactory. Thus it is concluded that the transverse relaxation is essentially determined by the phase disturbance associated with the average number of appearances of the first excited state per second (strong-collision regime). Longitudinal relaxation rate\[subsec:Analsys T1\] ------------------------------------------------- Now let us turn to the longitudinal relaxation rate. If we pay attention to the experimental fact that $T_2^{-1}\gg T_1^{-1}$, together with the validity of Eq. , it is reasonable to assume that the longitudinal relaxation time $T_1$ is much longer than the characteristic times $\tau_0$ and $\tau_1$. Then the longitudinal relaxation rate $T_{1}^{-1}$ should be obtained, following the conventional perturbation theory, from the spectral component at the $^{55}$Mn Larmor frequency $\omega_N$ of the time correlation function of the step-wise fluctuating field shown in Fig. \[fig:fluc\]. 
The time correlation function for such a fluctuating field is easily calculated to be $$\<\{h_+(t)h_-(0)\}\>=h_{\rm eff}^2\exp(-\frac{t}{\tau_c})\label{eq:hfluc}$$ with $$h_{\rm eff}^2=\frac{\tau_0\tau_1}{(\tau_0+\tau_1)^2}h_{\perp}^2\approx \frac{\tau_1}{\tau_0}h_{\perp}^2\label{eq:heff}$$ and $$\frac{1}{\tau_c}=\frac{1}{\tau_0} +\frac{1}{\tau_1}\approx\frac{1}{\tau_1},\label{eq:tauc}$$ where $\tau_c$ is the correlation time. The approximation used in Eqs. (\[eq:heff\]) and (\[eq:tauc\]) is valid under the condition $\tau_0\gg\tau_1$, which is the relevant case. Taking the Fourier transform of Eq. , we obtain, for $\tau_0\gg\tau_1$, $$\frac{1}{T_1}=\frac{\tau_1}{\tau_0}(\gamma_N h_{\perp})^2 \frac{2\tau_1}{1+(\omega_N\tau_1)^2}\label{eq:T1gen}.$$ This equation yields, for $\omega_N\tau_1\ll1$, \[eq:T1\] $$\frac{1}{T_1}=\frac{2(\gamma_Nh_\perp\tau_1)^2}{\tau_0} \sim \frac{\tau_1^2}{\tau_0},\label{eq:T1-A}$$ and for $\omega_N\tau_1\gg1$ $$\frac{1}{T_1}=\frac{2(\gamma_Nh_\perp)^2}{\tau_0\omega_N^2}\sim \frac{1}{\tau_0\omega_N^2}.\label{eq:T1-B}$$ Here the resonance frequency is given by $\omega_N=\gamma_N(H_{{\rm int}}\pm H_0)$, where the signs $\pm$ correspond to the upper and lower branches for the Mn${^{4+}}$ ion and vice versa for the Mn${^{3+}}$ ions. As in the case of $T_{2}^{-1}$, the temperature dependence of $T_{1}^{-1}$ is almost entirely determined by $\tau_0$, while the field dependence depends not only on $\tau_0$ but also on $\tau_1^2$ and $\omega_N^2$. The field dependence of $T_{1}^{-1}$ is determined by $\tau_1^2\tau_0^{-1}$ in the low-frequency limit $\omega_N\tau_1\ll1$, and by $\tau_0^{-1}\omega_N^{-2}$ in the high-frequency limit $\omega_N\tau_1\gg1$. The dotted line in Fig. \[fig:rates-Hdep\] represents the field dependence of $\tau_0^{-1}\omega_N^{-2}$ calculated for $T=1.45$K. 
It is found that the experimental field dependence obtained for $T=1.45$K fits the dotted line well, but not the dashed line, suggesting the validity of the high-frequency-limit equation rather than the other. The dot-dashed and dotted lines in Fig. \[fig:Tdep\] represent the best fits of the curve $\tau_0^{-1}$ for zero field to the experimental temperature dependence for the L1 line (Mn${^{4+}}$ ion) and the L2 line (Mn${^{3+}}$ ion). The agreement is reasonable down to around 0.7K. The results of similar fits for the field dependence of $T_{1}^{-1}$ are shown in Fig. \[fig:Fdep\] by the solid and dotted lines. The agreement is also satisfactory. Thus it turns out that the longitudinal relaxation is governed by the perturbing effect of the fluctuating field $h_{\perp}(t)$ (weak-collision regime), and that the high-frequency limit for the $\omega_N$ component of the correlation function of $h_{\perp}(t)$ holds. It should be noted that the slight difference in the field dependence between $T_{2}^{-1}$ and $T_{1}^{-1}$ results from the presence of the factor $\omega_N^{-2}$ in the latter, which is approximated as ($\gamma_NH_{{\rm int}})^{-2}(1-2H_0/H_{{\rm int}}$) for $H_{{\rm int}} \gg H_0$. Discussions\[sec:discussions\] ============================== First we examine the above treatment numerically. The use of $T_2$ directly yields the value of $\tau_0$. We then obtain the constant for the spin-phonon interaction, $C\approx 5\times 10^3$s$^{-1}$K$^{-3}$, which lies reasonably within the range of 10$^3 \sim10^5$s$^{-1}$K$^{-3}$ predicted in Ref. . Using this value, we obtain $\tau_1\approx1.1\times10^{-7}$s. Here, if we tentatively assume that the deviation of the cluster spin by $\delta S_z$=1 during $\tau_1$ is shared by the 12 manganese spins, the average deviation of each spin is taken to be $\delta S_z =1/4$. 
Then, for instance, the use of $A_{zz}/\hbar= 154$MHz for the Mn${^{4+}}$ ion yields $\gamma_Nh_z\tau_1\approx$25, so that the condition for the strong-collision regime in the $T_{2}^{-1}$ relaxation process is satisfied. The reason why there appears, above about 2K, a slight difference in the value of $T_{2}^{-1}$ between the Mn${^{4+}}$ and Mn${^{3+}}$ ions might be a crossover from the strong-collision regime to the weak-collision regime. In fact, in the latter the difference in the coupling-constant term, that is, $\gamma_Nh_{z}$ or $A_{zz}/\hbar$, should be reflected in the value of $T_2$. Nevertheless, as far as we confine ourselves to the present simplified treatment, such a crossover is not realized because $\tau_1$ is almost temperature-independent. As for $T_1$, using the evaluated values of $A_{xz}/\hbar =24.7$MHz (Mn(2)) and 53MHz (Mn(3)), we obtain $T_2^{-1}/T_1^{-1} \approx 500$ and $100$, respectively, which agrees reasonably with the experimental result in relative order of magnitude. However, the following points remain to be understood. First, as far as the hyperfine interaction for the Mn${^{4+}}$ ion is taken to be isotropic, without off-diagonal terms, as determined in the present analysis of the NMR spectra, it is difficult to understand why the relaxation rate for the Mn${^{4+}}$ ion is of comparable order to that for the Mn${^{3+}}$ ions. In order to understand the experimental results for $T_1$ for the Mn${^{4+}}$ ion, the assumption of some off-diagonal term in the hyperfine interaction tensor will be necessary. If the anisotropic term is not effective, the possible relaxation mechanism may inevitably be ascribed to an isotropic interaction term like $A I^{+}$$\delta$$S^{-}(t)$, as in the case of the three-magnon relaxation process in ordinary magnetic systems. 
In this case, the relevant activation energy which determines the temperature dependence of $T_1$ should be at least twice the gap energy, so as to guarantee energy conservation between the nuclear-spin and electronic-spin systems. However, in a cluster involving only twelve electronic spins strongly coupled to each other by exchange interactions, such a process may be quite unlikely. In any case, the origin of the effective coupling constant for $T_1$ for the Mn${^{4+}}$ ion is uncertain at present. Next we discuss the present approach for obtaining the relevant equations for $T_2$ and $T_1$. As a starting point we considered only the two lowest levels, the ground state and the first excited state. Then, if we follow the standard stochastic theory, the fluctuating local field responsible for the nuclear magnetic relaxation, which is caused by thermal excitation from the ground state to the excited state, is treated as a perturbation with respect to the nuclear quantization axis. In the cases of zero field and of the external field applied along the $c$-axis, the nuclear quantization axis is taken to be along the internal field at the $^{55}$Mn site, that is, along the $c$-axis. The longitudinal and transverse relaxation rates are given by the Fourier spectrum of the correlation function of such a fluctuating local field $h_{\alpha}(t)$ ($\alpha=z$ or $\perp$) at the resonance frequency $\omega_N$ as follows: $$\frac{1}{T_1}=F_{\perp}(\omega_N) \quad \text{and} \quad \frac{1}{T_2}=\frac{1}{2T_1}+F_z(0)$$ with $$F_{\alpha}(\omega_N)=\frac{\gamma_N^2}{2}\int_{-\infty}^{\infty} \<\{\delta h_{\alpha}(t)\delta h_{\alpha}(0)\}\>\exp(-i\omega_Nt)dt.$$ As shown in Sec. \[sec:An\], the correlation function for the pulse-like field given in Fig. \[fig:fluc\] is calculated to have the exponential form given by Eq. . The longitudinal relaxation rate has already been given by Eq. . On the other hand, we have found experimentally that $T_2^{-1}\gg T_1^{-1}$. 
So $T_2^{-1}$ should be ascribed to the zero-frequency term $F_z(0)$, which is given by $$\frac{1}{T_2}=F_z(0)=\frac{\tau_1^2}{\tau_0}(\gamma_Nh_z)^2.$$ According to this equation, the temperature dependence of $T_{2}^{-1}$ results only from $\tau_0$ as far as $\tau_0\gg\tau_1$, because $\tau_1$ is taken to be almost independent of temperature. However, the value of $T_{2}^{-1}$ should then have a large site dependence through the coupling-constant term $(\gamma_Nh_z)^2$, which is proportional to the hyperfine interaction term $A_{zz}^2$. Furthermore, as already shown in Fig. \[fig:rates-Hdep\], the field dependence of $\tau_1^2\tau_0^{-1}$ differs from that of $\tau_0^{-1}$ alone. Such features contradict our experimental results. Instead, as we have already mentioned, the equation $T_2^{-1}=\tau_0^{-1}$, obtained in the strong-collision regime on the basis of the more fundamental treatment, explains our experimental results for $T_2$ satisfactorily. Finally, with respect to the longitudinal relaxation, we compare our present formalism with the equation adopted by Furukawa [[*et al*]{}]{}. in Ref. . They took into account all of the 21 energy levels ($m=-10\sim+10$) of $S=10$ as candidates for the fluctuating field $\delta h_{\perp}$ responsible for the nuclear magnetic relaxation. It was then assumed that the correlation functions associated with each of these energy levels are of exponential type, with correlation times equal to the corresponding average lifetimes, which are determined by the spin-phonon interaction as discussed in Ref. . The contributions from each level were summed up with their statistical weights. However, in considering the low-temperature results, only the term related to the lowest ground state $m=-10$, with the predominant statistical weight, was retained as the effective one, thus yielding an expression of the form $T_1^{-1}\propto A_{\pm}^2/\tau_0\omega_N^2$ in the high-frequency limit $\tau_0\omega_N\gg1$. 
As is seen, our equation  has the same form as this expression. However, the origins of the individual terms are different. First, in Eq. , the term $\tau_0$ results from the effective amplitude of the fluctuating field $(\tau_1/\tau_0)(\gamma_Nh_{\perp})^2$ in the correlation function. Secondly, the criterion for the high-frequency limit, which brings the term $\omega_N$ into the equation, is taken with respect to $\tau_1\omega_N$ instead of $\tau_0\omega_N$. Thirdly, as for the coupling constant, Eq.  involves the transverse fluctuating field $h_{\perp}$, that is, the off-diagonal term $A_{xz}$, since it is assumed that the anisotropic perturbing interaction of the form $I^{+}\delta S_z(t)$ is responsible for the nuclear spin-lattice relaxation. In Ref. , by contrast, the coupling constants for Mn(1), Mn(2), and Mn(3) were taken to be proportional to the square of the internal static field in zero field, [[*i.e.*]{}]{}, the square of $A_{zz}$ in our notation. Then, in view of the large discrepancy between the experimental values of the ratio of the coupling constants of $T_1^{-1}$ for the three ions and the above estimate, the presence of coupling constants for the non-zero modes was suggested. Unfortunately, the $^{55}$Mn NMR signal is observable only at low temperatures, so it is difficult to locate experimentally the maximum of the relaxation rate that appears in the BPP-type equation under the condition $\tau_1\omega_N=1$. This may be possible for other nuclei with much lower resonance frequencies. However, such a condition is realized at rather high temperatures, where the present two-level model may fail. A further extension of the present treatment, taking into account the higher excited levels, will then be necessary. 
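The two limits invoked above can be checked numerically: the Fourier spectrum of an exponential-type correlation function with effective amplitude $(\tau_1/\tau_0)(\gamma_N h)^2$ and correlation time $\tau_1$ is a Lorentzian, whose zero-frequency value reproduces $T_2^{-1}=(\tau_1^2/\tau_0)(\gamma_N h_z)^2$ and whose high-frequency tail reproduces $T_1^{-1}\propto 1/(\tau_0\omega_N^2)$. The following sketch uses purely illustrative parameter values, not values fitted to the data:

```python
def spectral_density(omega, tau0, tau1, gamma_h):
    """Lorentzian spectrum of an exponential correlation function with
    effective amplitude (tau1/tau0)*(gamma_N h)^2 and correlation time
    tau1 -- a sketch of the two-level model, assuming tau0 >> tau1."""
    amp = (tau1 / tau0) * gamma_h ** 2
    return amp * tau1 / (1.0 + (omega * tau1) ** 2)

tau0, tau1, gamma_h = 1e-3, 1e-8, 1e6    # illustrative values only

# omega -> 0 reproduces 1/T2 = F(0) = (tau1^2/tau0)(gamma_N h_z)^2
f0 = spectral_density(0.0, tau0, tau1, gamma_h)
assert abs(f0 - tau1 ** 2 / tau0 * gamma_h ** 2) < 1e-9 * f0

# tau1*omega >> 1 reproduces the high-frequency tail
# 1/T1 ~ (gamma_N h_perp)^2 / (tau0 * omega_N^2)
omega = 1e12
f_hi = spectral_density(omega, tau0, tau1, gamma_h)
assert abs(f_hi - gamma_h ** 2 / (tau0 * omega ** 2)) < 1e-6 * f_hi
```

The BPP-type maximum discussed above then sits at $\tau_1\omega_N=1$, between these two regimes.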
Conclusion ========== The nuclear magnetic relaxation times $T_2$ and $T_1$ of $^{55}$Mn for the Mn${^{4+}}$ and Mn${^{3+}}$ ions in Mn$_{12}$Ac have been measured on an oriented powder sample at low temperatures, from below 2.5 K down to 200 mK, in fields up to 9 T applied along the $c$-axis. The relaxation rates $T_{2}^{-1}$ and $T_{1}^{-1}$ in zero field exhibited a remarkable decrease with decreasing temperature, with a ratio $T_2^{-1}/T_1^{-1}\approx 200$ at temperatures above about 1.5 K; at lower temperatures the difference was even more pronounced. Both $T_2^{-1}$ and $T_1^{-1}$ decreased with increasing field for the cluster whose magnetic moment is parallel to the $c$-axis and, on the contrary, increased with increasing field for the cluster whose magnetic moment is antiparallel to the $c$-axis. The experimental results were analyzed on the basis of the idea that the nuclear magnetic relaxation is caused by thermal excitations of the cluster spin. We simplified the problem by considering only the excitation from the ground state to the first excited state within each well of the double-well potential, with the average lifetimes determined by the spin-phonon interaction. On the basis of the nonlinear theory, we obtained a general expression for the transverse relaxation rate $T_2^{-1}$ for such a pulse-like fluctuating field. It turned out that $T_{2}^{-1}$ is well understood in terms of the equation $T_2^{-1}=\tau_0^{-1}$, which corresponds to the strong-collision regime. On the other hand, the results for $T_1$ have been well understood, on the basis of the standard perturbation method, by the equation for the high-frequency limit $T_1^{-1}\sim 1/\tau_0 \omega_N^2$, where $\omega_N$ is the $^{55}$Mn Larmor frequency. 
The quantitative comparison between the experiment and the theoretical calculation was made by using the $^{55}$Mn hyperfine interaction tensors, which resulted in reasonable agreement. Acknowledgments {#acknowledgments .unnumbered} =============== The authors wish to thank Professor T. Kohmoto for valuable discussions. Discussions with Professor F. Borsa and Dr. A. Lascialfari are gratefully appreciated. Thanks are also due to Dr. Y. Furukawa for giving us useful information. This work is supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science, and Technology. [99]{} T. Lis, Acta Cryst. **B36**, 2042 (1980). R. Sessoli, D. Gatteschi, A. Caneschi, and M. A. Novak, Nature **365**, (1993). J. Villain, F. Hartmann-Boutron, R. Sessoli, and A. Rettori, Europhys. Lett. **27**, 159 (1994). L. Thomas, F. Lionti, R. Ballou, D. Gatteschi, R. Sessoli, and B. Barbara, Nature **383**, 145 (1996). J. R. Friedman, M. P. Sarachik, J. Tejada, and R. Ziolo, Phys. Rev. Lett. **76**, 3830 (1996). P. Politi, A. Rettori, F. Hartmann-Boutron, and J. Villain, Phys. Rev. Lett. **75**, 537 (1995). C. Sangregorio, T. Ohm, C. Paulsen, R. Sessoli, and D. Gatteschi, Phys. Rev. Lett. **78**, 4645 (1997). W. Wernsdorfer, R. Sessoli, and D. Gatteschi, Science **284**, 133 (1999). I. Chiorescu, W. Wernsdorfer, A. Müller, H. Bogge, and B. Barbara, Phys. Rev. Lett. **84**, 3454 (2000). L. Gunther and B. Barbara, *Quantum Tunneling of Magnetization* (Kluwer, Dordrecht, 1995). E. M. Chudnovsky and J. Tejada, *Macroscopic Quantum Tunneling of Magnetic Moment* (Cambridge University Press, Cambridge, 1997). F. Hartmann-Boutron, P. Politi, and J. Villain, Int. J. Mod. Phys. **B21**, 2577 (1996). I. S. Tupitsyn, N. V. Prokof'ev, and P. C. E. Stamp, Int. J. Mod. Phys. **B11**, 2901 (1997). I. Tupitsyn and B. Barbara, *Quantum tunneling of magnetization in molecular complexes with large spins. Effect of the environment.* , `cond-mat/0002180`. T. Goto, T. Kubo, T. Koshiba, Y. 
Fujii, A. Oyamada, J. Arai, K. Takeda, and K. Awaga, Physica B **284**, 1277 (2000). T. Kubo, T. Goto, T. Koshiba, K. Takeda, and K. Awaga, Phys. Rev. B **65**, 224425 (2002). A. Lascialfari, D. Gatteschi, F. Borsa, A. Shastri, Z. H. Jang, and P. Carretta, Phys. Rev. B **57**, 514 (1998). A. Lascialfari, Z. H. Jang, F. Borsa, P. Carretta, and D. Gatteschi, Phys. Rev. Lett. **81**, 3773 (1998). Y. Furukawa, K. Watanabe, K. Kumagai, F. Borsa, and D. Gatteschi, Phys. Rev. B **64**, 104401 (2001). T. Koshiba, T. Goto, T. Kubo, and K. Awaga, Prog. Theor. Phys. Suppl. **145**, 2002 (2002). R. Sessoli, H. L. Tsai, A. R. Schake, S. Wang, J. B. Vincent, K. Folting, D. Gatteschi, G. Christou, and D. N. Hendrickson, J. Am. Chem. Soc. **115**, 1804 (1993). A. Caneschi, D. Gatteschi, R. Sessoli, A. L. Barra, L.-D. Brunel, and M. Guillot, J. Am. Chem. Soc. **113**, 5873 (1991). N. Regnault, T. Jolicœur, R. Sessoli, D. Gatteschi, and M. Verdaguer, *Exchange couplings in the magnetic molecular cluster* Mn$_{12}$[A]{}c, `cond-mat/0203480`. A. L. Barra, D. Gatteschi, and R. Sessoli, Phys. Rev. B **56**, 8192 (1997). I. Mirebeau, M. Hennion, H. Casalta, H. Andres, H. U. Gudel, A. V. Irodova, and A. Caneschi, Phys. Rev. Lett. **83**, 628 (1999). T. Pohjola and H. Schoeller, Phys. Rev. B **62**, 15026 (2000). A. J. Freeman and R. E. Watson, *Magnetism IIA*, edited by G. T. Rado and H. Suhl (Academic Press, New York, 1965). A. Robinson, P. J. Brown, D. N. Argyriou, D. N. Hendrickson, and M. J. Aubin, J. Phys. B **12**, 2805 (2000). P. W. Andrew and D. P. Tunstall, Proc. Phys. Soc. **78**, 1 (1961). J. Villain, F. Hartmann-Boutron, R. Sessoli, and A. Rettori, Europhys. Lett. **27**, 159 (1994). J. R. Klauder and P. W. Anderson, Phys. Rev. **125**, 912 (1962). T. Kohmoto, T. Goto, S. Maegawa, N. Fujiwara, Y. Fukuda, M. Kunitomo, and M. Mekata, Phys. Rev. B **49**, 6028 (1994). ![\[fig:crystal\] Crystal structure of the molecular cluster Mn$_{12}$Ac. 
The crystalline $c$-axis (the $z$-axis), along which the structure is viewed, is taken to tilt upward from the plane of the page. The deeply and lightly shaded large circles represent the Mn${^{4+}}$ ion and the Mn${^{3+}}$ ions, respectively. The shaded and open circles, and the small closed circles represent carbon, oxygen, and the protons of the water molecules, respectively. The arrows on each of the manganese sites indicate the directions of the magnetic moments, which are parallel or antiparallel to the $c$-axis. (b) Schematic drawing of the exchange interactions among the manganese ions, $J_i$ ($i=1,\cdots, 4$), whose values are given in the text.](fig1a.eps "fig:"){width="8.5cm"} ![\[fig:crystal\] Crystal structure of the molecular cluster Mn$_{12}$Ac. The crystalline $c$-axis (the $z$-axis), along which the structure is viewed, is taken to tilt upward from the plane of the page. The deeply and lightly shaded large circles represent the Mn${^{4+}}$ ion and the Mn${^{3+}}$ ions, respectively. The shaded and open circles, and the small closed circles represent carbon, oxygen, and the protons of the water molecules, respectively. The arrows on each of the manganese sites indicate the directions of the magnetic moments, which are parallel or antiparallel to the $c$-axis. (b) Schematic drawing of the exchange interactions among the manganese ions, $J_i$ ($i=1,\cdots, 4$), whose values are given in the text.](fig1b.eps "fig:"){height="5.5cm"} ![\[fig:T1fitting\] A typical example of the recovery of the $^{55}$Mn nuclear magnetization $M(t)$ measured for Mn(2) at $T=1.55$ K in zero field as a function of the time $t$ between the end of the saturation $rf$-pulse and the beginning of the searching $rf$-pulse, normalized with the equilibrium value $M_0$. The fitting curve shown by the solid line yields $T_1=44.7$ ms, using the best-fit values of the coefficients $a=0.11$, $b=0.24$, and $c=0.51$ in Eq. 
(\[eq:T1recovery\]).](fig4.eps){width="8.5cm"} ![\[fig:Fdep\]Field dependence of $T_{2}^{-1}$ and $T_{1}^{-1}$ for the L1 line (Mn${^{4+}}$ ion). The external field is applied along the $c$-axis. The open and closed squares represent the experimental results obtained at 1.65 K for the upper branch ([($+$)]{}cluster) and the lower branch ([($-$)]{}cluster), respectively. The open circles represent the experimental results for the upper branch obtained at 1.45 K. The solid and dashed lines drawn for the results of $T_2^{-1}$ represent the theoretical equations of $\tau_0^{-1}$ calculated at 1.65 and 1.45 K, respectively. Those for the results of $T_1^{-1}$ represent the best fit of the theoretical equations of $\tau_0^{-1}\omega_N^{-2}$ to the experimental results. ](fig6.eps){width="8.5cm"} ![\[fig:branch\]Field dependence of the relaxation rates $T_2^{-1}$ and $T_1^{-1}$ of $^{55}$Mn in the Mn${^{4+}}$ ion for the [($-$)]{}cluster (lower branch) measured at 1.65 K.](fig7.eps){width="8.5cm"} ![\[fig:compFdep\]Field dependence of $T_{2}^{-1}$ of the $^{55}$Mn in Mn(1) and Mn(2) for the [($+$)]{}cluster obtained at 1.45 K.](fig8.eps){width="8.5cm"} ![\[fig:fluc\]Schematic drawing of the step-wise fluctuating local field associated with the excitation from the ground state to the first excited state, with respective average lifetimes $\tau_0$ and $\tau_1$. $h_\alpha$ represents the fluctuating field longitudinal ($\alpha=z$) or transverse ($\alpha =\perp$) with respect to the nuclear quantization axis, which coincides with the $c(z)$-axis.](fig9.eps){width="8.5cm"} ![\[fig:rates-Hdep\] The qualitative field dependence of the relevant terms in the theoretical equations of the relaxation rates $T_2^{-1}$ and $T_1^{-1}$ calculated for $T=1.45$ K. The solid and dashed lines represent the field dependence of $T_2^{-1}$ corresponding to the strong- and weak-collision regimes, respectively. 
The dotted and dashed lines represent the field dependence of $T_1^{-1}$ corresponding to the high- and low-frequency limits, respectively. These curves are normalized at $H_0=0$. ](fig10.eps){width="8.5cm"}
--- abstract: 'The goal of this article is to classify unramified covers of a fixed tropical base curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ with an action of a finite abelian group $G$ that preserves and acts transitively on the fibers of the cover. We introduce the notion of dilated cohomology groups for a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, which generalize simplicial cohomology groups of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ with coefficients in $G$ by allowing nontrivial stabilizers at vertices and edges. We show that $G$-covers of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ with a given collection of stabilizers are in natural bijection with the elements of the corresponding first dilated cohomology group of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$.' author: - 'Yoav Len, Martin Ulirsch, and Dmitry Zakharov' bibliography: - 'biblio.bib' title: Abelian tropical covers --- Introduction ============ Class field theory is a pillar of algebraic number theory; it is mostly concerned with classifying finite abelian extensions of a fixed local or global field $K$. Similarly, abelian covers of a fixed Riemann surface $X$ can be classified in terms of its first homology group $H_1(X,{\mathbb{Z}})$, or in terms of its Jacobian $J(X)\simeq H_1(X,{\mathbb{R}}/{\mathbb{Z}})$. André Weil, in a letter to his sister from 1940 (see [@Weil_lettertosister]), pointed out the analogy between these two situations, as well as a potential bridge: the theory of abelian extensions of function fields over finite fields, an area that is now known as geometric class field theory (see ). More recently, another analogy has entered the mathematical stage: between a Riemann surface $X$ and a metric graph ${\Gamma}$, or more generally a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. 
Many classical geometric constructions for Riemann surfaces, such as the theory of divisors, linear equivalence, Jacobians, theta functions, and moduli spaces, have natural analogues for tropical curves, as beautifully illustrated in [@MikhalkinZharkov]. The success of this analogy is, of course, not a coincidence. A tropical curve naturally arises as the dual graph $\Gamma_X$ of a semistable degeneration ${\mathcal{X}}$ of an algebraic curve $X$ (with the metric encoding the deformation parameters at the nodes of ${\mathcal{X}}$). Geometric constructions on $\Gamma_X$ then naturally arise as combinatorial specializations of their classical counterparts on $X$. We refer the reader, for example, to [@BakerJensen] for a survey of this story in the case of linear series. In this article, we develop a theory of $G$-covers of a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, where $G$ is a finite abelian group. A $G$-cover of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is an unramified harmonic morphism ${\scalebox{0.8}[1.3]{$\sqsubset$}}'\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ (such morphisms were studied in [@CavalieriMarkwigRanganathan_tropadmissiblecovers] under the name of tropical admissible covers), together with an action of $G$ on ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$ that preserves and acts transitively on the fibers. We show that such covers are classified by two objects. The first is a [*dilation stratification*]{} ${\mathcal{S}}$ of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, indexed by the subgroups of $G$, that encodes the local stabilizer subgroups (see Def. \[def:stratificationtropical\]). The second is an element of a [*dilated cohomology group*]{} $H^1({\scalebox{0.8}[1.3]{$\sqsubset$}},{\mathcal{S}})$ associated to ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ and a dilation stratification ${\mathcal{S}}$ of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ (see Def. \[def:dilatedcohomologytropical\]). 
In the spirit of the above analogies, one may think of this work as the starting point for a tropical version of class field theory. Our principal result is the following (see Thm. \[thm:main5\]): Let ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ be a tropical curve, let $G$ be a finite abelian group, and let ${\mathcal{S}}$ be an admissible dilation stratification of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. Then there is a natural bijection between the set of unramified $G$-covers of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ having dilation stratification ${\mathcal{S}}$ and the dilated cohomology group $H^1({\scalebox{0.8}[1.3]{$\sqsubset$}},{\mathcal{S}})$. The main technical ingredient in the classification of $G$-covers of a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is a theory of dilated cohomology groups of a graph marked by subgroups of $G$. This theory generalizes simplicial cohomology with coefficients in $G$ and satisfies a number of natural properties such as functoriality and pullback, and admits a long exact sequence. It seems natural to generalize dilated cohomology to arbitrary simplicial complexes, but this is beyond the scope of our paper. Since our methods are cohomological, they do not readily generalize to non-abelian groups. In a future paper, we plan to treat the non-abelian case by relating dilated cohomology to Bass–Serre theory [@Serre_trees; @Bass] and developing a Galois theory for non-abelian unramified covers of tropical curves. Earlier and related works {#earlier-and-related-works .unnumbered} ------------------------- A number of authors study graphs and tropical curves with a group action. The simplest example is the case of tropical hyperelliptic curves, which are ${\mathbb{Z}}/2{\mathbb{Z}}$-covers of a tree ([@2009BakerNorine], [@2013Chan], [@Caporaso_gonality], [@ABBRII], [@2016Panizzut], [@2017BologneseBrandtChua], [@2017Len]). 
Brandt and Helminck [@2017BrandtHelminck] consider arbitrary cyclic covers of a tree, while Helminck [@2017Helminck] looks at the tropicalization of arbitrary abelian covers of algebraic curves from a non-Archimedean perspective. Jensen and Len [@2018JensenLen] classify unramified ${\mathbb{Z}}/2{\mathbb{Z}}$-covers of arbitrary tropical curves in terms of dilation cycles, which are a special case of our dilation stratifications; with this article we aim to generalize this aspect of their work. While we do not pursue this direction here, $G$-covers of curves may be used to produce interesting loci of special divisors and linear series. For instance, Jensen and Len [@2018JensenLen] and Len and Ulirsch [@LenUlirsch] develop a theory of tropical Prym varieties associated to ${\mathbb{Z}}/2{\mathbb{Z}}$-covers of tropical curves, with applications to algebraic Prym–Brill–Noether theory. In a similar vein, Song [@Song_Ginvariantlinearsystems] considers $G$-invariant linear systems with the goal of studying their descent properties to the quotient. From a moduli-theoretic perspective, studying degenerations of $G$-covers of algebraic curves is equivalent to studying the compactification of the moduli space of $G$-covers in terms of the moduli space of $G$-admissible covers, as constructed in [@AbramovichCortiVistoli] and [@BertinRomagny]. In [@BertinRomagny Section 7] the authors have already introduced a graph-theoretic gadget to understand the boundary strata of this moduli space: so-called *modular graphs* with an action of a finite (not necessarily abelian) group $G$. This idea seems to have appeared independently in other works as well: Chiodo and Farkas [@ChiodoFarkas] study the boundary of the moduli space of level curves, which is equivalent to a component of the moduli space of $G$-admissible covers for a cyclic group $G$, and look at cyclic covers of an arbitrary graph. 
Their work has been extended to an arbitrary finite group $G$ by Galeotti in [@2019GaleottiB; @2019GaleottiA]. Finally, in [@SchmittvanZelm], Schmitt and van Zelm apply a graph-theoretic approach to the boundary of the moduli space of $G$-admissible covers (for an arbitrary finite group $G$) to study their pushforward classes in the tautological ring of $\overline{\mathcal{M}}_{g,n}$. In [@CavalieriMarkwigRanganathan_tropadmissiblecovers] Cavalieri, Markwig, and Ranganathan develop a moduli-theoretic approach to the tropicalization of the moduli space of admissible covers (without a fixed group operation). We extend this aspect of their article to the moduli space of $G$-admissible covers in Section \[sec:tropicalization\] below. In [@CaporasoMeloPacini], Caporaso, Melo, and Pacini study the tropicalization of the moduli space of spin curves, which, in view of the results in [@2018JensenLen], is closely related to our story in the case $G={\mathbb{Z}}/2{\mathbb{Z}}$. The problem of classifying covers of a graph with an action of a given group (not necessarily abelian) was studied by Corry in [@2011Corry; @2012Corry; @2015Corry]. However, Corry considered a different category of graph morphisms, allowing edge contraction but not dilation. To the best of our knowledge, no author has considered the problem of classifying all unramified covers of a given graph with an action of a fixed group. Analogies in topology and algebraic geometry {#analogies-in-topology-and-algebraic-geometry .unnumbered} -------------------------------------------- It is instructive to recall the theory of abelian covers in two categories, both directly related to tropical geometry: topological covering spaces and algebraic étale covers. ### Topological spaces {#topological-spaces .unnumbered} Let $X$ be a path-connected, locally path-connected and semi-locally simply connected topological space, let $x_0\in X$ be a base point, and let $G$ be a group. 
A [*regular $G$-cover*]{} of $(X,x_0)$ is a based covering space $(Y,y_0)\to (X,x_0)$ together with a $G$-action on $Y$ such that $G$ acts freely and transitively on fibers. Based regular $G$-covers of $(X,x_0)$ are classified by monodromy homomorphisms $\pi_1(X,x_0)\to G$ (the cover is connected if and only if the homomorphism is surjective). If $G$ is a finite abelian group, then we can identify the set of such homomorphisms, canonically and independently of $x_0$, with the cohomology group $H^1(X,G)$. We note that a $G$-cover is rigidified by the $G$-action: for example, if $p$ is a prime number, there is a single connected degree $p$ covering space $S^1\to S^1$, but there are $p-1$ connected ${{\mathbb{Z}}/p{\mathbb{Z}}}$-covers of $S^1$ corresponding to the non-trivial elements of $H^1(S^1,{{\mathbb{Z}}/p{\mathbb{Z}}})\simeq {{\mathbb{Z}}/p{\mathbb{Z}}}$. If $X$ is the underlying topological space of a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, then any regular $G$-cover $X'\to X$ can be given the structure of an unramified $G$-cover ${\scalebox{0.8}[1.3]{$\sqsubset$}}'\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ of tropical curves by pulling back the genus function from ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ to ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$. These $G$-covers, which we call [*topological*]{}, have the property that the $G$-action on the fibers is free (see Ex. \[ex:topologicalcovers\] and Ex. \[example:topologicaladmissiblecovers\]). The corresponding dilation stratification ${\mathcal{S}}$ on ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is trivial, and the dilated cohomology group $H^1({\scalebox{0.8}[1.3]{$\sqsubset$}},{\mathcal{S}})$ reduces to $H^1(X,G)$. ### Algebraic varieties {#algebraic-varieties .unnumbered} Let $X$ be an algebraic variety over a field $k$ and $x_0$ a geometric base point of $X$. Like its topological counterpart, the étale fundamental group $\pi_1^{\'et}(X,x_0)$ of $X$ classifies finite étale covers of $X$. 
For a finite abelian group $G$, the set of continuous homomorphisms $\operatorname{Hom}(\pi_1^{\'et}(X,x_0),G)$ is in natural bijection with the set of Galois coverings of $X$ with Galois group $G$. If $X$ is a smooth projective curve, the abelian coverings of $X$ naturally arise as pullbacks of (always abelian) coverings of its Jacobian $J$ along the Abel–Jacobi map $X\rightarrow J$. In particular, we have an induced isomorphism $\pi_1^{\'et}(X,x_0)^{ab}\simeq\pi_1^{\'et}(J,x_0)$. In Section \[sec:tropicalization\] we will see how our *a priori* purely combinatorial construction can be thought of as a tropical limit of this well-known story. Organization of the paper {#organization-of-the-paper .unnumbered} ------------------------- Our paper is organized as follows. In Sec. \[sec:definitions\], we review the necessary definitions from graph theory and tropical geometry. In Sec. \[sec:dilated\], we introduce $G$-covers, $G$-dilation data, and dilated cohomology groups. We are primarily interested in classifying abelian covers of tropical curves; however, our constructions are purely graph-theoretic in nature and may be of interest to specialists in graph theory and topology. For this reason, we first develop the theory of $G$-covers for unweighted graphs. In Sec. \[sec:classification\] we prove our main classification results, and then extend them to weighted graphs, weighted metric graphs, and tropical curves. Finally, in Sec. \[sec:tropicalization\] we relate our constructions to the tropicalization of the moduli space of admissible $G$-covers. Acknowledgments {#acknowledgments .unnumbered} --------------- The authors would like to thank Matthew Baker, Madeline Brandt, Renzo Cavalieri, Gavril Farkas, Paul Helminck, David Jensen, Andrew Obus, Sam Payne, Matthew Satriano, Johannes Schmitt, and Jason van Zelm for useful discussions. 
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie-Skłodowska-Curie Grant Agreement No. 793039. ![image](flag_yellow_low.jpeg){height="1.7ex"} We also acknowledge support from the LOEWE-Schwerpunkt “Uniformisierte Strukturen in Arithmetik und Geometrie”. Definitions and notation {#sec:definitions} ======================== We develop the theory of $G$-covers of graphs on several levels successively: graphs, weighted graphs, metric graphs, and tropical curves. In this section, we recall the necessary definitions from graph theory. Graphs ------ We first consider unweighted graphs without a metric. A [*graph with legs*]{} ${\Gamma}$, or simply a [*graph*]{}, consists of the following: 1. A finite set $X({\Gamma})$. 2. An idempotent [*root map*]{} $r:X({\Gamma})\to X({\Gamma})$. 3. An involution $\iota:X({\Gamma})\to X({\Gamma})$ whose fixed set contains the image of $r$. The image $V({\Gamma})$ of $r$ is the set of [*vertices*]{} of ${\Gamma}$, and its complement $H({\Gamma})=X({\Gamma})\backslash V({\Gamma})$ is the set of [*half-edges*]{} of ${\Gamma}$. The involution $\iota$ preserves $H({\Gamma})$ and partitions it into orbits of size 1 and 2; we call these respectively the [*legs*]{} and [*edges*]{} of ${\Gamma}$ and denote the corresponding sets by $L({\Gamma})$ and $E({\Gamma})$. The root map assigns one root vertex to each leg and two root vertices to each edge. A [*loop*]{} is an edge whose root vertices coincide. We note that, from a graph-theoretic point of view, there is essentially no difference between a leg and an extremal edge. This distinction is important, however, from a tropical viewpoint: legs are the tropicalizations of marked points, while an extremal edge represents a rational tail. Note that, unlike an extremal edge, a leg does not have a vertex at its free end. 
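The half-edge definition above translates directly into a small data structure. The following Python sketch is our own illustration (all names are hypothetical, not from the text): it stores the triple $(X({\Gamma}), r, \iota)$ and derives vertices, legs, edges, and valencies from it.

```python
class Graph:
    """A graph with legs: a finite set X, an idempotent root map r,
    and an involution iota whose fixed set contains the image of r."""

    def __init__(self, X, r, iota):
        assert all(r[r[x]] == r[x] for x in X), "r must be idempotent"
        assert all(iota[iota[x]] == x for x in X), "iota must be an involution"
        assert all(iota[v] == v for v in r.values()), "iota must fix im(r)"
        self.X, self.r, self.iota = set(X), dict(r), dict(iota)

    def vertices(self):                      # V(Gamma) = image of r
        return set(self.r.values())

    def half_edges(self):                    # H(Gamma) = X \ V(Gamma)
        return self.X - self.vertices()

    def legs(self):                          # iota-orbits of size 1
        return {h for h in self.half_edges() if self.iota[h] == h}

    def edges(self):                         # iota-orbits of size 2
        return {frozenset({h, self.iota[h]})
                for h in self.half_edges() if self.iota[h] != h}

    def valency(self, v):                    # val(v) = #(T_v Gamma)
        return sum(1 for h in self.half_edges() if self.r[h] == v)

# one vertex carrying a loop {h1, h2} and a leg l
X = {"v", "h1", "h2", "l"}
G = Graph(X, {x: "v" for x in X},
          {"v": "v", "h1": "h2", "h2": "h1", "l": "l"})
assert G.vertices() == {"v"} and G.legs() == {"l"}
assert G.edges() == {frozenset({"h1", "h2"})} and G.valency("v") == 3
```

Note that the example realizes the loop-with-nontrivial-automorphism discussed above: exchanging h1 and h2 fixes the vertex and the edge but not the half-edges.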
The [*tangent space*]{} $T_v {\Gamma}$ and [*valency*]{} $\operatorname{val}(v)$ of a vertex $v\in V({\Gamma})$ are defined by $$T_v{\Gamma}=\big\{h\in H({\Gamma})\big|r(h)=v\big\} \textrm{ and } \operatorname{val}(v)=\#(T_v{\Gamma}).$$ Let ${\Gamma}$ be a graph. A [*subgraph*]{} ${\Delta}$ of ${\Gamma}$ is a subset of $X({\Gamma})$ closed under the root and involution maps. Given a subgraph ${\Delta}\subset {\Gamma}$ and a vertex $v\in V({\Delta})$, we denote $\operatorname{val}_{{\Delta}}(v)$ the valency of $v$ viewed as a vertex of ${\Delta}$. A subgraph ${\Delta}\subset {\Gamma}$ is called a [*cycle*]{} if $\operatorname{val}_{{\Delta}}(v)$ is even for every $v\in V({\Delta})$. A subgraph ${\Delta}\subset {\Gamma}$ is called [*edge-maximal*]{} if every edge $e\in E({\Gamma})$ having both root vertices in ${\Gamma}$ lies in ${\Delta}$. It is clear that a subgraph of ${\Gamma}$ is edge-maximal if and only if it is the largest subgraph of ${\Gamma}$ with a given set of vertices. Let ${\Gamma}$ be a graph. An [*orientation*]{} on ${\Gamma}$ is a choice of order $(h,h')$ on each edge $e=\{h,h'\}\in E({\Gamma})$. We call $s(e)=r(h)$ and $t(e)=r(h')$ the [*source*]{} and [*target*]{} vertices of $e$. A [*finite morphism*]{} of graphs ${\varphi}:{\Gamma}'\to {\Gamma}$, or simply a [*morphism*]{}, is a map of sets ${\varphi}:X({\Gamma}')\to X({\Gamma})$ which commutes with the root and involution maps, such that edges map to edges and legs map to legs. An [*automorphism*]{} ${\varphi}:{\Gamma}\to {\Gamma}$ of a graph ${\Gamma}$ is a morphism with an inverse. We denote the group of automorphisms of ${\Gamma}$ by $\operatorname{Aut}({\Gamma})$. We remark that a nontrivial graph automorphism may act trivially on the vertex and edge sets. For example, the graph ${\Gamma}$ consisting of one vertex $v$ and one loop $e=\{h,h'\}$ has a nontrivial automorphism fixing $v$ and exchanging $h$ and $h'$. 
To form quotients of graphs by group actions, we need to exclude such automorphisms from consideration. Let ${\Gamma}$ be a graph, and let $G$ be a group. A [*$G$-action*]{} on ${\Gamma}$ is a homomorphism of $G$ to the automorphism group $\operatorname{Aut}({\Gamma})$ such that for every $g\in G$, the corresponding automorphism does not flip edges. In other words, for every edge $e=\{h,h'\}\in E({\Gamma})$ either ${\varphi}(e)\neq e$, or ${\varphi}(h)=h$ and ${\varphi}(h')=h'$. Given a $G$-action on ${\Gamma}$, we define the [*quotient graph*]{} ${\Gamma}/G$ by setting $X({\Gamma}/G)=X({\Gamma})/G$. The root and involution maps on ${\Gamma}$ are $G$-invariant and descend to ${\Gamma}/G$. It is clear that $V({\Gamma}/G)=V({\Gamma})/G$ and $H({\Gamma}/G)=H({\Gamma})/G$, and the no-flipping assumption implies that the $G$-action does not identify the two half-edges of any edge of ${\Gamma}$. Therefore $E({\Gamma}/G)=E({\Gamma})/G$ and $L({\Gamma}/G)=L({\Gamma})/G$, and the quotient map $\pi:{\Gamma}\to {\Gamma}/G$ is a finite morphism. \[def:Gquotient\] Weighted graphs, harmonic morphisms, and ramification ----------------------------------------------------- We now consider graphs with vertex weights. Heuristically, one may think of a vertex of weight $g$ as an infinitesimally small graph with $g$ loops (cf. [@AC Section 5]). A [*weighted graph*]{} $({\Gamma},g)$ is a pair consisting of a graph ${\Gamma}$ and a vertex weight function $g:V({\Gamma})\to {\mathbb{Z}}_{\geq 0}$. We will usually suppress $g$ and denote weighted graphs by ${\Gamma}$. 
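The quotient construction of Def. \[def:Gquotient\] can be made concrete: since the root and involution maps are $G$-invariant, they descend to orbits. A minimal Python sketch of our own (the ${\mathbb{Z}}/2{\mathbb{Z}}$-action below is an illustrative example, not from the text):

```python
def quotient_graph(X, r, iota, orbit):
    """Quotient Gamma/G in the half-edge formalism: X(Gamma/G) = X/G,
    with r and iota descending to orbits.  The action is passed as
    orbit: x -> frozenset of the G-orbit of x, and is assumed not to
    flip any edge, so the descended maps are well defined."""
    Xq = {orbit(x) for x in X}
    rq = {orbit(x): orbit(r[x]) for x in X}
    iotaq = {orbit(x): orbit(iota[x]) for x in X}
    return Xq, rq, iotaq

# Z/2 acting on a 2-cycle (vertices v0, v1 joined by edges a and b):
# the generator swaps the two vertices and exchanges the two edges.
X = {"v0", "v1", "a0", "a1", "b0", "b1"}
r = {"v0": "v0", "v1": "v1", "a0": "v0", "a1": "v1", "b0": "v0", "b1": "v1"}
iota = {"v0": "v0", "v1": "v1", "a0": "a1", "a1": "a0", "b0": "b1", "b1": "b0"}
swap = {"v0": "v1", "v1": "v0", "a0": "b1", "b1": "a0", "a1": "b0", "b0": "a1"}

Xq, rq, iotaq = quotient_graph(X, r, iota, lambda x: frozenset({x, swap[x]}))

# the quotient is a single vertex with a loop: the 2-cycle is a free
# Z/2-cover of the loop
assert len({rq[o] for o in Xq}) == 1       # one vertex orbit
assert len(Xq) == 3                        # one vertex + two half-edges
```

Because the generator maps edge $a$ to edge $b$ (rather than exchanging the two half-edges of a single edge), the no-flipping condition of the definition is satisfied.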
We define the [*Euler characteristic*]{} $\chi(v)$ of a vertex $v\in V({\Gamma})$ of a weighted graph ${\Gamma}$ as $$\chi(v)=2-2g(v)-\operatorname{val}(v).$$ The [*genus*]{} of a connected graph ${\Gamma}$ is defined to be $$g({\Gamma})=\#(E({\Gamma}))-\#(V({\Gamma}))+1+\sum_{v\in V({\Gamma})}g(v).$$ We define the [*Euler characteristic*]{} $\chi({\Gamma})$ of a graph ${\Gamma}$ by $$\chi({\Gamma})=\sum_{v\in V({\Gamma})} \chi(v);$$ this is not to be confused with the topological Euler characteristic of ${\Gamma}$. An easy calculation shows that, if ${\Gamma}$ is connected, then $$\chi({\Gamma})=2-2g({\Gamma})-\#(L({\Gamma})).$$ A subgraph ${\Delta}\subset {\Gamma}$ of a weighted graph ${\Gamma}$ is naturally given the structure of a weighted graph by restricting the weight function $g$. In this case, we denote by $\chi_{{\Delta}}(v)=2-2g(v)-\operatorname{val}_{{\Delta}}(v)$ the Euler characteristic of a vertex $v$ of ${\Delta}$. We say that a vertex $v\in {\Gamma}$ of a weighted graph ${\Gamma}$ is [*unstable*]{} if $\chi(v)\geq 1$, [*semistable*]{} if $\chi(v)\leq 0$, and [*stable*]{} if $\chi(v)\leq -1$. An unstable vertex has genus zero and is either isolated or extremal. A semistable vertex that is not stable is either an isolated vertex of genus one or a valency two vertex of genus zero, in which case we call it [*simple*]{}. We say that a graph ${\Gamma}$ is [*semistable*]{} if all of its vertices are semistable, and [*stable*]{} if all of its vertices are stable. Let ${\Gamma}$ be a connected weighted graph with $\chi({\Gamma})<0$. Following [@ACP Section 8.2], we construct a stable graph ${\Gamma}_{st}$, called the [*stabilization*]{} of ${\Gamma}$, as follows. First, we construct the [*semistabilization*]{} ${\Gamma}_{sst}$ of ${\Gamma}$ by inductively removing all extremal edges ending at an extremal vertex of genus zero (but not the legs). 
The graph ${\Gamma}_{sst}$ is a semistable subgraph of ${\Gamma}$, and it is clear that $\chi({\Gamma}_{sst})=\chi({\Gamma})$, and that any vertices of ${\Gamma}_{sst}$ that are not stable are simple. We then construct ${\Gamma}_{st}$ by gluing together the two half-edges at each simple vertex $v$ of ${\Gamma}_{sst}$. Specifically, if $v$ is an endpoint of two edges $e_1$ and $e_2$, we replace $v$, $e_1$, and $e_2$ with a new edge connecting the other endpoints of $e_1$ and $e_2$. If $v$ is an endpoint of an edge $e$ and a leg $l$, we replace $v$, $e$, and $l$ with a new leg rooted at the other endpoint of $e$. The result is a stable graph ${\Gamma}_{st}$ with $\chi({\Gamma}_{st})=\chi({\Gamma}_{sst})=\chi({\Gamma})$. Let ${\Gamma}$ and ${\Gamma}'$ be graphs. A [*finite harmonic morphism*]{} ${\varphi}:{\Gamma}'\to {\Gamma}$, or simply a [*harmonic morphism*]{}, consists of a finite morphism ${\Gamma}'\to {\Gamma}$ and a map $d_{{\varphi}}:X({\Gamma}')\to {\mathbb{Z}}_{>0}$, called the [*degree*]{} of ${\varphi}$, such that the following properties are satisfied: 1. If $e'=\{h'_1,h'_2\}\in E({\Gamma}')$ is an edge then $d_{{\varphi}}(h'_1)=d_{{\varphi}}(h'_2)$. We call this number the [*degree*]{} of ${\varphi}$ along $e'$ and denote it $d_{{\varphi}}(e')$. 2. For every vertex $v'\in V({\Gamma}')$ and every tangent direction $h\in T_{{\varphi}(v')}{\Gamma}$, we have $$d_{{\varphi}}(v')=\sum_{\substack{h'\in T_{v'}{\Gamma}', \\ {\varphi}(h')=h}}d_{{\varphi}}(h').$$ In particular, this sum does not depend on the choice of $h$. Let ${\varphi}:{\Gamma}'\to {\Gamma}$ be a harmonic morphism of graphs, where ${\Gamma}$ is connected.
The sum $$\deg({\varphi})=\sum_{\substack{v'\in V({\Gamma}'),\\ {\varphi}(v')=v}}d_{{\varphi}}(v')=\sum_{\substack{e'\in E({\Gamma}'), \\ {\varphi}(e')=e}}d_{{\varphi}}(e')= \sum_{\substack{l'\in L({\Gamma}'),\\ {\varphi}(l')=l}}d_{{\varphi}}(l')$$ does not depend on the choice of $v\in V({\Gamma})$, $e\in E({\Gamma})$ or $l\in L({\Gamma})$ and is called the [*degree*]{} of ${\varphi}$ (see Section 2 of [@ABBRI]). Let ${\varphi}:{\Gamma}'\to {\Gamma}$ be a harmonic morphism of weighted graphs. The [*ramification degree*]{} $\operatorname{Ram}_{{\varphi}}(v')$ of ${\varphi}$ at a vertex $v'\in V({\Gamma}')$ is equal to $$\operatorname{Ram}_{{\varphi}}(v')=d_{{\varphi}}(v')\chi({\varphi}(v'))-\chi(v').$$ We say that ${\varphi}$ is [*effective*]{} if $$\operatorname{Ram}_{{\varphi}}(v')\geq 0$$ for all $v'\in V({\Gamma}')$, and [*unramified*]{} if $$\operatorname{Ram}_{{\varphi}}(v')=0 \label{eq:localRH}$$ for all $v'\in V({\Gamma}')$. Unramified morphisms were studied extensively in [@CavalieriMarkwigRanganathan_tropadmissiblecovers], where they were called [*tropical admissible covers*]{}. We partly preserve this terminology: for example, we call a dilation stratification admissible if it corresponds to an unramified cover. A simple calculation shows that our definition of ramification degree agrees with the standard one in the literature (see, for example, Sec. 2.2 in [@ABBRII] or Def. 16 in [@CavalieriMarkwigRanganathan_tropadmissiblecovers]): $$\operatorname{Ram}_{{\varphi}}(v')=d_{{\varphi}}(v')\left(2-2g({\varphi}(v'))\right)-\left(2-2g(v')\right)-\sum_{h'\in T_{v'}{\Gamma}'} \left(d_{{\varphi}}(h')-1\right).$$ For an unramified harmonic morphism ${\varphi}:{\Gamma}'\to {\Gamma}$, we call equation~\eqref{eq:localRH} the [*local Riemann–Hurwitz condition*]{} at $v'\in V({\Gamma}')$. Adding together these conditions at all $v'\in V({\Gamma}')$, we obtain the [*global Riemann–Hurwitz condition*]{} $$\chi({\Gamma}')=\deg({\varphi})\chi({\Gamma}).
\label{eq:globalRH}$$ Let ${\varphi}:{\Gamma}'\to {\Gamma}$ be an unramified harmonic morphism, and suppose that $d_{{\varphi}}(v')=1$ for some $v'$. By the harmonicity condition, each $h\in T_{{\varphi}(v')}{\Gamma}$ has a unique preimage in $T_{v'}{\Gamma}'$, hence $\operatorname{val}(v')=\operatorname{val}({\varphi}(v'))$. Furthermore, we have $\chi({\varphi}(v'))=\chi(v')$, which implies that $g(v')=g({\varphi}(v'))$. It follows that ${\varphi}$ is a local isomorphism of weighted graphs in a neighborhood of $v'$. In particular, an unramified harmonic morphism of degree one is a graph isomorphism, and vice versa. We observe that if ${\varphi}:{\Gamma}'\to {\Gamma}$ is an effective harmonic morphism and ${\Delta}\subset {\Gamma}$ is a subgraph with preimage ${\Delta}'={\varphi}^{-1}({\Delta})$, then the induced map ${\varphi}|_{{\Delta}'}:{\Delta}'\to {\Delta}$ is also an effective harmonic morphism, since the ramification degree does not decrease when a half-edge and its preimages are removed. However, if ${\varphi}$ is unramified, then ${\varphi}|_{{\Delta}'}$ is not necessarily unramified. We now show that unramified morphisms naturally restrict to stabilizations. Let ${\varphi}:{\Gamma}'\to {\Gamma}$ be an unramified harmonic morphism of connected graphs, and assume that $\chi({\Gamma})<0$ (or, equivalently by~\eqref{eq:globalRH}, that $\chi({\Gamma}')<0$). For any two vertices $v'\in V({\Gamma}')$ and $v={\varphi}(v')\in V({\Gamma})$, Eq.~\eqref{eq:localRH} implies that $v'$ is unstable if and only if $v$ is unstable, in which case $\chi(v')=\chi(v)=1$ and $d_{{\varphi}}(v')=1$. Let $v\in V({\Gamma})$ be an extremal vertex of genus 0, let $e\in E({\Gamma})$ be the unique edge rooted at $v$, and let $u\in V({\Gamma})$ be the other root vertex of $e$. By the above, we see that $v\in V({\Gamma})$ has $\deg({\varphi})$ preimages $v'_i$ in ${\Gamma}'$, each of which is a root vertex of a unique extremal edge $e'_i$ mapping to $e$ with local degree 1.
For any $u'\in {\varphi}^{-1}(u)$, $d_{{\varphi}}(u')$ of the edges $e'_i$ are rooted at $u'$. Therefore, removing $v'_i$, $e'_i$, $v$, and $e$ increases $\chi(u)$ by 1 and increases each $\chi(u')$ by $d_{{\varphi}}(u')$, hence does not change the local Riemann–Hurwitz condition at $u'$. Proceeding in this way, we remove all unstable vertices of ${\Gamma}'$ and ${\Gamma}$ and obtain an unramified harmonic morphism ${\varphi}_{sst}:{\Gamma}'_{sst}\to {\Gamma}_{sst}$. Similarly, we see that for two vertices $v'\in V({\Gamma}'_{sst})$ and $v={\varphi}(v')\in V({\Gamma}_{sst})$, Eq.~\eqref{eq:localRH} implies that one is simple if and only if the other is. Furthermore, by the harmonicity condition, the degrees of ${\varphi}$ at the two half-edges at $v'$ are equal, hence we can remove $v'$ and $v$, glue together the free half-edges, and extend ${\varphi}$; this does not change the local Riemann–Hurwitz condition at any remaining vertex of ${\Gamma}'_{sst}$. Proceeding in this way, we obtain an unramified morphism ${\varphi}_{st}:{\Gamma}'_{st}\to {\Gamma}_{st}$. Let ${\varphi}:{\Gamma}'\to {\Gamma}$ be an unramified harmonic morphism of connected weighted graphs, such that $\chi({\Gamma})<0$ (or, equivalently, $\chi({\Gamma}')<0$). The unramified morphism ${\varphi}_{st}:{\Gamma}'_{st}\to {\Gamma}_{st}$ constructed above is called the [*stabilization*]{} of ${\varphi}$. \[def:coverstabilization\] Finally, we define the contraction of a graph along a subset of its edges; this can be viewed as a non-finite harmonic morphism of degree one. Let ${\Gamma}$ be a weighted graph, and let $S\subset E({\Gamma})$ be a set of edges of ${\Gamma}$. We define the [*weighted edge contraction*]{} ${\Gamma}/S$ of ${\Gamma}$ along $S$ as follows. Let ${\Delta}$ be the minimal subgraph of ${\Gamma}$ whose edge set contains $S$, and let ${\Delta}_1,\ldots,{\Delta}_k$ be the connected components of ${\Delta}$.
We obtain ${\Gamma}/S$ from ${\Gamma}$ by contracting each ${\Delta}_i$ to a vertex $v_i$ of genus $g({\Delta}_i)$. \[def:edgecontraction\] Given a harmonic morphism ${\varphi}:{\Gamma}'\to {\Gamma}$ of weighted graphs, we can contract a subset of edges $S\subset E({\Gamma})$ of ${\Gamma}$, and their preimages in ${\Gamma}'$. Connected components of graphs map to connected components, and degree is constant when restricted to a connected component, so there is a natural harmonic morphism ${\varphi}_{S}:{\Gamma}'/{\varphi}^{-1}(S)\to {\Gamma}/S$. A simple calculation shows that if ${\varphi}$ is unramified, then so is ${\varphi}_S$: Let ${\varphi}:{\Gamma}'\to {\Gamma}$ be an unramified harmonic morphism of weighted graphs, let $S\subset E({\Gamma})$ be a subset of the edges of ${\Gamma}$, and let ${\Gamma}'/{\varphi}^{-1}(S)$ and ${\Gamma}/S$ be the weighted edge contractions. Then ${\varphi}_{S}:{\Gamma}'/{\varphi}^{-1}(S)\to {\Gamma}/S$ is unramified. \[prop:edgecontraction\] Metric graphs and tropical curves --------------------------------- Finally, we consider weighted graphs with a metric, as well as tropical curves. A [*weighted metric graph*]{} consists of a weighted graph $({\Gamma},g)$ and a function $\ell:E({\Gamma})\to {\mathbb{R}}_{>0}$. A [*finite harmonic morphism*]{} of weighted metric graphs ${\varphi}:({\Gamma}',\ell')\to ({\Gamma},\ell)$, or simply a [*harmonic morphism*]{}, is a finite harmonic morphism ${\varphi}:{\Gamma}'\to {\Gamma}$ of the underlying weighted graphs such that for every edge $e'\in E({\Gamma}')$ we have $$\ell({\varphi}(e'))=d_{{\varphi}}(e')\ell'(e'). \label{eq:length}$$ In other words, ${\varphi}$ dilates each edge $e'\in E({\Gamma}')$ by a factor of $d_{{\varphi}}(e')$. A harmonic morphism ${\varphi}:{\Gamma}'\to {\Gamma}$ of weighted metric graphs is called [*effective*]{} or [*unramified*]{} if it is so as a map of weighted graphs.
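As a small illustration of the dilation condition~\eqref{eq:length}, the following sketch pulls a length function back along hypothetical edge degrees (the edge names and degrees are our own choices):

```python
# Sketch: given lengths on Gamma and edge degrees of a finite harmonic
# morphism, the condition l(phi(e')) = d(e') * l'(e') determines the
# lengths on Gamma' uniquely.
base_lengths = {'e': 6.0}
cover_edges = {'e1': ('e', 2), 'e2': ('e', 3)}      # e' -> (phi(e'), degree)

cover_lengths = {ep: base_lengths[e] / d
                 for ep, (e, d) in cover_edges.items()}
assert cover_lengths == {'e1': 3.0, 'e2': 2.0}

# Pushing forward recovers the base lengths:
assert all(d * cover_lengths[ep] == base_lengths[e]
           for ep, (e, d) in cover_edges.items())
```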
Given a finite harmonic morphism ${\varphi}:{\Gamma}'\to {\Gamma}$ of weighted graphs and a length function $\ell$ on ${\Gamma}$, there is a unique length function $\ell'$ on ${\Gamma}'$ satisfying the dilation condition~\eqref{eq:length}. Similarly, a length function on ${\Gamma}'$ uniquely induces a length function on ${\Gamma}$. It follows that the classification of unramified covers of weighted metric graphs, in particular abelian covers, is independent of the choice of metric. For this reason, in this paper we mostly work with graphs and weighted graphs without metrics. \[rem:lengths\] Given a connected weighted metric graph ${\Gamma}$ with $\chi({\Gamma})<0$, we give ${\Gamma}_{st}$ the structure of a weighted metric graph in the obvious way, by setting $\ell(e)=\ell(e_1)+\ell(e_2)$ whenever we replace two edges $e_1$ and $e_2$ with a new edge $e$. It is clear that an unramified morphism of weighted metric graphs ${\varphi}:{\Gamma}'\to {\Gamma}$ induces an unramified morphism ${\varphi}_{st}:{\Gamma}'_{st}\to {\Gamma}_{st}$. Let $({\Gamma},\ell)$ be a weighted metric graph. We define a metric space $|{\Gamma}|$, called the [*metric realization*]{} of $({\Gamma},\ell)$, as follows. Consider a closed interval $I_e\subset {\mathbb{R}}$ of length $\ell(e)$ for each edge $e\in E({\Gamma})$, and a half-open interval $I_l=[0,\infty)$ for each leg $l\in L({\Gamma})$. We obtain $|{\Gamma}|$ from the $I_e$ and the $I_l$ by treating their endpoints as the root vertices and gluing accordingly. We then give $|{\Gamma}|$ the path metric. A harmonic morphism ${\varphi}:({\Gamma}',\ell')\to ({\Gamma},\ell)$ of weighted metric graphs naturally induces a continuous map $|{\varphi}|:|{\Gamma}'|\to |{\Gamma}|$ where, for a pair of edges $e={\varphi}(e')$, the map is given by dilation by a factor of $d_{{\varphi}}(e')$, and similarly for a pair of legs $l={\varphi}(l')$. The map is piecewise-linear with integer slope with respect to the metric structure.
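Before passing to tropical curves, the agreement of the two expressions for the ramification degree $\operatorname{Ram}_{{\varphi}}(v')$ can be checked on a single vertex; the degree and valency data below are hypothetical, chosen so that the half-edge degrees at $v'$ satisfy the harmonicity condition:

```python
# Sketch: the two formulas for the ramification degree agree, provided the
# half-edge degrees at v' sum to d(v') times the valency of phi(v').

def chi(g, val):                       # Euler characteristic of a vertex
    return 2 - 2 * g - val

def ram_via_chi(d, g_im, val_im, g_src, val_src):
    # Ram(v') = d(v') * chi(phi(v')) - chi(v')
    return d * chi(g_im, val_im) - chi(g_src, val_src)

def ram_classical(d, g_im, g_src, half_degrees):
    # Ram(v') = d(2 - 2g(phi(v'))) - (2 - 2g(v')) - sum (d(h') - 1)
    return d * (2 - 2 * g_im) - (2 - 2 * g_src) \
        - sum(dh - 1 for dh in half_degrees)

# Hypothetical vertex: d(v') = 2 over a genus-0 vertex of valency 2;
# v' has genus 0 and four half-edges, each of local degree 1.
half_degrees = [1, 1, 1, 1]
assert sum(half_degrees) == 2 * 2          # harmonicity
r1 = ram_via_chi(2, 0, 2, 0, len(half_degrees))
r2 = ram_classical(2, 0, 0, half_degrees)
assert r1 == r2 == 2                       # this vertex is ramified
```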
A basic inconvenience of tropical geometry is that different weighted metric graphs may have the same metric realizations. This motivates the following definition. A [*tropical curve*]{} $({\scalebox{0.8}[1.3]{$\sqsubset$}},g)$ is a pair consisting of a metric space ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ and a weight function $g:{\scalebox{0.8}[1.3]{$\sqsubset$}}\to {\mathbb{Z}}_{\geq 0}$ such that there exists a weighted metric graph $({\Gamma},g,\ell)$ and an isometry $m:|{\Gamma}|\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ of its metric realization with ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ such that the weight functions agree: $$g(x)=\left\{\begin{array}{cc}g(v)& \textrm{ if }x=m(v) \textrm{ for a } v\in V({\Gamma}), \\ 0 & \mbox{otherwise}.\end{array}\right.$$ We call a quadruple $({\Gamma},g,\ell,m)$ satisfying these properties a [*model*]{} for ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. The [*genus*]{} of a connected tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is given by $$g({\scalebox{0.8}[1.3]{$\sqsubset$}})=b_1({\scalebox{0.8}[1.3]{$\sqsubset$}})+\sum_{x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}} g(x)$$ and is equal to the genus of any model of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. For a point $x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}$ on a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ with model $({\Gamma},g,\ell,m)$, we define its [*valency*]{} $\operatorname{val}(x)$ to be $\operatorname{val}(v)$ if $x=m(v)$ for some $v\in V({\Gamma})$ and $2$ otherwise. We similarly define the Euler characteristic as $\chi(x)=2-2g(x)-\operatorname{val}(x)$; these numbers do not depend on the choice of model. We define the Euler characteristic of a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ to be $\chi({\scalebox{0.8}[1.3]{$\sqsubset$}})=\chi({\Gamma})$ for any model ${\Gamma}$. It is clear that $$\chi({\scalebox{0.8}[1.3]{$\sqsubset$}})=\sum_{x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}}\chi(x),$$ where $\chi(x)=0$ for all but finitely many $x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}$.
Our definition differs from Def. 2.14 in [@ABBRII], where a tropical curve was defined as an equivalence class of weighted metric graphs up to tropical modifications. Given a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ with model ${\Gamma}$, we can form another model ${\Gamma}'$ by splitting any edge or leg of ${\Gamma}$ at a new vertex. Conversely, any tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ (other than ${\mathbb{R}}$ and $S^1$) has a unique [*minimal model*]{} ${\Gamma}_{min}$ having no simple vertices. We say that a connected tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is [*stable*]{} if $\chi(x)\leq 0$ for all $x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}$, or, equivalently, if its minimal model is a stable graph. We define the [*stabilization*]{} of a connected tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ with $\chi({\scalebox{0.8}[1.3]{$\sqsubset$}})<0$ by removing all trees of edges having no vertices of positive genus, or, equivalently, as the geometric realization of the stabilization of any model of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. Any tropical curve other than the real line has a well-defined set of maximal legs. A morphism of tropical curves is a continuous, piecewise-linear map that sends legs to legs and is eventually linear on each leg. A [*morphism*]{} $\tau:{\scalebox{0.8}[1.3]{$\sqsubset$}}'\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ of tropical curves is a continuous, piecewise-linear map with integer slopes such that for any leg $l'\subset {\scalebox{0.8}[1.3]{$\sqsubset$}}'$, there exists a leg $l\subset {\scalebox{0.8}[1.3]{$\sqsubset$}}$ and numbers $a\in {\mathbb{Z}}_{>0}$ and $b\in {\mathbb{R}}$ such that, identifying $l'$ and $l$ with $[0,+\infty)$, we have $\tau(x)=ax+b\in l$ for $x\in l'$ sufficiently large. We note that $\tau$ may map a finite section of $l'$ to ${\scalebox{0.8}[1.3]{$\sqsubset$}}\backslash l$. 
Let $\tau:{\scalebox{0.8}[1.3]{$\sqsubset$}}'\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ be a morphism of tropical curves. A [*model*]{} for $\tau$ is a pair of models $({\Gamma}',g',\ell',m')$ and $({\Gamma},g,\ell,m)$ for ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$ and ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, respectively, and a morphism ${\varphi}:{\Gamma}'\to {\Gamma}$ of weighted metric graphs such that $m\circ|{\varphi}|=\tau\circ m'$. We say that $\tau$ is [*harmonic*]{}, [*effective*]{} or [*unramified*]{} if ${\varphi}$ has the corresponding property. Given a morphism $\tau:{\scalebox{0.8}[1.3]{$\sqsubset$}}'\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ of tropical curves, we construct a model ${\varphi}:{\Gamma}'\to {\Gamma}$ by choosing the vertex set $V({\Gamma}')$ to contain the finite set of points where $\tau$ changes slope, and then enlarging $V({\Gamma}')$ and $V({\Gamma})$ to ensure that the image and the preimage of every vertex consist of vertices. We let the degree of ${\varphi}$ on each edge and leg be the slope of $\tau$. Given a model ${\varphi}:{\Gamma}'\to {\Gamma}$ of $\tau$, we can produce another model by adding more vertices to ${\Gamma}'$ and ${\Gamma}$. Conversely, any morphism $\tau:{\scalebox{0.8}[1.3]{$\sqsubset$}}'\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ to a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ with $\chi({\scalebox{0.8}[1.3]{$\sqsubset$}})<0$ has a unique [*minimal model*]{} ${\varphi}_{min}:{\Gamma}'_{min}\to {\Gamma}_{min}$ with the property that every simple vertex $v\in V({\Gamma}_{min})$ has at least one preimage that is not simple. Let $\tau:{\scalebox{0.8}[1.3]{$\sqsubset$}}'\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ be an unramified morphism of tropical curves whose local degrees are all equal to one. Then $\tau$ is a topological covering space of degree $\deg \tau$.
Conversely, if ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is a tropical curve and $f:{\scalebox{0.8}[1.3]{$\sqsubset$}}'\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ is a covering space of finite degree, then there is a unique way to give ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$ the structure of a tropical curve such that $f$ is unramified: we define the genus function on ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$ as the pullback of the genus function on ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. Dilated cohomology {#sec:dilated} ================== In the following two sections, we fix a finite abelian group $G$ and classify the $G$-covers of a given unweighted graph ${\Gamma}$. These are defined as surjective finite morphisms ${\varphi}:{\Gamma}'\to {\Gamma}$ together with a $G$-action on ${\Gamma}'$ that preserves the fibers and acts transitively on them. We will see that a $G$-cover of ${\Gamma}$ is uniquely determined by two objects. The first is a [*$G$-dilation datum*]{} $D$ on ${\Gamma}$ (equivalently, a [*$G$-stratification*]{} ${\mathcal{S}}$ of ${\Gamma}$), recording the fibers of ${\varphi}$ in terms of local stabilizer subgroups of $G$. The second is an element of a [*dilated cohomology group*]{} $H^1({\Gamma},D)$ (or $H^1({\Gamma},{\mathcal{S}})$), which generalizes the first simplicial cohomology group $H^1({\Gamma},G)$ by taking the local stabilizers into account. We introduce $G$-covers, $G$-dilation data and $G$-stratifications in Sec. \[sec:dilation\]. In Sec. \[sec:cohomology\], we introduce the dilated cohomology groups $H^i({\Gamma},D)$ of a pair $({\Gamma},D)$, where ${\Gamma}$ is a graph and $D$ is a $G$-dilation datum on ${\Gamma}$. In Sec. \[sec:relative\] we introduce the long exact sequence in dilated cohomology and study the cohomology groups of a subgraph ${\Delta}\subset {\Gamma}$. Once all the relevant definitions have been established, we reach Sec. \[sec:classification\], which is mostly dedicated to proving our classification results.
$G$-covers, dilation data, and stratifications {#sec:dilation} ---------------------------------------------- Throughout this section, we only consider unweighted graphs with legs. We now give the main definition of our paper. Let ${\Gamma}$ be a graph. A [*$G$-cover*]{} of ${\Gamma}$ is a finite surjective morphism ${\varphi}:{\Gamma}'\to {\Gamma}$ together with an action of $G$ on ${\Gamma}'$, such that the following properties are satisfied: 1. The morphism ${\varphi}$ is invariant with respect to the $G$-action. 2. For each $x\in X({\Gamma})$, the group $G$ acts transitively on the fiber ${\varphi}^{-1}(x)$. If ${\Gamma}$ is a graph with a $G$-action (see Def. \[def:Gquotient\]), then the quotient map $\pi:{\Gamma}\to {\Gamma}/G$ is a $G$-cover. Let ${\Gamma}$ be a graph. Viewing ${\Gamma}$ as a topological space, an element of $H^1({\Gamma},G)$ determines a covering space ${\varphi}:{\Gamma}'\to {\Gamma}$ with a $G$-action. It is clear that we can equip ${\Gamma}'$ with the structure of a graph such that ${\varphi}$ is a $G$-cover of graphs. Such $G$-covers, which we call [*topological $G$-covers*]{}, are distinguished by the property that $G$ acts freely on each fiber ${\varphi}^{-1}(x)$. For such covers, the $G$-dilation datum is trivial, while the dilated cohomology group is $H^1({\Gamma},G)$. An example with $G$ the Klein group is given below in Fig. \[subfig:cover1\]. \[ex:topologicalcovers\] Our goal is to describe all $G$-covers ${\varphi}:{\Gamma}'\to {\Gamma}$ of a given graph ${\Gamma}$. We begin our description by considering the local stabilizer subgroups. Let ${\Gamma}$ be a graph. A [*$G$-dilation datum*]{} $D$ on ${\Gamma}$ is a choice of a subgroup $D(x)\subset G$ for every $x\in X({\Gamma})$, such that $D(h)\subset D(r(h))$ for every half-edge $h\in H({\Gamma})$, and such that $D(h)=D(h')$ for each edge $e=\{h,h'\}\in E({\Gamma})$.
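The two conditions defining a $G$-dilation datum are straightforward to check mechanically. The following sketch does so for a hypothetical datum on a two-vertex graph, with $G$ the Klein group; subgroups are represented as frozensets of group elements:

```python
# Sketch: checking that subgroup assignments form a G-dilation datum:
# D(h) contained in D(r(h)) for every half-edge, and D(h) = D(h') on edges.
zero = frozenset([(0, 0)])
H1 = frozenset([(0, 0), (1, 0)])          # a subgroup of the Klein group

# Graph: vertices u, v joined by an edge e = {h, h'}; r(h) = u, r(h') = v.
root = {'h': 'u', "h'": 'v'}
edges = [('h', "h'")]
D = {'u': H1, 'v': H1, 'h': zero, "h'": zero}   # hypothetical datum

ok = all(D[h] <= D[root[h]] for h in root) and \
     all(D[h1] == D[h2] for (h1, h2) in edges)
assert ok
```

Here `<=` is the subset comparison on frozensets, mirroring the inclusion $D(h)\subset D(r(h))$.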
Given $G$-dilation data $D$ and $D'$ on ${\Gamma}$, we say that $D$ is a [*refinement*]{} of $D'$ if $D(x)\subset D'(x)$ for all $x\in X({\Gamma})$. A [*$G$-dilated graph*]{} is a pair $({\Gamma},D)$ consisting of a graph ${\Gamma}$ and a $G$-dilation datum $D$ on ${\Gamma}$. We call $D(x)$ the [*dilation group*]{} of $x\in X({\Gamma})$, and for an edge $e=\{h,h'\}\in E({\Gamma})$ we call $D(e)=D(h)=D(h')$ the [*dilation group*]{} of $e$. If $e$ is an edge with root vertices $u$ and $v$ (which may be the same), then $D(e)\subset D(u)\cap D(v)$. We call $C(e)=D(u)+D(v)$ the [*vertex dilation group*]{} of the edge $e$. \[rem:graphOfGroups\] A $G$-dilation datum on a graph ${\Gamma}$ is an example of a [*graph of groups*]{}, as defined by Bass (see Def. 1.4 in [@Bass]). In a future paper, we plan to explore the relationship between the cohomology groups $H^i({\Gamma},D)$ and the fundamental group of the graph of groups defined by $D$, with the goal of extending our theory to the non-abelian case. Let ${\varphi}:{\Gamma}'\to {\Gamma}$ be a $G$-cover. We define the [*$G$-dilation datum*]{} $D_{{\varphi}}$ of ${\varphi}$ by setting $D_{{\varphi}}(x)$ for $x\in X({\Gamma})$ to be the stabilizer group of any $x'\in {\varphi}^{-1}(x)$. The group $G$ is assumed to be abelian, therefore the stabilizer group of $x'\in {\varphi}^{-1}(x)$ does not depend on the choice of $x'$. If $D_{{\varphi}}$ is the $G$-dilation datum of a $G$-cover ${\varphi}$ that is the tropicalization of a $G$-cover of algebraic curves, then the dilation subgroup of every half-edge is cyclic (this follows, for instance, from [@SchmittvanZelm Lemma 3.1]). As a result, many of the covers described throughout this paper are not algebraically realizable, e.g. the cover \[subfig:cover6\] below. Our approach is to develop, as far as possible, an independent theory of $G$-covers of graphs, so we do not impose this condition from the start. 
In any case, as we shall see, the dilation groups of the half-edges play a secondary role in the classification of $G$-covers. \[rem:edgecyclic\] For any $x\in X({\Gamma})$, the fiber ${\varphi}^{-1}(x)$ of a $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$ is a $G/D_{{\varphi}}(x)$-torsor. If $h\in T_v({\Gamma})$ is a half-edge rooted at $v\in V({\Gamma})$, then the root map $r:{\varphi}^{-1}(h)\to {\varphi}^{-1}(v)$ is an equivariant map of transitive $G$-sets, which implies that $D_{{\varphi}}(h)\subset D_{{\varphi}}(v)$. Furthermore, it is clear that $D_{{\varphi}}(h)=D_{{\varphi}}(h')$ for any edge $e=\{h,h'\}\in E({\Gamma})$. Therefore, $D_{{\varphi}}$ is a $G$-dilation datum. The cardinality of each fiber ${\varphi}^{-1}(x)$ equals the index of $D_{{\varphi}}(x)$ in $G$: $$\#({\varphi}^{-1}(x))=[G:D_{{\varphi}}(x)].$$ Furthermore, for a half-edge $h\in H({\Gamma})$ rooted at $r(h)=v\in V({\Gamma})$, the $[G:D_{{\varphi}}(h)]$ half-edges in the fiber ${\varphi}^{-1}(h)$ are partitioned by their root vertices into $\#({\varphi}^{-1}(v))=[G:D_{{\varphi}}(v)]$ subsets, each containing $[D_{{\varphi}}(v):D_{{\varphi}}(h)]$ elements. \[ex:Klein1\] We now give several examples of $G$-covers in the simplest non-cyclic case, when $G={{\mathbb{Z}}/{2}{\mathbb{Z}}}\oplus{{\mathbb{Z}}/{2}{\mathbb{Z}}}$ is the Klein group. The base graph ${\Gamma}$ consists of two vertices $u$ and $v$ joined by two edges $e$ and $f$. We use the following notation to describe a $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$. We denote the elements of $G$ by $00$, $10$, $01$, and $11$, and denote the subgroups generated by $10$, $01$, and $11$ by $H_1$, $H_2$, and $H_3$, respectively. The vertices of ${\Gamma}'$ lying above $u$ and $v$ are labeled (non-uniquely if the corresponding stabilizer is non-trivial) $u_{ij}$ and $v_{ij}$ for $ij\in G$, and the action of $G$ on ${\varphi}^{-1}(u)$ and ${\varphi}^{-1}(v)$ is the natural additive action on the indices.
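The fiber counts $\#({\varphi}^{-1}(x))=[G:D_{{\varphi}}(x)]$ above can be made concrete for the Klein group used in the examples; the subgroup choices in the following sketch are hypothetical:

```python
# Sketch: fiber cardinalities of a G-cover from its dilation datum,
# for G = Z/2 x Z/2.
G = [(i, j) for i in (0, 1) for j in (0, 1)]
H1 = [(0, 0), (1, 0)]                # the subgroup generated by 10
zero = [(0, 0)]

def index(H):
    return len(G) // len(H)         # [G : H] for a subgroup H

# A vertex v with D(v) = H1 and a half-edge h rooted at v with D(h) = 0:
fiber_v = index(H1)                  # #(phi^{-1}(v)) = [G : D(v)] = 2
fiber_h = index(zero)                # #(phi^{-1}(h)) = [G : D(h)] = 4

# The 4 preimages of h are partitioned among the 2 preimages of v,
# with [D(v) : D(h)] = 2 half-edges rooted at each.
assert fiber_h == fiber_v * (len(H1) // len(zero))
```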
We color the edges ${\varphi}^{-1}(e)$ and ${\varphi}^{-1}(f)$ red and blue, respectively, and label them with indices $ij$ in such a way that $e_{ij}$ and $f_{ij}$ are attached to $u_{ij}$. The sizes of the vertices and the thickness of the edges of ${\Gamma}'$ denote the size of the dilation subgroup. In the caption, we indicate the nontrivial dilation groups. In Ex. \[ex:Klein2\], we will enumerate all Klein covers of ${\Gamma}$. [Figure: six Klein covers of ${\Gamma}$, each drawn above the base graph with vertices $u$, $v$ and edges $e$, $f$; identified preimages are labeled accordingly (e.g. $u_{00}=u_{01}$, or $v_{ij}$ for a fully dilated vertex), and dilated vertices and edges are drawn larger and thicker.] We now give an alternative way to record a $G$-dilation datum on ${\Gamma}$, by means of a stratification of ${\Gamma}$ indexed by the subgroups of $G$. This description is often easier to visualize, and generalizes more naturally to tropical curves. Let ${\Gamma}$ be a graph. A [*$G$-stratification*]{} ${\mathcal{S}}=\big\{{\Gamma}_H\big\vert H\in S(G)\big\}$ on ${\Gamma}$ is a collection of subgraphs ${\Gamma}_H\subset {\Gamma}$ indexed by the set $S(G)$ of subgroups of $G$, such that $$\label{eq:strat} \begin{split} {\Gamma}_0&={\Gamma},\\ {\Gamma}_K&\subset {\Gamma}_H\mbox{ if }H\subset K, \textrm{ and} \\ {\Gamma}_H\cap {\Gamma}_K&={\Gamma}_{H+K}\mbox{ for all }H,K\in S(G). \end{split}$$ We allow the ${\Gamma}_H$ to be empty or disconnected for $H\neq 0$. The union of the ${\Gamma}_H$ for $H\neq 0$ is called the [*dilated subgraph*]{} of ${\Gamma}$ and is denoted ${\Gamma}_{dil}$. \[def:stratification\] We can associate a $G$-stratification of ${\Gamma}$ to a $G$-dilation datum $D$, and vice versa. Let ${\Gamma}$ be a graph, and let $D$ be a $G$-dilation datum on ${\Gamma}$. We define the [*$G$-stratification ${\mathcal{S}}(D)=\{{\Gamma}_H:H\in S(G)\}$ associated to $D$*]{} as follows: $${\Gamma}_H=\big\{x\in X({\Gamma})\big\vert H\subset D(x)\big\}.$$ We observe that for any half-edge $h\in H({\Gamma})$ we have $D(h)\subset D(r(h))$, therefore each ${\Gamma}_H$ is indeed a subgraph of ${\Gamma}$.
Let $D_{{\varphi}}$ be the $G$-dilation datum associated to a $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$. Then for any $H\in S(G)$, ${\Gamma}_H$ is the image under ${\varphi}$ of the subgraph of ${\Gamma}'$ fixed under the action of $H$. A $G$-dilation datum $D$ can be uniquely recovered from a $G$-stratification ${\mathcal{S}}$ as follows. Condition \[eq:strat\] implies that the set $X({\Gamma})$ is partitioned into disjoint subsets (which are not subgraphs in general) $$X({\Gamma})=\coprod_{H\in S(G)}{\Gamma}_H\backslash {\Gamma}^0_H,\mbox{ where } {\Gamma}_{H}^0=\bigcup_{H\subsetneq K} {\Gamma}_K.$$ For any $x\in X({\Gamma})$ we set $D(x)=H$, where $H$ is the unique subgroup of $G$ such that $x\in {\Gamma}_H\backslash {\Gamma}^0_H$. We also define a dual stratification associated to a $G$-dilation datum. Let $D$ be a $G$-dilation datum on ${\Gamma}$. The [*dual stratification*]{} ${\mathcal{S}}^*(D)=\{{\Gamma}^H:H\in S(G)\}$ of ${\mathcal{S}}$ is defined as follows. For $H\in S(G)$, we define ${\Gamma}^H$ to be the edge-maximal subgraph of ${\Gamma}$ whose vertex set is $$V({\Gamma}^H)=\bigcup_{K\subset H}V({\Gamma}_K\backslash{\Gamma}^0_K)=\big\{v\in V({\Gamma})\big\vert D(v)\subset H\big\}.$$ In other words, a leg of ${\Gamma}$ with root vertex $v$ lies in ${\Gamma}^H$ if and only if $D(v)\subset H$, and an edge $e\in E({\Gamma})$ with root vertices $u$ and $v$ lies in ${\Gamma}^H$ if and only if $C(e)=D(u)+D(v)\subset H$. The dual stratification satisfies the following properties: $$\begin{split} {\Gamma}^G&={\Gamma},\\ {\Gamma}^H&\subset {\Gamma}^K\mbox{ if }H\subset K, \textrm{ and }\\ {\Gamma}^H\cap{\Gamma}^K&={\Gamma}^{H\cap K}\mbox{ for all }H,K\in S(G). \end{split}$$ Unlike ${\mathcal{S}}(D)$, the dual stratification ${\mathcal{S}}^*(D)$ of a $G$-dilation datum does not uniquely determine $D$.
For a vertex $v\in V({\Gamma})$, we can recover $D(v)$ as the smallest subgroup $H\subset G$ such that $v\in V({\Gamma}^H)$, but the dilation groups $D(h)$ of the edges cannot be determined. For example, let ${\Gamma}$ be the graph consisting of a vertex $v$ and a loop $e$, let $D(v)=H$ be a subgroup of $G$, and let $D(e)$ be any subgroup of $H$. The dual stratification is $${\Gamma}^K=\left\{\begin{array}{cc} {\Gamma}& \textrm{ if } \ H\subseteq K,\\ \emptyset & \textrm{ if } \ H\not\subseteq K,\end{array}\right.$$ so we can recover $H$ but not $D(e)$. Finally, we define morphisms of $G$-covers of ${\Gamma}$. Let ${\varphi}_1:{\Gamma}_1'\to {\Gamma}$ and ${\varphi}_2:{\Gamma}_2'\to {\Gamma}$ be $G$-covers. A [*morphism of $G$-covers*]{} from ${\varphi}_1$ to ${\varphi}_2$ is a $G$-equivariant morphism $\psi:{\Gamma}_1'\to {\Gamma}_2'$ such that ${\varphi}_1={\varphi}_2\circ \psi$. \[def:morphismofGcovers\] We observe that if $\psi:{\Gamma}_1'\to {\Gamma}_2'$ is a morphism of $G$-covers from ${\varphi}_1:{\Gamma}_1'\to {\Gamma}$ to ${\varphi}_2:{\Gamma}_2'\to {\Gamma}$, then for any $x\in X({\Gamma})$ the restriction of $\psi$ to the fiber ${\varphi}_1^{-1}(x)$ is a $G$-equivariant surjective map onto ${\varphi}_2^{-1}(x)$, which implies that $D_{{\varphi}_1}(x)\subset D_{{\varphi}_2}(x)$, in other words $D_{{\varphi}_1}$ is a refinement of $D_{{\varphi}_2}$. In this paper, we only consider $G$-covers of a fixed base graph ${\Gamma}$ (except that we do consider restrictions of covers to a subgraph). It is also possible to define morphisms of $G$-covers of graphs that are related by a morphism.
For example, given a $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$ and a morphism $\psi:{\Delta}\to {\Gamma}$, we define the pullback $G$-cover ${\varphi}':{\Delta}'\to {\Delta}$ by taking ${\Delta}'$ to be the fiber product ${\Gamma}'\times_{{\Gamma}} {\Delta}$ (defined by $X({\Delta}')=X({\Gamma}')\times_{X({\Gamma})}X({\Delta})$ with coordinatewise involution and root maps), and letting $G$ act on the first factor. All of the constructions of this chapter are functorial with respect to such operations, so for example the $G$-dilation datum $D_{{\varphi}'}$ on ${\Delta}$ is equal to the pullback $G$-dilation datum $\psi^*D_{{\varphi}}=D_{{\varphi}}\circ \psi$. \[rem:fixedbase\] Cohomology of $G$-data {#sec:cohomology} ---------------------- In this subsection, we define the cohomology groups $H^0({\Gamma},D)$ and $H^1({\Gamma},D)$ of a $G$-dilated graph $({\Gamma},D)$. These groups generalize the simplicial cohomology groups $H^i({\Gamma},G)$ of ${\Gamma}$ with coefficients in $G$. The groups $H^i({\Gamma},D)$ do not depend on the legs of ${\Gamma}$, so we assume for simplicity that ${\Gamma}$ has no legs. The legs of ${\Gamma}$ will again play a role in Sec. \[sec:admissible\], when we classify unramified $G$-covers of weighted graphs. Rather than only considering $G$-dilation data on a graph ${\Gamma}$, we work in a larger category of [*$G$-data*]{} on $\Gamma$, a $G$-datum being simply a choice of a $G$-group at every vertex and every edge of ${\Gamma}$ that is consistent with the source and target maps (see Remark \[rem:graphOfGroups\]). A $G$-datum $D_\varphi$ arising from a $G$-cover $\varphi$ is always a $G$-dilation datum. However, cohomology groups of the more general $G$-data appear in the long exact sequence \[eq:les\] that relates the cohomology groups $H^i({\Gamma},D)$ of a $G$-dilation datum $D$ on ${\Gamma}$ to the cohomology groups $H^i({\Delta},D|_{{\Delta}})$ of the restriction of $D$ to a subgraph ${\Delta}\subset {\Gamma}$.
We begin by recalling the simplicial cohomology groups of a graph ${\Gamma}$ with coefficients in $G$. Choose an orientation on the edges, and let $s,t:E({\Gamma})\to V({\Gamma})$ be the source and target maps. The simplicial chain complex of ${\Gamma}$ is $$\begin{tikzcd} 0\arrow[r] & {\mathbb{Z}}^{E({\Gamma})}\arrow[r,"\delta"]& {\mathbb{Z}}^{V({\Gamma})} \arrow[r]& 0, \end{tikzcd}$$ with the boundary map defined on the generators of ${\mathbb{Z}}^{E({\Gamma})}$ by $\delta(e)=t(e)-s(e)$. Applying the functor $\operatorname{Hom}(-, G)$ and identifying $$\operatorname{Hom}({\mathbb{Z}}^{V({\Gamma})},G)=G^{V({\Gamma})}\quad \textrm{ and } \quad\operatorname{Hom}({\mathbb{Z}}^{E({\Gamma})},G)=G^{E({\Gamma})},$$ we obtain the simplicial cochain complex of ${\Gamma}$ with coefficients in $G$: $$\begin{tikzcd} 0 \arrow[r] & G^{V({\Gamma})} \arrow[r,"\delta^*"] & G^{E({\Gamma})} \arrow[r] & 0.\label{eq:simplicialcomplex} \end{tikzcd}$$ We identify elements of $G^{V({\Gamma})}$ and $G^{E({\Gamma})}$ with functions $\xi:V({\Gamma})\to G$ and $\eta:E({\Gamma})\to G$, respectively. Under this identification, the duals $s^*,t^*:G^{V({\Gamma})}\to G^{E({\Gamma})}$ of the maps $s$ and $t$ are $$s^*(\xi)(e)=\xi(s(e))\quad \textrm{ and }\quad t^*(\xi)(e)=\xi(t(e)),$$ and the coboundary map is equal to $$\delta^*=t^*-s^*.\label{eq:coboundarygroup}$$The *simplicial cohomology groups* of ${\Gamma}$ with coefficients in $G$ are $$H^0({\Gamma},G)=\operatorname{Ker}\delta^* \quad\textrm{ and }\quad H^1({\Gamma},G)= \operatorname{Coker}\delta^*.$$ We now generalize this construction by replacing every copy of $G$ in the cochain complex  with an arbitrary $G$-group. We recall that a [*$G$-group*]{} is a map of abelian groups $f:G\to H$, and a [*morphism of $G$-groups*]{} from $f_1:G\to H_1$ to $f_2:G\to H_2$ is a group homomorphism $g:H_1\to H_2$ such that $f_2=g\circ f_1$. \[def:Gdatum\] A [*$G$-datum*]{} $A$ on an oriented graph ${\Gamma}$ consists of the following: 1. 
For every vertex $v\in V({\Gamma})$, a $G$-group $f_v:G\to A(v)$. 2. For every edge $e\in E({\Gamma})$, a $G$-group $f_e:G\to A(e)$ and morphisms of $G$-groups $s_e:A\big(s(e)\big)\to A(e)$ and $t_e:A\big(t(e)\big)\to A(e)$ such that $s_e\circ f_{s(e)}=t_e\circ f_{t(e)}=f_e$, i.e. for which the diagram $$\begin{tikzcd} & G \ar[dl,"f_{s(e)}"'] \ar[d,"f_e"] \ar[dr,"f_{t(e)}"] & \\ A(s(e)) \ar[r,"s_e"'] & A(e) & A(t(e)) \ar[l,"t_e"] \\ \end{tikzcd}$$ commutes. In other words, a $G$-datum on ${\Gamma}$ is a functor to the category of $G$-groups from the category whose objects are $V({\Gamma})\cup E({\Gamma})$, and whose non-trivial morphisms are the source and target maps. In contrast with Remark \[rem:graphOfGroups\], a $G$-datum is not necessarily a graph of groups in the sense of [@Bass], since the maps $f_{s(e)}$ and $f_{t(e)}$ are not required to be injective. To verify that $G$-data, in fact, generalize the notion of $G$-dilation data, we associate a $G$-datum $A^D$ to each $G$-dilation datum $D$. First, let $H_1$ and $H_2$ be subgroups of $G$, let $f_i:G\to G/H_i$ be the projections, and let $\iota_i:G/H_i\to G/H_1\oplus G/H_2$ be the embeddings. The coproduct of $f_1$ and $f_2$ is the $G$-group $$G/H_1\sqcup_G G/H_2=(G/H_1\oplus G/H_2)/{\operatorname{Im}}(f_1\oplus -f_2).$$ The natural map $f_1\sqcup f_2:G\to G/H_1\sqcup_G G/H_2$ is equal to $\pi\circ \iota_1\circ f_1=\pi\circ \iota_2\circ f_2$, where $\pi:G/H_1\oplus G/H_2\to G/H_1\sqcup_G G/H_2$ is the projection. It is clear that $f_1\sqcup f_2$ is surjective and that $\operatorname{Ker}(f_1\sqcup f_2)=H_1+H_2$, hence the $G$-group $G\to G/H_1\sqcup_G G/H_2$ can be identified with the quotient $G\to G/(H_1+H_2)$. Let ${\Gamma}$ be an oriented graph, and let $D$ be a $G$-dilation datum on ${\Gamma}$. We define the [*associated $G$-datum*]{} $A^D$ as follows.
For each $v\in V({\Gamma})$, we set $A^D(v)=G/D(v)$, and let $f_v$ be the natural projection map: $$f_v:G\to A^D(v)=G/D(v).$$ For an edge $e\in E({\Gamma})$, we let $f_e=f_{s(e)}\sqcup f_{t(e)}$ be the coproduct. In other words, we let $$A^D(e)=[G/D(s(e))\oplus G/D(t(e))]/{\operatorname{Im}}(f_{s(e)}\oplus -f_{t(e)})\simeq G/C(e),$$ where $C(e)=D(s(e))+D(t(e))$ is the edge dilation group. We let $$f_e:G\to A^D(e)\simeq G/C(e)$$ be the quotient map, and we let $$s_e:A^D(s(e))\to A^D(e) \quad\textrm{ and } \quad t_e:A^D(t(e))\to A^D(e)$$ be the natural quotient maps $G/D(s(e))\to G/C(e)$ and $G/D(t(e))\to G/C(e)$. We now define the cochain complex and cohomology groups of a $G$-datum $A$ on an oriented graph ${\Gamma}$. Let ${\Gamma}$ be an oriented graph, and let $A$ be a $G$-datum on ${\Gamma}$. We define the [*cochain groups*]{} of the pair $({\Gamma},A)$ as follows: $$C^0({\Gamma},A)=\prod_{v\in V({\Gamma})}A(v)=\Big\{\xi:V({\Gamma})\to \coprod_{v\in V({\Gamma})} A(v):\xi(v)\in A(v)\Big\},$$ $$C^1({\Gamma},A)=\prod_{e\in E({\Gamma})}A(e)=\Big\{\eta:E({\Gamma})\to \coprod_{e\in E({\Gamma})} A(e):\eta(e)\in A(e)\Big\}.$$ We define the morphisms $s^*,t^*:C^0({\Gamma},A)\to C^1({\Gamma},A)$ by $$s^*(\xi)(e)=s_e(\xi(s(e)))\quad\textrm{ and } \quad t^*(\xi)(e)=t_e(\xi(t(e))).$$ We define the [*cochain complex*]{} of the pair $({\Gamma},A)$ as $$\begin{tikzcd} 0 \ar[r] & C^0({\Gamma},A)\ar[r,"\delta_{{\Gamma},A}^*"] & C^1({\Gamma},A) \ar[r] & 0, \end{tikzcd}$$ where the [*coboundary map*]{} $\delta_{{\Gamma},A}^*$ is $$\delta_{{\Gamma},A}^* =t^*-s^*. \label{eq:coboundary}$$ We define the [*cohomology groups*]{} of the pair $({\Gamma},A)$ as $$H^0({\Gamma},A)=\operatorname{Ker}\delta_{{\Gamma},A}^*\quad\textrm{ and } \quad H^1({\Gamma},A)=\operatorname{Coker}\delta_{{\Gamma},A}^*.$$ Specializing to $G$-dilation data, we obtain the main definition of this section. Let $({\Gamma},D)$ be a $G$-dilated graph, and let $A^D$ be the $G$-datum associated to $D$.
The [*cochain complex*]{} of $({\Gamma},D)$ is the cochain complex of the pair $({\Gamma},A^D)$: $$\begin{tikzcd} 0 \ar[r] & C^0({\Gamma},D)\ar[r,"\delta_{{\Gamma},D}^*"] & C^1({\Gamma},D) \ar[r] & 0, \end{tikzcd}$$ where $$C^i({\Gamma},D)=C^i({\Gamma},A^D) \quad\textrm{ and } \quad \delta_{{\Gamma},D}^*=\delta_{{\Gamma},A^D}^*.$$ The [*dilated cohomology groups*]{} $H^i({\Gamma},D)$ are the cohomology groups of $({\Gamma},A^D)$: $$H^0({\Gamma},D)=\operatorname{Ker}\delta_{{\Gamma},D}^*=H^0({\Gamma},A^D)\quad \textrm{ and }\quad H^1({\Gamma},D)=\operatorname{Coker}\delta_{{\Gamma},D}^*=H^1({\Gamma},A^D).$$\[def:dilatedcohomology\] For the sake of clarity, and for future use, we give an explicit description of $H^1({\Gamma},D)$ as a quotient. The cochain group $C^1({\Gamma},D)$ is the direct product of $A^D(e)$ over all $e\in E({\Gamma})$, where each $A^D(e)$ is the coproduct $G/C(e)$ of $G\to G/D(s(e))$ and $G\to G/D(t(e))$. In other words, each $\eta\in C^1({\Gamma},D)$ is given by choosing a pair of elements $(\eta_s(e),\eta_t(e))\in G/D(s(e))\oplus G/D(t(e))$ for each $e\in E({\Gamma})$. A tuple $(\eta_s(e),\eta_t(e))_{e\in E({\Gamma})}$ is equivalent to $({\widetilde{\eta}}_s(e),{\widetilde{\eta}}_t(e))_{e\in E({\Gamma})}$ if and only if there exist elements $\omega(e)\in G$ for all $e\in E({\Gamma})$ such that $$\begin{split} \eta_s(e)&={\widetilde{\eta}}_s(e)+\omega(e){\operatorname{mod}}D(s(e))\\ \eta_t(e)&={\widetilde{\eta}}_t(e)-\omega(e){\operatorname{mod}}D(t(e)). \end{split}$$ Note that, instead of assuming that $\omega(e)\in G$, we may assume that $\omega(e)$ lies in any quotient group between $G$ and $G/(D(s(e))\cap D(t(e)))$, and it is natural to assume that in fact $\omega(e)\in G/D(e)$. An element of $C^0({\Gamma},D)$ is given by choosing $\xi(v)\in G/D(v)$ for each $v\in V({\Gamma})$. 
Putting everything together, we see that an element $[\eta]\in H^1({\Gamma},D)$ is given by choosing a pair of elements $(\eta_s(e),\eta_t(e))\in G/D(s(e))\oplus G/D(t(e))$ for each $e\in E({\Gamma})$, and that two choices $(\eta_s(e),\eta_t(e))_{e\in E({\Gamma})}$ and $({\widetilde{\eta}}_s(e),{\widetilde{\eta}}_t(e))_{e\in E({\Gamma})}$ represent the same element of $H^1({\Gamma},D)$ if and only if there exist elements $\omega(e)\in G/D(e)$ for all $e\in E({\Gamma})$ and elements $\xi(v)$ for all $v\in V({\Gamma})$ such that $$\begin{split} \eta_s(e)&={\widetilde{\eta}}_s(e)-\xi(s(e))+\omega(e){\operatorname{mod}}D(s(e))\\ \eta_t(e)&={\widetilde{\eta}}_t(e)+\xi(t(e))-\omega(e){\operatorname{mod}}D(t(e)) \end{split}\label{eq:explicitH1}$$ for all $e\in E({\Gamma})$. The dilated cochain complex of $({\Gamma},D)$, and hence the cohomology groups $H^i({\Gamma},D)$, depend only on the dilation groups $D(v)$ of the vertices $v\in V({\Gamma})$, and do not depend on the edge groups $D(e)$. Specifically, given a graph ${\Gamma}$, we can choose the dilation groups $D(v)$ of the vertices $v\in V({\Gamma})$ arbitrarily, and for each edge $e\in E({\Gamma})$ choose $D(e)$ to be any subgroup of $D(s(e))\cap D(t(e))$. The resulting groups $H^0({\Gamma},D)$ and $H^1({\Gamma},D)$ are independent of the choice of the $D(e)$. In other words, the dilated cohomology groups of $({\Gamma},D)$ only depend on the dual stratification ${\mathcal{S}}^*(D)$. For the remainder of the paper, with the exception of Sec. \[sec:relative\] below, we restrict our attention to $G$-dilation data and their cohomology groups. Before we proceed, we calculate our first example, showing that we have in fact generalized simplicial cohomology. Let ${\Gamma}$ be a graph, and let $A_G$ be the [*trivial $G$-datum*]{}, namely $A_G(v)=G$ and $A_G(e)=G$ for all $v\in V({\Gamma})$ and all $e\in E({\Gamma})$, with all structure maps being the identity. 
Alternatively, $A_G$ is the $G$-datum associated to the [*trivial $G$-dilation datum*]{} $D_0$ given by $D_0(x)=0$ for all $x\in X({\Gamma})$. It is clear that $C^i({\Gamma},A_G)=C^i({\Gamma},G)$, and that the coboundary map $\delta^*_{{\Gamma},A_G}$ given by \[eq:coboundary\] is equal to $\delta^*$ given by \[eq:coboundarygroup\]. Hence $H^i({\Gamma},A_G)=H^i({\Gamma},D_0)=H^i({\Gamma},G)$. \[ex:trivialdilation\] We now work out several explicit examples of the cohomology groups $H^i({\Gamma},D)$ of $G$-dilated graphs $({\Gamma},D)$. In the previous example, we saw that the cohomology of the trivial $G$-dilation datum on ${\Gamma}$ is the simplicial cohomology of ${\Gamma}$ with coefficients in $G$. In particular, $H^1({\Delta},G)$ is trivial for any tree ${\Delta}$. We now show that $H^1({\Delta},D)=0$ for any $G$-dilation datum $D$ on a tree ${\Delta}$. Let ${\Delta}$ be a tree. Then $H^1({\Delta},D)=0$ for any $G$-dilation datum $D$ on ${\Delta}$. \[prop:tree\] Let ${\Gamma}$ be an arbitrary graph, and suppose that $D$ and $D'$ are two $G$-dilation data on ${\Gamma}$, such that $D$ is a refinement of $D'$. In this case, we can define natural surjective maps $\pi^i:C^i({\Gamma},D)\to C^i({\Gamma},D')$ by taking coordinatewise quotients. These maps commute with the coboundary maps and induce maps $\pi^i:H^i({\Gamma},D)\to H^i({\Gamma},D')$, and furthermore the map $\pi^1$ is surjective. Now suppose that ${\Delta}$ is a tree, and $D$ is a $G$-dilation datum on ${\Delta}$. Let $D_0$ be the trivial $G$-dilation datum on ${\Delta}$. Then $D_0$ is a refinement of $D$, so there is a surjective map $H^1({\Delta},D_0)\to H^1({\Delta},D)$. But by Ex. \[ex:trivialdilation\] we know that $H^1({\Delta},D_0)=H^1({\Delta},G)=0$, hence $H^1({\Delta},D)=0$. We now work out an example of $H^i({\Gamma},D)$ for a topologically non-trivial graph ${\Gamma}$. Let ${\Gamma}$ be the graph consisting of two vertices $v_1$ and $v_2$ joined by $n$ edges $e_1,\ldots,e_n$, oriented such that $s(e_i)=v_1$ and $t(e_i)=v_2$.
Let $H_1$ and $H_2$ be two subgroups of $G$, and consider the following $G$-dilation datum on ${\Gamma}$: $$D(v_1)=H_1,\quad D(v_2)=H_2\quad\textrm{ and }\quad D(e_i)\mbox{ are arbitrary subgroups of }H_1\cap H_2.$$ We see that $C(e_i)=H_1+ H_2$ for all $i$, therefore $$C^0({\Gamma},D)=G/H_1\oplus G/H_2\quad\textrm{ and }\quad C^1({\Gamma},D)=[G/(H_1+ H_2)]^n.$$ The coboundary map $\delta^*_{{\Gamma},D}$ is the composition of the projection $$\pi:G/H_1\oplus G/H_2\to G/H_1\sqcup_G G/H_2\simeq G/(H_1+H_2)$$ and the diagonal map. Therefore $$H^0({\Gamma},D)=\operatorname{Ker}\pi\simeq G/(H_1\cap H_2)\quad\textrm{ and }\quad H^1({\Gamma},D)\simeq [G/(H_1+ H_2)]^{n-1}.$$ \[ex:nedges\] We also show that cohomology of $G$-dilation data can be used to compute simplicial cohomology of edge-maximal subgraphs of ${\Gamma}$, with coefficients in any quotient group of $G$. Let ${\Gamma}$ be a graph, let ${\Delta}\subset {\Gamma}$ be an edge-maximal subgraph, and let $H\subset G$ be a subgroup. Consider the following $G$-dilation datum on ${\Gamma}$: $$D_{{\Delta},H}(x)=\left\{\begin{array}{cc} H,& x\in X({\Delta}), \\ G, & x\notin X({\Delta}).\end{array}\right.$$ By definition, an edge $e\in E({\Gamma})$ lies in ${\Delta}$ if and only if both of its root vertices do. It follows that the dilated cochain complex $C^*({\Gamma},D_{{\Delta},H})$ is equal to the simplicial cochain complex $C^*({\Delta},G/H)$, and hence $$H^i({\Gamma},D_{{\Delta},H})=H^i({\Delta},G/H)\mbox{ for }i=0,1.$$ \[ex:subgraph\] We will show in Sec. \[sec:Gcoversofgraphs\] that the group $H^1({\Gamma},D)$ classifies $G$-covers of ${\Gamma}$ with dilation datum $D$. We do not know of a similar geometric interpretation of the group $H^0({\Gamma},D)$. For the trivial dilation datum $D=0$, the group $H^0({\Gamma},D)=H^0({\Gamma},G)$ is equal to $G$ for any connected graph ${\Gamma}$. In general, the group $H^0({\Gamma},D)$ can be quite large, even on a connected graph.
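Since all cochain groups here are finite, Example \[ex:nedges\] can be confirmed by exhaustive enumeration. The sketch below is a brute-force computation for cyclic $G={\mathbb{Z}}/N{\mathbb{Z}}$ (a hypothetical encoding, not part of the formal development: the subgroup $\langle d\rangle$ is recorded by a divisor $d$ of $N$, so $G/\langle d\rangle\simeq{\mathbb{Z}}/d{\mathbb{Z}}$ and $G/C(e)\simeq{\mathbb{Z}}/\gcd(d_{s(e)},d_{t(e)}){\mathbb{Z}}$).

```python
from itertools import product
from math import gcd, prod

def dilated_cohomology_orders(N, Dv, edges):
    """Orders |H^0(Gamma, D)| and |H^1(Gamma, D)| for G = Z/NZ by brute force.

    Dv[v] is the divisor d of N with D(v) = <d>, so G/D(v) = Z/d;
    edges is a list of (s, t) vertex indices, and G/C(e) = Z/gcd(d_s, d_t).
    """
    qe = [gcd(Dv[s], Dv[t]) for (s, t) in edges]
    # count the kernel of the coboundary map delta*(xi)(e) = xi(t) - xi(s)
    kernel = sum(
        all((xi[t] - xi[s]) % qe[i] == 0 for i, (s, t) in enumerate(edges))
        for xi in product(*(range(d) for d in Dv))
    )
    c0, c1 = prod(Dv), prod(qe)
    return kernel, c1 * kernel // c0   # |H^1| = |C^1| / |Im| = |C^1| |Ker| / |C^0|

# Two vertices joined by n = 3 edges, G = Z/12, H1 = <2>, H2 = <4>:
# H1 n H2 = <4> and H1 + H2 = <2>, so |H^0| = |G/<4>| = 4, |H^1| = 2^(3-1) = 4
assert dilated_cohomology_orders(12, [2, 4], [(0, 1)] * 3) == (4, 4)

# trivial dilation recovers simplicial cohomology: for a 3-cycle, H^i = G
assert dilated_cohomology_orders(12, [12, 12, 12],
                                 [(0, 1), (1, 2), (2, 0)]) == (12, 12)
```

Note that, in accordance with the remark above, the edge groups $D(e_i)$ never enter the computation: only the vertex groups and the sums $C(e)$ matter.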
For example, let ${\Gamma}$ be a chain of $2n$ vertices, let $p$ and $q$ be distinct prime numbers, let $G={\mathbb{Z}}/pq{\mathbb{Z}}$, and let $H_1={\mathbb{Z}}/p{\mathbb{Z}}$ and $H_2={\mathbb{Z}}/q{\mathbb{Z}}$ be the nontrivial subgroups of $G$. Label the vertices of ${\Gamma}$ by $H_1$ and $H_2$ in an alternating fashion. Then $C(e)=H_1+H_2=G$ for any edge $e$, hence $A^D(e)=G/C(e)=0$ and $C^1({\Gamma},D)=0$, and therefore $H^0({\Gamma},D)=C^0({\Gamma},D)=(G/H_1)^n\oplus (G/H_2)^n\simeq G^n$. We have already noted (see Rem. \[rem:fixedbase\]) that we restrict our attention to a fixed base graph ${\Gamma}$. It is possible to define morphisms between pairs consisting of a graph and a $G$-datum on it (it is necessary to require that the graph morphism be finite). Such morphisms define natural pullback maps on the cochain and cohomology groups. In the next section, we work out these pullback maps for a single example, namely the relationship (Prop. \[prop:contraction\]) between the cohomology groups $H^i({\Gamma},A)$ of a $G$-datum $A$ on ${\Gamma}$, and the cohomology groups $H^i({\Delta},A|_{{\Delta}})$ of the restriction of $A$ to a subgraph ${\Delta}\subset {\Gamma}$. Relative cohomology and reduced cohomology {#sec:relative} ------------------------------------------ This section is somewhat technical in nature, and deals with a single question: how to relate the cohomology groups $H^i({\Gamma},D)$ of a $G$-dilated graph $({\Gamma},D)$ to the cohomology groups $H^i({\Delta},D|_{{\Delta}})$ of the restriction of $D$ to a subgraph ${\Delta}\subset {\Gamma}$. This question is natural from the point of view of tropical geometry: we often study tropical curves by contracting edges and forming simpler graphs, and hence we may need to understand the classification of $G$-covers of a graph in terms of $G$-covers of its contractions. We fix a graph ${\Gamma}$, a subgraph ${\Delta}\subset {\Gamma}$, and a $G$-datum $A$ on ${\Gamma}$.
We will see that the cohomology groups $H^i({\Gamma},A)$ and $H^i({\Delta}, A|_{{\Delta}})$ fit into an exact sequence, which is the analogue of the long exact sequence of the cohomology groups of a pair of topological spaces. The relative cohomology groups occurring in this sequence can be computed as reduced cohomology groups of an induced $G$-datum $A_{{\Gamma}/{\Delta}}$ on the quotient graph ${\Gamma}/{\Delta}$. Unfortunately, the $G$-datum $A_{{\Gamma}/{\Delta}}$ is not in general the $G$-datum associated to a $G$-dilation datum on ${\Gamma}/{\Delta}$, even when $A$ is associated to a $G$-dilation datum on ${\Gamma}$. Let ${\Gamma}$ be an oriented graph, let ${\Delta}\subset {\Gamma}$ be a subgraph, and let $A$ be a $G$-datum on ${\Gamma}$. Then $A|_{{\Delta}}$ is a $G$-datum on ${\Delta}$. Viewing $C^0({\Gamma},A)$ and $C^0({\Delta},A|_{{\Delta}})$ as sets of $A(v)$-valued maps from $V({\Gamma})$ and $V({\Delta})$, respectively, we define a surjective map $\iota^0:C^0({\Gamma},A)\to C^0({\Delta},A|_{{\Delta}})$ by restricting from $V({\Gamma})$ to $V({\Delta})$. We similarly define a surjective restriction map $\iota^1:C^1({\Gamma},A)\to C^1({\Delta},A|_{{\Delta}})$, and define the [*relative cochain complex*]{} of the triple $({\Gamma},{\Delta},A)$: $$\begin{tikzcd}0\arrow[r] & C^0({\Gamma},{\Delta},A) \arrow[r,"\delta^*_{{\Gamma},{\Delta},A}"] & C^1({\Gamma},{\Delta},A) \arrow[r] & 0,\end{tikzcd}$$ by setting $C^i({\Gamma},{\Delta},A)=\operatorname{Ker}\iota^i$ for $i=0,1$ and $\delta^*_{{\Gamma},{\Delta},A}$ to be the restriction of $\delta^*_{{\Gamma},A}$ to $C^0({\Gamma},{\Delta},A)$. 
The [*relative cohomology groups*]{} of the triple $({\Gamma},{\Delta},A)$ are $$H^0({\Gamma},{\Delta},A)=\operatorname{Ker}\delta^*_{{\Gamma},{\Delta},A}\quad\textrm{ and } \quad H^1({\Gamma},{\Delta},A)=\operatorname{Coker}\delta^*_{{\Gamma},{\Delta},A}.$$ We note that $\iota^1\circ \delta_{{\Gamma},A}^*=\delta^*_{{\Delta},A|_{{\Delta}}}\circ \iota^0$, in other words the $\iota^i$ form a cochain map. Hence we have a short exact sequence of cochain complexes: $$\begin{tikzcd} & 0 \ar[d]& 0 \ar[d]& \\ 0\arrow[r] & C^0({\Gamma},{\Delta},A) \arrow[r,"\delta^*_{{\Gamma},{\Delta},A}"] \arrow[d] & C^1({\Gamma},{\Delta},A) \arrow[r] \arrow[d] & 0 \\ 0\arrow[r] & C^0({\Gamma},A) \arrow[r,"\delta^*_{{\Gamma},A}"] \arrow[d,"\iota^0"] & C^1({\Gamma},A) \arrow[r] \arrow[d,"\iota^1"] & 0 \\ 0\arrow[r] & C^0({\Delta},A|_{{\Delta}}) \arrow[r,"\delta^*_{{\Delta},A|_{{\Delta}}}"] \arrow[d] & C^1({\Delta},A|_{{\Delta}}) \arrow[r] \arrow[d] & 0 \\ & 0 & 0 & \end{tikzcd}\label{eq:ses}$$ By the snake lemma, the cohomology groups of $({\Gamma},A)$, $({\Delta},A|_{{\Delta}})$ and the triple $({\Gamma},{\Delta},A)$ fit into an exact sequence $$\begin{tikzcd} 0 \arrow[r]& H^0({\Gamma},{\Delta},A)\arrow[r] & H^0({\Gamma},A)\arrow[r] & H^0({\Delta},A|_{{\Delta}}) \arrow[r] & \, \\ \,\arrow[r] &H^1({\Gamma},{\Delta},A) \arrow[r] & H^1({\Gamma},A)\arrow[r] & H^1({\Delta},A|_{{\Delta}})\arrow[r] &0.\end{tikzcd} \label{eq:les}$$ We now show that the relative cohomology groups of the triple $({\Gamma},{\Delta},A)$ are equal to the reduced cohomology groups of the contracted graph ${\Gamma}/{\Delta}$ with a certain induced $G$-datum. First, we define the reduced cohomology groups of a pair $({\Gamma},A)$. Let ${\Gamma}$ be an oriented graph and let $A$ be a $G$-datum on ${\Gamma}$.
Let $d:G\to C^0({\Gamma},A)$ be the diagonal morphism given by $d(g)=\xi_g\in C^0({\Gamma},A)$, where $$\xi_g(v)=f_v(g)\in A(v)\mbox{ for all }v\in V({\Gamma})\mbox{ and all }g\in G.$$ For any $g\in G$ and any $e\in E({\Gamma})$ we have $$\delta_{{\Gamma},A}^*(d(g))(e)=t^*(\xi_g)(e)-s^*(\xi_g)(e)=t_e(f_{t(e)}(g))-s_e(f_{s(e)}(g))=f_e(g)-f_e(g)=0,$$ hence ${\operatorname{Im}}d\subset \operatorname{Ker}\delta_{{\Gamma},A}^*$. Therefore we can define the [*reduced cochain complex*]{} of the pair $({\Gamma},A)$ $$\begin{tikzcd} 0 \ar[r] & \widetilde{C}^0({\Gamma},A)\ar[r,"\widetilde{\delta}_{{\Gamma},A}^*"] & \widetilde{C}^1({\Gamma},A) \ar[r] & 0 \end{tikzcd}$$ by $$\widetilde{C}^0({\Gamma},A)=C^0({\Gamma},A)/{\operatorname{Im}}d\quad\textrm{ and } \quad\widetilde{C}^1({\Gamma},A)=C^1({\Gamma},A),$$ and the [*reduced cohomology groups*]{} of $({\Gamma},A)$: $$\widetilde{H}^0({\Gamma},A)=\operatorname{Ker}\widetilde{\delta}_{{\Gamma},A}^*=H^0({\Gamma},A)/{\operatorname{Im}}d\quad \textrm{ and }\quad \widetilde{H}^1({\Gamma},A)=\operatorname{Coker}\widetilde{\delta}_{{\Gamma},A}^*=H^1({\Gamma},A).$$ We define the quotient of a graph ${\Gamma}$ by a subgraph ${\Delta}\subset {\Gamma}$ by contracting ${\Delta}$ to a single vertex. Note that this definition comes from topology, and differs from weighted edge contraction (see Def. \[def:edgecontraction\]), wherein each connected component of ${\Delta}$ is contracted to a separate vertex. Let ${\Gamma}$ be an oriented graph and let ${\Delta}$ be a subgraph. We define the graph ${\Gamma}/{\Delta}$ as follows: $$V({\Gamma}/{\Delta})=V({\Gamma})\backslash V({\Delta})\cup \{w\}\quad\textrm{ and }\quad E({\Gamma}/{\Delta})=E({\Gamma})\backslash E({\Delta}),$$ as well as $$s(e)=\left\{\begin{array}{cc} w & \textrm { if }s(e)\in V({\Delta}), \\ s(e) & \textrm{ if }s(e)\notin V({\Delta}), \end{array}\right.\quad t(e)=\left\{\begin{array}{cc} w & \textrm{ if } t(e)\in V({\Delta}), \\ t(e) & \textrm{ if }t(e)\notin V({\Delta}).
\end{array}\right.$$ Now let ${\Gamma}$ be an oriented graph, let ${\Delta}$ be a subgraph, and let $A$ be a $G$-datum on ${\Gamma}$. We define the $G$-datum $A_{{\Gamma}/{\Delta}}$ on ${\Gamma}/{\Delta}$ by restricting $A$ to all vertices except $w$ and all edges, and by placing the trivial $G$-datum at $w$. Specifically, the $G$-groups $f'_v:G\to A_{{\Gamma}/{\Delta}}(v)$ corresponding to the vertices $v\in V({\Gamma}/{\Delta})$ are $$f'_v:G\to A_{{\Gamma}/{\Delta}}(v)=\left\{\begin{array}{ll} f_v:G\to A(v)&\textrm{ if } v\in V({\Gamma})\backslash V({\Delta}), \\ \operatorname{Id}:G\to G& \textrm{ if } v=w.\end{array}\right.$$ The $G$-groups $f'_e:G\to A_{{\Gamma}/{\Delta}}(e)$ corresponding to $e\in E({\Gamma}/{\Delta})$ are the same as $f_e:G\to A(e)$. Finally, the source and target maps $s'_e:A_{{\Gamma}/{\Delta}}\big(s(e)\big)\to A_{{\Gamma}/{\Delta}}(e)$ and $t'_e:A_{{\Gamma}/{\Delta}}\big(t(e)\big)\to A_{{\Gamma}/{\Delta}}(e)$ are $$s'_e=\left\{\begin{array}{ll} s_e:A\big(s(e)\big)\to A(e) & \textrm{ if } s(e)\neq w, \\ f_e:G\to A(e) & \textrm{ if } s(e)=w,\end{array}\right.$$ and $$t'_e=\left\{\begin{array}{ll} t_e:A\big(t(e)\big)\to A(e) & \textrm{ if } t(e)\neq w, \\ f_e:G\to A(e) & \textrm{ if } t(e)=w.\end{array}\right.$$ If $A=A^D$ is the $G$-datum associated to a $G$-dilation datum $D$, then so is $A|_{{\Delta}}$, but not, in general, $A_{{\Gamma}/{\Delta}}$. Specifically, the edge groups of $A^D$ are the coproducts of the vertex groups, which is no longer the case for $A_{{\Gamma}/{\Delta}}$. In other words, the relationship between the dilated cohomology groups $H^i({\Gamma},D)$ and $H^i({\Delta},D|_{{\Delta}})$ cannot be expressed without using the more general framework of $G$-data and their cohomology. The edge groups of $A_{{\Gamma}/{\Delta}}$ retain a record of the dilation groups $D(v)$ of the vertices $v\in V({\Delta})$ that are contracted in ${\Gamma}/{\Delta}$.
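Exactness of the six-term sequence \[eq:les\] forces the alternating product of the orders of the six groups to equal $1$, which yields a cheap numerical sanity check. The following brute-force sketch (a hypothetical example, not part of the formal development: $G={\mathbb{Z}}/4{\mathbb{Z}}$, $A=A^D$ on a triangle, ${\Delta}$ a single edge together with its endpoints) models the restricted and relative complexes by setting the corresponding moduli to $1$, which does not change the orders of the cohomology groups.

```python
from itertools import product
from math import gcd, prod

N = 4                                   # G = Z/4Z; subgroup <d> encoded by d | N
Dv = [2, 2, 4]                          # D(v0) = D(v1) = <2>, D(v2) = <4> = 0
edges = [(0, 1), (1, 2), (2, 0)]        # a triangle
qe = [gcd(Dv[s], Dv[t]) for (s, t) in edges]   # G/C(e) = Z/gcd
in_delta_v, in_delta_e = {0, 1}, {0}    # Delta = the edge e0 with its endpoints

def H_orders(vdims, edims):
    """|H^0| and |H^1| of 0 -> prod Z/vdims -> prod Z/edims -> 0."""
    kernel = sum(
        all((xi[t] - xi[s]) % edims[i] == 0 for i, (s, t) in enumerate(edges))
        for xi in product(*(range(d) for d in vdims))
    )
    return kernel, prod(edims) * kernel // prod(vdims)

full = H_orders(Dv, qe)                                                  # H^i(Gamma, D)
sub = H_orders([d if v in in_delta_v else 1 for v, d in enumerate(Dv)],
               [d if e in in_delta_e else 1 for e, d in enumerate(qe)])  # H^i(Delta)
rel = H_orders([1 if v in in_delta_v else d for v, d in enumerate(Dv)],
               [1 if e in in_delta_e else d for e, d in enumerate(qe)])  # relative H^i

# 0 -> rel H^0 -> H^0(Gamma) -> H^0(Delta) -> rel H^1 -> H^1(Gamma) -> H^1(Delta) -> 0
assert rel[0] * sub[0] * full[1] == full[0] * rel[1] * sub[1]
print(full, sub, rel)   # (4, 2) (2, 1) (2, 2)
```

Because all groups here are finite, this order count is a necessary (though of course not sufficient) condition for exactness.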
Let ${\Gamma}$ be an oriented graph, let ${\Delta}$ be a subgraph, and let $A$ be a $G$-datum on ${\Gamma}$. The relative cohomology groups of the triple $({\Gamma},{\Delta},A)$ are equal to the reduced cohomology groups of $({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})$: $$H^i({\Gamma},{\Delta},A)=\widetilde{H}^i({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}}).$$ \[prop:contraction\] The group $G$ acts diagonally on $C^0({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})$, and the action is free and transitive on the $w$-coordinate, since by definition $A_{{\Gamma}/{\Delta}}(w)=G$. Therefore any element $[\xi]$ in the quotient group $\widetilde{C}^0({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})$ has a unique representative $\xi\in C^0({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})$ satisfying $\xi(w)=0$. Since $A_{{\Gamma}/{\Delta}}(v)=A(v)$ for $v\in V({\Gamma}\backslash {\Delta})$, we can define an extension by zero map $j^0:\widetilde{C}^0({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})\to C^0({\Gamma},A)$ by $$j^0([\xi])(v)=\left\{\begin{array}{ll}\xi(v) & \textrm{ if } v\in V({\Gamma})\backslash V({\Delta}), \\ 0 & \textrm{ if }v\in V({\Delta}).\end{array}\right.$$ Similarly, since $A_{{\Gamma}/{\Delta}}(e)=A(e)$ for all $e\in E({\Gamma}/{\Delta})$, we can define an extension by zero map $j^1:\widetilde{C}^1({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})=C^1({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})\to C^1({\Gamma},A)$ by $$j^1(\eta)(e)=\left\{\begin{array}{ll}\eta(e) & \textrm{ if }e\in E({\Gamma})\backslash E({\Delta}), \\ 0 & \textrm{ if }e\in E({\Delta}).\end{array}\right.$$ We claim that the $j^i$ form a chain map. Denote for simplicity $\delta^*=t^*-s^*=\delta^*_{{\Gamma},A}$ and $\widetilde{\delta}^*=\widetilde{\delta}^*_{{\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}}}$. For $[\xi]\in \widetilde{C}^0({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})$ let $\xi\in C^0({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})$ be the representative satisfying $\xi(w)=0$. Let $e\in E({\Gamma})$ be an edge.
If $e\in E({\Delta})$ then $(j^1\circ \widetilde{\delta}^*)\big([\xi]\big)(e)=0$. If $e\in E({\Gamma})\backslash E({\Delta})$ has root vertices $u=s(e)$ and $v=t(e)$, then, using $$(j^1\circ\widetilde{\delta}^*)\big([\xi]\big)(e)=t'_e\big(\xi(v)\big)-s'_e\big(\xi(u)\big)$$ we find $$(j^1\circ\widetilde{\delta}^*)\big([\xi]\big)(e)=\left\{\begin{array}{ll} t_e(\xi(v))-s_e(\xi(u)) & \textrm{ if } v\in V({\Gamma})\backslash V({\Delta}) \textrm{ and } u\in V({\Gamma})\backslash V({\Delta}) , \\ t_e\big(\xi(v)\big) & \textrm{ if } v\in V({\Gamma})\backslash V({\Delta}) \textrm{ and } u\in V({\Delta}), \\ -s_e\big(\xi(u)\big) & \textrm{ if } v\in V({\Delta}) \textrm{ and } u\in V({\Gamma})\backslash V({\Delta}), \\ 0 &\textrm{ if } v\in V({\Delta}) \textrm{ and } u\in V({\Delta}),\end{array}\right. \label{eq:long}$$ because $\xi(w)=0$. On the other hand, $j^0([\xi])$ is the element of $C^0({\Gamma},A)$ obtained by setting $j^0([\xi])(v)=\xi(v)$ for all vertices $v\in V({\Gamma})\backslash V({\Delta})$ and $j^0([\xi])(v)=0$ for all $v\in V({\Delta})$. It is clear that $(\delta^*\circ j^0)([\xi])(e)$ is given by \[eq:long\] for any $e\in E({\Gamma})\backslash E({\Delta})$, and $(\delta^*\circ j^0)([\xi])(e)=0$ for any $e\in E({\Delta})$, because any such edge has root vertices in ${\Delta}$ and $\xi$ vanishes at those vertices. It follows that $\delta^*\circ j^0=j^1\circ \widetilde{\delta}^*$, hence the $j^i$ form a chain map. We now consider the diagram \[eq:ses\]. By definition, $$C^0({\Gamma},{\Delta},A)=\left\{\xi\in C^0({\Gamma},A):\xi(v)=0\mbox{ for all }v\in V({\Delta})\right\}$$ and $$C^1({\Gamma},{\Delta},A)=\left\{\eta\in C^1({\Gamma},A):\eta(e)=0\mbox{ for all }e\in E({\Delta})\right\}.$$ It is clear that $j^i$ maps $\widetilde{C}^i({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})$ bijectively onto $C^i({\Gamma},{\Delta},A)$ for $i=0,1$.
It follows that $j^i$ is a chain isomorphism from $\widetilde{C}^i({\Gamma}/{\Delta},A_{{\Gamma}/{\Delta}})$ to $C^i({\Gamma},{\Delta},A)$, which completes the proof. Classification of $G$-covers of graphs and tropical curves {#sec:classification} ========================================================== In this section, we use the dilated cohomology groups defined in the previous section to classify $G$-covers of graphs and tropical curves. In Sec. \[sec:Gcoversofgraphs\] we give the main classification result for unweighted graphs, Thm. \[thm:main\], which identifies the set of $G$-covers of ${\Gamma}$ with a given dilation datum $D$ with the group $H^1({\Gamma},D)$. Among these covers, we characterize the connected ones in Prop. \[prop:connectedcovers\], and give examples. The case of unramified $G$-covers of a weighted graph is treated in Sec. \[sec:admissible\], the only novelty being a numerical restriction on the dilation datum $D$ imposed by the local Riemann–Hurwitz condition . Finally, the case of weighted metric graphs and tropical curves is summarized in Sec. \[sec:metric\]. $G$-covers of graphs {#sec:Gcoversofgraphs} -------------------- In this section, we determine all $G$-covers of an unweighted graph ${\Gamma}$ with a given $G$-dilation datum $D$. Our theorem generalizes the standard result that the set of topological $G$-covers of ${\Gamma}$ (i.e. with trivial stabilizers) is identified with $H^1({\Gamma},G)$ (see Ex. \[ex:topologicalcovers\] and Ex. \[ex:trivialdilation\]). \[thm:main\] Let ${\Gamma}$ be a graph, let $G$ be a finite abelian group, and let $D$ be a $G$-dilation datum on ${\Gamma}$. Then there is a natural bijection between $H^1({\Gamma},D)$ and the set of $G$-covers having dilation datum $D$. We first explain how to associate an element $[\eta_{{\varphi}}]\in H^1({\Gamma},D)$ to a $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$.
Pick an orientation on $E({\Gamma})$ and a consistent orientation on $E({\Gamma}')$, and denote $s,t:E({\Gamma}')\to V({\Gamma}')$ and $s,t:E({\Gamma})\to V({\Gamma})$ the source and target maps. For each $x\in X({\Gamma})$, the preimage ${\varphi}^{-1}(x)$ is a $G/D(x)$-torsor, so pick a $G$-equivariant bijection $f_x:{\varphi}^{-1}(x)\to G/D(x)$. Namely, for every $x'\in {\varphi}^{-1}(x)$ and every $g\in G$ we have $$f_x(gx')=f_x(x')+g\ {\operatorname{mod}}D(x).$$ We require that if $e=(h_1,h_2)\in E({\Gamma})$, then, under the identification ${\varphi}^{-1}(h_1)={\varphi}^{-1}(h_2)={\varphi}^{-1}(e)$, the two maps $f_{h_1}$ and $f_{h_2}$ are equal, in which case we denote them by $f_e$. If $l\in L({\Gamma})$ is a leg, then $D(l)\subset D(r(l))$ and we have the following diagram of $G$-sets: $$\begin{tikzcd} {\varphi}^{-1}(l)\arrow[r,"r"]\arrow[d,"f_l"'] & {\varphi}^{-1}\big(r(l)\big)\arrow[d,"f_{r(l)}"] \\ G/D(l)\arrow[r] & G/D\big(r(l)\big)\end{tikzcd}$$ The vertical maps are bijections, while the horizontal maps are surjections. Adding a constant to $f_l$ if necessary, we can assume that the lower horizontal map is reduction modulo $D({r(l)})/D(l)$. Now let $e\in E({\Gamma})$ be an edge, then $D(s(e))$ and $D(t(e))$ are subgroups of $G$ containing $D(e)$. The source and target maps restrict to $G$-equivariant surjections $s:{\varphi}^{-1}(e)\to {\varphi}^{-1}(s(e))$ and $t:{\varphi}^{-1}(e)\to {\varphi}^{-1}(t(e))$, and we have a commutative diagram of $G$-sets $$\begin{tikzcd} {\varphi}^{-1}\big(s(e)\big)\arrow[d,"f_{s(e)}"'] & {\varphi}^{-1}(e)\arrow[l,"s"']\arrow[r,"t"]\arrow[d,"f_e"'] & {\varphi}^{-1}\big(t(e)\big)\arrow[d,"f_{t(e)}"] \\ G/D\big(s(e)\big) & G/D(e)\arrow[l]\arrow[r] & G/D\big(t(e)\big)\end{tikzcd}$$ where the vertical arrows are bijections. The lower horizontal arrows are surjections, and are therefore given by adding certain elements $\eta_t(e)\in G/D({t(e)})$ and $\eta_s(e)\in G/D(s(e))$, and then reducing modulo $D({t(e)})$ and $D(s(e))$, respectively. Hence the cover ${\varphi}$ determines an element $$\eta_{{\varphi}}=\big(\eta_s(e),-\eta_t(e)\big)_{e\in E({\Gamma})}\in \prod_{e\in E({\Gamma})} G/D\big(s(e)\big)\oplus G/D\big(t(e)\big).$$ Denote by $[\eta_{{\varphi}}]$ its class in $H^1({\Gamma},D)$.
We need to verify that the association ${\varphi}\mapsto[\eta_{{\varphi}}]$ is independent of all choices. Suppose that we chose different bijections ${\widetilde{f}}_x:{\varphi}^{-1}(x)\to G/D(x)$. For any leg $l\in L({\Gamma})$ we can assume, as above, that the induced map $G/D(l)\to G/D(r(l))$ is reduction modulo $D(r(l))/D(l)$. Now let $e\in E({\Gamma})$ be an edge. We have a diagram of $G$-sets $$\begin{tikzcd} G/D\big(s(e)\big) & G/D(e)\arrow[l]\arrow[r] & G/D\big(t(e)\big) \\ {\varphi}^{-1}\big(s(e)\big)\arrow[u,"{\widetilde{f}}_{s(e)}"]\arrow[d,"f_{s(e)}"'] & {\varphi}^{-1}(e)\arrow[l,"s"']\arrow[r,"t"]\arrow[u,"{\widetilde{f}}_e"]\arrow[d,"f_e"'] & {\varphi}^{-1}\big(t(e)\big)\arrow[u,"{\widetilde{f}}_{t(e)}"]\arrow[d,"f_{t(e)}"'] \\ G/D\big(s(e)\big) & G/D(e)\arrow[l]\arrow[r] & G/D\big(t(e)\big)\end{tikzcd}$$ The top horizontal maps define an element ${\widetilde{\eta}}_{{\varphi}}=({\widetilde{\eta}}_s(e), -{\widetilde{\eta}}_t(e))_{e\in E({\Gamma})}$ and a corresponding class $[{\widetilde{\eta}}_{{\varphi}}]$ in $H^1({\Gamma},D)$. The middle column consists of isomorphisms of $G$-sets, hence the map ${\widetilde{f}}_e\circ f_e^{-1}:G/D(e)\to G/D(e)$ is the addition of an element $\omega(e)\in G/D(e)$. Similarly, the isomorphisms ${\widetilde{f}}_{s(e)}\circ f_{s(e)}^{-1}:G/D\big(s(e)\big)\to G/D\big(s(e)\big)$ and ${\widetilde{f}}_{t(e)}\circ f_{t(e)}^{-1}:G/D\big({t(e)}\big)\to G/D\big({t(e)}\big)$ are given by adding certain elements $\xi\big(s(e)\big)\in G/D\big(s(e)\big)$ and $\xi\big({t(e)}\big)\in G/D\big({t(e)}\big)$. The two vertical rectangles give the following relations on all of these elements: $$\begin{split} \eta_s(e)+\xi(s(e))&=\omega(e)+{\widetilde{\eta}}_s(e){\operatorname{mod}}D(s(e))\\ \eta_t(e)+\xi({t(e)})&=\omega(e)+{\widetilde{\eta}}_t(e) {\operatorname{mod}}D({t(e)}). \end{split}$$ Comparing this with Eq. , we see that $[\eta_{{\varphi}}]=[{\widetilde{\eta}}_{{\varphi}}]$, so the $G$-cover ${\varphi}$ determines a well-defined element of $H^1({\Gamma},D)$. Conversely, let $D$ be a $G$-dilation datum on ${\Gamma}$, let $[\eta]\in H^1({\Gamma},D)$ be an element, and let $\big(\eta_s(e),-\eta_t(e)\big)_{e\in E({\Gamma})}$ be a lift of $[\eta]$.
Running the above construction in reverse, we obtain a $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$ with associated $G$-dilation datum $D$. Specifically, let - $V({\Gamma}')=\coprod_{v\in V({\Gamma})} G/D(v)$, - $E({\Gamma}')=\coprod_{e\in E({\Gamma})} G/D(e)$, and - $L({\Gamma}')=\coprod_{l\in L({\Gamma})} G/D(l)$. We define ${\varphi}:{\Gamma}'\to {\Gamma}$ by sending each $G/D(x)$ to the corresponding $x\in X({\Gamma})$. For a leg $l\in L({\Gamma})$, we define the lifting $r:G/D(l)\to G/D\big(r(l)\big)$ of the root map to ${\Gamma}'$ as reduction modulo $D(r(l))/D(l)$. Finally, for an edge $e\in E({\Gamma})$, we define the liftings $s:G/D(e)\to G/D(s(e))$ and $t:G/D(e)\to G/D(t(e))$ of the source and target maps as $$s(g)=g+\eta_s(e){\operatorname{mod}}D(s(e))/D(e) \quad \textrm{ and }\quad t(g)=g-\eta_t(e){\operatorname{mod}}D(t(e))/D(e).$$ We observe that there is at least one $G$-cover associated to any $G$-dilation datum $D$ on ${\Gamma}$, namely the [*trivial $G$-cover with dilation datum*]{} $D$, corresponding to the identity element of $H^1({\Gamma},D)$. Explicitly, the source graph $\Gamma'$ is the disjoint union of the sets $G/D(x)$ for all $x\in X({\Gamma})$, and the root maps $G/D(x)\to G/D\big(r(x)\big)$ are the quotient maps corresponding to the injections $D(x)\subset D\big(r(x)\big)$. Note also that the set of $G$-covers with dilation datum $D$ depends only on the vertex groups $D(v)$ for $v\in V({\Gamma})$, or, alternatively, on the dual stratification ${\mathcal{S}}^*(D)$. The correspondence ${\varphi}\mapsto \eta_{{\varphi}}$ between $G$-covers of ${\Gamma}$ with dilation datum $D$ and elements of $H^1({\Gamma},D)$ given in Thm. \[thm:main\] is functorial, in the following sense. Let ${\varphi}_1:{\Gamma}'_1\to {\Gamma}$ and ${\varphi}_2:{\Gamma}'_2\to {\Gamma}$ be $G$-covers, and let $\tau:{\Gamma}'_1\to {\Gamma}'_2$ be a morphism of $G$-covers (in the sense of Def. \[def:morphismofGcovers\]).
Then the dilation datum $D_{{\varphi}_1}$ is a refinement of $D_{{\varphi}_2}$, and by the proof of Prop. \[prop:tree\] there is a surjective map $\pi:H^1({\Gamma},D_{{\varphi}_1})\to H^1({\Gamma},D_{{\varphi}_2})$. It is easy to check that $\pi(\eta_{{\varphi}_1})=\eta_{{\varphi}_2}$. More generally, the correspondence ${\varphi}\mapsto \eta_{{\varphi}}$ is functorial with respect to pullback maps induced by finite harmonic morphisms ${\Delta}\to {\Gamma}$, which, as we have already remarked, are beyond the scope of our paper. We saw in Prop. \[prop:tree\] that $H^1({\Delta},D)=0$ for any $G$-dilation datum on a tree ${\Delta}$. In other words, any $G$-cover of a tree is isomorphic to the trivial $G$-cover associated to some dilation datum $D$. This statement allows us to give a somewhat explicit description of $G$-covers of an arbitrary graph ${\Gamma}$. Let ${\varphi}:{\Gamma}'\to {\Gamma}$ be a $G$-cover with dilation datum $D$. Pick a spanning tree ${\Delta}\subset {\Gamma}$, and let $\{e_1,\ldots,e_n\}=E({\Gamma})\backslash E({\Delta})$ be the remaining edges. The restricted $G$-cover ${\varphi}|_{{\Delta}}$ is isomorphic to the trivial $G$-cover of ${\Delta}$ with dilation datum $D|_{{\Delta}}$; in other words, there is a $G$-equivariant bijection $$\tau:{\varphi}^{-1}({\Delta})\to \coprod_{x\in X({\Delta})} G/D(x),\quad \tau\big({\varphi}^{-1}(x)\big)=G/D(x).$$ The cover ${\varphi}$ is then completely determined by the way that the fibers $G/D(e_i)$ are attached to the fibers $G/D(s(e_i))$ and $G/D(t(e_i))$. As we saw in the proof above, this attachment datum can be recorded (in general, non-uniquely) by an $n$-tuple of elements $\eta_i\in A^D(e_i)=G/C(e_i)$. In terms of the dilated cohomology group, we have shown that any element $[\eta]\in H^1({\Gamma},D)$ can be represented by a cochain $\eta\in C^1({\Gamma},D)$ such that $\eta(e)=0$ unless $e=e_i$ for some $i=1,\ldots,n$ (cf. Lemma 2.3.4 in [@LenUlirsch]).
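To make the spanning-tree description concrete, here is a small Python sketch; it is our own illustration, not part of the original text, and the function and variable names are ours. For a cyclic group $G={\mathbb{Z}}/n{\mathbb{Z}}$ it builds the lifted source and target maps of Thm. \[thm:main\], representing the cosets of the subgroup of order `D[x]` by integers modulo `n // D[x]`.

```python
# Sketch: build the cover determined by a dilation datum and a cochain,
# for the cyclic group G = Z/n. D maps each vertex or edge name to the
# ORDER of its dilation subgroup (a divisor of n); edges is a list of
# triples (name, u, v); eta maps each edge name to (eta_s, eta_t).
def build_cover(n, D, edges, eta):
    lifted = []
    for name, u, v in edges:
        for b in range(n // D[name]):  # one lift per coset of D(e)
            # Lifted source and target maps: s(g) = g + eta_s(e) and
            # t(g) = g - eta_t(e), reduced modulo the vertex dilation group.
            lifted.append(((name, b),
                           (u, (b + eta[name][0]) % (n // D[u])),
                           (v, (b - eta[name][1]) % (n // D[v]))))
    return lifted

# The graph with two vertices u, v and two edges e, f, with G = Z/4 and no
# dilation; eta is trivialized along the spanning tree {u, v, f}, eta_t(e) = 1.
edges = [("e", "u", "v"), ("f", "u", "v")]
D = {"u": 1, "v": 1, "e": 1, "f": 1}
lifted = build_cover(4, D, edges, {"e": (0, 1), "f": (0, 0)})

# Check connectivity of the cover with a simple union-find over its vertices.
parent = {}
def find(x):
    while parent.setdefault(x, x) != x:
        x = parent[x]
    return x
for _, a, b in lifted:
    parent[find(a)] = find(b)
print(len({find(x) for x in list(parent)}))  # 1: the cover is a single 8-cycle
```

Since the class $\eta_t(e)=1$ generates ${\mathbb{Z}}/4{\mathbb{Z}}$, the resulting cover is connected: a single $8$-cycle mapping to the two-vertex graph.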
### Connected covers {#connected-covers .unnumbered} Given a connected graph ${\Gamma}$, it is natural to ask which of its $G$-covers constructed above are connected. To answer this question, we first consider the following construction. Let $H$ be a proper subgroup of $G$, and let $D$ be an $H$-dilation datum on a connected graph ${\Gamma}$. We can then view $D$ as a $G$-dilation datum, which we denote by $D^G$ to prevent confusion. There are natural injective chain maps $\iota^i:C^i({\Gamma},D)\to C^i({\Gamma},D^G)$ that induce maps $\iota^i:H^i({\Gamma},D)\to H^i({\Gamma},D^G)$. The maps $\iota^i:H^i({\Gamma},D)\to H^i({\Gamma},D^G)$ are injective. The cochain groups $C^0({\Gamma},D)$ and $C^0({\Gamma},D^G)$ are the products of $H/D(v)$ and $G/D(v)$, respectively, over all $v\in V({\Gamma})$. It follows that $\operatorname{Coker}\iota^0$ can be identified with the cochain group $C^0({\Gamma},G/H)$, and similarly $\operatorname{Coker}\iota^1=C^1({\Gamma},G/H)$. By the snake lemma, we have a long exact sequence of cohomology groups: $$\begin{tikzcd} 0 \arrow[r]& H^0({\Gamma},D)\arrow[r,"\iota^0"] & H^0({\Gamma},D^G)\arrow[r] & H^0({\Gamma},G/H) \arrow[r] & H^1({\Gamma},D) \arrow[r,"\iota^1"] & H^1({\Gamma},D^G).\end{tikzcd}$$ Therefore $\iota^0$ is injective. To prove that $\iota^1$ is injective, we show that the map $\pi:H^0({\Gamma},D^G)\to H^0({\Gamma},G/H)$ is surjective. All of our chain complexes split into direct sums over the connected components of ${\Gamma}$, so we assume that ${\Gamma}$ is connected. In this case $H^0({\Gamma},G/H)=G/H$, and moreover any $[\xi]\in H^0({\Gamma},G/H)$ is represented by a constant cochain $\xi(v)=\overline{g}$ for some $\overline{g}\in G/H$. Pick $g\in G$ representing $\overline{g}$; then the constant cochain $\xi'(v)=g\ {\operatorname{mod}}\ D(v)$ in $C^0({\Gamma},D^G)$ lies in $\operatorname{Ker}\delta^0_{{\Gamma},D^G}$, hence represents a class $[\xi']\in H^0({\Gamma},D^G)$, and $\pi\big([\xi']\big)=[\xi]$.
Therefore $\pi$ is surjective, so $\iota^1$ is injective. There is a natural way to associate a $G$-cover of ${\Gamma}$ to an $H$-cover of ${\Gamma}$ that corresponds, under the bijection of Thm. \[thm:main\], to the injective map $\iota^1:H^1({\Gamma},D)\to H^1({\Gamma},D^G)$. Let ${\Gamma}$ be a graph, let $H\subset G$ be finite abelian groups, and let ${\varphi}\colon{\Gamma}'\to {\Gamma}$ be an $H$-cover with $H$-dilation datum $D$. We define the $G$-cover ${\varphi}^G:{\Gamma}'^G\to {\Gamma}$ with $G$-dilation datum $D^G$, called the [*extension of ${\varphi}$ by $G$*]{}, as follows. For each $x\in X({\Gamma})$, pick an identification of $H$-sets, as in the proof of Thm. \[thm:main\], of ${\varphi}^{-1}(x)$ with $H/D(x)$, and for every edge $e\in E({\Gamma})$ let $\eta_t(e)\in H/D\big(t(e)\big)$ and $\eta_s(e)\in H/D\big(s(e)\big)$ be the elements that determine the root maps $t:{\varphi}^{-1}(e)\to {\varphi}^{-1}\big(t(e)\big)$ and $s:{\varphi}^{-1}(e)\to {\varphi}^{-1}\big(s(e)\big)$. We define ${\varphi}^G$ by identifying each fiber $({\varphi}^G)^{-1}(x)$ with the $G$-set $G/D(x)$, and rooting $({\varphi}^G)^{-1}(e)$ to $({\varphi}^G)^{-1}\big(t(e)\big)$ and $({\varphi}^G)^{-1}\big(s(e)\big)$ using $\eta_t(e)$ and $\eta_s(e)$, viewed, respectively, as elements of $G/D\big(t(e)\big)$ and $G/D\big(s(e)\big)$. Looking at the proof of Thm. \[thm:main\], it is clear that $\iota^1(\eta_{{\varphi}})=\eta_{{\varphi}^G}$. Furthermore, the cover ${\varphi}^G$ is disconnected (unless $H=G$), since the root maps $t:G/D(e)\to G/D(t(e))$ and $s:G/D(e)\to G/D(s(e))$ preserve the decomposition into $H$-cosets. We now show that all disconnected $G$-covers of a connected graph ${\Gamma}$ arise in this way. Indeed, let ${\varphi}:{\Gamma}'\to {\Gamma}$ be a $G$-cover of a connected graph with $G$-dilation datum $D$, and let ${\Gamma}'={\Gamma}'_1\sqcup \cdots \sqcup {\Gamma}'_n$ be the connected components of ${\Gamma}'$. The group $G$ acts on the connected components by permutation.
Let $H=\big\{g\in G\big\vert g({\Gamma}'_1)={\Gamma}'_1\big\}$; then $D(v)\subset H$ for all $v\in V({\Gamma})$. We view $D$ as an $H$-dilation datum, which we denote $D_H$. It is clear that the restriction ${\varphi}|_{{\Gamma}'_1}\colon{\Gamma}'_1\to {\Gamma}$ is a connected $H$-cover with $H$-dilation datum $D_H$, and that ${\varphi}$ is isomorphic to the extension of ${\varphi}|_{{\Gamma}'_1}$ by $G$. In other words, every disconnected $G$-cover of ${\Gamma}$ is the extension of an $H$-cover, where $H\subset G$ is some proper subgroup. We have proved the following result, which classifies connected $G$-covers of a connected graph ${\Gamma}$. Let ${\Gamma}$ be a connected graph, and let $D$ be a $G$-dilation datum on ${\Gamma}$. If the groups $D(v)$ generate $G$, then every $G$-cover with dilation datum $D$ is connected. If not, then the set of disconnected $G$-covers with dilation datum $D$ is the union of the images of the maps $H^1({\Gamma},D_H)\to H^1({\Gamma},D)$ over all proper subgroups $H\subset G$ such that $D(v)\subset H$ for all $v\in V({\Gamma})$, where for each such $H$, $D_H$ denotes $D$ viewed as an $H$-dilation datum. \[prop:connectedcovers\] \[ex:Klein2\] We now apply the results of this section to enumerate all $G$-covers of the graph ${\Gamma}$ consisting of two vertices $u$ and $v$ joined by two edges $e$ and $f$, when $G={{\mathbb{Z}}/{2}{\mathbb{Z}}}\oplus{{\mathbb{Z}}/{2}{\mathbb{Z}}}$ is the Klein group. In particular, we describe the covers of Ex. \[ex:Klein1\] in terms of dilated cohomology. We recall that we denote by $00$, $10$, $01$, and $11$ the elements of $G$, and by $H_1$, $H_2$, and $H_3$ the subgroups of $G$ generated respectively by $10$, $01$, and $11$.
We enumerate the covers in the following way: first, we enumerate the choices for $D(u)$ and $D(v)$, then, for each choice, we consider the possible $D(e), D(f)\subset D(u)\cap D(v)$, and finally $\#H^1({\Gamma},D)$ counts the $G$-covers with such $G$-dilation data (note that the last two steps are independent, since $H^1({\Gamma},D)$ does not depend on the edge dilation groups). We saw in Ex. \[ex:nedges\] that $H^1({\Gamma},D)=G/(D(u)+D(v))$ for any $G$-dilation datum on ${\Gamma}$. We now make this identification more explicit. Orient ${\Gamma}$ so that $s(e)=s(f)=u$ and $t(e)=t(f)=v$. An element $[\eta]\in H^1({\Gamma},D)$ is represented by two pairs of elements $$\big(\eta_s(e),\eta_t(e)\big), \big(\eta_s(f),\eta_t(f)\big)\in G/D(u)\oplus G/D(v),$$ modulo the relations . It is clear that for any $[\eta]$ we can pick a representative with $\eta_s(e)=0$, $\eta_s(f)=0$, and $\eta_t(f)=0$ (in other words, we trivialize $[\eta]$ along the spanning tree $\{u,v,f\}$), so we can represent $[\eta]$ with a single element $\eta_t(e)\in G/D(v)$. Furthermore, the class of this $\eta_t(e)$ in $G/(D(u)+D(v))$ is equal to $[\eta]$ under the isomorphism $G\big/\big(D(u)+D(v)\big)=H^1({\Gamma},D)$. Explicitly, the cover corresponding to $[\eta]$ is constructed as follows: define the sets $\{u_{ij}\}=G/D(u)$, $\{v_{ij}\}=G/D(v)$, $\{e_{ij}\}=G/D(e)$, and $\{f_{ij}\}= G/D(f)$ (where the labeling is non-unique for a nontrivial dilation group), attach $f_{ij}$ to $u_{ij}$ and $v_{ij}$, and attach $e_{ij}$ to $u_{ij}$ and $v_{ij+\eta_t(e)}$. 1. $D(u)=D(v)=0$. This is the topological case, with trivial dilation. Here $D(e)=D(f)=0$, $H^1({\Gamma},D)=H^1({\Gamma},G)=G$, and there are four covers, three of them non-trivial. All of these covers are disconnected, since there are no surjective maps $\pi_1({\Gamma})={\mathbb{Z}}\to G$. The cover corresponding to $\eta_t(e)=10$ is given in Fig. \[subfig:cover1\]. 2. $D(u)=0$, $D(v)=H_i$ for $i=1,2,3$. 
In this case $D(e)=D(f)=0$, $H^1({\Gamma},D)=G/H_i$, so for each $i$ there is one trivial and one nontrivial cover. For example, Fig. \[subfig:cover2\] shows the non-trivial cover with $D(v)=H_1$ and $\eta_t(e)=01$. There are a total of six covers of this type: three trivial disconnected covers and three non-trivial connected covers. 3. $D(u)=H_i$ for $i=1,2,3$, $D(v)=0$. This case is symmetric to the one above, with three connected and three disconnected covers. 4. $D(u)=D(v)=H_i$ for $i=1,2,3$. Each of the groups $D(e)$ and $D(f)$ can be chosen to be $0$ or $H_i$. Since $H^1({\Gamma},D)=G/H_i$, there is one trivial and one non-trivial cover for each choice. For example, when $D(u)=D(v)=D(e)=H_2$ and $D(f)=0$, we obtain the non-trivial cover of Fig. \[subfig:cover3\] by choosing $\eta_t(e)=10$, and the trivial cover of Fig. \[subfig:cover4\] by choosing $\eta_t(e)=00$. There are a total of 24 such covers, 12 connected and 12 disconnected. 5. $D(u)=H_i$, $D(v)=H_j$, $i\neq j$. The only possibility is $D(e)=D(f)=0$, and $H^1({\Gamma},D)=0$, so for each $i\neq j$ there is a unique trivial cover, for a total of six covers, all connected. 6. If one or both of the groups $D(u)$ and $D(v)$ are equal to $G$, then $H^1({\Gamma},D)=0$. Picking $D(e)$ and $D(f)$ to be arbitrary subgroups of $D(u)\cap D(v)$, we obtain 51 connected trivial covers. Two such covers are given in Figs. \[subfig:cover5\] and \[subfig:cover6\]. We note that 9 of these covers, including the one on \[subfig:cover6\], have non-cyclic edge dilation groups, and are therefore not algebraically realizable. In total, there are 97 Klein covers of ${\Gamma}$, including 75 connected covers. Weighted graphs and unramified $G$-covers {#sec:admissible} ----------------------------------------- We now consider the category of weighted graphs and finite harmonic morphisms between them. 
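As an aside, the totals of Ex. \[ex:Klein2\] can be reproduced mechanically. The following Python sketch is our own illustration, not part of the original text: it runs over all choices of vertex and edge dilation groups on the two-vertex, two-edge graph, uses the identification $H^1({\Gamma},D)=G/\big(D(u)+D(v)\big)$ of Ex. \[ex:nedges\], and applies the connectivity criterion of Prop. \[prop:connectedcovers\].

```python
from itertools import product

# The Klein group G = Z/2 x Z/2 and its five subgroups.
G = frozenset({(0, 0), (1, 0), (0, 1), (1, 1)})
subgroups = [frozenset({(0, 0)}),
             frozenset({(0, 0), (1, 0)}),
             frozenset({(0, 0), (0, 1)}),
             frozenset({(0, 0), (1, 1)}),
             G]

def add(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def generated(elts):
    """Subgroup of G generated by a set of elements."""
    S = {(0, 0)} | set(elts)
    while True:
        T = {add(a, b) for a in S for b in S}
        if T == S:
            return frozenset(S)
        S = T

total = connected = 0
for Du, Dv in product(subgroups, repeat=2):
    # Edge dilation groups are subgroups of D(u) ∩ D(v); they do not
    # affect H^1, but each choice gives a different dilation datum.
    edge_choices = [H for H in subgroups if H <= (Du & Dv)]
    S = generated(Du | Dv)  # the subgroup D(u) + D(v)
    cosets = {frozenset(add(g, s) for s in S) for g in G}  # H^1 = G/S
    for De, Df, c in product(edge_choices, edge_choices, cosets):
        total += 1
        # Connected iff D(u), D(v) and a lift of the class generate G.
        if generated(Du | Dv | {next(iter(c))}) == G:
            connected += 1

print(total, connected)  # 97 75
```

The printed totals agree with the hand count above: 97 Klein covers, 75 of them connected.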
Given a weighted graph ${\Gamma}$ and a $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$ (where we view ${\Gamma}$ as an unweighted graph and ${\varphi}$ as a morphism), there is a natural way to promote ${\varphi}$ to a harmonic morphism of degree equal to $\#(G)$. Since the action of $G$ is transitive on the fibers, the genera of all vertices of ${\Gamma}'$ lying in a single fiber are equal. Therefore a $G$-cover of ${\Gamma}$ with a given dilation datum $D$ is uniquely specified by an element of $H^1({\Gamma},D)$ and a weight function $g':V({\Gamma})\to {\mathbb{Z}}_{\geq 0}$ (which we lift to ${\Gamma}'$). There is a natural way to specify this weight: require ${\varphi}$ to be unramified. This condition imposes a numerical restriction on the $G$-dilation datum $D$. Let ${\Gamma}$ be a weighted graph, and let $G$ be a finite abelian group. A [*$G$-cover*]{} of ${\Gamma}$ is a finite harmonic morphism ${\varphi}:{\Gamma}'\to {\Gamma}$ together with an action of $G$ on ${\Gamma}'$, such that the following properties are satisfied: 1. The action is invariant with respect to ${\varphi}$. 2. For each $x\in X({\Gamma})$, the group $G$ acts transitively on the fiber ${\varphi}^{-1}(x)$. 3. $\#(G)=\deg {\varphi}$. We say that a $G$-cover ${\varphi}$ is [*effective*]{} or [*unramified*]{} if it is so as a harmonic morphism. This definition is similar to Definition 7.1.2 in [@BertinRomagny]. Let ${\Gamma}$ be a weighted graph. In Example \[ex:topologicalcovers\], we saw that an element $\eta\in H^1({\Gamma},G)$ determines a topological $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$. We now weight ${\Gamma}'$ by setting $g(v')=g(v)$ for all $v\in V({\Gamma})$ and all $v'\in {\varphi}^{-1}(v)$. Setting $\deg_{{\varphi}}(x)=1$ for all $x\in X({\Gamma}')$, we see that ${\varphi}$ is an unramified $G$-cover. Conversely, it is clear that a $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$ is a topological $G$-cover if and only if $\deg_{{\varphi}}(x)=1$ for all $x\in X({\Gamma}')$. 
\[example:topologicaladmissiblecovers\] We now classify all $G$-covers and unramified $G$-covers of a given weighted graph ${\Gamma}$. We first note that there is no difference between studying $G$-covers of a weighted graph and $G$-covers of the underlying unweighted graph. Indeed, let ${\varphi}:{\Gamma}'\to {\Gamma}$ be a $G$-cover of a weighted graph ${\Gamma}$, and let $D_{{\varphi}}$ be the associated $G$-dilation datum. An element $g\in G$ determines an automorphism of ${\Gamma}'$, which in particular is an unramified cover of degree one. Therefore, for any $x'\in X({\Gamma}')$, the harmonic morphism ${\varphi}$ has the same local degree at $x'$ and at $g(x')$. Since $G$ acts transitively on ${\varphi}^{-1}(x)$, we see that $d_{{\varphi}}(x')$ is the same for all $x'\in {\varphi}^{-1}(x)$. Since $$\deg {\varphi}=\sum_{x'\in {\varphi}^{-1}(x)} d_{{\varphi}}(x')=d_{{\varphi}}(x') \#\big({\varphi}^{-1}(x)\big)=d_{{\varphi}}(x')\big[G:D_{{\varphi}}(x)\big],$$ we see that $$d_{{\varphi}}(x')=\#\big(D_{{\varphi}}(x)\big) \label{eq:degreedilation}$$ for all $x'\in {\varphi}^{-1}(x)$. Therefore, the local degrees of ${\varphi}$ are uniquely defined by the associated dilation datum. Conversely, if ${\varphi}:{\Gamma}'\to {\Gamma}$ is a $G$-cover of ${\Gamma}$ viewed as an unweighted graph, then Eq. \[eq:degreedilation\] gives the unique way to promote ${\varphi}$ to a harmonic morphism of degree $\#(G)$. As a result, the classification of $G$-covers of weighted graphs reduces trivially to the unweighted case, except that we need to manually specify the weights on the cover. Let ${\Gamma}$ be a weighted graph, let $G$ be a finite abelian group, let $D$ be a $G$-dilation datum on ${\Gamma}$, and let $g':V({\Gamma})\to {\mathbb{Z}}_{\geq 0}$ be a function. Then there is a natural bijection between $H^1({\Gamma},D)$ and the set of $G$-covers ${\varphi}:{\Gamma}'\to {\Gamma}$ having dilation datum $D$, such that $g(v')=g'({\varphi}(v'))$ for all $v'\in V({\Gamma}')$.
\[thm:main2\] This follows immediately from Thm. \[thm:main\], since $G$ acts transitively on each fiber ${\varphi}^{-1}(v)$ and therefore the numbers $g(v')$ for $v'\in {\varphi}^{-1}(v)$ are all equal to some $g'(v)$. For the remainder of this section, we restrict our attention to unramified $G$-covers, which are the graph-theoretic analogues of étale maps. Given such a cover ${\varphi}:{\Gamma}'\to {\Gamma}$, we consider the Riemann–Hurwitz condition  at all vertices $v'\in V({\Gamma}')$. This condition uniquely specifies the genera of the vertices of ${\Gamma}'$. However, these genera may fail to be non-negative integers, which imposes a numerical constraint on the $G$-dilation data on ${\Gamma}$ that are associated to unramified $G$-covers. Let $({\Gamma},D)$ be a $G$-dilated graph. We define the [*index function*]{} $a_{{\Gamma},D}\colon V({\Gamma})\times S(G)\to {\mathbb{Z}}_{\geq 0}$ of $({\Gamma},D)$ by $$a_{{\Gamma},D}(v;H)=\#\big\{h\in T_v{\Gamma}\big\vert D(h)=H\big\}. \label{eq:dilationindex}$$ \[def:index\] Let ${\varphi}:{\Gamma}'\to{\Gamma}$ be an unramified $G$-cover, let $D_{{\varphi}}$ be the associated $G$-dilation datum, and let $a_{{\Gamma},D}$ be the index function of $({\Gamma},D_{{\varphi}})$. Let $v\in V({\Gamma})$ be a vertex with dilation group $D(v)$, and let $S(D(v))$ be the set of subgroups of $D(v)$. Then $$2-2g'(v)-\sum_{K\in S(D(v))} a_{{\Gamma},D}(v;K)\big[D(v):K\big]=\#\big(D(v)\big)\Big[2-2g(v)-\sum_{K\in S(D(v))} a_{{\Gamma},D}(v;K)\Big] \label{eq:localRH2}$$ where $g'(v)$ is the genus of any vertex $v'\in{\varphi}^{-1}(v)$. \[prop:admissibility\] For any half-edge $h\in T_v{\Gamma}$, the dilation group $D(h)$ is a subgroup of $D(v)$. Hence $$\operatorname{val}(v)=\sum_{K\in S(D(v))}a_{{\Gamma},D}(v;K).$$ As noted above, each $h\in T_v{\Gamma}$ has $[D(v):D(h)]$ preimages in ${\Gamma}'$ attached to $v'$. 
Therefore $$\operatorname{val}(v')=\sum_{K\in S(D(v))} a_{{\Gamma},D}(v;K)\big[D(v):K\big].$$ Plugging this into the local Riemann–Hurwitz condition, we obtain Eq. \[eq:localRH2\]. Let ${\Gamma}$ be a weighted graph. A $G$-dilation datum $D$ on ${\Gamma}$ is called [*admissible*]{} if for every $v\in V({\Gamma})$ the number $$g'(v)=\#\big(D(v)\big)\big[g(v)-1\big]+1+\frac{1}{2}\sum_{K\in S(D(v))}a_{{\Gamma},D}(v;K)\big(\#(D(v))-[D(v):K]\big) \label{eq:gtop}$$ determined by Eq. \[eq:localRH2\] is a non-negative integer. A $G$-stratification ${\mathcal{S}}$ is called [*admissible*]{} if the associated $G$-dilation datum $D$ is admissible. \[def:admissibledilation\] It is clear that the $G$-dilation datum associated to an unramified $G$-cover is admissible. Conversely, if $D$ is an admissible $G$-dilation datum, then Eq. \[eq:gtop\] uniquely specifies the weight function on any $G$-cover of ${\Gamma}$ with dilation datum $D$. Hence we obtain the following result. Let $({\Gamma},D)$ be a $G$-dilated weighted graph. If $D$ is admissible, then there is a natural bijection between the set of unramified $G$-covers of ${\Gamma}$ having dilation datum $D$ and $H^1({\Gamma},D)$. Otherwise, there are no unramified covers of ${\Gamma}$ having dilation datum $D$. \[thm:main3\] The result follows immediately from Thm. \[thm:main2\] and Prop. \[prop:admissibility\]. Condition \[eq:gtop\] imposes two restrictions on a $G$-stratification ${\mathcal{S}}$ (equivalently, on a $G$-dilation datum $D$): a stability condition ($g'(v)$ is non-negative) and a parity condition ($g'(v)$ is an integer). We make a number of general observations. First, we note that the admissibility condition is trivially satisfied at each undilated vertex $v\in V({\Gamma})\backslash V({\Gamma}_{dil})$. Indeed, if $D(v)=0$, then Eq. \[eq:gtop\] reduces to $g'(v)=g(v)$. We also observe that $g'(v)$ is positive, and hence the stability condition is satisfied, if $g(v)\geq 1$. We also note that $g'(v)$ is an integer if $\#(D(v))$ is odd, so Eq. \[eq:gtop\] does not impose a parity condition if the order of $G$ is odd.
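Eq. \[eq:gtop\] is simple enough to evaluate mechanically. The following Python sketch is our own illustration (the function names are ours, not from the original text): it computes $g'(v)$ from the genus $g(v)$, the order of $D(v)$, and a dictionary sending each index $[D(v):K]$ to the count $a_{{\Gamma},D}(v;K)$, and then checks the stability and parity conditions.

```python
from fractions import Fraction

def cover_genus(g, d, index_counts):
    """g'(v) from Eq. (gtop): d = #D(v); index_counts maps the index
    [D(v):K] of each subgroup K of D(v) to the number of half-edges
    at v with dilation group K."""
    gp = Fraction(d * (g - 1) + 1)
    for index, count in index_counts.items():
        gp += Fraction(count * (d - index), 2)
    return gp

def admissible_at(g, d, index_counts):
    """Stability (g'(v) >= 0) and parity (g'(v) an integer) at v."""
    gp = cover_genus(g, d, index_counts)
    return gp >= 0 and gp.denominator == 1

# A genus-0 vertex with D(v) = Z/2 and two undilated half-edges (index 2)
# violates stability, while fully dilating both half-edges (index 1) is
# admissible, as in Prop. (admissibilitystability):
print(cover_genus(0, 2, {2: 2}), cover_genus(0, 2, {1: 2}))  # -1 0
# An extremal dilated vertex with a single dilated half-edge fails parity:
print(admissible_at(0, 2, {1: 1}))  # False, since g'(v) = -1/2
```

The two printed failures match the semistability computations carried out below for isolated and extremal vertices of the dilated subgraph.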
Equation \[eq:gtop\] is the only place where the weight function on ${\Gamma}$ plays a role in the classification of unramified $G$-covers of ${\Gamma}$. Furthermore, when checking the admissibility condition at a vertex $v\in V({\Gamma})$, we only need to know whether $g(v)$ is positive or not; the actual value is not important. Therefore, for example, two weighted graphs having the same underlying unweighted graph, and having the same set of genus zero vertices with respect to the two weight functions, will have the same set of unramified $G$-covers. We now show that an admissible $G$-stratification has the following semistability properties. Let ${\mathcal{S}}$ be an admissible $G$-stratification of a graph ${\Gamma}$. For every simple vertex $v\in V({\Gamma})$ and for each $H\in S(G)$, either $v\in V({\Gamma}_H)$ and $\operatorname{val}_{{\Gamma}_H}(v)=2$, or $v\notin V({\Gamma}_H)$. \[prop:admissibilitystability\] Suppose that ${\mathcal{S}}$ is admissible. Let $v\in V({\Gamma})$ be a vertex with $g(v)=0$ and two tangent directions $h_1$ and $h_2$. We write condition \[eq:gtop\] at $v$: $$g'(v)=1-\frac{1}{2}\left([D(v):D(h_1)]+[D(v):D(h_2)]\right).$$ The only way that this number can be a non-negative integer is $D(v)=D(h_1)=D(h_2)$, hence $v\in V({\Gamma}_H)$ and $\operatorname{val}_{{\Gamma}_H}(v)=2$ if $H\subset D(v)$ and $v\notin V({\Gamma}_H)$ otherwise. Let ${\mathcal{S}}$ be an admissible $G$-stratification of a graph ${\Gamma}$. Then the dilated subgraph ${\Gamma}_{dil}\subset {\Gamma}$ is semistable. \[prop:semistable\] We recall that ${\Gamma}_{dil}$ is the union of the ${\Gamma}_H$ for all subgroups $H\subset G$ except $H=0$. Let $D$ be the $G$-dilation datum associated to ${\mathcal{S}}$, and let $v\in V({\Gamma}_{dil})$ be a vertex, so that $D(v)\neq 0$, and assume that $g(v)=0$. If $v$ is an isolated vertex of ${\Gamma}_{dil}$, then $a_{{\Gamma},D}(v;K)=0$ for all subgroups $K\subset D(v)$ such that $K\neq 0$.
It follows that the sum on the right-hand side of Eq. \[eq:gtop\] vanishes, hence $g'(v)=-\#\big(D(v)\big)+1<0$. Similarly, suppose that $v$ is an extremal vertex of ${\Gamma}_{dil}$, so that there exists a unique edge $h\in T_v {\Gamma}_{dil}$ with $H=D(h)\neq 0$. It follows that $a_{{\Gamma},D}(v;H)=1$ and $a_{{\Gamma},D}(v;K)=0$ for all $K\neq 0,H$, hence $$g'(v)= -\#\big(D(v)\big)+1+\frac{1}{2}\big(\#(D(v))-[D(v):H]\big)=1-\frac{\#\big(D(v)\big)}{2}\left(1+\frac{1}{\#(H)}\right)<0,$$ since $\#\big(D(v)\big)\geq \#(H)\geq 2$. Both computations contradict the admissibility of ${\mathcal{S}}$, therefore $\operatorname{val}_{{\Gamma}_{dil}}(v)\geq 2$ and ${\Gamma}_{dil}$ is semistable. ### Unramified $G$-covers and stability {#unramified-g-covers-and-stability .unnumbered} Let ${\Gamma}$ be a weighted graph, and let ${\Gamma}_{st}$ be its stabilization. We have seen in Def. \[def:coverstabilization\] that any unramified cover ${\varphi}:{\Gamma}'\to {\Gamma}$ descends to an unramified cover ${\varphi}_{st}:{\Gamma}'_{st}\to {\Gamma}_{st}$. It follows that we can restrict unramified $G$-covers of ${\Gamma}$ to its stabilization, and vice versa. Let ${\Gamma}$ be a weighted graph. Then there is a natural bijection between the unramified $G$-covers of ${\Gamma}$ and the unramified $G$-covers of ${\Gamma}_{st}$. \[prop:coversstability\] Let ${\varphi}:{\Gamma}'\to {\Gamma}$ be an unramified $G$-cover. The $G$-action descends to the subgraph ${\Gamma}'_{sst}\subset {\Gamma}'$, hence ${\varphi}_{sst}:{\Gamma}'_{sst}\to {\Gamma}_{sst}$ is a $G$-cover. We note that the supporting arguments for Def. \[def:coverstabilization\] show that ${\varphi}$ is undilated on ${\Gamma}'\backslash {\Gamma}'_{sst}$; alternatively, this follows from Prop. \[prop:semistable\], since any semistable subgraph of ${\Gamma}$ is contained in ${\Gamma}_{sst}$. Therefore, for any vertex $v\in V({\Gamma}_{sst})$, any adjacent half-edge $h\in H({\Gamma})\backslash H({\Gamma}_{sst})$ has $\deg {\varphi}$ preimages in $H({\Gamma}')$, evenly split among the preimages of $v$.
It follows that ${\varphi}_{sst}$ is an unramified $G$-cover. It is then clear how to descend the $G$-action to ${\varphi}_{st}:{\Gamma}'_{st}\to {\Gamma}_{st}$: for any $g\in G$ and any simple vertex $v'\in V({\Gamma}'_{sst})$ that is replaced by an edge or a leg, $g$ maps that edge or leg to the edge or leg that replaces $g(v')$. Conversely, let ${\Gamma}$ be a weighted graph, and let ${\varphi}_{st}:{\Gamma}'_{st}\to {\Gamma}_{st}$ be an unramified $G$-cover, where ${\Gamma}'_{st}$ is a stable weighted graph. The semistabilization ${\Gamma}_{sst}$ is obtained from ${\Gamma}_{st}$ by splitting edges and legs at new vertices of genus 0. Performing the same operation on the preimages of these vertices in ${\Gamma}'_{st}$, we obtain an unramified $G$-cover ${\varphi}_{sst}:{\Gamma}'_{sst}\to {\Gamma}_{sst}$. The graph ${\Gamma}$ is obtained from ${\Gamma}_{sst}$ by attaching trees having no vertices of positive genus. For each such tree $T$ attached at $v\in V({\Gamma}_{sst})$, we attach $\#(G)$ copies of $T$ to ${\Gamma}'_{sst}$ at the fiber ${\varphi}^{-1}(v)$, and extend the $G$-action in the obvious way. We obtain an unramified $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$ whose stabilization is ${\varphi}_{st}$. $G$-covers of weighted metric graphs and tropical curves {#sec:metric} -------------------------------------------------------- In this final section, we reformulate our classification results for weighted metric graphs and tropical curves. There is essentially no new mathematical content obtained by adding metrics to graphs, so this section is a restatement and summary of the results of the previous sections, included for the reader’s convenience. First, we introduce $G$-covers of weighted metric graphs: Let $({\Gamma},\ell)$ be a weighted metric graph.
A [*$G$-cover*]{} of $({\Gamma},\ell)$ is a finite harmonic morphism ${\varphi}:({\Gamma}',\ell')\to ({\Gamma},\ell)$ together with an action of $G$ on $({\Gamma}',\ell')$, such that the following properties are satisfied: 1. The action is invariant with respect to ${\varphi}$. 2. For each $x\in X({\Gamma})$, $G$ acts transitively on the fiber ${\varphi}^{-1}(x)$. 3. $\#(G)=\deg {\varphi}$. We say that a $G$-cover ${\varphi}$ is [*unramified*]{} if it is an unramified harmonic morphism. In other words, a $G$-cover ${\varphi}:({\Gamma}',\ell')\to({\Gamma},\ell)$ is a $G$-cover ${\varphi}:{\Gamma}'\to {\Gamma}$ of the underlying weighted graph ${\Gamma}$ that satisfies the dilation condition . Given ${\varphi}$ and $\ell$, there is a unique way to choose $\ell'$ such that the dilation condition is satisfied (see Rem. \[rem:lengths\]). It follows that the classification of $G$-covers of $({\Gamma},\ell)$ is identical to the classification of $G$-covers of ${\Gamma}$. Specifically, such a cover is uniquely determined by choosing the dilation subgroups, an element of the corresponding dilated cohomology group, and a genus assignment on ${\Gamma}$ which is then lifted to ${\Gamma}'$. To obtain unramified $G$-covers, we require the dilation data to be admissible, and pick the genus using Eq. : Let $({\Gamma},\ell)$ be a weighted metric graph. There is a natural bijection between the set of $G$-covers of $({\Gamma},\ell)$ and the set of triples $(D,\eta,g')$, where 1. $D$ is a $G$-dilation datum on the underlying weighted graph ${\Gamma}$, 2. $\eta$ is an element of $H^1({\Gamma},D)$, 3. $g'$ is a map from $V({\Gamma})$ to ${\mathbb{Z}}_{\geq 0}$. The set of unramified $G$-covers is obtained by choosing $D$ to be an admissible $G$-dilation datum, and defining $g'$ by Eq. . \[thm:main4\] This follows immediately from Thms. \[thm:main2\] and \[thm:main3\], and Rem. \[rem:lengths\]. Finally, we describe $G$-covers of tropical curves.
Let ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ be a tropical curve. A [*$G$-cover*]{} of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is a finite harmonic morphism $\tau:{\scalebox{0.8}[1.3]{$\sqsubset$}}'\to {\scalebox{0.8}[1.3]{$\sqsubset$}}$ together with an action of $G$ on ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$ such that the following properties are satisfied: 1. The action is invariant with respect to $\tau$. 2. For each $x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}$, $G$ acts transitively on the fiber $\tau^{-1}(x)$. 3. $\#(G)=\deg \tau$. We say that a $G$-cover $\tau$ is [*unramified*]{} if it is an unramified harmonic morphism. To describe $G$-covers of a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, we need to define $G$-dilation data on ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. We can define this to be a $G$-dilation datum on some model of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. It is more convenient to define dilation in terms of the associated stratification, which does not involve choosing a model. The following definition generalizes Defs. \[def:stratification\], \[def:index\], and \[def:admissibledilation\] to the case of tropical curves. Let ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ be a tropical curve. A [*$G$-stratification*]{} ${\mathcal{S}}=\{{\scalebox{0.8}[1.3]{$\sqsubset$}}_H:H\in S(G)\}$ on ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is a collection of subcurves ${\scalebox{0.8}[1.3]{$\sqsubset$}}_H\subset {\scalebox{0.8}[1.3]{$\sqsubset$}}$, indexed by the set $S(G)$ of subgroups of $G$, such that - ${\scalebox{0.8}[1.3]{$\sqsubset$}}_0={\scalebox{0.8}[1.3]{$\sqsubset$}}$, - ${\scalebox{0.8}[1.3]{$\sqsubset$}}_K\subset {\scalebox{0.8}[1.3]{$\sqsubset$}}_H$ if $H\subset K$, - ${\scalebox{0.8}[1.3]{$\sqsubset$}}_H\cap {\scalebox{0.8}[1.3]{$\sqsubset$}}_K={\scalebox{0.8}[1.3]{$\sqsubset$}}_{H+K}$ for all $H,K\in S(G)$. We allow ${\scalebox{0.8}[1.3]{$\sqsubset$}}_H$ to be empty or disconnected for $H\neq 0$. 
A $G$-stratification ${\mathcal{S}}$ partitions ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ into disjoint subsets $${\scalebox{0.8}[1.3]{$\sqsubset$}}=\coprod_{H\in S(G)}{\scalebox{0.8}[1.3]{$\sqsubset$}}_H\backslash {\scalebox{0.8}[1.3]{$\sqsubset$}}^0_H \quad \textrm{ where }\quad {\scalebox{0.8}[1.3]{$\sqsubset$}}_{H}^0=\bigcup_{H\subsetneq K} {\scalebox{0.8}[1.3]{$\sqsubset$}}_K.$$ For $x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}$ we define the [*dilation subgroup*]{} $D(x)$ to be the unique subgroup $H\subset G$ such that $x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}_H\backslash {\scalebox{0.8}[1.3]{$\sqsubset$}}^0_H$. We define the [*index function*]{} $a_{{\mathcal{S}}}:{\scalebox{0.8}[1.3]{$\sqsubset$}}\times S(G)\to {\mathbb{Z}}_{\geq 0}$ of ${\mathcal{S}}$ by setting $a_{{\mathcal{S}}}(x;H)$ to be the number of connected components of the intersection of ${\scalebox{0.8}[1.3]{$\sqsubset$}}_H\backslash {\scalebox{0.8}[1.3]{$\sqsubset$}}_H^0$ with a sufficiently small punctured neighborhood of $x$. We say that ${\mathcal{S}}$ is [*admissible*]{} if for every $x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}$ the number $$g'(x)=\#(D(x))\big[g(x)-1\big]+1+\frac{1}{2}\sum_{K\in S(D(x))}a_{{\mathcal{S}}}(x;K)\big(\#(D(x))-[D(x):K]\big) \label{eq:genustropical}$$ is a non-negative integer. \[def:stratificationtropical\] Finally, we define the dual stratification ${\mathcal{S}}^*$ of a stratification ${\mathcal{S}}$ of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ as follows. Choose a model ${\Gamma}$ of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ minimal with respect to the property that each element ${\scalebox{0.8}[1.3]{$\sqsubset$}}_H$ of ${\mathcal{S}}$ corresponds to a subgraph ${\Gamma}_H$ of ${\Gamma}$. Then the ${\Gamma}_H$ form a $G$-stratification of the weighted metric graph ${\Gamma}$, and we let ${\mathcal{S}}^*$ be the dual of this stratification. We note that choosing a larger model ${\Gamma}'$ will result in a larger dual stratification, which will, however, retract to ${\mathcal{S}}^*$.
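Since the subgroups of a cyclic group correspond to the divisors of its order, the admissibility condition above can be evaluated mechanically in that case. The following Python sketch (the helper names are our own, for illustration only) computes $g'(x)$ for a point whose dilation group is cyclic, $D(x)\simeq{\mathbb{Z}}/m{\mathbb{Z}}$:

```python
from fractions import Fraction

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

def g_prime(g, m, a):
    """Evaluate g'(x) for g(x) = g and cyclic dilation group D(x) = Z/m.
    The dict a maps the order d = #(K) of a subgroup K of Z/m to the
    index a_S(x; K); subgroups not listed are taken to have index 0."""
    # sum over subgroups K of D(x) of a_S(x; K) * (#(D(x)) - [D(x) : K])
    total = sum(a.get(d, 0) * (m - m // d) for d in divisors(m))
    return (g - 1) * m + 1 + Fraction(total, 2)

def admissible(g, m, a):
    """Admissibility at x: g'(x) must be a non-negative integer."""
    gp = g_prime(g, m, a)
    return gp >= 0 and gp.denominator == 1
```

For instance, `g_prime(0, 5, {5: 2})` returns $0$: a genus-$0$ point with dilation group ${\mathbb{Z}}/5{\mathbb{Z}}$ and two fully dilated branches is admissible.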
Similarly, we can define the dilated cohomology groups of a tropical curve with a $G$-stratification: Let ${\mathcal{S}}$ be a stratification of a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. Pick a model $({\Gamma},\ell)$ for ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ such that each ${\scalebox{0.8}[1.3]{$\sqsubset$}}_H$ corresponds to a subgraph ${\Gamma}_H$ of ${\Gamma}$, then ${\mathcal{S}}$ is a $G$-stratification of ${\Gamma}$ and induces a $G$-dilation datum $D$ on ${\Gamma}$. We define the [*dilated cohomology group*]{} $H^1({\scalebox{0.8}[1.3]{$\sqsubset$}},{\mathcal{S}})$ as the cohomology group $H^1({\Gamma},D)$; it is clear that this group does not depend on the choice of model.\[def:dilatedcohomologytropical\] We can now state our main classification result for $G$-covers of tropical curves, which is simply a restatement of Thm. \[thm:main4\] using the equivalent description of dilation by means of a stratification: Let ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ be a tropical curve. There is a natural bijection between the set of $G$-covers of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ and the set of triples $({\mathcal{S}},\eta,g')$, where 1. ${\mathcal{S}}$ is a $G$-stratification of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, 2. $\eta$ is an element of $H^1({\scalebox{0.8}[1.3]{$\sqsubset$}},{\mathcal{S}})$, 3. $g'$ is a function from ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ to ${\mathbb{Z}}_{\geq 0}$. The set of unramified $G$-covers of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is obtained by requiring ${\mathcal{S}}$ to be an admissible $G$-stratification, and defining $g'$ by . \[thm:main5\] We also restate Prop. \[prop:coversstability\] for tropical curves. Let ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ be a tropical curve. Then there is a natural bijection between the unramified $G$-covers of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ and the unramified $G$-covers of ${\scalebox{0.8}[1.3]{$\sqsubset$}}^{st}$. 
Any tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ has infinitely many $G$-covers for any nontrivial group $G$, since we can choose a dilation stratification with arbitrarily many connected components. However, the number of [*unramified*]{} $G$-covers of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is finite. Indeed, Prop. \[prop:admissibilitystability\] shows that if ${\mathcal{S}}$ is an admissible stratification of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, then no ${\scalebox{0.8}[1.3]{$\sqsubset$}}_H$ can contain any simple point $x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}$ as an unstable extremal point. Since any tropical curve has only finitely many non-simple points, it follows that the number of admissible stratifications of a tropical curve is finite, and hence so is the number of unramified $G$-covers. \[rem:finitelymanycovers\] ### Cyclic covers of prime order {#cyclic-covers-of-prime-order .unnumbered} We now classify the unramified $G$-covers of a tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ in the case when $G={{\mathbb{Z}}/p{\mathbb{Z}}}$, where $p$ is prime. These covers were studied in [@2018JensenLen] and [@2017BologneseBrandtChua] for $p=2$, and in [@2017BrandtHelminck] for arbitrary $p$ in the case when ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is a tree. Let ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ be a tropical curve, let $p$ be a prime number, and let $G={\mathbb{Z}}/p{\mathbb{Z}}$. A $G$-stratification ${\mathcal{S}}=\{{\scalebox{0.8}[1.3]{$\sqsubset$}}_0,{\scalebox{0.8}[1.3]{$\sqsubset$}}_G\}$ of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ has a single nontrivial element ${\scalebox{0.8}[1.3]{$\sqsubset$}}_G={\scalebox{0.8}[1.3]{$\sqsubset$}}_{dil}$, the dilated subcurve. Condition  is trivially satisfied at any non-dilated point. 
If $x\in {\scalebox{0.8}[1.3]{$\sqsubset$}}_G$, then $D(x)={\mathbb{Z}}/p{\mathbb{Z}}$ and $a_{{\mathcal{S}}}(x;{\mathbb{Z}}/p{\mathbb{Z}})=\operatorname{val}_{{\scalebox{0.8}[1.3]{$\sqsubset$}}_G}(x)$, and condition  is $$g'(x)=\big[g(x)-1\big]p+1+\frac{p-1}{2}\operatorname{val}_{{\scalebox{0.8}[1.3]{$\sqsubset$}}_G}(x).$$ We see that $g'(x)$ is non-negative if $g(x)>0$, or if $g(x)=0$ and $\operatorname{val}_{{\scalebox{0.8}[1.3]{$\sqsubset$}}_G}(x)\geq 2$. Similarly, $g'(x)$ is an integer if $p\geq 3$, or if $p=2$ and $\operatorname{val}_{{\scalebox{0.8}[1.3]{$\sqsubset$}}_G}(x)$ is even. We therefore have the following result: 1. For $p\geq 3$, a ${{\mathbb{Z}}/p{\mathbb{Z}}}$-stratification of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is admissible if and only if the dilated subcurve ${\scalebox{0.8}[1.3]{$\sqsubset$}}_G$ is semistable. 2. For $p=2$, a ${{\mathbb{Z}}/p{\mathbb{Z}}}$-stratification of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is admissible if and only if the dilated subcurve ${\scalebox{0.8}[1.3]{$\sqsubset$}}_G$ is a semistable cycle. This was observed in [@2018JensenLen] (see Corollary 5.5). If ${\mathcal{S}}$ is an admissible ${{\mathbb{Z}}/p{\mathbb{Z}}}$-stratification of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, Ex. \[ex:subgraph\] and Thm. \[thm:main3\] show that the set of unramified $G$-covers of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ having dilation stratification ${\mathcal{S}}$ is in natural bijection with $H^1({\scalebox{0.8}[1.3]{$\sqsubset$}}^0,{{\mathbb{Z}}/p{\mathbb{Z}}})$, where ${\scalebox{0.8}[1.3]{$\sqsubset$}}^0$ is the nontrivial element of the dual stratification ${\mathcal{S}}^*(D)$. Specifically, ${\scalebox{0.8}[1.3]{$\sqsubset$}}^0$ is obtained from ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ by removing ${\scalebox{0.8}[1.3]{$\sqsubset$}}_G$, and then removing any edges or legs that are missing an endpoint.
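The dichotomy above reduces to a local parity and stability check at each dilated point, which can be confirmed by brute force. A small Python sketch (helper names are our own, for illustration):

```python
from fractions import Fraction

def g_prime_dilated(g, val, p):
    # g'(x) = [g(x) - 1] p + 1 + (p - 1)/2 * val(x), as in the display above
    return (g - 1) * p + 1 + Fraction((p - 1) * val, 2)

def admissible_at(g, val, p):
    gp = g_prime_dilated(g, val, p)
    return gp >= 0 and gp.denominator == 1

# brute-force check: for p >= 3 admissibility at a dilated point amounts to
# local semistability (g > 0 or val >= 2); for p = 2 the valence in the
# dilated subcurve must in addition be even
for p in (2, 3, 5, 7):
    for g in range(4):
        for val in range(7):
            semistable = g > 0 or val >= 2
            expected = semistable and (p != 2 or val % 2 == 0)
            assert admissible_at(g, val, p) == expected
```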
In other words, an unramified ${{\mathbb{Z}}/p{\mathbb{Z}}}$-cover of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is uniquely specified by choosing a (possibly empty) semistable subcurve ${\scalebox{0.8}[1.3]{$\sqsubset$}}_G\subset {\scalebox{0.8}[1.3]{$\sqsubset$}}$, which is required to be a cycle when $p=2$, and an element of $H^1({\scalebox{0.8}[1.3]{$\sqsubset$}}^0,{{\mathbb{Z}}/p{\mathbb{Z}}})$. As an example, we count the number of unramified ${{\mathbb{Z}}/p{\mathbb{Z}}}$-covers of the following genus two tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ with one leg (the edge lengths are arbitrary and irrelevant): [Figure: the genus two tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, consisting of two loops joined by a chain of two edges, with a leg attached at the middle vertex.]
[Figure: the eight admissible ${{\mathbb{Z}}/p{\mathbb{Z}}}$-stratifications of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$, with the dilated subcurve ${\scalebox{0.8}[1.3]{$\sqsubset$}}_G$ in blue and the corresponding element ${\scalebox{0.8}[1.3]{$\sqsubset$}}^0$ of the dual stratification in red; the numbers $p^{b_1\left({\scalebox{0.8}[1.3]{$\sqsubset$}}^0\right)}$ of ${{\mathbb{Z}}/p{\mathbb{Z}}}$-covers with the given stratifications are $p^2$, $p$, $p$, $p$, $p$, $1$, $1$, and $1$.] Hence there are a total of $p^2+4p+3$ covers when $p$ is odd, and 14 covers when $p=2$.
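The totals can be reproduced directly from the per-stratification counts; a quick Python check (illustrative only):

```python
def count_covers(p):
    """Total number of unramified Z/p-covers of the example curve, summing
    the per-stratification counts p^{b_1} listed above; the eighth
    stratification is admissible only when p is odd."""
    counts = [p**2, p, p, p, p, 1, 1, 1]
    if p == 2:
        counts = counts[:-1]
    return sum(counts)

assert count_covers(2) == 14
assert all(count_covers(p) == p**2 + 4*p + 3 for p in (3, 5, 7, 11))
```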
Tropicalizing the moduli space of admissible $G$-covers {#sec:tropicalization} ======================================================= In this section we explain how unramified tropical $G$-covers naturally arise as tropicalizations of algebraic $G$-covers from a moduli-theoretic perspective, expanding on [@ACP] and [@CavalieriMarkwigRanganathan_tropadmissiblecovers] (recall that tropical unramified covers are called [*tropical admissible covers*]{} in [@CavalieriMarkwigRanganathan_tropadmissiblecovers]). Throughout this section we assume that the genus $g\geq 2$ and we work over an algebraically closed field $k$ endowed with the trivial absolute value. In this section, we do not need to assume that $G$ is abelian. Compactifying the moduli space of $G$-covers -------------------------------------------- Let $G$ be a finite group and let $X\rightarrow S$ be a family of smooth projective curves of genus $g$. A $G$-cover of $X$ is a finite unramified Galois morphism $f\colon X'\rightarrow X$ together with an isomorphism $\operatorname{Aut}(X'/X)\simeq G$. Denote by ${\mathcal{H}}_{g,G}$ the moduli space of connected $G$-covers of smooth curves of genus $g$ (see e.g. [@RomagnyWewers] for a construction). There is a good notion of a limit object as $X$ degenerates to a stable curve, as introduced in [@AbramovichCortiVistoli]. The definition below generalizes this construction, by allowing a fixed ramification profile along marked points. \[def\_admissibleGcover\] Let $G$ be a finite group and let $X\rightarrow S$ be a family of stable curves of genus $g$ with $n$ disjoint marked sections $s_1,\ldots, s_n$. Let $\mu=(r_1,\ldots, r_n)$ be an $n$-tuple of natural numbers dividing $\#(G)$, and denote $k_i=\#(G)/r_i$ for $i=1,\ldots, n$.
An *admissible $G$-cover* of $X$ consists of a finite morphism $f\colon X'\rightarrow X$ from a family of stable curves $X'\rightarrow S$ that is Galois and unramified away from the sections of $X$, an action of $G$ on $X'$, and disjoint sections $s'_{ij}$ of $X'$ over $S$ for $i=1,\ldots,n$ and $j=1,\ldots,k_i$, subject to the following conditions: (i) The map $f:X'\rightarrow X$ is a principal $G$-bundle away from the nodes and sections of $X$. (ii) The preimage of the set of nodes in $X$ is precisely the set of nodes of $X'$. (iii) The preimage of a section $s_i$ is precisely given by the sections $s'_{i1},\ldots, s'_{ik_i}$. (iv) Let $p$ be a node in $X$ and $p'$ a node of $X'$ above $p$. Then étale-locally $p'$ is given by $x'y'=t$ for $t\in{\mathcal{O}}_S$ and $p$ is étale-locally given by $xy=t^r$ for some integer $r\geq 1$ with $x=(x')^r$ and $y=(y')^r$, and the stabilizer of $G$ at $p'$ is cyclic of order $r$ and operates via $$(x',y')\longmapsto (\zeta x',\zeta^{-1} y')$$ for an $r$-th root of unity $\zeta\in\mu_r$. (v) Étale-locally near the sections $s_i$ and $s'_{ij}$ respectively, the morphism $f$ is given by ${\mathcal{O}}_S[t_i]\rightarrow {\mathcal{O}}_S[t_{ij}']$ with $(t_{ij}')^{r_i}=t_i$, and the stabilizer of $G$ along $s'_{ij}$ is cyclic of order $r_i$ and operates via $t'_{ij}\mapsto \zeta t'_{ij}$, for an $r_i$-th root of unity $\zeta\in \mu_{r_i}$. We emphasize that the $G$-action is part of the data; in particular, an isomorphism between two admissible $G$-covers has to be a $G$-equivariant isomorphism. As explained in [@AbramovichCortiVistoli], the moduli space ${\overline{\mathcal{H}}}_{g,G}(\mu)$ of $G$-admissible covers of stable $n$-marked curves of genus $g$ is a smooth and proper Deligne-Mumford stack that contains the locus ${\mathcal{H}}_{g,G}(\mu)$ of $G$-covers of smooth curves of ramification type $\mu$ as an open substack. The complement of ${\mathcal{H}}_{g,G}(\mu)$ is a normal crossing divisor.
Although closely related, the moduli space ${\overline{\mathcal{H}}}_{g, G}(\mu)$ is not quite the same as the one constructed in [@AbramovichCortiVistoli]. The quotient $$\big[{\overline{\mathcal{H}}}_{g, G}(\mu)/S_{k_1}\times \ldots \times S_{k_{n}}\big]$$ which forgets the ordering of the marked sections $s'_{ij}$ of $X'$ over $S$, is equivalent to a connected component of the moduli space of twisted stable maps to $\mathbf{B}G$ in the sense of [@AbramovichVistoli; @AbramovichCortiVistoli], indexed by ramification profile and decomposition into connected components. Our variant of this moduli space ${\overline{\mathcal{H}}}_{g, G}(\mu)$, with ordered sections on $X'$, has also appeared in [@SchmittvanZelm] and in [@JarvisKaufmannKimura] (the latter permitting admissible covers with possibly disconnected domains). An object in this stack is technically not an admissible $G$-cover $X'\rightarrow X$ but rather a $G$-cover $X'\rightarrow {\mathcal{X}}$ of a twisted stable curve ${\mathcal{X}}$. A *twisted stable curve* ${\mathcal{X}}\rightarrow S$ is a Deligne-Mumford stack ${\mathcal{X}}$ with sections $s_1,\ldots, s_n\colon S\rightarrow {\mathcal{X}}$ whose coarse moduli space $X\rightarrow S$ is a family of stable curves over $S$ with $n$ marked sections (also denoted by $s_1,\ldots, s_n$) such that - The smooth locus of ${\mathcal{X}}$ is representable by a scheme, - The singularities are étale-locally given by $\big[\{x'y'=t\}/\mu_r\big]$ for $t\in{\mathcal{O}}_S$, where $\zeta\in\mu_r$ acts by $\zeta\cdot(x',y')=(\zeta x',\zeta^{-1}y')$. In this case the corresponding singularity in the coarse space $X$ is locally given by $xy=t^{r}$. - The stack ${\mathcal{X}}$ is a root stack $\big[\sqrt[r_i]{s_i/X}\big]$ along the section $s_i$ for all $i=1,\ldots, n$. Both notions are naturally equivalent: given a $G$-admissible cover $X'\rightarrow X$ the associated twisted $G$-cover is given by $X'\rightarrow [X'/G]$.
Conversely, given a twisted $G$-cover $X'\rightarrow {\mathcal{X}}$ in the corresponding connected component, the composition $X'\rightarrow{\mathcal{X}}\rightarrow X$ with the morphism to the coarse moduli space $X$ is a $G$-admissible cover. We refer the interested reader to [@BertinRomagny] for an alternative approach to this construction. The moduli space of unramified tropical $G$-covers -------------------------------------------------- We now construct a moduli space $H_{g,G}^{trop}(\mu)$ of unramified $G$-covers of stable tropical curves of genus $g$ with $n$ marked points and ramification profile $\mu=(r_1,\ldots,r_n)$, where each $r_i$ divides $\#(G)$. Denote $k_i=\#(G)/r_i$, as well as $k=k_1+\cdots+k_n$, and assume that $n\cdot \#(G)-k$ is even. A point $\big[{\varphi},l,l'\big]$ of $H_{g,G}^{trop}(\mu)$ consists of the following data: 1. A $G$-equivariant isomorphism class of an unramified $G$-cover ${\varphi}\colon {\scalebox{0.8}[1.3]{$\sqsubset$}}'\rightarrow{\scalebox{0.8}[1.3]{$\sqsubset$}}$ of a stable tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ of genus $g$ with $n$ legs, by a stable tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$ of genus $$g'=(g-1)\cdot\#(G)+1+(n\cdot \#(G)-k)/2$$ with $k$ legs. 2. A marking $l:\{1,\ldots,n\}\simeq L({\scalebox{0.8}[1.3]{$\sqsubset$}})$ of the legs of ${\scalebox{0.8}[1.3]{$\sqsubset$}}$. 3. A marking $l'\colon\big\{(i,j):1\leq i\leq n,\ 1\leq j\leq k_i\big\}\simeq L({\scalebox{0.8}[1.3]{$\sqsubset$}}')$ of the legs of ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$ such that ${\varphi}(l'_{ij})=l_i$, where we denote $l_i=l(i)$ and $l'_{ij}=l'(i,j)$. \[prop\_HgG=genconecomplex\] The moduli space $H_{g,G}^{trop}(\mu)$ naturally carries the structure of a generalized cone complex. We need to show that $H_{g,G}^{trop}(\mu)$ is naturally the colimit of a diagram of rational polyhedral cones connected by (not necessarily proper) face morphisms.
We first construct an index category $J_{g,G}(\mu)$ as follows: - The objects are tuples $\big({\varphi}\colon\Gamma'\rightarrow \Gamma,l,l'\big)$, where $\Gamma'$ and $\Gamma$ are stable weighted graphs of genera $g'$ and $g$ having respectively $k$ and $n$ legs, ${\varphi}$ is an unramified $G$-cover, and $l'$ and $l$ are markings of the legs of ${\Gamma}'$ and ${\Gamma}$, respectively, such that ${\varphi}(l'_{ij})=l_i$. - The morphisms are generated by the automorphisms of $\Gamma'\rightarrow\Gamma$ that preserve the markings on both $\Gamma$ and $\Gamma'$, and weighted edge contractions (see Def. \[def:edgecontraction\]) of the target graph $\Gamma$. We recall that a weighted edge contraction of the target graph $\Gamma$ induces a weighted edge contraction of the source graph $\Gamma'$ along the preimages of the contracted edges. Moreover, the $G$-action on $\Gamma'$ induces a $G$-action on its weighted edge contraction, which is an unramified $G$-cover by Prop. \[prop:edgecontraction\]. We then consider a functor $\Sigma_{g,G}(\mu)\colon J_{g,G}(\mu)\rightarrow \mathbf{RPC}_{face}$ to the category $\mathbf{RPC}_{face}$ of rational polyhedral cones with (not necessarily proper) face morphisms defined as follows: - An object $\big({\varphi}:\Gamma'\rightarrow \Gamma,l,l'\big)$ is sent to the rational polyhedral cone $\sigma_{{\varphi}}={\mathbb{R}}_{\geq 0}^{E(\Gamma)}$. - An automorphism of $\big(\Gamma'\rightarrow \Gamma,l,l'\big)$ induces an automorphism of $\sigma_{{\varphi}}$ that permutes the entries according to the induced permutation of the edges of $\Gamma$; for a set of edges $S\subset E({\Gamma})$, a weighted edge contraction ${\varphi}_S:\Gamma'/{\varphi}^{-1}(S)\to \Gamma/S$ of ${\varphi}:\Gamma'\to\Gamma$ induces a morphism $\sigma_{{\varphi}_S}\hookrightarrow \sigma_{{\varphi}}$ that sends $\sigma_{{\varphi}_S}$ to the face of $\sigma_{{\varphi}}$ given by setting all entries of the contracted edges equal to zero. 
The natural maps $\sigma_{{\varphi}}\rightarrow H_{g,G}^{trop}(\mu)$ are given by associating to a point $(a_e)_{e\in E(\Gamma)}\in {\mathbb{R}}_{\geq 0}^{E(\Gamma)}$ an unramified $G$-cover $\big[{\scalebox{0.8}[1.3]{$\sqsubset$}}'\rightarrow{\scalebox{0.8}[1.3]{$\sqsubset$}}\big]$ defined as follows: - In the special case that $a_e\neq 0$ for all $e\in E(\Gamma)$, the tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is given by the graph $\Gamma$ with the metric $\ell(e)=a_e$. In general, the tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}$ is given by contracting those edges $e\in E(\Gamma)$ for which $a_e=0$ and then by endowing the contracted weighted graph with the edge lengths given by the nonzero $a_e$. - The tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$ is defined accordingly: we first contract all edges that map to an edge $e$ with $a_e=0$, and then endow ${\scalebox{0.8}[1.3]{$\sqsubset$}}'$ with the edge length $\ell'(e')= \ell({\varphi}(e'))/d_{\varphi}(e')$ so that the induced map ${\scalebox{0.8}[1.3]{$\sqsubset$}}'\rightarrow{\scalebox{0.8}[1.3]{$\sqsubset$}}$ is an unramified $G$-cover of tropical curves. The maps $\sigma_{{\varphi}}\rightarrow H_{g,G}^{trop}(\mu)$ naturally commute with the morphisms induced by $J_{g,G}(\mu)$, and therefore descend to a map $$\varinjlim_{({\varphi},l,l')\in J_{g,G}(\mu)} \sigma_{{\varphi}} \simeq H_{g,G}^{trop}(\mu)$$ that is easily checked to be a bijection. This realizes $H_{g,G}^{trop}(\mu)$ as a colimit of a diagram of (not necessarily proper) face morphisms and therefore endows it with the structure of a generalized cone complex.
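The genus $g'$ of the source curve in the definition of $H_{g,G}^{trop}(\mu)$ is forced by the Riemann–Hurwitz formula. The following Python sketch (our own helper, for illustration) computes it from $g$, $\#(G)$, and the ramification profile $\mu$:

```python
def cover_genus(g, n_G, mu):
    """Genus g' of the source of an unramified G-cover of a genus-g curve
    with n = len(mu) legs and ramification profile mu = (r_1, ..., r_n),
    each r_i dividing #G = n_G:  g' = (g-1)#G + 1 + (n#G - k)/2,
    where k = sum of #G/r_i is the total number of legs upstairs."""
    k = sum(n_G // r for r in mu)
    ram = len(mu) * n_G - k
    assert ram % 2 == 0, "n * #G - k must be even"
    return (g - 1) * n_G + 1 + ram // 2

# an everywhere-unramified (all r_i = 1) degree-3 cover of a genus-2 curve
assert cover_genus(2, 3, (1, 1)) == 4
```

For trivial ramification this recovers the classical formula $g'=\deg{\varphi}\cdot(g-1)+1$ for unramified covers.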
There are natural *source* and *target morphisms* $$\operatorname{src}_{g,G}^{trop}(\mu)\colon H_{g,G}^{trop}(\mu)\longrightarrow M_{g',k}^{trop} \qquad \textrm{ and } \qquad \operatorname{tar}_{g,G}^{trop}(\mu)\colon H_{g,G}^{trop}(\mu)\longrightarrow M_{g,n}^{trop}$$ that are given by the associations $$\big[{\scalebox{0.8}[1.3]{$\sqsubset$}}'\rightarrow{\scalebox{0.8}[1.3]{$\sqsubset$}},l,l'\big]\longmapsto \big[{\scalebox{0.8}[1.3]{$\sqsubset$}}',l'\big] \qquad \textrm{ and } \qquad \big[{\scalebox{0.8}[1.3]{$\sqsubset$}}'\rightarrow{\scalebox{0.8}[1.3]{$\sqsubset$}},l,l'\big]\longmapsto \big[{\scalebox{0.8}[1.3]{$\sqsubset$}},l\big]$$ respectively. By Rem. \[rem:finitelymanycovers\], the map $\operatorname{tar}_{g,G}^{trop}(\mu)$ has finite fibers. The functor $J_{g,G}(\mu)\rightarrow \mathbf{RPC}_{face}$ in the proof of Proposition \[prop\_HgG=genconecomplex\] defines a category fibered in groupoids over $\mathbf{RPC}_{face}$, i.e. a *combinatorial cone stack* in the sense of [@CCUW]. So we may think of $H_{g,G}^{trop}(\mu)$ as a “coarse moduli space” of a *cone stack* ${\mathcal{H}}_{g,G}^{trop}(\mu)$, a geometric stack over the category of rational polyhedral cones (see [@CCUW] for details), that parametrizes families of unramified tropical $G$-covers over rational polyhedral cones. A modular perspective on tropicalization ---------------------------------------- Denote by ${\mathcal{H}}_{g,G}^{an}(\mu)$ the Berkovich analytic space[^1] associated to ${\mathcal{H}}_{g,G}(\mu)$.
We define a natural *tropicalization map* $$\begin{split} \operatorname{trop}_{g,G}(\mu)\colon {\mathcal{H}}_{g,G}^{an}(\mu)&\longrightarrow H_{g,G}^{trop}(\mu)\\ [X'\rightarrow X,s_i,s'_{ij}]& \longmapsto \big[{\scalebox{0.8}[1.3]{$\sqsubset$}}_{X'}\rightarrow{\scalebox{0.8}[1.3]{$\sqsubset$}}_X,l,l'\big] \end{split}$$ that associates to an admissible $G$-cover $X'\rightarrow X$ of smooth curves over a non-Archimedean extension $K$ of $k$ an unramified tropical $G$-cover ${\scalebox{0.8}[1.3]{$\sqsubset$}}_{X'}\rightarrow {\scalebox{0.8}[1.3]{$\sqsubset$}}_X$ of the dual tropical curve ${\scalebox{0.8}[1.3]{$\sqsubset$}}_X$ of $X$ that is defined in the following way. Let $X$ be a smooth projective curve of genus $g$ over a non-Archimedean extension $K$ of $k$ with $n$ marked sections $s_1,\ldots, s_n$ over $K$. Let $(X'\rightarrow X,s'_{ij})$ be a $G$-cover of $X$, where $i=1,\ldots,n$ and $j=1,\ldots,k_i$. By the valuative criterion for properness, applied to the stack ${\overline{\mathcal{H}}}_{g,G}(\mu)$, there is a finite extension $L$ of $K$ such that $X'_L\rightarrow X_L$ extends to a family of admissible $G$-covers $f:{\mathcal{X}}'\rightarrow{\mathcal{X}}$ defined over the valuation ring $R$ of $L$ (with marked sections also denoted by $s_i$ and $s'_{ij}$). The *dual tropical curve* $({\scalebox{0.8}[1.3]{$\sqsubset$}}_X,l)$ of ${\mathcal{X}}$ (and similarly $({\scalebox{0.8}[1.3]{$\sqsubset$}}_{X'},l')$ of ${\mathcal{X}}'$) is given by the following data: - The dual graph $\Gamma_{{\mathcal{X}}_0}$ of the special fiber ${\mathcal{X}}_0$ of ${\mathcal{X}}$: the components of ${\mathcal{X}}_0$ correspond to vertices, nodes correspond to edges, and the sections correspond to legs. - A vertex weight $V(\Gamma_{{\mathcal{X}}_0})\rightarrow {\mathbb{Z}}_{\geq 0}$ that associates to a vertex $v$ the genus of the normalization of the corresponding component of ${\mathcal{X}}_0$. 
- A marking $l\colon \{1,\ldots, n\}\simeq L(\Gamma_{{\mathcal{X}}_0})$ of the legs of $\Gamma_{{\mathcal{X}}_0}$ according to the ordering of $s_1,\ldots, s_n$. - An edge length $\ell\colon E({\Gamma}_{{\mathcal{X}}_0})\rightarrow{\mathbb{R}}_{>0}$ that associates to an edge $e$ the positive real number $r\cdot\operatorname{val}(t)$, where the corresponding node is étale-locally given by an equation $xy=t^r$ for $t\in R$. The map $f:{\mathcal{X}}'\rightarrow{\mathcal{X}}$ induces a map ${\varphi}:\Gamma_{{\mathcal{X}}'_0}\rightarrow\Gamma_{{\mathcal{X}}_0}$: - Every component $X'_{v'}$ of ${\mathcal{X}}'_0$ is mapped to exactly one component $X_{v}$ of ${\mathcal{X}}_0$. - Every node $p_{e'}$ of ${\mathcal{X}}'_0$, over a node $p_e$ of ${\mathcal{X}}_0$ given by $xy=t^r$ for $t\in R$ on the base, has a local equation $x'y'=t$ which determines the dilation factor $r=d_{\varphi}(e')$. - Étale-locally around the sections $s_i$ and $s'_{ij}$ respectively, the morphism $f$ is given by ${\mathcal{O}}_S[t_i]\rightarrow {\mathcal{O}}_S[t'_{ij}]$ with $(t'_{ij})^{r_i}=t_i$, so the dilation factor $d_{\varphi}(l'_{ij})$ is given by $r_i$. The map ${\varphi}:\Gamma_{{\mathcal{X}}'_0}\rightarrow\Gamma_{{\mathcal{X}}_0}$ is harmonic by [@ABBRI Theorem A] (identifying both ${\scalebox{0.8}[1.3]{$\sqsubset$}}_{X'}$ and ${\scalebox{0.8}[1.3]{$\sqsubset$}}_X$ with the non-Archimedean skeletons of $(X')^{an}$ and $X^{an}$ respectively). Applying the Riemann–Hurwitz formula to $X'_{v'}\rightarrow X_v$ shows that it is unramified. The operation of $G$ on ${\mathcal{X}}'_0$ induces an operation of $G$ on $\Gamma_{{\mathcal{X}}'_0}$ for which the map ${\scalebox{0.8}[1.3]{$\sqsubset$}}_{X'}\rightarrow {\scalebox{0.8}[1.3]{$\sqsubset$}}_X$ is $G$-invariant. The stabilizer of every edge $e'$ and of every leg $l'_{ij}$ is a cyclic group, of order $d_{\varphi}(e')$ and $r_i$ respectively, by Definition \[def\_admissibleGcover\] (iv) and (v).
Since ${\mathcal{X}}'_0\rightarrow{\mathcal{X}}_0$ is a principal $G$-bundle away from the nodes, the operation of $G$ on the fiber over each point in ${\scalebox{0.8}[1.3]{$\sqsubset$}}_{X}$ is transitive and so ${\scalebox{0.8}[1.3]{$\sqsubset$}}_{X'}\rightarrow {\scalebox{0.8}[1.3]{$\sqsubset$}}_X$ is a $G$-cover. In the language of twisted stable curves, the stabilizers of the $G$-operation on the nodes and legs of ${\mathcal{X}}_0'$ give rise to the dilation datum on the dual graph. So one may think of a dilation datum as a stack-theoretic enhancement of a tropical curve. Since the boundary of ${\overline{\mathcal{H}}}_{g,G}(\mu)$ has normal crossings, the open immersion ${\mathcal{H}}_{g,G}(\mu)\hookrightarrow {\overline{\mathcal{H}}}_{g,G}(\mu)$ is a toroidal embedding in the sense of [@KKMSD]. Therefore, as explained in [@Thuillier_toroidal; @ACP], there is a natural strong deformation retraction $\rho_{g,G}\colon {\mathcal{H}}_{g,G}^{an}(\mu)\rightarrow {\mathcal{H}}_{g,G}^{an}(\mu)$ onto a closed subset of ${\mathcal{H}}_{g,G}^{an}(\mu)$ that carries the structure of a generalized cone complex, the *non-Archimedean skeleton* $\Sigma_{g,G}(\mu)$ of ${\mathcal{H}}_{g,G}^{an}(\mu)$. Expanding on [@CavalieriMarkwigRanganathan_tropadmissiblecovers Theorem 1 and 4], we have: \[thm\_skeletonvstropicalization\] The tropicalization map $\operatorname{trop}_{g,G}(\mu)\colon{\mathcal{H}}_{g,G}^{an}(\mu)\longrightarrow H_{g,G}^{trop}(\mu)$ factors through the retraction to the non-Archimedean skeleton $\Sigma_{g,G}(\mu)$ of ${\mathcal{H}}_{g,G}^{an}(\mu)$, so that the restriction $$\operatorname{trop}_{g,G}(\mu)\colon \Sigma_{g,G}(\mu)\longrightarrow H^{trop}_{g,G}(\mu)$$ to the skeleton is a finite strict morphism of generalized cone complexes. 
Moreover, the diagram $$\label{eq_functorialityoftrop}\begin{tikzcd} {\mathcal{H}}_{g,G}^{an}(\mu)\arrow[rr,"\operatorname{src}_{g,G}^{an}(\mu)"] \arrow[dd,"\operatorname{tar}_{g,G}^{an}(\mu)"']\arrow[rd,"\operatorname{trop}_{g,G}(\mu)"]& & {\mathcal{M}}_{g',k}^{an}\arrow[d,"\operatorname{trop}_{g',k}"]\\ & H_{g,G}^{trop}(\mu) \arrow[r,"\operatorname{src}_{g,G}^{trop}(\mu)"]\arrow[d,"\operatorname{tar}_{g,G}^{trop}(\mu)"']& M_{g',k}^{trop}\\ {\mathcal{M}}_{g,n}^{an}\arrow[r,"\operatorname{trop}_{g,n}"']& M_{g,n}^{trop} & \end{tikzcd}$$ commutes. In other words, the restriction of $\operatorname{trop}_{g,G}(\mu)$ onto a cone in $\Sigma_{g,G}(\mu)$ is an isomorphism onto a cone in $H^{trop}_{g,G}(\mu)$ and every cone in $H_{g,G}^{trop}(\mu)$ has at most finitely many preimages in $\Sigma_{g,G}(\mu)$. Let $x$ be a closed point in ${\overline{\mathcal{H}}}_{g,G}(\mu)$, which corresponds to an admissible $G$-cover $X'\rightarrow X$ over $k$. Denote by ${\varphi}:\Gamma_{X'}\rightarrow \Gamma_X$ the corresponding unramified $G$-cover of the dual graphs. Denote by $\mathfrak{o}_k$ either $k$ when $\operatorname{char}k=0$ or the unique complete local ring with residue field $k$ when $\operatorname{char}k=p>0$ (using Cohen’s structure theorem). The complete local ring at $x$ is given by $$\widehat{{\mathcal{O}}}_{{\overline{\mathcal{H}}}_{g,G}(\mu),x}\simeq \mathfrak{o}_k\big\llbracket t_1,\ldots, t_{3g-3+n}\big\rrbracket$$ where $t_i=0$ for $i=1,\ldots,r$ cuts out the locus where the corresponding node $q_i$ of $X$ remains a node. The retraction to the skeleton is locally given by $$\label{eq_localtrop}\begin{split} \big(\operatorname{Spec}\widehat{{\mathcal{O}}}_{{\overline{\mathcal{H}}}_{g,G}(\mu),x}\big)^\beth &\longrightarrow {\overline{\mathbb{R}}}_{\geq 0}^r\\ x&\longmapsto \big( -\log\vert t_1\vert_x, \ldots, -\log\vert t_r\vert_x\big) \ , \end{split}$$ where $(.)^\beth$ denotes the generic fiber functor constructed in [@Thuillier_toroidal Prop./Déf. 
1.3] and ${\overline{\mathbb{R}}}={\mathbb{R}}\cup\{\infty\}$. We find that under the isomorphism $\sigma_{{\varphi}}\simeq {\mathbb{R}}_{\geq 0}^r$ the restriction of (\[eq\_localtrop\]) to the preimage of ${\mathbb{R}}_{\geq 0}^r$ is nothing but the tropicalization map $\operatorname{trop}_{g,G}(\mu)$ defined above. We observe the following: - A degeneration of $X'\rightarrow X$ in ${\overline{\mathcal{H}}}_{g,G}(\mu)$ to another admissible $G$-cover $X_0'\rightarrow X_0$ with corresponding ${\varphi}_0:{\Gamma}_{X'_0}\to {\Gamma}_{X_0}$ may be described by additional coordinates $t_{r+1}, \ldots, t_{r_0}$ that encode the new nodes $q_{r+1},\ldots, q_{r_0}$ in the degeneration. The induced map ${\mathbb{R}}_{\geq 0}^r\hookrightarrow{\mathbb{R}}_{\geq 0}^{r_0}$ describes $\sigma_{{\varphi}}$ as the face of $\sigma_{{\varphi}_0}$ that corresponds to letting the edges $e_{r+1},\ldots, e_{r_0}$ have length zero. - Denote by $E\subset {\overline{\mathcal{H}}}_{g,G}(\mu)$ the toroidal stratum containing $x$ and by $\widetilde{E}$ and $\widetilde{x}$ respectively their images in ${\overline{\mathcal{M}}}_{g,n}$. The operation of the fundamental group $\pi_1(E,x)$ of $E$ on ${\mathbb{R}}_{\geq 0}^r\simeq\operatorname{Hom}(\Lambda^+_E,{\mathbb{R}}_{\geq 0})$, where $\Lambda^+_E$ denotes the monoid of effective divisors supported on the closure of $E$, naturally factors through the operation of $\pi_1(\widetilde{E},\widetilde{x})$ on ${\mathbb{R}}_{\geq 0}^r\simeq\operatorname{Hom}(\Lambda^+_{\widetilde{E}},{\mathbb{R}}_{\geq 0})$. Analogously, the operation of the automorphisms of $\Gamma_{X'}\rightarrow\Gamma_X$ on ${\mathbb{R}}_{\geq 0}^r$ naturally factors through the operation of the automorphisms of $\Gamma_X$ on ${\mathbb{R}}_{\geq 0}^r={\mathbb{R}}_{\geq 0}^{E(\Gamma_X)}$.
Therefore, by [@ACP Proposition 7.2.1], the images of both $\pi_1(E,x)$ and $\operatorname{Aut}(\Gamma_{X'}\rightarrow\Gamma_X)$ in the permutation group of the entries of ${\mathbb{R}}_{\geq 0}^r$ are equal. This shows that the isomorphisms ${\mathbb{R}}_{\geq 0}^r\simeq\sigma_{{\varphi}}$ induce a necessarily strict morphism of generalized cone complexes $\Sigma_{g,G}(\mu)\rightarrow H_{g,G}^{trop}(\mu)$ that factors the tropicalization map as ${\mathcal{H}}_{g,G}^{an}(\mu)\rightarrow \Sigma_{g,G}(\mu)\rightarrow H_{g,G}^{trop}(\mu)$. Its fibers are finite, since above every toroidal stratum of ${\overline{\mathcal{M}}}_{g,n}$ there are only finitely many toroidal strata of ${\overline{\mathcal{H}}}_{g,G}(\mu)$. Finally, the commutativity of (\[eq\_functorialityoftrop\]) is an immediate consequence of the definition of $\operatorname{trop}_{g,G}(\mu)$. In general, not every unramified tropical cover is realizable. We may, for example, consider an unramified cover for which the local ramification profile is not of Hurwitz type (e.g. when $d=4$ and the ramification profile at a vertex is given by $\big\{(3,1),(2,2),(2,2)\big\}$). This explains why the tropicalization map on the moduli space of admissible covers (without the $G$-action), as considered in [@CavalieriMarkwigRanganathan_tropadmissiblecovers], is not surjective. We refer the reader to [@Caporaso_gonality Section 2.2] for a discussion of this issue in the context of comparing algebraic and tropical gonality and to [@PervovaPetronio_HurwitzexistenceI] for a survey of the underlying widely open problem, the so-called *Hurwitz existence problem*. We do not know whether, for a general finite abelian group $G$, every unramified tropical $G$-cover (with cyclic stabilizers at the nodes) is realizable.
When $G$ itself is cyclic and there are no marked legs (along which ramification is possible), we do expect every $G$-admissible cover to be realizable, since by [@2018JensenLen Theorem 3.1] the tropicalization map on the level of $n$-torsion points on Jacobians is surjective. We will return to this topic in its proper setting in the upcoming [@LenUlirschZakharov_cyclictropicalcovers]. Yoav Len <span style="font-variant:small-caps;">School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332-0160, USA</span> *Email address:* `yoav.len@math.gatech.edu` Martin Ulirsch <span style="font-variant:small-caps;">Institut für Mathematik, Goethe-Universität Frankfurt, 60325 Frankfurt am Main, Germany</span> *E-mail address:* `ulirsch@math.uni-frankfurt.de` Dmitry Zakharov <span style="font-variant:small-caps;">Department of Mathematics, Central Michigan University, Mount Pleasant, MI 48859, USA</span> *E-mail address:* `zakha1d@cmich.edu` [^1]: We implicitly work with the underlying topological space of the Berkovich analytic stack ${\mathcal{H}}_{g,G}^{an}(\mu)$, as introduced in [@Ulirsch_tropisquot Section 3].
--- abstract: 'A modified Green operator is proposed as an improvement of Fourier-based numerical schemes commonly used for computing the electrical or thermal response of heterogeneous media. Contrary to other methods, the number of iterations necessary to achieve convergence tends to a finite value when the contrast of properties between the phases becomes infinite. Furthermore, it is shown that the method produces much more accurate local fields inside highly conducting and quasi-insulating phases, as well as in the vicinity of phase boundaries. These good properties stem from the discretization of Green’s function, which is consistent with the pixel grid while retaining the local nature of the operator that acts on the polarization field. Finally, a fast implementation of the ‘direct scheme’ of Moulinec *et al.* (1994) that allows for parsimonious memory use is proposed.' address: - 'CEA, DAM, DIF, F-91272 Arpajon, France.' - | MINES ParisTech, PSL - Research university, CMM - Centre for mathematical\ morphology, 35, rue St Honoré, F-77300 FONTAINEBLEAU, France. author: - 'François Willot\*' - Bassam Abdallah - 'Yves-Patrick Pellegrini' title: ' Fourier-based schemes with modified Green operator for computing the electrical response of heterogeneous media with accurate local fields ' --- FFT methods; numerical homogenization; heterogeneous media; electrical conductivity [*Nota Bene*: The present document constitutes a ‘postprint’ version of the published paper, in which a few errors in the proofs (on the present pages 2, 5, 15 and 17) have been corrected and an incomplete reference on p. 22 has been completed. These minor corrections are marked out in red. 
Results are unchanged.]{} Introduction {#sec:intro} ============ In recent years, Fourier-based methods, originally introduced by Moulinec *et al.* [@Moulinec94xx], have become ubiquitous for computing numerically the properties of composite materials, with applications in domains ranging from linear elasticity [@Willot08], [viscoplasticity \[instead of ‘thermoplasticity’\]]{} [@a2001n], and crack propagation [@li2011non] to thermal and electrical [@Willot13a; @Willot10] and also optical properties [@Willot13b]. The success of the method resides in its ability to cope with arbitrarily complex and often very large microstructures, supplied as segmented images of real materials, for example, [multiscale \[instead of ‘multistage’\]]{} nanocomposites [@Jean2011large], austenitic steel [@brenner11], granular media [@Willot13a] or [polycrystals \[instead of ‘polycrystal’\]]{} [@Lebensohn09; @Rollett10; @lebensohn2005study]. This technique allows maps of the local fields to be computed in realistic microstructures. Such fields are representative of the material behavior if the resolution is small enough, and if the system size is large enough, compared with the typical length scale of the heterogeneities. Contrary to finite-element methods (FEM) where matrix pre-conditioning often necessitates additional memory occupation, fast-Fourier-transform (FFT) methods are limited only by the amount of RAM or fast-access computer memory required to store the fields. The use of an image and of its underlying equispaced grid, however, comes with drawbacks not seen in FEM. First, FFT methods will ultimately be less efficient when dealing with highly porous media such as foams, where voids need to be discretized. Second, interfaces are crudely rendered when using voxel grids, although smoothness can be somewhat recovered by introducing intermediate properties between phases [@Dunant13; @Brisard10]. 
This matter is the most important one for ideal microstructure models where interfaces are completely known; less so when dealing with experimental images where such information is usually absent. Third, the representation of the fields in terms of harmonic functions introduces oscillations around interfaces, which is akin to Gibbs’s phenomenon. High-frequency artifacts are conspicuous in many field maps where oscillations are visible. Fourth, the Fourier representation presupposes periodicity; that is, the microstructure is seen as the elementary cell of an infinite, periodic medium. However, finite-size effects associated to periodic boundary conditions are generally smaller than those of uniform boundary conditions used in FEM [@RVE1]. In the present work, use is made of an alternative discretization of the Green function, leading to a revisit of some previously developed FFT algorithms. Specifically, their performances in terms of accuracy and speed are investigated. Our paper is organized as follows: the numerical problem and FFT algorithms are presented in Secs. \[sec:prob\] and \[sec:FFTmethods\], respectively. An alternative discretization is introduced in Section \[sec:green\]. The accuracy of the local fields is investigated in Section \[sec:local\] and the convergence properties of FFT schemes, using the modified and unmodified Green functions, are studied in Section \[sec:results\]. Finally, a specific implementation of the FFT method using the modified Green function is proposed in Section \[sec:speed\]. Problem setup and Lippmann-Schwinger’s equation {#sec:prob} =============================================== \[sec:ls\] This work investigates the numerical computation of the electric field $E_i(\mathbf{x})$ and current $J_i(\mathbf{x})$ ($i=1$, ..., $d$), in a $d$-dimensional cubic domain $\Omega=[-L/2,L/2]^d$ of width $L$ for $d=2$ or $3$.
The fields verify (chapter 2 in [@MiltonBook]) $$\label{eq:edp} \partial_i J_i(\mathbf{x})=0, \qquad E_i(\mathbf{x})=-\partial_i \Phi(\mathbf{x}), \qquad J_i(\mathbf{x})=\sigma_{ij}(\mathbf{x}) E_j(\mathbf{x}),$$ where $\Phi(\mathbf{x})$ is the electric potential and $\boldsymbol\sigma(\mathbf{x})$ is the local conductivity tensor of the material phase at point $\mathbf{x}$. Hereafter, for simplicity, all media are locally linear and isotropic so that $\sigma_{ij}=\sigma\delta_{ij}$, with $\sigma(\mathbf{x})$ a scalar field. Only binary composite media are considered in this study, in which inclusions have variable conductivity $\sigma_2$, and where conventionally $\sigma_1=1$ in the matrix. Edges of $\Omega$ are aligned with the Cartesian axes of unit vectors $(\mathbf{e}_i)_{1\leq i\leq d}$. Periodic boundary conditions are employed, in the form $$\mathbf{J}(\mathbf{x})\cdot \mathbf{n}\,-\#, \quad \Phi(\mathbf{x}+L\mathbf{e}_i)\equiv \Phi(\mathbf{x})-\overline{E}_iL, \quad\mathbf{x},\, \mathbf{x}+L\mathbf{e}_i\in \partial\Omega,$$ where $-\#$ denotes anti-periodicity, $\mathbf{n}$ is the outer normal along the boundary $\partial\Omega$ of $\Omega$ and $\overline{\mathbf{E}}$ is the applied electric field. These conditions ensure that the current and the electric field verify Equation (\[eq:edp\]) along the boundary $\partial\Omega$ of the periodic medium. Note that $\overline{\mathbf{E}}$ represents a macroscopic electric field so that $\langle E_i(\mathbf{x})\rangle = \overline{E}_i$, where $\langle \cdot\rangle$ is the volume average over $\Omega$. All FFT methods proceed from Lippmann-Schwinger’s equation ([@MiltonBook] p. 251) $$\label{eq:ls1} E_i=\overline{E}_i-G^0_{ij}\ast P_j, \quad P_j=J_j-\sigma^0E_j,$$ where $\sigma^0$ is an arbitrary reference conductivity, $\mathbf{P}$ and $\mathbb{G}^0$ are the associated polarization field and Green operator, respectively, and $\ast$ is the convolution product.
An equivalent ‘dual’ formulation stems from writing the problem in terms of the electric current as $$\label{eq:ls2} J_i=\overline{J}_i-H^0_{ij}\ast T_j, \quad T_j=E_j-\rho^0J_j,$$ where $\rho^0=1/\sigma^0$ is the reference resistivity, and $\overline{\mathbf{J}}$ is the prescribed macroscopic current. The Green operator associated to the governing equation for the current reads $$\label{eq:greendual} H_{ij}^0(\mathbf{x})=\sigma^0\left\lbrace\left[\delta(\mathbf{x})-1\right]\delta_{ij}-\sigma^0G^0_{ij}(\mathbf{x})\right\rbrace,$$ where $\delta(\mathbf{x})$ is Dirac’s distribution and $\delta_{ij}$ is the Kronecker symbol. Thus, for all $\mathbf{T}$, $$H_{ij}^0\ast T_j=\sigma^0\left(T_i-\langle T_i\rangle_\Omega-\sigma^0G^0_{ij}\ast T_j\right).$$ In particular, $\langle H_{ij}^0\ast T_j\rangle=\langle G^0_{ij}\ast T_j\rangle=0$ and Equation (\[eq:ls2\]) enforces $\overline{\mathbf{J}}=\langle \mathbf{J}\rangle$. The FFT algorithms considered in this paper rest on evaluating the convolution product in Equation (\[eq:ls1\]) or (\[eq:ls2\]) in the Fourier domain, using FFT libraries. FFT methods {#sec:FFTmethods} =========== Although most of FFT methods have been introduced in the context of elasticity, their adaptation to conductivity problems is straightforward. Hereafter, all FFT algorithms are formulated in this setting. \[subsec:FFTalg\] Equation (\[eq:ls1\]) is the basis of the simplest method, the ‘direct’ scheme [@Moulinec94xx]. Iterations consist in applying the following recursion: $$\label{eq:ls1imp} \mathbf{E}^{k+1}=\overline{\mathbf{E}}-\mathbb{G}^0\ast \left[(\sigma-\sigma^0)\mathbf{E}^k\right],$$ where $\mathbf{E}^k$ is the electric field at iteration $k$. Over time, refined FFT algorithms with faster convergence properties have been devised, notably the ‘accelerated’ [@Eyre99] and ‘augmented-Lagrangian’ [@Michel01] schemes. 
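To make the recursion (\[eq:ls1imp\]) concrete, here is a minimal NumPy sketch of the direct scheme in two dimensions. This is our own illustrative code, not the implementation of [@Moulinec94xx]; the function name and grid handling are ours. The convolution with the continuous Green operator is evaluated mode by mode in Fourier space:

```python
import numpy as np

def direct_scheme(sigma, E_bar, sigma0, n_iter=200):
    """Sketch of the 'direct' scheme (eq. ls1imp):
    E^{k+1} = E_bar - G0 * [(sigma - sigma0) E^k],
    the convolution with G0 being evaluated by FFT."""
    N = sigma.shape[0]                          # N x N pixel grid
    q = 2.0*np.pi*np.fft.fftfreq(N)             # Fourier modes q_i
    q1, q2 = np.meshgrid(q, q, indexing='ij')
    q_sq = q1**2 + q2**2
    q_sq[0, 0] = 1.0                            # dummy value; zero mode handled below
    E = np.stack([np.full((N, N), E_bar[0]), np.full((N, N), E_bar[1])])
    for _ in range(n_iter):
        P = (sigma - sigma0)*E                  # polarization field (sigma - sigma0) E^k
        s = (q1*np.fft.fft2(P[0]) + q2*np.fft.fft2(P[1]))/(sigma0*q_sq)
        s[0, 0] = 0.0                           # G0 * P has zero mean
        E[0] = E_bar[0] - np.fft.ifft2(q1*s).real   # G0_{ij} P_j = q_i q_j P_j/(sigma0 |q|^2)
        E[1] = E_bar[1] - np.fft.ifft2(q2*s).real
    return E
```

Since the zero mode of $\mathbb{G}^0\ast\mathbf{P}$ is set to zero, the iterate conserves $\langle\mathbf{E}^k\rangle=\overline{\mathbf{E}}$ exactly at every step; at the fixed point, applying $q_i(\cdot)$ to (\[eq:ls1imp\]) shows that $q_i J_i(\mathbf{q})=0$ for all non-zero modes.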
Both algorithms can be encapsulated in the formula [@Monchiet12; @moulinec13] $$\label{eq:alphabeta} \mathbf{E}^{k+1}=\mathbf{E}^k+\frac{\sigma^0\left[\overline{\mathbf{E}}-\langle \mathbf{E}^k\rangle-\beta \mathbb{G}^0\ast (\sigma \mathbf{E}^k)\right]-\mathbb{H}^0\ast \mathbf{E}^k}{\alpha(\sigma+\beta\sigma^0)}$$ where $\alpha=\beta=1$ for the augmented-Lagrangian scheme and $\alpha=-1/2$, $\beta=-1$ for the ‘accelerated’ one. Our formula differs from Equation (13) in [@moulinec13] because of a different definition of $\sigma^0$. Another scheme, the so-called ‘polarization’ scheme where $\langle \mathbf{P}\rangle$ is prescribed instead of $\langle \mathbf{E}\rangle$, can be described by an equation similar to (\[eq:alphabeta\]) [@Monchiet12]. The alternative ‘variational’ algorithm [@Brisard10] relies on two distinct ideas. First, Equation (\[eq:ls1\]) is written as: $$\label{eq:var} \left[(\sigma-\sigma^0)^{-1}\delta(\mathbf{x})\delta_{ij}+G^0_{ij}\right]\ast P_j= \overline{E}_i.$$ Upon discretization, this equation is transformed into a linear system $\mathcal{M}\cdot \mathbf{P}=\overline{\mathbf{E}}$, which is solved by conjugate-gradient descent. The operator $\mathcal{M}$ is never computed. Instead, FFTs are used to provide $\mathcal{M}\cdot \mathbf{P}$ for any $\mathbf{P}$, which is sufficient for applying the descent method. Second, the discretization employed amounts to using constant-per-voxel trial polarization fields. This leads to a rule for computing $(\sigma-\sigma^0)^{-1}\mathbf{P}$ on voxels that lie on interfaces, and to a representation of the Green operator as a slowly converging series for which approximations are available [@Brisard12]. Other FFT methods have been proposed, including an alternative ‘conjugate-gradient’ scheme [@Zeman10; @Vondrejc11] different from the variational one, and yet another one in which the convolution product is carried out in the direct space [@yvonnet2012fast]. 
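The matrix-free structure of the variational approach can be illustrated as follows. The sketch below is our own simplification, not the scheme of [@Brisard10]: it uses the plain operator $\mathbb{G}^0$ rather than the series representation, a hand-rolled textbook conjugate-gradient loop, and solves $\mathcal{M}\cdot \mathbf{P}=\overline{\mathbf{E}}$ with $\mathcal{M}$ applied through FFTs only. Choosing $\sigma^0<\min\sigma$ keeps $\mathcal{M}$ positive definite in this toy setting.

```python
import numpy as np

N = 32
sigma = np.ones((N, N)); sigma[8:24, 8:24] = 2.0
sigma0 = 0.5                                   # sigma0 < min(sigma): M is SPD here
E_bar = np.array([1.0, 0.0])

q = 2.0*np.pi*np.fft.fftfreq(N)
q1, q2 = np.meshgrid(q, q, indexing='ij')
q_sq = q1**2 + q2**2; q_sq[0, 0] = 1.0

def green(P):                                  # G0_{ij} * P_j via FFT
    s = (q1*np.fft.fft2(P[0]) + q2*np.fft.fft2(P[1]))/(sigma0*q_sq)
    s[0, 0] = 0.0
    return np.stack([np.fft.ifft2(q1*s).real, np.fft.ifft2(q2*s).real])

def apply_M(p):                                # M.P = (sigma - sigma0)^{-1} P + G0 * P
    P = p.reshape(2, N, N)
    return (P/(sigma - sigma0) + green(P)).ravel()

def cg(matvec, b, tol=1e-12, maxiter=500):     # textbook conjugate gradients
    x = np.zeros_like(b); r = b - matvec(x); d = r.copy(); rs = r @ r
    for _ in range(maxiter):
        Ad = matvec(d); alpha = rs/(d @ Ad)
        x += alpha*d; r -= alpha*Ad
        rs_new = r @ r
        if rs_new < tol: break
        d = r + (rs_new/rs)*d; rs = rs_new
    return x

b = np.stack([np.full((N, N), E_bar[0]), np.full((N, N), E_bar[1])]).ravel()
P = cg(apply_M, b).reshape(2, N, N)            # solve M.P = E_bar, matrix-free
E = b.reshape(2, N, N) - green(P)              # recover E from Eq. (ls1)
```

The operator $\mathcal{M}$ is never assembled: each conjugate-gradient step only costs a handful of FFTs, which is the point of the method.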
For conciseness, these and the ‘polarization’ scheme will not be considered further. The dual formulation (\[eq:ls2\]) allows one to derive dual algorithms for all FFT methods. For instance, substituting $\mathbf{E}$, $\mathbb{G}^0$, and $\sigma^0$ by $\mathbf{J}$, $\mathbb{H}^0$, and $\rho^0$ in Equation (\[eq:alphabeta\]), the dual augmented-Lagrangian scheme reads: $$\mathbf{J}^{k+1}=\mathbf{J}^k+\frac{\rho^0\left[\overline{\mathbf{J}}-\langle \mathbf{J}^k\rangle-\mathbb{H}^0\ast \left(\frac{1}{\sigma} \mathbf{J}^k\right)\right]-\mathbb{G}^0\ast \mathbf{J}^k}{1/\sigma+\rho^0}.$$ All of these methods involve a reference conductivity $\sigma^0$, or a reference resistivity $\rho^0$. Whereas the final result is in principle independent of these quantities, their [values \[instead of ‘value’\]]{} have a dramatic influence on the convergence properties of the algorithms. Notably, optimal convergence of the ‘accelerated’ scheme is obtained with the choice [@Eyre99] $$\label{eq:opteyre} \sigma^0=-\sqrt{\sigma_1\sigma_2},$$ where the use of a negative reference conductivity (devoid of physical meaning) is warranted by the arbitrary character of the reference medium. In this connection, we point out that in Ref. [@moulinec13], which addresses the analogous elasticity problem, the reference stiffness moduli have their sign changed, which avoids dealing with negative values. For the ‘direct’ scheme, optimal convergence properties were studied in the context of elasticity [@moulinec1998numerical]. Adapting the method used in the latter reference to the conductivity problem, it is straightforward to show that the corresponding optimal choice is $$\label{eq:optdirect} \sigma^0\approx \frac{1}{2}(\sigma_1+\sigma_2),$$ a result to be used extensively below. Classical and modified Green operators {#sec:green} ====================================== In practice, the domain $\Omega$ is discretized as a two-dimensional (2D) pixel image, or three-dimensional (3D) voxel image.
The convolution product $G^0_{ij}\ast P_j$ in (\[eq:ls1\]) is evaluated in the Fourier domain as $$\label{eq:greenFour} \int_\Omega {\rm d}^d\mathbf{x'}\,G^0_{ij}(\mathbf{x}-\mathbf{x'})P_j(\mathbf{x'}) \approx \frac{1}{L^d}\sum_\mathbf{q} G^0_{ij}(\mathbf{q})P_j(\mathbf{q}){\textnormal{e}}^{{\textnormal{i}}\mathbf{q}\cdot\mathbf{x}},$$ where the Fourier mode components take on values $q_i=(2\pi/L)(-L/2,...,L/2-1)$ ($i=1$, ..., $d$), and $L$ is measured in pixel/voxel size units. The vector $P_j(\mathbf{q})$ is the Fourier transform $$P_j(\mathbf{q})=\sum_\mathbf{x} P_j(\mathbf{x}){\textnormal{e}}^{-{\textnormal{i}}\mathbf{q}\cdot\mathbf{x}},$$ where the sum is over all pixels/voxels $\mathbf{x}$ in $\Omega$. Classically, the Fourier transform of the Green operator used in (\[eq:greenFour\]) is approximated by its continuum expression $$\label{eq:greenCont} G^0_{ij}(\mathbf{q})=\int {\rm d}^d\!\mathbf{x}\,G^0_{ij}(\mathbf{x}){\textnormal{e}}^{-{\textnormal{i}}\mathbf{q}\cdot\mathbf{x}}=\frac{q_iq_j}{\sigma^0|q|^2},$$ where the integration is over the infinite domain and $|q|=\sqrt{q_kq_k}$. We hereafter call this version of the Green operator the ‘continuous’ Green operator. This name is chosen as a matter of convenience, as the operator $\mathbb{G}^0$ is only the discretization, on a regular grid in the Fourier domain, of the continuum Green operator. On the other hand, intrinsically discrete schemes can be considered. For instance, in the context of continuum mechanics, modified Green operators have been introduced, where partial derivatives are approximated by centered [@Muller96] or forward [@willot08c] differences.
In the conductivity problem, the latter discretization amounts to solving a resistor network problem [@luck1991conductivity] $$\label{eq:rn} \partial_i J_i(\mathbf{x})\approx J_i(\mathbf{x})-J_i(\mathbf{x}-\mathbf{e}_i), \qquad \partial_i \Phi(\mathbf{x})\approx \Phi(\mathbf{x}+\mathbf{e}_i)-\Phi(\mathbf{x}),$$ where $J_i(\mathbf{x})$ represents the current along the bond pointing in the direction $\mathbf{e}_i$ from point $\mathbf{x}$, and $\Phi(\mathbf{x})$ is the potential at node $\mathbf{x}$. The same fields are used as approximations of the exact solution in a continuous medium. The nodes in the network are mapped to the corners of each voxel and the bonds are mapped to the edges (see Figure \[fig:schema\]). In this setting, the electric field and current are estimated at edge centers, which turns (\[eq:rn\]) into the centered scheme $$\label{eq:rn2} \partial_i J_i(\mathbf{x})\approx J_i\left(\mathbf{x}+\frac{\mathbf{e}_i}{2}\right)-J_i\left(\mathbf{x}-\frac{\mathbf{e}_i}{2}\right), \qquad -E_i\left(\mathbf{x}+\frac{\mathbf{e}_i}{2}\right)=\partial_i\Phi\left(\mathbf{x}+\frac{\mathbf{e}_i}{2}\right)\approx \Phi(\mathbf{x}+\mathbf{e}_i)-\Phi(\mathbf{x}).$$ Here again, derivatives are approximated by differences over points separated by one voxel size, unlike in [@Muller96]. Discretizations (\[eq:rn2\]) and (\[eq:rn\]) are equivalent up to a translation of $J_i$ and $E_i$ by a vector $\mathbf{e}_i/2$, provided that $\sigma$ is constant in each voxel (see Figure \[fig:schema\]). For simplicity, we use (\[eq:rn\]) hereafter. The ‘discrete’ Green operator $\widetilde{\mathbb{G}^0}$ entering the corresponding Lippmann-Schwinger equation reads [@luck1991conductivity; @willot08c] $$\label{eq:greendisc} \widetilde{G}^0_{ij}(\mathbf{k})=\frac{k_ik_j^*}{\sigma^0|k|^2}, \qquad k_i={\textnormal{e}}^{{\textnormal{i}}q_i}-1=2{\textnormal{i}}\sin(q_i/2){\textnormal{e}}^{{\textnormal{i}}q_i/2},$$ where $|k|=\sqrt{k_ik_i^*}$ and $^*$ is the complex conjugate. 
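The modified frequencies are straightforward to tabulate. The short check below (illustrative only) verifies the factorization of $k_i$ in (\[eq:greendisc\]) and its long-wavelength behaviour:

```python
import numpy as np

N = 64
q = 2.0*np.pi*np.fft.fftfreq(N)        # continuous Fourier modes q_i
k = np.exp(1j*q) - 1.0                 # modified frequencies k_i of Eq. (greendisc)

# factorized form: k_i = 2i sin(q_i/2) exp(i q_i/2)
assert np.allclose(k, 2j*np.sin(q/2)*np.exp(1j*q/2))
# |k|^2 = 2 - 2 cos(q) = 4 sin^2(q/2): symbol of the discrete Laplacian
assert np.allclose(np.abs(k)**2, 4.0*np.sin(q/2)**2)
# long-wavelength limit: k_i ~ i q_i when |q| << 1, with error O(q^2)
small = np.abs(q) < 0.2
assert np.allclose(k[small], 1j*q[small], atol=3e-2)
```

The last assertion makes explicit the statement below that (\[eq:greendisc\]) reduces to (\[eq:greenCont\]) as $\mathbf{q}\to 0$.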
In the Fourier domain, the ‘discrete’ gradient, divergence and Laplacian operators amount to multiplications by $k_i$, $-k_i^*$ and $|k|^2$, respectively, instead of ${\textnormal{i}}q_i$, ${\textnormal{i}}q_i$ and $|q|^2$ when using the continuum Green operator $\mathbb{G}^0$. Likewise, the meaning of the terms ‘divergence-free’ and ‘compatible’ depends on the employed discretization. In the long-wavelength limit $\mathbf{q}\to 0$, these differences disappear and equation (\[eq:greendisc\]) reduces to (\[eq:greenCont\]). In the dual setting, the discrete Green operator associated to the current is defined, mutatis mutandis, as in Equation (\[eq:greendual\]). Hereafter, the operator $\widetilde{\mathbb{G}^0}$ is referred to as the ‘discrete’ Green operator. ![ \[fig:schema\] 2D pixel at point $\mathbf{x}$ with superimposed resistor network (see Equation \[eq:rn\]); here $\mathbf{e}_1$ is oriented from top to bottom and $\mathbf{e}_2$ from left to right. ](fig0.eps){width="6cm"} The representation of the problem in terms of a resistor network results in several useful properties. First, contrary to the variational algorithm [@Brisard10], the solution does not depend on the choice of the reference material $\sigma^0$. Second, the operator $\widetilde{\mathbb{G}^0}$ is a smooth periodic function in which, contrary to $\mathbb{G}^0$, high frequencies are cut off in the Fourier domain. This is expected to result in better convergence properties. Third, the discretization in (\[eq:rn\]) enforces local current conservation, which makes Kirchhoff’s law hold at each node. Consequently, the outward flow of $\mathbf{J}$ along a closed surface, defined as a sum of currents over the bonds that pierce the surface, is zero. As long as they converge, all numerical schemes must deliver the same results for a given choice of Green operator. Conversely, choosing one Green operator will select one particular approximation to the solution of the problem considered.
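The third property, exact node-wise current conservation, is easy to verify numerically. Below is an illustrative sketch (again our own code, not the authors’): the direct scheme is run with the discrete operator, and the backward-difference divergence of Equation (\[eq:rn\]) is evaluated at every node of the converged solution.

```python
import numpy as np

def direct_scheme_discrete(sigma, E_bar, sigma0, n_iter=400):
    """Direct scheme with the 'discrete' Green operator (Eq. greendisc):
    q_i is replaced by k_i = exp(i q_i) - 1 and |q|^2 by |k|^2."""
    N = sigma.shape[0]
    k = np.exp(2j*np.pi*np.fft.fftfreq(N)) - 1.0
    k1, k2 = np.meshgrid(k, k, indexing='ij')
    k_sq = np.abs(k1)**2 + np.abs(k2)**2
    k_sq[0, 0] = 1.0                            # dummy; zero mode set below
    E = np.stack([np.full((N, N), E_bar[0]), np.full((N, N), E_bar[1])])
    for _ in range(n_iter):
        P = (sigma - sigma0)*E
        s = (np.conj(k1)*np.fft.fft2(P[0])      # k_j^* P_j / (sigma0 |k|^2)
             + np.conj(k2)*np.fft.fft2(P[1]))/(sigma0*k_sq)
        s[0, 0] = 0.0
        E[0] = E_bar[0] - np.fft.ifft2(k1*s).real
        E[1] = E_bar[1] - np.fft.ifft2(k2*s).real
    return E

N = 16
sigma = np.ones((N, N)); sigma[4:12, 4:12] = 2.0
E = direct_scheme_discrete(sigma, np.array([1.0, 0.0]), 1.5)
J1, J2 = sigma*E[0], sigma*E[1]
# Kirchhoff's law at each node: backward-difference divergence of Eq. (rn)
div = (J1 - np.roll(J1, 1, axis=0)) + (J2 - np.roll(J2, 1, axis=1))
```

At the fixed point, `div` vanishes to round-off level at every node, whereas the analogous finite-difference check applied to the solution obtained with $\mathbb{G}^0$ does not: there, only the spectral divergence $q_iJ_i(\mathbf{q})$ vanishes.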
It is the purpose of this work to assess the advantages, from the numerical viewpoint, in the context of electrical conductivity, of using $\widetilde{\mathbb{G}^0}$ in place of $\mathbb{G}^0$. In this paper, the direct (DS), accelerated (AS), augmented-Lagrangian (AL), and variational (VAR) schemes are investigated. We also consider the dual versions of DS and AL, denoted by [$\textnormal{DS}_\textnormal{D}$]{} and [$\textnormal{AL}_\textnormal{D}$]{}, respectively. All of these make use of the continuous Green operator $\mathbb{G}^0$. The same algorithms, but with the *discrete* Green operator $\widetilde{\mathbb{G}}^0$ instead of $\mathbb{G}^0$, are also examined. They are referred to with a ‘tilde’ notation as [$\widetilde{\textnormal{DS}}$]{}, [$\widetilde{\textnormal{AS}}$]{}, [$\widetilde{\textnormal{AL}}$]{}, [$\widetilde{\textnormal{VAR}}$]{}, [$\widetilde{\textnormal{DS}}_\textnormal{D}$]{}, and [$\widetilde{\textnormal{AL}}_\textnormal{D}$]{}. We emphasize that the results presented here for the variational approaches VAR and [$\widetilde{\textnormal{VAR}}$]{} make use of the Green operators $\mathbb{G}^0$ and $\widetilde{\mathbb{G}^0}$ rather than of the more complex discretization proposed in [@Brisard10]. Also, in the latter approaches, positive-definiteness of the matrix $\mathcal{M}$ (see Sec. \[sec:FFTmethods\]) is not guaranteed in the conjugate-gradient procedure. This specific issue has not been considered further, as numerical experiments that we performed indicate that the latter schemes nevertheless converge. A stiff case: fields in the four-cell microstructure {#sec:local} ==================================================== The ‘four-cell’ microstructure is one of the few periodic structures for which an exact solution [@craster01] is available. We consider the special case, represented in Figure \[fig:4cells\], where the elementary cell is made of a single square inclusion of surface fraction $25$%.
Because of the presence of corners, fields are singular in the infinite-contrast limit, which makes this case a good benchmark for numerical methods. In this Section, numerical results for the current computed with either the continuous Green operator $\mathbb{G}^0$ or the discrete operator $\widetilde{\mathbb{G}}^0$ are compared with the exact solution. The inclusion is highly conducting, with a contrast ratio $\sigma_2/\sigma_1=2\times 10^3$. ![\[fig:4cells\] Elementary periodic domain $\Omega=(-L/2,+L/2)^2$ with four-cell microstructure. The inclusion has conductivity $\sigma_2$ and the matrix has conductivity $\sigma_1$. ](fig1.eps){width="8cm"} The behavior of the electric current near the singular corner at point $(x,y)=(0,0)$ is illustrated in Figure \[fig:localFieldMaps\]. Maps of the vertical component $J_1(x,y)$ obtained with $\mathbb{G}^0$ (top) and $\widetilde{\mathbb{G}}^0$ (bottom) are displayed for increasing resolutions (left to right). Only the small region $-5\times 10^{-2}L\leq x, y\leq 5\times 10^{-2}L$ around the corner is shown. Numerical artifacts in the highly-conducting phase are conspicuous when using the continuous Green operator $\mathbb{G}^0$. They consist of high-frequency oscillations all over the conducting region, particularly near the horizontal interface [@note0], where the represented field component should be continuous. Such oscillations are almost absent when using $\widetilde{\mathbb{G}}^0$.
![\[fig:localFieldMaps\] Four-cell microstructure of Figure \[fig:4cells\]. Maps of the vertical current component $J_1(x,y)$ in the region $-0.05 L\leq x, y\leq 0.05 L$, for increasing resolution $L=1024$, $2048$, $4096$ and $8192$ (left to right). Top: with continuous Green operator $\mathbb{G}^0$. Bottom: with discrete operator $\widetilde{\mathbb{G}}^0$. ](fig2a.eps "fig:"){width="3cm"} ![](fig2b.eps "fig:"){width="3cm"} ![](fig2c.eps "fig:"){width="3cm"} ![](fig2d.eps "fig:"){width="3cm"} ![](fig2e.eps "fig:"){width="3cm"} ![](fig2f.eps "fig:"){width="3cm"} ![](fig2g.eps "fig:"){width="3cm"} ![](fig2h.eps "fig:"){width="3cm"}

Figure \[fig:localFieldplots\] displays plots of the
horizontal component $J_2(x,y)$ versus $x$ at $y=10^{-3}L$, close to the inclusion boundary. Negative values of $x$ correspond to the interior of the inclusion. Numerical results computed with both Green operators are compared with the exact solution. To draw meaningful graphs, data points obtained with $\mathbb{G}^0$ were post-processed prior to plotting by convolution over a window of $2\times 2$ adjacent pixels. This crude filtering device greatly reduces oscillations. Results obtained with $\widetilde{\mathbb{G}}^0$ have not been modified. Given sufficient resolution all methods converge to the exact solution. However, although all methods lead to almost identical solutions in the matrix, results strongly differ in the highly conducting region. The figure, which represents calculations carried out for various resolutions, shows that employing $\widetilde{\mathbb{G}}^0$ makes convergence notably easier. Indeed, data points obtained with $\widetilde{\mathbb{G}}^0$ at moderate resolution $L=1024$ are much closer to the exact solution than those obtained from $\mathbb{G}^0$ at the highest resolution $L=32\,768$. ![\[fig:localFieldplots\] Four-cell microstructure of Figure \[fig:4cells\]. Values of the horizontal current component $J_2(x,y=10^{-3}L)$ vs. $x$, for various resolutions $L$ (as indicated). Solid black: exact solution. Markers $\ast$, $\times$ and $+$ (red): FFT results with discrete Green operator $\widetilde{\mathbb{G}}^0$. Other markers and colors: FFT results with continuous Green operator $\mathbb{G}^0$.](fig3.eps){width="12cm"} In a previous study involving porous media [@willot08c], the continuous Green operator was already observed to induce awkward aliasing effects at high contrast. They usually take place near interfaces involving a region where the field considered is not uniquely defined in the infinite-contrast limit (e.g., the strain in a pore, or the electric current in an infinitely conducting inclusion). 
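The $2\times 2$ window convolution used above to post-process the $\mathbb{G}^0$ data can be sketched as follows (an illustrative NumPy sketch, not the authors' actual filtering code; the periodic wrap-around at the cell boundary is an assumption):

```python
import numpy as np

def box_filter_2x2(field):
    """Average each sample with its 2x2 neighborhood, as a crude
    anti-aliasing filter for field maps computed with the continuous
    Green operator.  Periodic wrap-around is assumed at the cell edge."""
    # Pad by one row/column with periodic wrap so the window never
    # runs off the grid.
    padded = np.pad(field, ((0, 1), (0, 1)), mode="wrap")
    return 0.25 * (padded[:-1, :-1] + padded[1:, :-1]
                   + padded[:-1, 1:] + padded[1:, 1:])
```

Being a circular convolution with a unit-sum kernel, this smoothing preserves the mean of the field while damping the highest-frequency (checkerboard) oscillations.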
Convergence rate {#sec:results} ================ This Section further examines, for a few selected microstructures, the convergence properties of FFT schemes. Since algorithmic convergence is harder to achieve for strongly contrasted composites, the quantity of interest here is the number of iterations as a function of the contrast ratio $\sigma_2/\sigma_1$. Convergence criteria {#subsec:criteria} -------------------- Convergence criteria can be written either in the direct or Fourier representations. The most stringent ones are those that account for the high Fourier frequencies [@moulinec13]. In relation to FFT algorithms, the following criteria are considered: \[eq:precision\] $$\begin{aligned} \label{crit:divergence} \eta_1 &=& \|\overline{\mathbf{J}}\|^{-1} \max_\mathbf{x} \left| \textnormal{FT}^{-1}\left\lbrace k_i^*(\mathbf{q})J_i(\mathbf{q}); \mathbf{x}\right\rbrace\right|\leq \epsilon,\\ \label{crit:equilibrium} \eta_2 &=& \|\overline{\mathbf{E}}\|^{-1} \max_{i\neq j, \mathbf{x}} \left| \textnormal{FT}^{-1}\left\lbrace k_i(\mathbf{q})E_j(\mathbf{q})-k_j(\mathbf{q})E_i(\mathbf{q}); \mathbf{x}\right\rbrace\right|\leq \epsilon,\end{aligned}$$ where $\epsilon\ll 1$ is the required precision and $\textnormal{FT}^{-1}$ is the backward Fourier transform. Criterion (\[crit:divergence\]) puts emphasis on current conservation, whereas (\[crit:equilibrium\]) imposes compatibility; apart from a difference in the norm used, they are akin to those used in [@moulinec13]. These equations refer to the discrete Green operator $\widetilde{\mathbb{G}}^0$. Current conservation and compatibility are enforced differently when using the continuous Green operator $\mathbb{G}^0$. In the latter case, $\mathbf{k}(\mathbf{q})$ and $\mathbf{k}^*(\mathbf{q})$ are replaced by $\mathbf{q}$ in Equation (\[eq:precision\]).
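As an illustration, criterion (\[crit:divergence\]) may be evaluated as in the sketch below. The forward-difference frequency factors $k_i(\mathbf{q})=e^{2\pi i q_i/L}-1$ used here are an assumption standing in for the discrete frequencies defined earlier in the paper; they would be replaced by $2\pi i q_i/L$ to mimic the continuous operator:

```python
import numpy as np

def eta_1(J):
    """Divergence-based convergence indicator eta_1 (sketch).

    J: current field of shape (d, L, ..., L).  The frequency factors
    k_i(q) are taken as forward-difference symbols exp(2j*pi*q) - 1,
    an assumed stand-in for the paper's discrete frequencies."""
    d, L = J.shape[0], J.shape[1]
    axes = tuple(range(1, d + 1))
    Jq = np.fft.fftn(J, axes=axes)
    k = np.exp(2j * np.pi * np.fft.fftfreq(L)) - 1.0
    div_q = np.zeros(J.shape[1:], dtype=complex)
    for i in range(d):
        shape = [1] * d
        shape[i] = L
        # accumulate k_i^*(q) J_i(q)
        div_q += np.conj(k).reshape(shape) * Jq[i]
    div_x = np.fft.ifftn(div_q)
    J_mean = J.mean(axis=axes)  # volume average, \overline{J}
    return np.max(np.abs(div_x)) / np.linalg.norm(J_mean)
```

A uniform current is exactly divergence-free in this discrete sense, so the indicator vanishes for it, while any localized source yields a value of order one.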
Among the computational schemes introduced in Section 4, DS and [$\widetilde{\textnormal{DS}}$]{} enforce compatibility at each iteration, which trivially guarantees that $\eta_2 = 0$. Conversely, current conservation, in the form of the equality $\eta_1 = 0$, is enforced by the dual schemes [$\textnormal{DS}_\textnormal{D}$]{} and [$\widetilde{\textnormal{DS}}_\textnormal{D}$]{}. The remaining schemes in general lead to nonzero values of both $\eta_1$ and $\eta_2$. This suggests using as a convergence criterion the inequality $\eta\leq\epsilon$, where $\eta=\eta_1$ for the primary (non-dual) schemes DS, [$\widetilde{\textnormal{DS}}$]{}, AL, [$\widetilde{\textnormal{AL}}$]{}, AS, [$\widetilde{\textnormal{AS}}$]{}, VAR, and $\eta=\eta_2$ for the dual ones [$\textnormal{DS}_\textnormal{D}$]{}, [$\widetilde{\textnormal{DS}}_\textnormal{D}$]{}, [$\textnormal{AL}_\textnormal{D}$]{}, [$\widetilde{\textnormal{AL}}_\textnormal{D}$]{}. Test microstructures -------------------- Convergence rates are monitored for three microstructures, periodic in all directions, whose unit cells $\Omega$ are represented in Figure \[fig:micro\]. The leftmost 2D cell, of size $L=1024$ pixels, contains a single disk-shaped inclusion of surface fraction $25\%$. This system is simply referred to as the ‘2D-periodic’ medium hereafter. The middle cell is a random 2D Boolean model of size $L=1024$, built from disks of diameter 80 pixels, with overall surface fraction $30\%$. The rightmost cell is a random 3D Boolean model of size $L=256$, made of spherical inclusions of diameter 20 voxels, with overall volume fraction $20\%$. ![\[fig:micro\] Elementary cell $\Omega$ of the “2D-periodic” microstructure (left), and the 2D (center) and 3D (right) random periodic Boolean models.
Surface and volume fractions of the inclusions are, respectively, $25$, $30$ and $20\%$.](fig4.eps){width="13cm"} 2D periodic medium ------------------ Figure \[fig:niter0\] illustrates how the indicator $\eta$ tends to zero as the number of iterations increases, for some of the algorithms introduced in Section 4 applied to the ‘2D-periodic’ medium. The contrast ratio is fixed at $\sigma_2/\sigma_1=2\times 10^3$. For exploratory purposes, quadruple precision was used in these calculations to allow for tiny values of $\eta$. Prior to drawing the plots, the quantities $\sigma^0$ and $\rho^0$ were optimized manually to minimize the number of iterations needed to reach the arbitrary threshold $\eta<\epsilon=10^{-12}$. For all methods, $\eta$ decreases exponentially with the number of iterations, down to some constant value determined by machine precision. Roughly, the algorithms separate into two classes. The first one comprises the continuous schemes, namely DS, AL and AS, which are the slowest-converging ones. However, in this class and for the microstructure considered, Eyre and Milton’s AS is clearly superior: the simple DS is by far the worst, and the AL scheme is intermediate. The other class encompasses the ‘discrete’ schemes (primary and dual). They all make $\eta$ saturate in fewer than 300 iterations, which is another hint at the good behavior of the discrete Green operator. In that class, Eyre and Milton’s method ([$\widetilde{\textnormal{AS}}$]{}) again proves the fastest-converging one. ![\[fig:niter0\] “2D-periodic" medium. Convergence indicator $\eta$ vs.
number of iterations in logarithmic-linear scale, for various FFT schemes: using the continuous Green operator (DS, AS, and AL), and the discrete Green operator ([$\widetilde{\textnormal{DS}}$]{}, [$\widetilde{\textnormal{DS}}_\textnormal{D}$]{}, [$\widetilde{\textnormal{AS}}$]{}, [$\widetilde{\textnormal{AL}}$]{} and [$\widetilde{\textnormal{AL}}_\textnormal{D}$]{}).](fig5.eps){width="10cm"} The optimal reference conductivity $\sigma^0$ and resistivity $\rho^0$ used in Figure \[fig:niter0\] are summarized in the second column of Table \[tab:ref\]. The integer number in brackets is the number of iterations needed to reach the threshold $\eta<\epsilon=10^{-8}$, which in practice is a good trade-off between speed and accuracy. As already mentioned, Equation (\[eq:optdirect\]) optimizes the DS with the continuous Green operator. It gives $\sigma^0=1000.5$ and, as an empirical finding, also optimizes [$\widetilde{\textnormal{DS}}$]{} with the discrete Green operator. Introducing phase resistivities as $\rho_{1,2}=1/\sigma_{1,2}$, an analogous formula (easy to demonstrate in the continuum) holds for the optimal resistivity in the continuous dual ‘direct’ scheme [$\textnormal{DS}_\textnormal{D}$]{}, namely, $$\rho^0=\frac{1}{2}(\rho_1+\rho_2),$$ which gives here $\rho^0\simeq 0.5$. Again empirically, we find that this value also optimizes the discrete dual ‘direct’ scheme [$\widetilde{\textnormal{DS}}_\textnormal{D}$]{}. As expected, the optimum $\sigma^0\simeq -44.7$ reported for AS matches Eyre and Milton’s result, Equation (\[eq:opteyre\]). However, although negative, the optimum $\sigma^0$ found for [$\widetilde{\textnormal{AS}}$]{} is *not* consistent with this formula. Finally, the values reported for the primary augmented-Lagrangian schemes AL and [$\widetilde{\textnormal{AL}}$]{} and their dual versions do not match any of the previous analytical estimates.
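These analytical optima can be checked numerically for the contrast $\sigma_2/\sigma_1=2\times 10^3$ of Table \[tab:ref\] (taking $\sigma_1=1$, which the quoted values $\sigma^0=1000.5$ and $\sigma^0\simeq-44.7$ imply):

```python
import math

# Reference parameters for sigma_1 = 1, sigma_2 = 2e3, from the
# analytical formulas quoted above.
sigma1, sigma2 = 1.0, 2.0e3
sigma0_ds = 0.5 * (sigma1 + sigma2)             # Eq. (eq:optdirect), DS
sigma0_as = -math.sqrt(sigma1 * sigma2)         # Eq. (eq:opteyre), AS
rho0_dsd = 0.5 * (1.0 / sigma1 + 1.0 / sigma2)  # dual scheme DS_D
```

This reproduces $\sigma^0=1000.5$ for DS, $\sigma^0\simeq-44.7$ for AS, and $\rho^0\simeq 0.5$ for [$\textnormal{DS}_\textnormal{D}$]{}.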
[lccc]{} Scheme & “2D-periodic" & 2D Boolean & 3D Boolean\
*Reference conductivity $\sigma^0$ (primary schemes):*\
[DS]{} & $1000.5$ ($15621$) & $(\sigma_1+\sigma_2)/2$ & $(\sigma_1+\sigma_2)/2$\
[AL]{} & $76$ ($1556$) & $3\times 10^{-3}\sigma_1+1.8\sqrt{\sigma_1\sigma_2}$ & $1.7\sqrt{\sigma_1\sigma_2}$\
[AS]{} & $-44.7$ ($663$) & $-\sqrt{\sigma_1\sigma_2}$ & $-\sqrt{\sigma_1\sigma_2}$\
[VAR]{} & N/A & $0.50(\sigma_1+\sigma_2)$ & N/A\
[[$\widetilde{\textnormal{DS}}$]{}]{} & $1000.5$ ($46$) & $0.50(\sigma_1+\sigma_2)$ & $0.53\sigma_1+0.50\sigma_2$\
[[$\widetilde{\textnormal{AL}}$]{}]{} & $1855$ ($95$) & $0.30(\sigma_1+\sigma_2)$ & $0.56\sigma_1+0.26\sigma_2$\
[[$\widetilde{\textnormal{AS}}$]{}]{} & $-1390$ ($46$) & $-0.30(\sigma_1+\sigma_2)$ & $-(1/3.6)\,\sigma_1$ ($\sigma_2/\sigma_1\ll 1$); $-3.6\,\sigma_1$ ($\sigma_2/\sigma_1\gg 1$)\
[[$\widetilde{\textnormal{VAR}}$]{}]{} & N/A & $0.50(\sigma_1+\sigma_2)$ & N/A\
*Reference resistivity $\rho^0$ (dual schemes):*\
[[$\textnormal{DS}_\textnormal{D}$]{}]{} & $0.5$ ($14616$) & $(\rho_1+\rho_2)/2$ & $(\rho_1+\rho_2)/2$\
[[$\textnormal{AL}_\textnormal{D}$]{}]{} & $0.033$ ($1336$) & $3\times 10^{-3}\rho_1+1.8\sqrt{\rho_1\rho_2}$ & $1.7\sqrt{\rho_1\rho_2}$\
[[$\widetilde{\textnormal{DS}}_\textnormal{D}$]{}]{} & $0.5$ ($46$) & $0.50(\rho_1+\rho_2)$ & $0.48\rho_1+0.52\rho_2$\
[[$\widetilde{\textnormal{AL}}_\textnormal{D}$]{}]{} & $1.09$ ($93$) & $0.30(\rho_1+\rho_2)$ & $0.40\rho_1+0.55\rho_2$\

: \[tab:ref\] Optimal reference conductivities $\sigma^0$ and resistivities $\rho^0$ determined for the indicated FFT schemes. Values given for the “2D-periodic" microstructure correspond to the contrast ratio $\sigma_2/\sigma_1=2\times 10^3$, with the number of iterations to convergence indicated in brackets. For Boolean models, the formulas given are consistent with the behavior observed at high contrast, although the low-contrast behavior may slightly differ. Those for schemes DS, AS and [$\textnormal{DS}_\textnormal{D}$]{} are exact. Missing entries (N/A) indicate that the corresponding schemes have not been investigated.

2D and 3D Boolean media: reference conductivity or resistivity {#subsec:2drandom} -------------------------------------------------------------- A more thorough study was carried out for the Boolean models, in which the optimal reference conductivity $\sigma^0$ or resistivity $\rho^0$ was measured as a function of the contrast. In order to avoid unnecessarily long computations, the reference was first manually optimized on a low-resolution grid of size $L=64$ (in 2D) or $L=32$ (in 3D). The optimized reference was then tested on a full-resolution grid of size $L=1024$ (2D) or $L=256$ (3D). In all but a few cases, the number of iterations to convergence found with the low-resolution and high-resolution grids was nearly the same. The number of iterations found on the full-resolution grid was kept if the difference was less than $10\%$; otherwise, the reference was optimized again, this time on the full-resolution grid, to provide a definitive number of iterations. Manual optimization of the reference parameters was carried out following a rough dichotomy procedure, disregarding for simplicity the possibility of concurrent local optima. The convergence criterion was set to $\eta\leq \epsilon=10^{-8}$ in these calculations.
Our findings are summarized in the third and fourth columns of Table \[tab:ref\], where the formulas given essentially represent high-contrast behaviors in the regimes $\sigma_2/\sigma_1\ll 1$ or $\sigma_2/\sigma_1\gg 1$. Indeed, in some cases, the low-contrast behavior may differ from that given (see below). With the exception of scheme [$\widetilde{\textnormal{AS}}$]{} in the 3D Boolean medium, for which $\sigma^0/\sigma_1$ tends to a constant at high contrast (notice the symmetry between the two high-contrast regimes), the behaviors we observed are of the following types: $$\begin{aligned} \label{eq:lin} \sigma^0/\sigma_1&=\alpha_1+\alpha_2\,r,\\ \label{eq:sqrt} \text{or }\quad\sigma^0/\sigma_1&=\beta_1+\beta_2\,r^{1/2},\end{aligned}$$ where $r=\sigma_2/\sigma_1$, and $\alpha_{1,2}$ and $\beta_{1,2}$ are numerical constants of various signs (see Table \[tab:ref\]). These forms generalize Equations (\[eq:opteyre\]) and (\[eq:optdirect\]). They apply to the ‘primary’ schemes, and similar ones hold for the ‘dual’ schemes with $\sigma$ substituted by $\rho$. When nonzero, the coefficient $\beta_1$, of order $10^{-3}$, is of unclear origin. The coefficients reported in the table were determined by nonlinear least-squares fitting of our data. Additional fitting attempts with functional forms other than (but related to) those retained indicate that the first digit of the coefficients is significant, whereas the error on the second one is hard to evaluate. Different coefficients $\alpha_1$ and $\alpha_2$ are provided when our results do not support the equality $\alpha_1=\alpha_2$. However, our results strongly suggest that $\alpha_1=\alpha_2$ for the 2D Boolean system whenever Equation (\[eq:lin\]) applies, while this symmetry does not carry over to the 3D case, except for the DS, where $\alpha_1=\alpha_2=1/2$ (exact) in two and three dimensions.
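The affine form (\[eq:lin\]) can be fitted by ordinary least squares; the sketch below illustrates the procedure on synthetic placeholder data (mimicking a $0.53+0.50\,r$ trend, not the paper's actual measurements):

```python
import numpy as np

# Fit sigma^0/sigma_1 = a1 + a2*r (Eq. lin) to measured optima.
# Synthetic placeholder data with small observational noise:
r = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
noise = np.random.default_rng(0).normal(scale=1e-2, size=r.size)
sigma0_over_sigma1 = 0.53 + 0.50 * r + noise

# Design matrix [1, r]; least-squares solve for (a1, a2).
A = np.vstack([np.ones_like(r), r]).T
(a1, a2), *_ = np.linalg.lstsq(A, sigma0_over_sigma1, rcond=None)
```

The square-root form (\[eq:sqrt\]) is fitted in the same way, with the second column of the design matrix replaced by $r^{1/2}$.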
Although the optimum may in some cases take the same form with the continuous and discrete Green operators, there are other cases, such as AS and [$\widetilde{\textnormal{AS}}$]{}, for which the optimal forms look strongly dissimilar. Moreover, comparing columns 2 and 3 of the table for the contrast $\sigma_2/\sigma_1=2\times 10^3$ indicates that the optima found depend somewhat on the microstructure. The behaviors gathered in the table are supported by Figure \[fig:fits\], which presents plots of our 2D and 3D data and the corresponding fitting curves. The ‘primary’ and ‘dual’ schemes are addressed in separate plots. The signs indicated in the table cannot be read from the figures, where absolute values are displayed in logarithmic scale. In the 2D Boolean model, the data for the primary schemes and for their duals are numerically quite close in this mode of representation, so that the left and right plots superimpose almost exactly. Interestingly, the plots reveal the unique non-trivial behavior of the discrete schemes [$\widetilde{\textnormal{AL}}$]{} and [$\widetilde{\textnormal{AL}}_\textnormal{D}$]{} in the low-contrast region $0.1\leq\sigma_2/\sigma_1\leq 10$, where they behave as $\sqrt{\sigma_2/\sigma_1}$ even though the linear behavior reported in Table \[tab:ref\] takes place at higher contrasts. On the other hand, the continuous schemes AL and [$\textnormal{AL}_\textnormal{D}$]{} essentially behave as a square root for all contrasts (up to a small corrective term in 2D cases). As already noticed in the discussion of the table, the discrete 3D ‘accelerated’ scheme [$\widetilde{\textnormal{AS}}$]{}, with its intriguing asymptotic behavior (constant on both sides of the contrast range), stands as an outlier; no fit has been attempted for it.
We emphasize that in all cases examined with the ‘accelerated’ schemes, the optimal square-root estimate (\[eq:opteyre\]), exact for scheme AS, yields poor convergence when applied to [$\widetilde{\textnormal{AS}}$]{}. ![\[fig:fits\] 2D (top) and 3D (bottom) Boolean models. Absolute value of the normalized optimal conductivity $|\sigma^0/\sigma_1|$ vs. $\sigma_2/\sigma_1$ (left), and of the normalized optimal resistivity $|\rho^0/\rho_1|$ vs. $\rho_2/\rho_1$ (right), for the schemes indicated (panels fig6a–fig6d). Symbols: FFT results. Solid: numerical fits (see Table \[tab:ref\]).](fig6a.eps){width="6.7cm"} In 2D, the formula $\sigma^0=0.5(\sigma_1+\sigma_2)$ indifferently optimizes the discrete and continuous [$\widetilde{\textnormal{VAR}}$]{} and VAR schemes.
We observed similar convergence rates, up to a $3$% difference in the number of iterations, for these algorithms within the range $0.4\leq \sigma^0/(\sigma_1+\sigma_2)\leq 0.9$. Outside of this range, however, the convergence of the VAR scheme deteriorates. The weak sensitivity of this method with respect to the reference material $\sigma^0$ is supported by other studies [@Gelebart]. We also investigated the sensitivity to the choice of $\sigma^0$ in the ‘direct’ discrete schemes. In the 2D Boolean model and for the discrete scheme [$\widetilde{\textnormal{DS}}$]{}, the choice $\sigma^0=0.50(\sigma_1+\sigma_2)$ proves optimal, which matches the exact result for DS. However, with [$\widetilde{\textnormal{DS}}$]{}, nearly optimal 2D results are also obtained with choices $\sigma^0<(\sigma_1+\sigma_2)/2$. By contrast, in 3D, the number of iterations may be extremely sensitive to the choice of $\sigma^0$. Figure \[fig:convRef\] illustrates this: it represents the number of iterations versus $\sigma^0$ for [$\widetilde{\textnormal{DS}}$]{} in the 3D Boolean model, with contrast $\sigma_2/\sigma_1=10^{-5}$. No convergence is observed for $\sigma^0<0.5(\sigma_1+\sigma_2)$, and the optimal choice is about $\sigma^0\approx 0.53\,\sigma_1$. ![\[fig:convRef\] 3D Boolean model. Number of iterations vs. reference conductivity $\sigma^0$ for the direct scheme with discrete Green operator ([$\widetilde{\textnormal{DS}}$]{}). The contrast ratio is $\sigma_2/\sigma_1=10^{-5}$. The convergence criterion is $\eta\leq \epsilon=10^{-8}$. The value $\sigma^0=(\sigma_1+\sigma_2)/2$ is represented by the vertical dotted line. The solid line between data points is a guide to the eye. ](fig7.eps){width="10cm"} 2D and 3D Boolean media: convergence properties ----------------------------------------------- This Section examines convergence performance for the 2D and 3D Boolean models, expressed by the number of iterations $N$ as a function of the contrast ratio $r=\sigma_2/\sigma_1$.
Figure \[fig:conv\] illustrates the performance of the various FFT schemes considered, in calculations optimized by using the reference conductivity or resistivity discussed in the previous section. Schemes using $\mathbb{G}^0$ are represented by filled symbols and the $+$ marker, whereas discrete schemes using $\widetilde{\mathbb{G}}^0$ are represented by empty symbols and the $\times$ marker. [c]{} ![\[fig:conv\] 2D and 3D Boolean models (top and bottom). Number of iterations vs. contrast for various FFT algorithms. The convergence criterion is $\eta\leq \epsilon=10^{-8}$. Solid lines between data points are guides to the eye.](fig8a.eps "fig:"){width="11cm"}\ \ ![\[fig:conv\] 2D and 3D Boolean models (top and bottom). Number of iterations vs. contrast for various FFT algorithms. The convergence criterion is $\eta\leq \epsilon=10^{-8}$. Solid lines between data points are guides to the eye.](fig8b.eps "fig:"){width="11cm"} We recover the known results of linear scaling $N\sim r$ for DS and [$\textnormal{DS}_\textnormal{D}$]{}, and of square-root scaling $N\sim r^{1/2}$ for AS [@Eyre99]. Similar convergence rates are observed for AL, [$\textnormal{AL}_\textnormal{D}$]{} and VAR. As a rule, for a given FFT method, the ‘primary’ scheme always converges better than the ‘dual’ one when $r<1$, while the opposite holds when $r>1$. For instance, at very strong contrast ratio $r>10^7$, the convergence of the dual ‘augmented-Lagrangian’ scheme [$\textnormal{AL}_\textnormal{D}$]{} is much faster than that of the primary one, AL. As to the discrete schemes, they are much more efficient than their continuous counterparts: $N(r)$ is either a bounded or a slowly increasing function of $r$, which shows that using the discrete Green operator $\widetilde{\mathbb{G}}^0$ definitely provides a dramatic improvement of convergence.
By optimizing the choice between the ‘primary’ and ‘dual’ versions of the discrete algorithm at hand, depending on whether $r<1$ or $r>1$, one can even achieve *convergence in a finite number of iterations* in the infinite-contrast limit. Overall, the figure shows that among all schemes the discrete version [$\widetilde{\textnormal{AS}}$]{} of the AS is the best-converging one, in 2D and 3D. Optimizing the “direct" scheme with discrete Green operator {#sec:speed} =========================================================== In applications dealing with large microstructures (typically, multiscale materials), fast and memory-efficient implementations of FFT methods are required. One common way of reducing both CPU time and memory storage is to recompute the Green operator at each iteration. As long as the Green operator is easy to compute, this strategy is usually faster than storing a very large tensor field. It is used in the CraFT [@craft] and morph-hom [@morphhom] software packages. As an example, a low-cost implementation of DS is as follows: Initialization: set $A_i(\mathbf{x})\equiv 0$. 1. Set $A_i(\mathbf{x}):=[\sigma(\mathbf{x})-\sigma^0]A_i(\mathbf{x})$; 2. Set $A_i(\mathbf{q}):=\textnormal{FFT}(A_i(\mathbf{x});\mathbf{q})$; 3. Set $A_i(\mathbf{q}):=G_{ij}^0(\mathbf{q})A_j(\mathbf{q})$ for $\mathbf{q}\neq 0$ and $A_i(\mathbf{q}=0):=\overline{E}_i$; 4. Set $A_i(\mathbf{x}):=\textnormal{FFT}^{-1}(A_i(\mathbf{q});\mathbf{x})$; 5. Compute the convergence criterion; if convergence is reached, set $E_i=A_i$ and STOP; otherwise GOTO (i). In this algorithm FFTs are computed *in-place*. Step (iii) consists of a loop over all modes $\mathbf{q}$, with $G_{ij}^0(\mathbf{q})$ computed on the fly. In total, memory space is allocated for one vector field $\mathbf{A}$ plus the microstructure.
Vector $\mathbf{A}$ successively stores the polarization field in the real space \[step (i)\] and in the Fourier domain \[step (ii)\], and the electric field $\mathbf{E}$ in the Fourier domain \[step (iii)\] and in the real space \[step (iv)\]. The convergence criterion in step (v) must be modified, as checking for current conservation by computing criterion $\eta_1$ with in-place computations is now impractical. Monitoring the differences over two iterations of the first and second moments of the electric and current fields provides practical criteria that are less accurate, but easier to compute. On the other hand, the use of the discrete Green operator $\widetilde{\mathbb{G}}^0$ allows for a more efficient implementation of the DS. Consider the rewriting of Equation (\[eq:ls1imp\]) as $$\label{eq:dsimp} \phi^{k+1}=\frac{1}{\sigma^0\Delta}\textnormal{\textbf{div}}\left[(\sigma-\sigma^0)(\mathbf{\overline{E}}-\textnormal{\textbf{grad}}\phi^k)\right]$$ where $\phi^k$ is the periodic part of the potential associated with $\mathbf{E}^k$, so that $\mathbf{E}^k-\overline{\mathbf{E}}=-\textnormal{grad}\,\phi^k$, and where $1/\Delta$ denotes, symbolically, the inverse Laplacian. Equation (\[eq:dsimp\]) defines $\phi^{k+1}$ as a unique periodic function, up to an irrelevant constant. When $k\to\infty$, $\phi^k$ converges to the potential up to a linear correction, $\phi^\infty(\mathbf{x})=\Phi(\mathbf{x})-\overline{E}_ix_i$. The electric field and current follow from $\Phi$. In the discrete setting, equivalent to a resistor network, knowledge of a field on adjacent nodes or bonds is sufficient to compute its local divergence or gradient. Thus, the action of the $\textnormal{\textbf{div}}$ and $\textnormal{\textbf{grad}}$ operators in Equation (\[eq:dsimp\]) can be computed in the real space. This suggests the following alternative implementation of the discrete direct scheme ([$\widetilde{\textnormal{DS}}$]{}): Initialization: set $A(\mathbf{x})\equiv 0$. 1.
At each point $\mathbf{x}$, set $A(\mathbf{x}):=\textnormal{\textbf{div}}\,\mathbf{P}(\mathbf{x})$ where\ $P_i(\mathbf{x})=[\sigma(\mathbf{x})-\sigma^0][\overline{E}_i-\textnormal{\textbf{grad}}_i\, A(\mathbf{x})]$; compute $\eta_1$ as defined in ($\ref{eq:precision}$); 2. Set $A(\mathbf{q}):=\textnormal{FFT}\lbrace A(\mathbf{x});\mathbf{q}\rbrace$; 3. Set $A(\mathbf{q}):=-\frac{A(\mathbf{q})}{\sigma^0|k(\mathbf{q})|^2}$ for $\mathbf{q}\neq 0$ and $A(\mathbf{q}=0):=0$; 4. Set $A(\mathbf{x}):=\textnormal{FFT}^{-1}\lbrace A(\mathbf{q});\mathbf{x}\rbrace$; 5. If $\eta_1<\epsilon$, set $\mathbf{E}=\overline{\mathbf{E}}-\textnormal{\textbf{grad}}\, A$ and $\mathbf{J}=\sigma \mathbf{E}$, and STOP; otherwise GOTO (i). This algorithm exactly implements the [$\widetilde{\textnormal{DS}}$]{} scheme. However, only a scalar field, rather than a vector field, is now allocated in memory. Laplacian inversion is the sole computation performed in the Fourier domain; it takes the form of a division by $|k|^2$ in step (iii). The field $A$ successively stores the divergence of the polarization field $\textnormal{\textbf{div}}\,\mathbf{P}$ in the real space \[step (i)\] and in the Fourier domain \[step (ii)\] and, later on, the periodic part of the potential $\phi$ in the Fourier domain \[step (iii)\] and in the real space \[step (iv)\]. Multithreaded parallelization of step (i) requires some care, as this step is not pointwise-local (it involves neighboring grid values). Nevertheless, this new implementation reduces the number of FFTs per iteration from $4$ (in 2D) or $6$ (in 3D) down to $2$. Furthermore, the amount of storage is also reduced by a factor $d$ ($L^d$ floats instead of $dL^d$), if we neglect the storage required for the microstructure. The total CPU time spent using the ‘direct’, ‘augmented-Lagrangian’ and ‘accelerated’ schemes is plotted in Figure \[fig:cputime\] as a function of contrast, the scheme [$\widetilde{\textnormal{DS}}$]{} being implemented as outlined earlier.
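The potential-based iteration above can be sketched in 2D as follows (a minimal NumPy sketch for illustration: forward/backward pixel differences stand in for the discrete grad/div, consistent with the symbol $k(q)=e^{2\pi i q}-1$, and a simple stagnation test on $\phi$ replaces the $\eta_1$ criterion):

```python
import numpy as np

def discrete_ds_potential(sigma, Ebar, sigma0, tol=1e-12, max_iter=5000):
    """Potential-based 'direct' scheme with a discrete operator (2D sketch).

    sigma: (L, L) pixel conductivities; Ebar: applied mean field (Ex, Ey).
    Returns the current components (Jx, Jy) on the bond grid."""
    L = sigma.shape[0]
    q = np.fft.fftfreq(L)
    # Symbol of the forward difference: k(q) = exp(2j*pi*q) - 1.
    kx = (np.exp(2j * np.pi * q) - 1.0)[:, None]
    ky = (np.exp(2j * np.pi * q) - 1.0)[None, :]
    k2 = np.abs(kx) ** 2 + np.abs(ky) ** 2
    k2[0, 0] = 1.0  # avoid division by zero; the q=0 mode is set below
    phi = np.zeros_like(sigma, dtype=float)
    for _ in range(max_iter):
        # step (i): E = Ebar - grad(phi) (forward differences, periodic),
        # polarization P = (sigma - sigma0) E, then div(P) (backward
        # differences, the adjoint of the forward gradient).
        Ex = Ebar[0] - (np.roll(phi, -1, axis=0) - phi)
        Ey = Ebar[1] - (np.roll(phi, -1, axis=1) - phi)
        Px, Py = (sigma - sigma0) * Ex, (sigma - sigma0) * Ey
        divP = (Px - np.roll(Px, 1, axis=0)) + (Py - np.roll(Py, 1, axis=1))
        # steps (ii)-(iv): invert the discrete Laplacian in Fourier space.
        phiq = -np.fft.fft2(divP) / (sigma0 * k2)
        phiq[0, 0] = 0.0  # phi defined up to a constant; fix its mean to 0
        phi_new = np.real(np.fft.ifft2(phiq))
        converged = np.max(np.abs(phi_new - phi)) < tol
        phi = phi_new
        if converged:
            break
    Ex = Ebar[0] - (np.roll(phi, -1, axis=0) - phi)
    Ey = Ebar[1] - (np.roll(phi, -1, axis=1) - phi)
    return sigma * Ex, sigma * Ey
```

On a two-phase laminate this discretization reduces to resistors in series, so the converged current matches the harmonic mean of the conductivities, which provides a simple check of the sketch.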
These tests were carried out with convergence criterion $\eta<10^{-8}$, on the previously considered 3D Boolean microstructure discretized on a grid of size $L=256$ (16.8 million points). Computations were performed in double precision, on a $12$-core Intel Xeon machine, each core running at $2.90$ GHz with $5800$ bogomips and $15360$ KB of L2 cache. Best performance is achieved with the [$\widetilde{\textnormal{DS}}$]{} scheme when $\sigma_2/\sigma_1<1$, and with [$\widetilde{\textnormal{AS}}$]{} when $\sigma_2/\sigma_1>1$. Using these optimal schemes at infinite contrast, convergence is completed in $29$ s for insulating inclusions, and in $53$ s for infinitely conducting inclusions. This strategy has been implemented in the multithreaded Fortran code [morph-hom](morph-hom) developed at Mines ParisTech [@morphhom]. ![\[fig:cputime\] CPU time vs. contrast ratio $\sigma_2/\sigma_1$ for various FFT algorithms, on the 3D Boolean microstructure. The convergence criterion is $\eta\leq\epsilon$ with $\epsilon=10^{-8}$.](fig9.eps){width="11cm"} Conclusion {#sec:conclusion} ========== The use of a modified Green operator in FFT-based schemes has been advocated, in the context of the electrical response of heterogeneous conducting media. The modification consists in making the operator consistent with the underlying voxel grid, which requires only a very simple adaptation of previously existing algorithms but leads to two major improvements. First, employing the modified operator leads to much more accurate local fields, particularly in highly conducting or insulating inclusions and in the vicinity of interfaces. Second, the convergence rate is found to be much faster than with previous methods, in particular for highly contrasted media.
Quite remarkably, the ‘direct’ scheme, usually considered the worst-converging one, improves tremendously in terms of CPU time when the problem is formulated as iterations on the electrostatic potential rather than on the electric field. However, using the modified Green operator requires careful adjustment of the reference conductivity $\sigma^0$, since the latter has a strong influence on convergence properties. Approximate expressions for $\sigma^0$ have been derived numerically, and studied, for the Boolean models of microstructure considered in this work. It has already been noticed in the past that using ‘discrete’ versions of Green operators leads to promising methods [@Brisard10; @wiegmann2006; @willot08c]. By demonstrating that dramatic speed-ups follow, the present work strongly supports this view. Based on previous experience [@willot08c], it is expected that our conclusions carry over to continuum mechanics. The authors are grateful to H. Moulinec for kindly providing some field maps for comparison purposes, which was of great help in this study. The research leading to the results presented has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) for the Fuel Cells and Hydrogen Joint Technology Initiative under grant agreement 303429. [99]{} Moulinec H, Suquet P. A fast numerical method for computing the linear and non linear mechanical properties of composites. *Comptes rendus de l’Académie des Sciences, Série II* 1994; **318**(11):1417–1423. Willot F, Pellegrini Y-P, Idiart MI, Ponte Castañeda P. Effective-medium theory for infinite-contrast two-dimensionally periodic linear composites with strongly anisotropic matrix behavior: dilute limit and crossover behavior. *Physical Review B* 2008; **78**(10):104111. Lebensohn RA. N-site modeling of a 3D viscoplastic polycrystal using fast Fourier transform.
*Acta Materialia* 2001; **49**(14):2723–2737. Li J, Meng S, Tian X, Song F, Jiang C. A non-local fracture model for composite laminates and numerical simulations by using the FFT method. *Composites Part B: Engineering* 2011; **43**(3):961–971. Willot F, Gillibert L, Jeulin D. Microstructure-induced hotspots in the thermal and elastic responses of granular media. *International Journal of Solids and Structures* 2013; **50**(10):1699–1709. Willot F, Jeulin D. Elastic and electrical behavior of some random multiscale highly-contrasted composites. *International Journal for Multiscale Computational Engineering: special issue on Multiscale Modeling and Uncertainty Quantification of Heterogeneous Materials* 2011; **9**(3):305–326. Azzimonti D, Willot F, Jeulin D. Optical properties of deposit models for paints: full-fields FFT computations and representative volume element. *Journal of Modern Optics* 2013; **60**(7):519–528. Jean A, Willot F, Cantournet S, Forest S, Jeulin D. Large-scale computations of effective elastic properties of rubber with carbon black fillers. *International Journal for Multiscale Computational Engineering* 2011; **9**(3):271–303. Belkhabbaz A, Brenner R, Rupin N, Bacroix B, Fonseca J. Prediction of the overall behavior of a 3D microstructure of austenitic steel by using FFT numerical scheme. *Procedia Engineering* 2011; **10**:1883–1888. Prakash A, Lebensohn R. Simulation of micromechanical behavior of polycrystals: finite element versus fast Fourier transforms. *Modelling and Simulation in Materials Science and Engineering* 2009; **17**(6):064010. Rollett A, Lebensohn R, Groeber M, Choi Y, Li J, Rohrer GS. Stress hot spots in viscoplastic deformation of polycrystals. *Modelling and Simulation in Materials Science and Engineering* 2010; **18**(7):074005. Lebensohn R, Castelnau O, Brenner R, Gilormini P. Study of the antiplane deformation of linear 2-d polycrystals with different microstructures.
*International Journal of Solids and Structures* 2005; **42**(20):5441–5459. Dunant C, Bary B, Giorla A, Péniguel C, Sanahuja J, Toulemonde C, Tran A, Willot F, Yvonnet J. A critical comparison of several numerical methods for computing effective properties of highly heterogeneous materials. *Advances in Engineering Software* 2013; **58**:1–12. Brisard S, Dormieux L. FFT-based methods for the mechanics of composites: A general variational framework. *Computational Materials Science* 2010; **49**(3):663–671. Kanit T, Forest S, Galliet I, Mounoury V, Jeulin D. Determination of the size of the representative volume element for random composites: statistical and numerical approach. *International Journal of Solids and Structures* 2003; **40**(13–14):3647–3679. Milton GW. *The Theory of Composites*. Cambridge Univ. Press: Cambridge, 2002. Eyre D, Milton G. A fast numerical scheme for computing the response of composites using grid refinement. *The European Physical Journal Applied Physics* 1999; **6**(1):41–47. Michel JC, Moulinec H, Suquet P. A computational scheme for linear and non-linear composites with arbitrary phase contrast. *International Journal for Numerical Methods in Engineering* 2001; **52**(1-2):139–160. Monchiet V, Bonnet G. A polarization-based [FFT]{} iterative scheme for computing the effective properties of elastic composites with arbitrary contrast. *International Journal for Numerical Methods in Engineering* 2012; **89**(11):1410–1436. Moulinec H, Silva F. Comparison of three accelerated [FFT-based]{} schemes for computing the mechanical response of composite materials. *International Journal for Numerical Methods in Engineering* 2014; **97**(13):960–985. Brisard S, Dormieux L. Combining Galerkin approximation techniques with the principle of Hashin and Shtrikman to derive a new FFT-based numerical method for the homogenization of composites.
*Computer Methods in Applied Mechanics and Engineering* 2012; **217**(220):197–212. Zeman J, Vondřejc J, Novak J, Marek I. Accelerating a [FFT-based]{} solver for numerical homogenization of periodic media by conjugate gradients. *Journal of Computational Physics* 2010; **229**(21):8065–8071. Vondřejc J, Zeman J, Marek I. Analysis of a fast [Fourier]{} transform based method for modeling of heterogeneous materials. *Large-Scale Scientific Computing* 2012; **7116**:515–522. Yvonnet J. A fast method for solving microstructural problems defined by digital images: a space [Lippmann–Schwinger]{} scheme. *International Journal for Numerical Methods in Engineering* 2012; **92**(2):178–205. Moulinec H, Suquet P. A numerical method for computing the overall response of nonlinear composites with complex microstructure. *Computer Methods in Applied Mechanics and Engineering* 1998; **157**(1):69–94. M[ü]{}ller WH. Mathematical vs. experimental stress analysis of inhomogeneities in solids. *Journal de Physique* 1996; **6**(C1):139–148. Willot F, Pellegrini YP. Fast [Fourier]{} transform computations and build-up of plastic deformation in [2D]{}, elastic-perfectly plastic, pixelwise-disordered porous media. In: *Continuum Models and Discrete Systems CMDS 11*, Jeulin D, Forest S (eds). École des Mines: Paris, 2008; 443–449. Luck J-M. Conductivity of random resistor networks: an investigation of the accuracy of the effective-medium approximation. *Physical Review B* 1991; **43**(5):3933–3944. Craster R, Obnosov Y. Four-phase checkerboard composites. *SIAM Journal on Applied Mathematics* 2001; **61**(6):1839–1856. This oscillatory behavior was confirmed in elasticity by [H. Moulinec]{} using independent software (private communication, 2013). It is similar to that reported in [@willot08c]. Gélébart L, Mondon-Cancel R. Non-linear extension of FFT-based methods accelerated by conjugate gradients to evaluate the mechanical behavior of composite materials.
*Computational Materials Science* 2013; **77**:430–439. CraFT software. (Available from: `http://craft.lma.cnrs-mrs.fr`),\ $[$accessed on 30 June 2013$]$. morph-hom software. (Available from: `http://cmm.ensmp.fr/morphhom`),\ $[$accessed on 30 June 2013$]$. Wiegmann A, Zemitis A. [EJ-HEAT]{}: A fast explicit jump harmonic averaging solver for the effective heat conductivity of composite materials. *Berichte des Fraunhofer ITWM* 2006; **94**:1–21.
--- abstract: 'When the EPIC-pn instrument on board *XMM-Newton* is operated in Timing mode, high count rates ($>100$ cts s$^{-1}$) of bright sources may affect the calibration of the energy scale, distorting the observed spectral shape. Corrections for this effect are therefore crucial for the study of the spectral properties. Tests of these calibrations are best performed on sources whose spectra are characterised by a large number of discrete features. Therefore, in this work, we carried out a spectral analysis of the accreting Neutron Star GX 13+1, a dipping source with several narrow absorption lines and a broad emission line in its spectrum. We tested two different correction approaches on an *XMM-Newton* EPIC-pn observation taken in Timing mode: the standard Rate Dependent CTI (RDCTI or *epfast*) and the newer Rate Dependent Pulse Height Amplitude (RDPHA) corrections. We found that, in general, the two corrections marginally affect the properties of the overall broadband continuum, while hints of differences in the spectral shape of the broad emission line are seen. On the other hand, they are dramatically important for the centroid energies of the absorption lines. In particular, the RDPHA corrections provide a better estimate of the spectral properties of these features than the RDCTI corrections. Indeed, when the former method is applied, the discrete features observed in the data are physically more consistent with those already found in other *Chandra* and *XMM-Newton* observations of GX 13+1.'
author: - | \ $^1$ Università degli Studi di Cagliari, Dipartimento di Fisica, SP Monserrato-Sestu, KM 0.7, 09042 Monserrato, Italy\ $^2$ European Space Astronomy Center of ESA, Apartado 50727, 28080 Madrid, Spain\ $^3$ Dipartimento di Fisica e Chimica, Università di Palermo, via Archirafi 36 - 90123 Palermo, Italy bibliography: - 'biblio.bib' title: 'Testing Rate Dependent corrections on timing mode EPIC-pn spectra of the accreting Neutron Star GX 13+1' --- accretion, accretion discs – X-rays: binaries – X-Rays: galaxies – X-rays: individuals Introduction ============ The calibration of the detectors used in astronomy is fundamental to scientific studies. However, the processes underlying the calibrations are often complex, especially for instruments on board satellites. Indeed, their calibrations have to evolve quickly with time, as the space environment does not favour the stability of the instruments, for example because of impacts with micrometeorites, excessive irradiation by high-energy particles, or unexpected effects that were not observed on the ground. X-ray satellites are also strongly affected by fluorescence emission lines produced by material in the satellite environment when irradiated by X-ray photons. Furthermore, the calibration of the instruments is also carried out using an X-ray source on board the satellite, which produces only a few emission lines and does not allow a precise calibration of the whole energy range. Here we focus our attention on the calibration of the energy scale of the EPIC-pn instrument [@struder01] on board XMM-Newton [@jansen01], when this instrument is operated in Timing Mode and observing bright sources ($>100$ counts s$^{-1}$). It has been noticed that the large amount of photons, i.e. energy, that these bright sources deposit on the CCD may affect the calibration of the energy scale.
In particular, it distorts the observed spectral shape of the sources, altering the scientific results. In order to account for this effect, two approaches were developed: one is calibrated on the spectrum in Pulse Invariant (PI) space, assuming an astrophysical model of the spectrum in the 1.5-3 keV energy band, while the other acts on the Pulse Height Analyser information (PHA, Guainazzi 2013). The former, called Rate Dependent Charge Transfer Inefficiency (RDCTI or *epfast*; Guainazzi et al. 2008, XMM-CAL-SRN-248[^1]), was historically developed to correct for Charge Transfer Inefficiency (CTI) and X-ray loading (XRL), although their energy dependence is based on unverified assumptions (Guainazzi $\&$ Smith 2013, XMM-CAL-SRN-0302[^2]). In the second approach, called Rate Dependent Pulse Height Amplitude (RDPHA), the energy scale is instead calibrated by fitting the peaks in derivative PHA spectra corresponding to the Si-K ($\sim1.7$ keV) and Au-M ($\sim2.3$ keV) edges of the instrumental response, where the gradient of the effective area is the largest. It also includes an empirical calibration at the energies of the transitions of the $K_{\alpha}$ Iron line (6.4-7.0 keV; Guainazzi, 2014, XMM-CAL-SRN-0312[^3]). The RDPHA approach avoids any assumption on the model dependency in the spectral range around the edges, and is calibrated in PHA space before events are corrected for gain and CTI. In order to test the goodness of these corrections, bright sources with a number of features (in absorption or emission) are needed. To investigate these two approaches, we selected the bright source GX 13+1 as a test study. GX 13+1 is a low-mass X-ray binary (LMXB) hosting a well-known persistent accreting neutron star (NS) at a distance of $7\pm1$ kpc. Its companion is an evolved K IV mass-donor giant star [@bandyopadhyay99].
In particular, GX 13+1 is a dipping source [@corbet10; @diaz12] which has probably shown periodic dips, due to the orbital motion, during the last decades (@iaria14). It has been suggested that the dips are most likely produced by optically thick material at the outer edge of the disc, created by the collision between the accretion flow from the companion star and the outer disc (e.g. @white82), or by outflows in the outer disc. The orbital period of GX 13+1 is 24.52 days, making it the LMXB with the second-longest orbital period, after GRS 1915+105. The continuum emission of GX 13+1 can be described with the combination of a multicolour blackbody plus a cold, optically thick comptonisation component (e.g. @homan04 [@diaz12]). Because the comptonisation is optically thick, it can also be approximated by a blackbody component, simplifying the fit, given that the properties of the comptonising component (electron temperature and optical depth) are usually poorly constrained [@ueda01; @sidoli02; @ueda04]. The spectra also show several discrete spectral features [@diaz12; @dai14]. These are associated with a warm absorbing medium close to the source, produced by outflows from the outer regions of the accretion disc. The warm absorber is present throughout the orbital period and might become denser during the dip episodes, suggesting a cylindrical distribution around the source. It is also more opaque, and probably clumpy, close to the plane of the disc. In addition, the absorption lines associated with the warm absorber are produced by highly ionised species and indicate bulk outflow velocities of $\sim 400$ km s$^{-1}$ [@ueda04; @madej13; @dai14]. Furthermore, a broad emission component at the energies of the K-shell transitions of Fe XXI-XXVI was found and interpreted as reflection of hard photons off the surface of the accretion disc [@diaz12].
It has been suggested that the broadening of the line is produced not by relativistic effects but by Compton broadening in the corona, although this interpretation has been questioned [@cackett13]. GX 13+1 is therefore an ideal candidate to test the RDCTI and RDPHA calibration approaches, as it shows a simple continuum characterised by a number of narrow absorption features and a broad emission line. Data Reduction {#data_reduction} ============== We carried out a spectral analysis of one *XMM-Newton* observation (Obs.ID. 0122340901), taken in Timing mode, of the accreting NS GX 13+1. Data were reduced using the latest calibrations (as of April 25th, 2014) and Science Analysis Software (SAS) v. 13.5.0. At high count rates, X-ray loading and CTI effects have to be taken into account, as they affect the spectral shape and, in particular, produce a shift in the energy of the spectral features. In order to optimize the data reduction, we generated two EPIC-pn event files, one for each of the RDCTI and RDPHA corrections: for the RDCTI corrections, we made use of the standard *epfast* corrections[^4], adopting the following command: “[epproc runepreject=yes withxrlcorrection=yes runepfast=yes]{}”; for the RDPHA corrections, we reprocessed the data using the command: “[epproc runepreject=yes withxrlcorrection=yes runepfast=no withrdpha=yes]{}”. The option “[runepfast=no withrdpha=yes]{}” has to be applied explicitly in order to avoid the combined use of both corrections. We note that the adopted RDCTI (*epfast*) task is the latest released version and its effect on the data differs from that of versions of *epfast* older than May 23$^{rd}$, 2012. Indeed, the older versions combined XRL and rate-dependent CTI corrections in a single correction, while they are now applied separately, each with its appropriate calibration. Hence the calibrations due to different *epfast* versions might provide different results.
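The two reprocessing runs above differ only in their rate-dependent flags. As a trivial sketch of how one might drive both reductions from a script (the `epproc` arguments are exactly those quoted in the text; the helper function and its name are ours), the key point being that the RDPHA run must explicitly disable *epfast*:

```python
# Assemble the two epproc command lines quoted in the text.
# epproc_cmd is a hypothetical helper, not a SAS tool.
common = ["epproc", "runepreject=yes", "withxrlcorrection=yes"]

def epproc_cmd(correction):
    """Return the epproc argument list for 'rdcti' or 'rdpha'."""
    if correction == "rdcti":
        return common + ["runepfast=yes"]
    if correction == "rdpha":
        # runepfast=no must be explicit, to avoid applying both corrections
        return common + ["runepfast=no", "withrdpha=yes"]
    raise ValueError(correction)

rdcti_cmd = " ".join(epproc_cmd("rdcti"))
rdpha_cmd = " ".join(epproc_cmd("rdpha"))
# each argument list could then be passed to subprocess.run()
# inside a properly initialised SAS environment
```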
For each EPIC-pn event file, we extracted the spectra from events with [pattern]{}$\leq 4$ (which allows for single and double pixel events) and we set ‘[flag]{}=0’, retaining only events optimally calibrated for spectral analysis. Source and background spectra were then extracted by selecting the ranges RAWX=\[31:41\] and RAWX=\[3:5\], respectively. We generated the auxiliary files using *arfgen*, setting “detmaptype=psf” and “psfmodel=EXTENDED”, with the calibration file “XRT3$\_$XPSF$\_$0016.CCF” (Guainazzi et al., 2014; XMM-CAL-SRN-0313[^5]). EPIC-pn spectra were subsequently rebinned with an oversampling factor of 3 using *specgroup*. Finally, all RGS spectra were extracted using the standard *rgsproc* task, filtered for periods of high background and grouped with a minimum of 25 counts per noticed channel. The RGS and EPIC-pn spectra were then fitted simultaneously using [xspec]{} V. 12.8.1 [@arnaud96], in the ranges 0.6-2.0 keV and 2.0-10.0 keV, respectively. We also compared the EPIC-pn data to a *Chandra* observation, with Obs.ID 11814. In particular, we analysed the data of the High Energy Grating (HEG) instrument onboard *Chandra*. Since the data reduction and extraction process is described in @dai14, we refer the reader to that paper for more details. Pile-up ------- ![EPIC-pn spectra where no (*black*), one (*red*), three (*green*) and five (*blue*) columns were excised in order to test for pile-up effects. We also show the residuals obtained adopting the model [phabs\*edge\*(nthcomp + gauss)]{} and fitting the spectra simultaneously. We found consistency between the spectra obtained when removing three and five columns of the CCD.[]{data-label="comparison_pile_up"}](tot_1col_3col_5col_fit_simultaneo_estratti_correttamente.ps){height="9.0cm" width="7.0cm"} The source has a mean count rate in the EPIC-pn detector of $\sim700$ cts s$^{-1}$, close to the nominal threshold for pile-up effects, which is $>800$ cts s$^{-1}$ in Timing mode.
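The selection cuts described above (PATTERN $\leq 4$, FLAG $=0$, and the RAWX source/background ranges) amount to simple boolean masks on the event list. A schematic illustration on a synthetic event array follows; the field names mimic the EPIC-pn columns, the event values are random, and this is in no way a substitute for the SAS extraction tasks:

```python
import numpy as np

# Synthetic EPIC-pn-like event list (illustrative only; real work uses SAS).
rng = np.random.default_rng(0)
n = 10000
events = np.zeros(n, dtype=[("RAWX", "i4"), ("PATTERN", "i4"), ("FLAG", "i4")])
events["RAWX"] = rng.integers(0, 64, n)
events["PATTERN"] = rng.integers(0, 13, n)
events["FLAG"] = rng.integers(0, 2, n)

# Quality cuts: singles and doubles only, optimally calibrated events.
good = (events["PATTERN"] <= 4) & (events["FLAG"] == 0)
# Spatial selections in detector columns, as in the text.
src = good & (events["RAWX"] >= 31) & (events["RAWX"] <= 41)  # source
bkg = good & (events["RAWX"] >= 3) & (events["RAWX"] <= 5)    # background
n_src, n_bkg = int(src.sum()), int(bkg.sum())
```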
In order to test for the presence of pile-up effects in the EPIC data, we initially made use of the SAS tool *epatplot*. It indicates that the data are affected by pile-up, which can be largely corrected by excising the three brightest central columns (RAWX=\[35:37\]). However, the best test is to compare the residuals of the spectra with none, one, three and five columns excised, fitted with the same spectral model. Hence we selected the range 2.4-10 keV of the four EPIC-pn spectra (RDPHA corrected) and we fitted them simultaneously with an absorbed [nthcomp]{} model (@zdiarski96), leaving only the relative normalizations of the spectra free to vary. We also added an absorption edge and Gaussian models to take into account some narrow absorption lines and a broad emission line (see the next sections for more details). We obtained a reduced $\chi^2$ of 1.9 ($\chi^2=874.93$ for 451 degrees of freedom). We show the best fit and its residuals in Figure \[comparison\_pile\_up\]. Inspecting the residuals, the spectra extracted after excising three and five columns are consistent, confirming that removing three CCD columns corrects for pile-up effects. Therefore, in the spectral analyses of the next sections we will only consider the spectrum extracted after removing the three central columns. We also highlight that the same findings hold for the RDCTI corrected spectra. Spectral analysis {#analysis} ================= We analysed spectra extracted from RDPHA and RDCTI corrected event files and we adopted the same continuum for both of them. The neutral absorption is described with the [phabs]{} model, using the abundances of @andres89. The continuum is instead based on a blackbody component ([bbody]{} in [xspec]{}) plus a comptonisation component ([nthcomp]{} in [xspec]{}; @zdiarski96).
We note that leaving the seed photon temperature (kT$_{seed}$) free to vary makes this parameter totally unconstrained; therefore we linked it to the blackbody temperature, assuming that the seed photons for comptonisation are provided by the inner regions of the accretion disc. ![ Comparison of the unfolded ($Ef(E)$) EPIC-pn spectra in the 5-10 keV energy band, applying RDPHA (*black* spectrum) and RDCTI (*red* spectrum) corrections. Both datasets are fitted with their corresponding best-fit models, [edge$\cdot$phabs$\cdot$(bbody+nthcomp+6 gaussian)]{}, shown in Tables \[table\_continuum\_gauss\_rdpha\] and  \[gauss\_rdpha\] (the emission feature is taken into account). Below 5 keV, the spectra are generally consistent. Above this threshold, instead, clear discrepancies in the features and the continua are seen (see text).[]{data-label="comparison_rdpha_rdcti"}](rdpha_vs_rdcti.eps){height="9.0cm" width="7.0cm"} We introduced a multiplicative constant model in each fit in order to take into account the different calibrations of the RGS and EPIC instruments. We fixed the constant for the EPIC-pn spectrum to 1 and allowed the RGS constants to vary. In general, this parameter does not deviate by more than 10$\%$ from the EPIC-pn constant.
$^1$ EPIC-pn spectrum corrected for pile-up removing the three central, brightest columns; $^a$ Column density; $^b$ Blackbody temperature and seed photon temperature of the [nthcomp]{}; $^c$ Normalization of the [bbody]{} component in units of $L_{39}/D_{10kpc}^2$, where $L_{39}$ is the luminosity in units of 10$^{39}$ erg s$^{-1}$ and $D_{10kpc}$ is the distance in units of 10 kpc; $^d$ Electron temperature of the corona; $^e$ Photon index; $^f$ Seed photon temperature (usually equal to kT$_{bb}$); $^g$ Energy of the relativistic line; $^h$ Power-law dependence of the emissivity; $^i$ Inner radius in terms of the gravitational radius R$_g$; $^l$ Inclination angle of the binary system; $^m$ Normalization of the model in [xspec]{} units; $^n$ Low-energy cut-off of the reflection component; $^o$ High-energy cut-off of the reflection component, set to 2.7 times the electron temperature of the comptonisation model; $^p$ Ratio of the Iron and hydrogen abundances; $^q$ Ionisation parameter; $^r$ Reduced $\chi^2$ of the best fit including the absorption/emission features shown in Table \[gauss\_rdpha\].\ $*$: the value pegged at its upper/lower limit; $^{**}$ Two alternative models (with and without the [bbody]{} component) are shown for this spectrum. $^1$ EPIC-pn spectrum corrected for pile-up removing the three central, brightest columns; $^a$ Energy of the feature; $^b$ Line width in keV; $^c$ Normalization of the feature ([xspec]{} units); $^d$ Column density of the warm absorber; $^e$ Ionisation parameter of the warm absorber; $^f$ Blueshift velocity of the warm absorber. $*$ This line can be associated with a residual of calibration around the instrumental Au-M edge.\ Broadband continuum and narrow absorption features {#absfeatures} -------------------------------------------------- In Table \[table\_continuum\_gauss\_rdpha\] (columns $\#3$ and 4), we show the best-fit parameters obtained with the continuum model described in the previous section.
RDPHA and RDCTI corrections give similar continuum parameters: both spectra are well described by a [bbody]{} with a temperature of $\sim 0.55$ keV, and the comptonisation component shows a marked roll-over within the *XMM-Newton* bandpass. Indeed, the electron temperature (kT$_e$) is consistent with $\sim1.2$ keV, while the power-law photon index ($\Gamma$) lies at 1.0, although it pegged at its lower limit. Not surprisingly, several features (in absorption and emission) are clearly observed in the EPIC-pn spectra, and we initially model all of them with Gaussian components, following @diaz12 and @dai14. For the absorption lines, we fixed the dispersion width ($\sigma_v$) at zero, as they are narrower than the detector resolution. The best-fit parameters of all the features are provided in Table \[gauss\_rdpha\], and all are statistically significant for the corresponding spectrum. An absorption line at $\sim2.2-2.3$ keV is found in both spectra, which we identify as a residual of calibration around the instrumental edge of Au-M at 2.3 keV. We also highlight that the residuals associated with these features are clearly stronger in the RDCTI data (more than 10$\sigma$) than in the RDPHA spectrum (less than 5$\sigma$), suggesting that the RDPHA calibrations provide a better correction at low energies. Other marginally statistically acceptable features may also be found in the RGS spectra, but they are not taken into account as they are beyond the scope of this paper. In addition, an absorption edge at $\sim8-9$ keV (associated with highly ionised species of Iron, Fe XXI - XXV) is observed, and it was therefore included in the fit. The absorption lines of the RDPHA corrected spectrum, ordered by energy as shown in Table \[gauss\_rdpha\], can be associated with Fe XXV $K_{\alpha}$ (6.70 keV), Fe XXVI $K_{\alpha}$ (6.99 keV), Fe XXV $K_{\beta}$ (7.86 keV) and Fe XXVI K$_\beta$ (8.19 keV), respectively.
Notably, the centroid energies of these features are only marginally affected by the continuum and are compatible with zero velocity shift, although the uncertainty on them is of the order of $\sim900$ km s$^{-1}$. In addition, we note marginal hints of the $K_{\alpha}$ lines of S XVI (2.64 keV), Ar XVIII (3.30 keV) and Ca XX (4.10 keV), but they are not statistically significant. On the other hand, the RDCTI corrected spectrum shows a number of features whose energies are not consistent with those found in the RDPHA corrected spectrum. Indeed, we detected absorption lines at 6.55 keV, 6.83 keV, 7.69 keV and 7.99 keV (see Table \[gauss\_rdpha\]). In Figure \[comparison\_rdpha\_rdcti\], we show the significant discrepancies between the centroid energies of the absorption lines in the RDCTI and RDPHA corrected spectra. Hence, the lines found in the RDCTI data could be either different line species or miscalibrations of the energy scale by one of the two corrections. We add that the line energies in the RDCTI spectrum do not appear to be consistent with known rest-frame absorption lines: we might claim that the highest-energy line (7.99 keV) can be associated with the same Fe XXVI K$_{\beta}$ line of the RDPHA corrected spectrum, but with a high redshift ($\sim 9500$ km s$^{-1}$). Applying a similar argument to the lines at 6.55 keV and 6.83 keV, and associating them with Fe XXV and Fe XXVI, we would expect a redshift of $\sim6000-7000$ km s$^{-1}$. Likewise, the line at 7.69 keV, if associated with Fe XXV $K_{\beta}$, would be redshifted by $\sim 4000$ km s$^{-1}$. Alternatively, the lines at 6.83 keV and 7.69 keV might be associated with blueshifted lines of Fe XXV and Fe XXVI with velocities of $\sim6000$ and $\sim30000$ km s$^{-1}$, respectively, but these are larger than the velocities commonly observed in dippers. However, these claims are only qualitative and might be misleading.
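The velocity estimates quoted above follow from the non-relativistic Doppler relation $v \approx c\,(E_{\rm rest}-E_{\rm obs})/E_{\rm rest}$. A quick numerical check of two of the tentative RDCTI identifications discussed in the text (the rest energies, $\approx8.25$ keV for Fe XXVI K$\beta$ and 6.70 keV for Fe XXV K$\alpha$, are taken from standard line lists; the function name is ours):

```python
C_KMS = 299792.458  # speed of light in km/s

def velocity_shift(e_rest_kev, e_obs_kev):
    """Non-relativistic Doppler shift; positive values mean redshift."""
    return C_KMS * (e_rest_kev - e_obs_kev) / e_rest_kev

# RDCTI line at 7.99 keV read as Fe XXVI K-beta (rest ~8.25 keV):
v_kbeta = velocity_shift(8.25, 7.99)   # redshift of roughly 9.5e3 km/s
# RDCTI line at 6.55 keV read as Fe XXV K-alpha (rest 6.70 keV):
v_kalpha = velocity_shift(6.70, 6.55)  # redshift of roughly 6.7e3 km/s
```

Both values are far larger than the $\sim400$ km s$^{-1}$ outflow velocities typical of the GX 13+1 warm absorber, which is what makes the RDCTI identifications suspect.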
Hence, in order to better constrain the properties of the warm medium most likely responsible for the narrow absorption lines, we substituted the Gaussian models with an [xstar]{} grid [@kallman01]. The selected [xstar]{} grid depends on the column density of the warm absorber, its ionisation level and its redshift/blueshift velocity. The dispersion velocity is not a variable parameter of the grid and is fixed at zero. In Table \[gauss\_rdpha\], we show that the warm absorber column density for the RDPHA spectrum is about an order of magnitude higher than that for the RDCTI spectrum ($60\times10^{22}$ cm$^{-2}$ vs $4\times10^{22}$ cm$^{-2}$). The ionisation is also clearly higher for the RDPHA spectrum ($\sim4.2$ vs $\sim3.4$). This latter result is consistent with the energies of the Iron edges found in the two spectra. Indeed, the edge in the RDCTI corrected spectrum is at 8.4 keV, whose energy may be associated with a lower level of Iron ionisation (Fe XXIII-XXIV) compared to that in the RDPHA corrected one ($\sim8.8$ keV, Fe XXV). In addition, we note that if the RDCTI edge is nevertheless associated with the Fe XXVI or XXV K-edge, we should consider a high redshift ($>15000$ km s$^{-1}$). On the other hand, this redshift is not consistent with the blueshift of $2600$ km s$^{-1}$ found with the [xstar]{} model (Table \[gauss\_rdpha\]), unless we hypothesize that the edge is produced near the compact object and is affected by relativistic redshift. We finally mention that [xstar]{} leaves stronger residuals around 6.5-7.2 keV in the RDCTI data, suggesting that, in this spectrum, either the lines are broad (as the dispersion velocity is fixed at zero) or [xstar]{} is not able to model all the lines present in the spectrum simultaneously.
The [xstar]{} grid applied to the RDPHA spectrum provides instead a blueshift of $\sim 300$ km s$^{-1}$, which better matches the blueshift of the absorption lines found in other *XMM-Newton* and *Chandra* observations of GX 13+1 (we will discuss a direct comparison with the *Chandra* data in Section \[xmm-chandra\]). Broad emission line {#emissionline} ------------------- The comparisons in the previous section can be further extended by investigating the properties of the broad Iron emission line. We initially model it with a Gaussian component in which the $\sigma_v$ parameter is left free to vary, as the line is clearly broad. However, we limited its range to 0.7 keV, in order to avoid unphysical values. ![Residuals of the best-fit continuum model and absorption features, in the 2.0-10 keV energy range, for the RDPHA (*black*) and RDCTI (*red*) spectra. As expected, an emission line is clearly found at $\sim6-7$ keV and it displays a different shape for the two calibrations (see text).[]{data-label="RDPHA_RDCTI_diskline_zero"}](diskline_zero.eps){height="8.5cm" width="6.5cm"} In Figure \[RDPHA\_RDCTI\_diskline\_zero\], we show the residuals of the best-fit continuum model and absorption features: a clear emission feature is seen, as expected, at $\sim6-7$ keV, and we note that its spectral shape differs between the two calibrations. This seems to suggest that the RDCTI correction produces a marginally less physical energy of the Iron line (6.3 keV), which is expected to be observed between 6.4 and 7.0 keV (depending on the Iron ionisation level), although the error bars make the feature consistent with 6.4 keV, i.e. neutral Fe. We found that its width is $\sim0.3$ keV. On the other hand, the RDPHA corrections provide a line energy of 6.6 keV, which is well consistent with the energies found in @diaz12 and @dai14, and can be associated with Fe XXV. However, its width is larger (0.7 keV) than that of the RDCTI corrected spectrum.
We note that its intensity is also about a factor of 4 stronger than in the RDCTI data. The discrepancy between the two spectra in the observed properties of the broad Gaussian profile could be due directly to the different calibrations of the energy scale, to differences in the underlying modelling of the continuum, or to a mismodelling of the line itself. However, as we have already shown that the continuum parameters are insensitive to the detailed calibration of the energy scale, and the width of the line also suggests relativistic smearing, we substituted the Gaussian model with a [diskline]{} model [@fabian89] in order to describe a relativistic reflection line. This model depends on six parameters: the energy of the line, the inner and outer disc radii ($R_{in}$ and $R_{out}$) in units of the gravitational radius, the inclination angle of the system, the power-law index (*Betor10*) of the radial dependence of the emissivity, and finally the normalization. We do not allow the line energy to go outside the range 6.4-7.0 keV, as this range represents the lower and upper energy limits of the K$_{\alpha}$ Iron emission lines for all possible ionisation states. In Fig. \[RDPHA\_continuum+gauss-reflionx\], we show the unfolded spectra corresponding to the best-fit models. Although the shape of the lines appears different, in general the continuum parameters are consistent, within the errors, with those inferred adopting the simple Gaussian line (see Table \[table\_continuum\_gauss\_rdpha\]). The emission line energy in both the RDCTI and RDPHA data is consistent with 6.6-6.7 keV (Fe XXV), while the emissivity index is constrained between -2.4 and -2.6. In addition, although the inferred inner disc radius ($R_{in}$) is poorly constrained, our results point towards $R_{in}\sim$ 6 R$_{g}$ for both the RDPHA and RDCTI corrected spectra, while the outer radius has been fixed at $10^4$ R$_g$ as it was unconstrained.
The inclination angle is instead markedly different for the two corrections: a large inclination angle ($>50$) is found for the RDPHA corrected spectrum, while for the RDCTI corrected spectrum the inclination angle is consistent with a value lower than $\sim30$. The latter result conflicts with the presence of dips in the lightcurves of GX 13+1, which are usually observed in sources with high inclination angles ($>65$). Reflection component -------------------- However, the [diskline]{} model only accounts for the shape of a single emission line and does not describe the whole reflection emission; this may lead to a wrong estimate of the inclination angle, for example. Hence, we refined our previous results by substituting the [diskline]{} model with a full broadband self-consistent reflection model, the [reflionx]{} model [@ross05]. It takes into account the reflection continuum and a set of discrete features. The reflection component should be affected by Doppler and relativistic effects in the inner regions close to the compact object, which are not included in the model. Hence, we multiplied the reflection component by the relativistic kernel [rdblur]{}, which depends on the inner disc radius, the emissivity index ($Betor10$), the inclination angle, and the outer disc radius. The latter has again been fixed to $10^4$ R$_g$, as it turned out to be unconstrained. In addition, we also introduced a [highecut]{} component, which allowed us to physically constrain the highest energy range of the reflected emission. We fixed the low energy cut-off at 0.1 keV, while the folding energy cut-off was tied to the electron temperature of the comptonising component as $2.7\times$kT$_e$ since, for saturated comptonisation, a Wien bump is formed at $\sim 3$ times the electron temperature. Finally, we linked the photon index of the [nthcomp]{} model to that of the reflection component.
In Figure \[RDPHA\_continuum+gauss\] and Table \[table\_continuum\_gauss\_rdpha\] (columns $\#7$ and 8), we show the best fit parameters for the RDPHA and RDCTI corrected spectra. The addition of the broad-band reflection component modifies the spectral description of the continuum, i.e. the parameters of the blackbody and Comptonized components. Not surprisingly, the photon index of the [nthcomp]{} model pegs at (or very close to) 1.4, since the [reflionx]{} model is not calculated for $\Gamma$ below 1.4. However, in the previous section we found that the photon index of the [nthcomp]{} would prefer to settle close to 1. It is therefore important to mention that the use of [reflionx]{} might force the fit to converge towards spectral parameters of the broadband continuum that are affected by this assumption. We note that, in the case of the RDPHA corrected spectrum, the [nthcomp]{} component dominates the whole bandpass, with a blackbody emission stronger than the reflection one at energies below $\sim4$ keV. However, for this best fit, the ionisation parameter of the [reflionx]{} is dramatically low ($\sim15$ erg cm s$^{-1}$) and, moreover, the normalization of the soft component is largely unconstrained. This may suggest that the properties of the spectrum do not allow us to describe the overall continuum with both the [reflionx]{} and the [bbody]{} components. Therefore, in Table \[table\_continuum\_gauss\_rdpha\], we also show an alternative fit without the [bbody]{} component, whose spectral parameters appear physically more plausible. Indeed, the ionisation parameter now increases to $\sim1100$ erg cm s$^{-1}$, which is more acceptable than the previous $\sim15$ erg cm s$^{-1}$. Hereafter, we consider this fit as the reference for the RDPHA data. On the other hand, the RDCTI corrected spectrum does not suffer from this degeneracy in the spectral parameters.
The [bbody]{} component is well constrained, with the reflection component dominating over the blackbody emission and being predominant at energies below 1.5 keV. The ionisation parameter is consistent with $\sim200$ erg cm s$^{-1}$. The discrepancies in the reflection properties inferred with [reflionx]{} between the RDPHA and RDCTI corrected spectra again suggest that the shape of the iron emission line may be different in the two spectra, as found with a simple [gaussian]{} or [diskline]{} model. We further note that the column density of the RDCTI spectrum has risen to $3.4\times10^{22}$ cm$^{-2}$, while that of the RDPHA spectrum (in the fit without a blackbody) settles at $\sim3.7\times10^{22}$ cm$^{-2}$. In addition, the iron abundance relative to the solar value is high ($\sim3$, the upper limit set in order to avoid unphysical values). The latter result is also found in the RDPHA data, although the error bars on the parameters of both spectra are large. Finally, the [RDBLUR]{} parameters that account for the relativistic smearing of the reflection component give similar best fit values for the RDPHA and RDCTI data. In particular, the inner disc radius is consistent with $\sim 6-15$ R$_g$ and the inclination angle is larger than $\sim50$, which would be consistent with the high inclination expected from the presence of dips. The spectral parameters obtained from the fits of the broad component clearly suggest that the RDCTI and RDPHA corrections provide different spectral shapes. However, they do not allow us to unambiguously discriminate between the two corrections; we can only note that different spectral results are obtained in the two cases.
Comparing *XMM-Newton* and *Chandra* data {#xmm-chandra} ----------------------------------------- We found that the RDPHA and RDCTI corrections provide similar broadband continua, and the study of the broad iron emission line suggests that the inclination angle is compatible with that inferred from the existence of dips in the lightcurve. However, the most important discrepancy between the two corrections (beyond the ionisation level of the reflection component) turns out to be the centroid energies of the absorption features, which differ significantly between the two corrective approaches. The RDPHA corrected spectrum shows absorption features whose energies are largely consistent with those found in the *Chandra* observation presented in @dai14. To better assess this issue, we fitted simultaneously the HEG and RDPHA spectra, adopting a continuum model consisting of an absorbed [nthcomp]{} and a [diskline]{}. We added Gaussian models, with dispersion velocity fixed at 0, in order to fit the lines at 6.70 keV, 6.98 keV and 8.2 keV (the line at $\sim 7.8$ keV is not present in the *Chandra* spectrum), plus an absorption edge at $\sim8.8$ keV. These features are the focus of the spectral comparison, as they are common to the *Chandra* and EPIC-pn RDPHA corrected spectra. Furthermore, in the HEG data we also considered the significant Ca XX K$_{\alpha}$, Si XIV K$_{\alpha}$ and S XVI K$_{\alpha}$ lines at 4.10 keV, $\sim 2.0$ keV and $\sim 2.6$ keV, respectively [@dai14], which are not (or, at most, only marginally) seen in the EPIC-pn data. We left the continuum free to vary between the spectra because the HEG data can show spectral variability and are also affected by pile-up. Since the latter is not taken into account, the continuum spectral shape can differ from that of the RDPHA spectrum.
This should not be an issue for the absorption features, as we checked that they are largely independent of the continuum model and are usually found also at different luminosity levels [@ueda04]. The line energies are tied between the spectra during the fit, while their normalisations are left free. In Figure \[RDPHA\_HEG\], we show the best fit obtained with this model. The best fit energies of the common lines are 6.702 ($\pm0.008$) keV, 6.98 ($\pm0.01$) keV and 8.19 ($\pm0.09$) keV, and the edge is at 8.84 ($\pm0.08$) keV, all widely consistent with those found in Section \[absfeatures\]. No significant residuals remain, except for a HEG point at $\sim 7$ keV, which presents a residual at $\sim4\sigma$, and the EPIC-pn absorption lines at $\sim7.9$ keV and $\sim2.3$ keV. We suppose that the former is produced either by a broadening of the line in the *Chandra* data or by a blueshift of the line that cannot be detected in the EPIC-pn spectrum; on the other hand, the $\sim7.9$ keV line is the Fe XXV K$_{\beta}$, which is not observed in the HEG data, while the 2.3 keV line is likely a residual in the calibration of the EPIC instrument around the Au edge. This may suggest that the RDPHA calibrations can be further improved. In any case, as the other absorption features are fully consistent between the HEG and EPIC-pn data, we strongly suggest that the RDPHA calibrations should be preferred to the RDCTI ones. Discussion ========== ![*Top-panel*: Unfolded $Ef(E)$ *Chandra*-HEG (*black*) and EPIC-pn RDPHA corrected spectra (*red*), adopting an absorbed [edge$\cdot$(nthcomp+diskline+gauss)]{} (see text). The positions of the absorption features which are common to the two spectra are widely consistent within the errors, when fitted simultaneously. *Bottom panel*: Ratio of the data and the best fit model.
For display purposes, the HEG spectrum has been rebinned to a minimum significance of 30$\sigma$.[]{data-label="RDPHA_HEG"}](rdpha_vs_chandre_pl_eeuf.eps){height="8.5cm" width="7.5cm"} In this work we have analysed a single *XMM-Newton* observation, taken in *Timing* mode, of the LMXB dipping source GX 13+1. The main goal of this paper is to study the impact of the different calibrations (RDPHA and RDCTI) of the energy scale in EPIC-pn Timing mode on the spectral modelling of this source. The RDPHA is an empirical correction of the energy scale that does not assume any specific energy dependence, unlike the RDCTI correction. We have shown that RDPHA and RDCTI corrected data provide different spectral results, and we tentatively try to understand which correction offers the most plausible and physically acceptable scenario. We have shown that, in order to avoid spurious effects on the spectral analysis due to pile-up, it is necessary to remove the three brightest, central columns of the EPIC-pn CCD. We found that the broadband continuum can be well described, for both types of spectral data, by the combination of a soft blackbody and an optically thick, cold comptonising component. This result is consistent with that found for other accreting NSs and for other *XMM-Newton* and *Chandra* spectra of GX 13+1 (e.g. @diaz12 [@dai14] and references therein). The soft component provides an inner temperature consistent with $\sim 0.6$ keV for the two corrections. From its normalization, we infer an emission radius of $\sim40$ km, which is too large to be associated with the NS surface and more likely points to the accretion disc. The parameters of the comptonising corona, possibly located close to the NS, are instead consistent with a cold ($\sim1.2$ keV) electron population, where we assumed that the seed photons are provided by the inner disc regions for both spectra.
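The emission radius quoted above follows from equating the blackbody luminosity implied by the fit normalization with $L = 4\pi R^2 \sigma T^4$ and solving for $R$. A minimal sketch of this inversion (the luminosity value below is illustrative, not the fitted one):

```python
import math

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
KEV_TO_K = 1.16045e7   # 1 keV expressed in Kelvin

def blackbody_radius_km(lum_erg_s, kT_keV):
    """Radius of a sphere radiating lum_erg_s as a blackbody at kT_keV,
    obtained by solving L = 4*pi*R^2*sigma*T^4 for R."""
    T = kT_keV * KEV_TO_K
    r_cm = math.sqrt(lum_erg_s / (4.0 * math.pi * SIGMA_SB * T**4))
    return r_cm / 1.0e5  # cm -> km

# Example with an assumed luminosity of 1e37 erg/s and the ~0.6 keV fit temperature
r = blackbody_radius_km(1.0e37, 0.6)
```

Note that the radius scales as $T^{-2}$ at fixed luminosity, so the cool $\sim0.6$ keV temperature is what drives the inferred radius well above the NS size.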
Notably, several features are clearly observed in the RDPHA and RDCTI corrected spectra, and it has been suggested that they are produced by a warm absorber during both the dips and the persistent epochs of GX 13+1. The warm absorber may be created by outflows from the outer regions of the disc, where the thermal pressure is stronger than the gravitational pull (e.g. @diaz12 [@dai14]). The narrow absorption features are the best gauge to estimate the accuracy of the energy scale yielded by the two aforementioned calibration methods. We compared our absorption lines, modelled with Gaussians, to the absorption lines observed in the *Chandra*/HEG data presented in @dai14. In that work, the authors found lines at 2.6234, 4.118, 6.706, 6.978 and 8.273 keV, associated with S XVI, Ca XX, Fe XXV, Fe XXVI K$_{\alpha}$ and Fe XXVI K$_{\beta}$, respectively, with possible blueshifts between $\sim200$ and $1000$ km s$^{-1}$. We detected the same lines (although the first two are not statistically significant) only in the RDPHA data, with the addition of a line at 7.82 keV, more likely associated with Fe XXV K$_{\beta}$. Unfortunately, the error bars inferred with a simple Gaussian model are large and prevent us from constraining the possible blueshift of most of these features. This can be done only for the Fe XXVI K$_{\alpha}$ line, which is shifted with respect to the rest frame energy (6.9662 keV), suggesting a blueshift of $\sim 1500\pm 300$ km s$^{-1}$. On the other hand, the lines in the RDCTI data are systematically different and shifted from those found in the RDPHA corrected spectrum. These absorption features, if associated with the lines observed in the *Chandra* data, would have implausibly large redshifts ($>5000$ km s$^{-1}$; see Section \[absfeatures\]) for a warm absorber located at the outer edge of the disc. We note, however, that these associations might be misleading.
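The velocities quoted above follow from the first-order Doppler relation $v \simeq c\,(E_{obs}-E_{rest})/E_{rest}$ applied to the fitted centroid and the rest frame energy; a minimal sketch (first order only, which is adequate at these velocities):

```python
C_KM_S = 2.99792458e5  # speed of light, km/s

def line_shift_velocity(e_obs_keV, e_rest_keV):
    """First-order Doppler velocity implied by a shifted line centroid.
    Positive values correspond to a blueshift (motion towards the observer)."""
    return C_KM_S * (e_obs_keV - e_rest_keV) / e_rest_keV
```

With this convention, a centroid 0.5% above the rest energy corresponds to roughly 1500 km s$^{-1}$, which illustrates the sub-percent energy-scale accuracy required to resolve such shifts.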
Therefore, we tentatively tried to better constrain the properties of the warm medium that produces these absorption lines by adopting an [xstar]{} grid. For the RDPHA spectrum, the column density of the ionised medium is $\sim6\times10^{23}$ cm$^{-2}$, with an ionisation level of Log($\xi$) $\sim 4.2$, and the medium is most likely ejected from the system with a velocity of $\sim 300$ km s$^{-1}$. On the other hand, for the RDCTI data, the column density and ionisation are an order of magnitude lower, while the blueshift velocity is a factor of 8-9 higher than in the RDPHA data. The blueshift velocity in the RDPHA corrected spectrum is, however, highly consistent with those found in previous works [@ueda04; @madej13; @diaz12; @dai14]. In addition, the [xstar]{} grid is not completely able to model the lines at 6-7 keV in the RDCTI spectrum, suggesting either a broadening or a general mis-modelling of the lines. This result further supports the conclusion that the RDPHA corrections are generally more reliable than the RDCTI ones. We then found that the RDCTI and RDPHA corrections provide similar results for the continuum and the broad emission line, if the latter is described by a model more complex than a simple Gaussian. In fact, we found residuals that suggest a relativistic broadening of the line. For this reason, we initially introduced the [diskline]{} model, which indicates that the inclination angle is small ($<30$) for the RDCTI data and not consistent with the dip episodes of GX 13+1, which instead point towards a large inclination angle ($60-85$). This finding is, however, not confirmed if we model the disc reflection with the self-consistent disc reflection code [reflionx]{} [@ross05] modified by a relativistic kernel. Indeed, for both spectra, we found that the inclination angle can be larger than [50]{}, confirming that a single Gaussian or a [diskline]{} model is too simple for the quality of the data.
However, we highlight that the results obtained with [reflionx]{} can be affected by a limitation of the model, as [reflionx]{} is not defined for photon indices lower than 1.4, while we found that both types of spectra can be fitted by an [nthcomp]{} with $\Gamma\sim1.0$. In addition, [reflionx]{} does not take into account the self-ionisation of the accretion disc, introducing a possible source of uncertainty in the description of the continuum. With that in mind, for the fit of the RDPHA spectrum, we observed a degeneracy in the spectral parameters of the [bbody]{} and [reflionx]{} components; in other words, we could find two best fits that are statistically comparable. In the first one, the normalization of the soft component is poorly constrained and the ionisation parameter of the [reflionx]{} is close to its lower limit ($\sim 15$ erg cm s$^{-1}$); in the other, the soft component can be removed from the fit and the ionisation parameter converges towards more physical values ($\sim 1000$ erg cm s$^{-1}$). This spectral degeneracy warns that the data are possibly not adequate to constrain the properties of the blackbody emission when using [reflionx]{}, because of the complexity of the adopted model. We highlight that in the first fit the reflection systematically needs to converge towards a strong iron emission line in low ionisation conditions. This is in contrast with the existence of H-like and He-like iron absorption features, which can exist only for ionisation parameters higher than 100 erg cm s$^{-1}$ [@kallman04]. This condition is instead satisfied in the second best fit, where the ionisation parameter is higher than 1000 erg cm s$^{-1}$. The discrepancy between the latter ionisation value and that found with the [xstar]{} grid most likely suggests that the density of the material is different in the warm absorber and in the disc.
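The density argument above relies on the standard definition of the ionisation parameter, $\xi = L/(n r^2)$ (erg cm s$^{-1}$): the same illuminating luminosity can yield the same $\xi$ for very different density-distance combinations. A minimal sketch (values illustrative):

```python
def ionisation_parameter(lum_erg_s, n_cm3, r_cm):
    """Ionisation parameter xi = L / (n * r^2), in erg cm s^-1."""
    return lum_erg_s / (n_cm3 * r_cm**2)

# A tenuous warm absorber far from the source and a denser medium closer in
# can share the same xi for the same illuminating luminosity:
xi_absorber = ionisation_parameter(1.0e37, 1.0e12, 1.0e10)
xi_disc     = ionisation_parameter(1.0e37, 1.0e14, 1.0e9)
```

Because $n$ and $r$ enter only through the product $n r^2$, a measured $\xi$ alone cannot separate the two, which is why different $\xi$ values for the disc and the warm absorber are mutually consistent.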
For the RDCTI data, instead, the ionisation parameter of the reflection component is lower ($\sim200$ erg cm s$^{-1}$) than in the second best fit of the RDPHA data and only marginally supports the value found with the [xstar]{} grid ($>1000$ erg cm s$^{-1}$). Such a low ionisation level of the reflection component may conflict with the energies of the absorption lines that, according to [xstar]{}, would be expected at energies higher than those found in the RDCTI corrected spectrum. However, as the ionisation parameter depends on the density, we note again that the absorption features are more likely produced in a warm medium, which should have a density ($10^{22}$ cm$^{-2}$) lower than that in the accretion disc, where the reflection is produced. As the ionisation parameter also depends on the distance, we cannot exclude an effect of the latter on the ionisation estimates. We finally conclude that the reflection component cannot easily be used to infer the goodness of one correction in comparison with the other; it only clearly suggests that the spectra, i.e. the shapes of the broad emission line, obtained with the two calibrations are different. However, the quality of the two corrections can be discriminated by studying the narrow absorption lines. Indeed, adopting simple continuum models and Gaussian models for the narrow absorption features, the RDPHA corrected spectrum of GX 13+1 provides more physical spectral parameters; in particular, the absorption lines are much more consistent with those also inferred by *Chandra*. In addition, the residuals at the energy of the instrumental Au edge (2.2-2.3 keV) are smaller in the RDPHA data, favouring a better calibration of that energy range with the RDPHA corrections.
For these reasons, although the EPIC-pn calibrations can be further improved, we propose that the RDPHA corrections should generally be preferred to the standard RDCTI (or *epfast*) corrections, especially for spectra with a large number of absorption features. Hence, supported also by our results, RDPHA will be the default calibration of the next SAS version (SAS v.14), superseding RDCTI (i.e. *epfast*), which was the default from SAS v.9 to SAS v.13.5. Our conclusions will be further tested on other accreting sources in order to place the RDPHA corrections on a more solid basis. Conclusions =========== The accuracy of the energy scale calibration in *XMM-Newton* data taken in Timing mode is extremely important when observing bright sources, where rate-dependent effects have to be taken into account. For this reason, two calibration approaches have been developed: the new RDPHA and the standard RDCTI (*epfast*) corrections. The aim of this work was to analyse and test the two calibrations on one EPIC-pn *XMM-Newton* observation, taken in Timing mode, of the persistent accreting NS GX 13+1. This source is a dipper which has shown periodic dips, and its spectra are characterised by a number of absorption features. It also shows an emission line associated with the Fe XXV or Fe XXVI K$_{\alpha}$ transition. Hence, GX 13+1 is a suitable source to assess the goodness of the two calibrations, thanks to its several absorption features, the emission line and a simple continuum. We found that: - the continuum can be well described, in both spectra, by a blackbody of $\sim0.5$ keV and a high energy comptonisation with an electron temperature of $\sim1.1-1.2$ keV and a photon index of $\sim 1$; - however, the two calibrations provide different results for the spectral features, as the absorption lines observed in the RDCTI and RDPHA spectra differ significantly.
- We suggest that the lines in the RDCTI spectrum could be associated with known atomic transitions only by assuming implausibly high inflow and outflow velocities for this source. On the other hand, the absorption lines in the RDPHA spectrum are more consistent with those already found in previous *Chandra* and *XMM-Newton* observations, and we readily associated them with Fe XXV $K_{\alpha}$ (6.70 keV), Fe XXVI $K_{\alpha}$ (6.99 keV), Fe XXV $K_{\beta}$ (7.86 keV) and Fe XXVI K$_\beta$ (8.19 keV). - We also observed marginal differences in the shape of the broad emission line, whether fitted with a [diskline]{} or with a [reflionx]{} model. However, this component does not allow us to clearly assess the validity of the two corrections because of the poor quality of the constraints on the best fit parameters. - Finally, although we note that improvements can be made, especially at the energy of the instrumental Au line ($\sim2.3$ keV), our results suggest that the RDPHA calibrations are physically more reliable than the RDCTI ones and, for this reason, they should be implemented as the default as of SAS v14 (and in the associated data reduction pipeline). Acknowledgements {#acknowledgements .unnumbered} ================ A. R. gratefully acknowledges the Sardinia Regional Government for financial support (P. O. R. Sardegna F.S.E. Operational Programme of the Autonomous Region of Sardinia, European Social Fund 2007-2013 - Axis IV Human Resources, Objective l.3, Line of Activity l.3.1). This work was partially supported by the Regione Autonoma della Sardegna through POR-FSE Sardegna 2007-2013, L.R. 7/2007, Progetti di Ricerca di Base e Orientata, Project N. CRP-60529, and by the INAF/PRIN 2012-6. The High-Energy Astrophysics Group of Palermo acknowledges support from the Fondo Finalizzato alla Ricerca (FFR) 2012/13, project N. 2012-ATE-0390, funded by the University of Palermo.
[^1]: http://xmm2.esac.esa.int/docs/documents/CAL-SRN-0248-1-0.ps.gz [^2]: http://xmm2.esac.esa.int/docs/documents/CAL-SRN-0302-1-5.pdf [^3]: http://xmm2.esac.esa.int/docs/documents/CAL-SRN-0312-1-4.pdf [^4]: http://xmm.esac.esa.int/sas/current/documentation/threads/EPIC\_reprocessing.shtml [^5]: http://xmm2.esac.esa.int/docs/documents/CAL-SRN-0313-1-3.pdf
Representing the SLD Collaboration\ \ We present preliminary results on two analyses performed by the SLD Collaboration using inclusive $B$ decays: a search for CP violation and a search for the $b\rightarrow sg$ transition. [Talk given at the International Europhysics Conference on High Energy Physics\ Jerusalem, Israel, August 19–26, 1997]{} Search for CP violation in inclusive $B$ decays =============================================== Because they involve large branching fractions and sizable CP asymmetries, (semi-)inclusive $B$ decays have been proposed[@dunietz] as a means of searching for CP violation and extracting measurements of CKM parameters. The totally inclusive asymmetry provides a measurement of the CP observable $a = {\cal I}m{\Gamma_{12}\over M_{12}}$, which is the focus of this analysis. Its time dependence is[@dunietz]: [$${\cal{A}}(t) = {{\Gamma(B^0(t)\rightarrow all)-\Gamma(\bar{B^0}(t)\rightarrow all)}\over {\Gamma(B^0(t)\rightarrow all)+\Gamma(\bar{B^0}(t)\rightarrow all)}} = {a \left({{\Delta m\ \tau_B}\over 2} \sin\Delta m\ t - \sin^2{{\Delta m\ t}\over 2}\right)}\cdot$$ ]{} A non-zero value of $a$ implies CP violation. Due to the large value of $\Delta m_s$, this analysis is only sensitive to asymmetries in $B_d$ decays, for which $a_d$ is expected to be $\approx 10^{-3}$ in the Standard Model.\ ![image](figure1.ps)\ [ Asymmetry as a function of decay length.]{} $B$-decay vertices are reconstructed using a topological technique[@jackson]. From the 1993-96 data sample ($200k\ Z^0$’s), about $11k$ neutral and $19k$ charged vertices are selected, with a $B_d$ content of 50% and 35%, respectively. The crucial part of the analysis is tagging the flavor of the $b$ quark at production. This is done mainly using the left-right forward-backward asymmetry (given by the electron beam longitudinal polarization and the thrust axis polar angle) and the opposite hemisphere momentum-weighted jet charge.
These two tags, which have an efficiency of 100%, are complemented by additional information from the opposite hemisphere when it is available (vertex charge, sign of a high-$p_T$ lepton, charge sum for kaons from a $B$ decay). The overall $b$-flavor tag purity is estimated to be 84%. The measured asymmetry is shown in Fig. 1 as a function of decay length. A binned $\chi^2$ fit is performed and a value of $a_d = -0.04\pm 0.12(stat)\pm 0.05(sys)$ is obtained. This gives a 95% C.L. limit of $-0.29 < a_d < 0.22$. Search for enhanced $b\rightarrow sg$ in inclusive $B$ decays ============================================================= It was suggested recently that a branching ratio $\sim 10\%$ for the $b\rightarrow sg$ transition (0.2% in the Standard Model) could resolve a variety of $B$ decay puzzles (e.g., the $b$ semileptonic branching ratio, the number of $c$ quarks produced per $B$ decay, etc.)[@kagan]. The search strategy consists of looking for an excess in kaon production at high $p_T$, where the signal-to-background ratio is expected to be of the order of 1:1 (for a 10% branching ratio).

[ Number of kaons with $p_T > 1.8$ GeV/c]{}

  ------- -- --------------- --------------- -- -------------- --------------
             One-vertex (signal)                Two-vertex (control)
             All             No Lepton          All            No Lepton
  Data       35.0            30.0               30.0           23.0
  M.C.       22.1            14.1               27.5           20.3
  Diff.      $12.9\pm 5.9$   $15.9\pm 5.5$      $2.5\pm 5.5$   $2.7\pm 5.5$
  ------- -- --------------- --------------- -- -------------- --------------

At SLD, we select $B$ vertices that contain an identified $K^\pm$ using the Čerenkov Ring Imaging Detector. We measure the kaon transverse momentum w.r.t. the direction defined by the small SLC interaction point and the well-reconstructed $B$ decay vertex. The signal is enhanced by: $i)$ separating the data into a one-vertex sample (signal) and a two-vertex sample (control) according to the probability for all $B$-decay tracks to originate from a single point; $ii)$ rejecting decays that contain an identified lepton.
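The excesses in Table 1 are the Data − M.C. differences, with statistical errors consistent with Poisson fluctuations on the observed counts, $\sigma \approx \sqrt{N_{data}}$ (an assumed error model, which reproduces the quoted uncertainties). A minimal sketch:

```python
import math

def excess(n_data, n_mc):
    """Data - Monte Carlo excess, with a Poisson error on the observed counts
    (assumed error model; e.g. sqrt(35.0) ~ 5.9 matches the quoted value)."""
    return n_data - n_mc, math.sqrt(n_data)

# Signal (one-vertex) sample without lepton rejection: 35.0 observed, 22.1 expected
diff, err = excess(35.0, 22.1)
```

The same recipe applied to the control (two-vertex) sample gives an excess consistent with zero, which is what makes the signal-sample excess meaningful.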
The efficiency for isolating true one-vertex decays (e.g., charmonium) is estimated at 80%, whereas only 45% of standard $b\rightarrow c$ transitions satisfy the one-vertex requirement. We compare the $K^\pm$ transverse momentum spectrum observed in the data to that in the Monte Carlo and look for an excess above 1.8 GeV/c. Our modeling of the $p_T$ resolution is cross-checked using leptons, whose spectrum is well known. The results are summarized in Table 1. We observe in the 1993-95 data ($150k\ Z^0$’s) an excess of $12.9\pm 5.9(stat)\pm 3.1(syst)$ decays, without lepton rejection. The systematic error is dominated by the uncertainty in the modeling of the $D^0$ momentum spectrum and its two-body decay branching fractions. The result for the case with lepton rejection is also given in Table 1 (statistical error only). This analysis will be significantly improved with the addition of an anticipated large data sample in the near future. [99]{} M. Beneke, G. Buchalla, and I. Dunietz, Phys. Lett. B393 (1997) 132. D.J. Jackson, Nucl. Inst. and Meth. A388 (1997) 247. A. Kagan and J. Rathsman, hep-ph/9701300. [**The SLD Collaboration**]{} 0.5cm *Adelphi University, Garden City, New York 11530 INFN Sezione di Bologna, I-40126 Bologna, Italy Boston University, Boston, Massachusetts 02215 Brunel University, Uxbridge, Middlesex UB8 3PH, United Kingdom University of California at Santa Barbara, Santa Barbara, California 93106 University of California at Santa Cruz, Santa Cruz, California 95064 University of Cincinnati, Cincinnati, Ohio 45221 Colorado State University, Fort Collins, Colorado 80523 University of Colorado, Boulder, Colorado 80309 Columbia University, New York, New York 10027 INFN Sezione di Ferrara and Università di Ferrara, I-44100 Ferrara, Italy INFN Lab.
Nazionali di Frascati, I-00044 Frascati, Italy University of Illinois, Urbana, Illinois 61801 Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720 Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 University of Massachusetts, Amherst, Massachusetts 01003 University of Mississippi, University, Mississippi 38677 Nagoya University, Chikusa-ku, Nagoya 464 Japan University of Oregon, Eugene, Oregon 97403 INFN Sezione di Padova and Università di Padova, I-35100 Padova, Italy INFN Sezione di Perugia and Università di Perugia, I-06100 Perugia, Italy INFN Sezione di Pisa and Università di Pisa, I-56100 Pisa, Italy Rutgers University, Piscataway, New Jersey 08855 Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX United Kingdom Sogang University, Seoul, Korea Soongsil University, Seoul, Korea 156-743 Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309 University of Tennessee, Knoxville, Tennessee 37996 Tohoku University, Sendai 980 Japan Vanderbilt University, Nashville, Tennessee 37235 University of Washington, Seattle, Washington 98195 University of Wisconsin, Madison, Wisconsin 53706 Yale University, New Haven, Connecticut 06511 Deceased Also at the Università di Genova Also at the Università di Perugia*
--- abstract: | This paper concerns the numerical solution of the finite-horizon Optimal Investment problem with transaction costs under Potential Utility. The problem is initially posed in terms of an evolutive HJB equation with gradient constraints. In [@DaiYi], the problem is reformulated as a non-linear parabolic double obstacle problem posed in one spatial variable and defined in an unbounded domain, where several explicit properties and formulas are obtained. The restatement of the problem in polar coordinates allows us to pose the problem in one spatial variable in a finite domain, avoiding some of the technical difficulties of the numerical solution of the previous statement of the problem. If high precision is required, the spectral numerical method proposed becomes more efficient than simpler methods such as finite differences. **Keywords:** [Optimal Investment, Potential Utility, Transaction costs, Spectral method.]{} author: - 'Javier de Frutos[^1] and Víctor Gatón[^2]' title: 'A spectral method for an Optimal Investment problem with Transaction Costs under Potential Utility[^3]' --- Introduction ============ This paper concerns the numerical solution of the finite horizon optimal investment problem with transaction costs under Potential Utility. Let us consider an investor whose wealth can be invested in a risky stock and in a riskless bank account. We suppose that the investor is risk averse with constant relative risk aversion (CRRA). In [@Merton], Merton showed that, in the absence of transaction costs, the problem can be explicitly solved: the optimal strategy consists in keeping a fixed proportion between the money invested in the risky asset and the bank account. When transaction costs are considered, the Merton strategy is unfeasible because it requires a continuous portfolio rebalancing with unbounded costs. Proportional transaction costs were first introduced in [@Magill].
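Merton's fixed-proportion rule mentioned above has a closed form in the standard frictionless CRRA setup: the optimal fraction of wealth held in the risky asset is $\pi^* = (\mu - r)/(\gamma\sigma^2)$, with drift $\mu$, riskless rate $r$, volatility $\sigma$ and constant relative risk aversion $\gamma$. A minimal sketch (parameter values illustrative):

```python
def merton_fraction(mu, r, sigma, gamma):
    """Merton's optimal constant fraction of wealth in the risky asset
    for CRRA utility with no transaction costs: (mu - r) / (gamma * sigma^2)."""
    return (mu - r) / (gamma * sigma**2)

# E.g. 6% excess return, 20% volatility, relative risk aversion 2
frac = merton_fraction(mu=0.08, r=0.02, sigma=0.20, gamma=2.0)  # approx. 0.75
```

With proportional transaction costs, this constant ratio cannot be maintained continuously; the optimal policy instead keeps the portfolio inside a no-trade region around the Merton proportion, which is the free-boundary structure the rest of the paper addresses.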
More recently, in [@DaiYi], the problem was reformulated as a non-linear parabolic double obstacle problem posed in one spatial variable and defined in an unbounded domain. Several explicit properties and formulae were obtained in [@DaiYi], although explicit formulae for the solution are not available. The problem was numerically solved in [@Arregui], where the authors employ a characteristics method with a projected relaxation scheme. The scheme produces satisfactory results in good agreement with those in [@DaiYi]. For financial problems such as finding investment strategies or pricing derivative contracts, there is in general no known closed form solution, and several numerical methods have been employed. Without aiming to be exhaustive, Monte-Carlo based methods ([@Duan2], [@Stentoft]), piecewise linear interpolations ([@Benameur]), lattice methods ([@Lyuu], [@Ritchen]), finite elements ([@Achdou]) and spectral methods ([@Frutos]) are some of them. A general review of financial problems and models, numerical techniques and software tools can be found in [@DuanHandbook]. The objective of this paper is to construct a spectral method specifically adapted to the Optimal Investment problem with Potential Utility when proportional transaction costs are present. As is well known, spectral methods [@Canuto] are a class of spatial discretizations for partial differential equations which offer fast convergence in the case of smooth solutions. They are not yet widely used in numerical finance because it is usually believed that the lack of smoothness present in most interesting problems makes spectral methods uncompetitive. However, several papers have used spectral methods for problems in finance with good results. For instance, in [@Chiarella] a Fourier-Hermite procedure for the valuation of American options has been presented.
In [@Frutos] a spectral method based on Laguerre polynomials was employed for the numerical valuation of bonds with embedded options. A Fourier spectral method to compute fast and accurate prices of American options written on assets following GARCH models was presented in [@Breton]. In [@Breton2] the authors use an adaptive method with Chebyshev polynomials coupled with a dynamic programming procedure for contracts with early exercise features. In [@Oosterlee] a very efficient procedure for Asian options defined on arithmetic averages was proposed. In all cases, the spectral-based methods have proved competitive with other alternatives in terms of precision versus the computing time needed to obtain the numerical solution. In the present paper, we restate the problem using polar coordinates. This allows us to consider a double parabolic obstacle problem in one spatial-like variable defined in a bounded domain. Furthermore, this formulation avoids the emergence of nonlinear terms, simplifying the numerical treatment. We present a Chebyshev spectral approach based on adaptive meshes to locate the optimal frontiers. Although some of the numerical difficulties that appear with the parabolic double obstacle problem are avoided with our approach, we still have to deal with the so-called Gibbs effect, which comes from the fact that the objective function is continuous but not differentiable at maturity. We show that this issue can be circumvented by using a time-adapted spatial mesh. We show that our approach is efficient by comparing it with a standard finite difference scheme. The outline of the paper is as follows. In Section \[Ch3TOIPrecon\], a description of the Optimal Investment problem as it can be found in [@Davis] or [@Shreve] is presented. In Section \[Ch3DPR\], the problem is reformulated as a parabolic double obstacle problem as was done in [@DaiYi]. Afterwards, we propose an equivalent formulation of the problem employing polar coordinates.
Section \[Ch3NM\] is devoted to a mesh-adapted Chebyshev-collocation method which solves the problem of Section \[Ch3DPR\]. In Section \[Ch3numresults\] we perform the numerical analysis of the method. Section \[Ch3Conclus\] presents some conclusions and future research. The Optimal Investment Problem {#Ch3TOIPrecon} ============================== We consider an optimal investment problem with transaction costs, [@Davis], [@Shreve]. Let $\left(\Omega,\mathscr{F},P\right)$ be a filtered probability space. Let us consider an investor who holds amounts $X(t)$ and $Y(t)$ in a bank and a stock account, respectively. The dynamics of the processes are $$\label{Ch3ecudinacc} \begin{aligned} dX(t) &=rX(t)dt-(1+\lambda)dL(t)+(1-\mu)dM(t), && X(t_0)=x, \\ dY(t) &=\alpha Y(t) dt + \sigma Y(t) dz_t+dL(t)-dM(t), && Y(t_0)=y, \end{aligned}$$ where $r$ denotes the constant risk-free rate, $\alpha$ is the constant expected rate of return of the stock, $\sigma>0$ is the constant volatility of the stock and $z_t$ is a standard Brownian motion such that $\mathscr{F}^{z}_t\subseteq\mathscr{F}$ where $\mathscr{F}^{z}_t$ is the natural filtration induced by $z_t$. We suppose that $L(t)$ and $M(t)$ are adapted, right-continuous, nonnegative and nondecreasing processes representing the cumulative monetary values of the stock purchased or sold, respectively, and that $\lambda\ge 0$ and $0\le\mu<1$ represent the constant proportional transaction costs incurred on the purchase or sale of the stock. In this paper we assume that $\lambda+\mu>0$. The financial meaning of equation (\[Ch3ecudinacc\]) is natural. Over time, the rate of change of the amount of money invested in the risky asset, represented by the stochastic process $Y(t)$, evolves according to a standard geometric Brownian motion modified by the difference between the amount of money invested in buying stock, $dL(t)$, and the amount of money obtained by selling stock, $dM(t)$.
At the same time, the value of the bank account, $X(t)$, is instantaneously increased by the difference $-(1+\lambda)dL(t)+(1-\mu)dM(t)$, which represents the net flow of money resulting from stock negotiations, including the transaction costs. Processes $L(t)$ and $M(t)$ can be financially understood as a historical record of the total purchases and sales of stock of the investor. The net wealth is the money the investor would have if he closed his positions. It can be written as $$\label{Ch3ecunewwealth_long} W(t)= X(t)+(1-\mu)Y(t), \quad \text{if} \ Y(t)\geq0,$$ if the investor is long in the stock or $$\label{Ch3ecunewwealth_short} W(t)= X(t)+(1+\lambda)Y(t), \quad \text{if} \ Y(t)<0,$$ in case the investor is short in the stock. Let $U(w)$ be a utility function, that is, a continuous, strictly increasing, concave function. The optimal value function is given by: $$\label{Ch3defvarpphi} \varphi(x,y,t)=\sup_{(L,M)\in A_t(x,y)}E\left[\left.U\left(W(T)\right)\right|(X(t),Y(t))=(x,y)\right],$$ for all $ (x,y,t)\in\mathbb{S}\times[0,T]$, where $A_t(x,y)$ is the set of admissible strategies, defined as the set of processes $(L,M)$ such that if $(X(t),Y(t))=(x,y)\in\mathbb{S}$ then $(X(\tau),Y(\tau))\in\mathbb{S}$ for $t\le \tau\le T$ and where $\mathbb{S}$ is the Solvency Region, $$\label{Ch3defsolvreg} \mathbb{S}=\left\{(x,y)\in \mathbb{R}^2 \mid x+(1+\lambda)y >0, x+(1-\mu)y>0\right\}.$$ In this paper, we assume that $U(w)$ is a potential function (constant relative risk aversion utility function) of the form $$U\left(w\right)=\frac{w^{\gamma}}{\gamma},$$ for some constant $\gamma$, $0<\gamma<1$.
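For concreteness, the potential utility and the net wealth (\[Ch3ecunewwealth_long\])-(\[Ch3ecunewwealth_short\]) can be transcribed in a minimal Python sketch; the function names are ours, and the default parameter values $\gamma=0\ldotp5$, $\lambda=0\ldotp08$, $\mu=0\ldotp02$ are those used in the numerical experiments of Section \[Ch3numresults\]:

```python
def utility(w, gamma=0.5):
    """CRRA (potential) utility U(w) = w**gamma / gamma, with 0 < gamma < 1."""
    return w ** gamma / gamma

def net_wealth(x, y, lam=0.08, mu=0.02):
    """Net wealth after closing the position: selling the stock pays the
    proportional cost mu; covering a short position pays the extra cost lam."""
    return x + (1.0 - mu) * y if y >= 0 else x + (1.0 + lam) * y
```

The terminal condition (\[Chp3HJBecu2\]) is then simply `utility(net_wealth(x, y))`.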
The optimal value function (\[Ch3defvarpphi\]), see [@Shreve], is the viscosity solution in $\mathbb{S}\times [0,T]$ of $$\label{Chp3HJBecu1} \min\left\{-\varphi_t-{\mathcal{L}}\varphi,\ -(1-\mu)\varphi_x+\varphi_y, \ (1+\lambda)\varphi_x-\varphi_y \right\}=0,$$ subject to: $$\label{Chp3HJBecu2} \varphi(x,y,T)= \left\{ \begin{aligned} & U(x+(1-\mu)y), \quad \text{if} \ y>0, \\ & U(x+(1+\lambda)y), \quad \text{if} \ y\leq0, \end{aligned} \right.$$ where $$\label{Chp3HJBecu3} \mathcal{L}\varphi=\frac{1}{2}\sigma^2y^2\varphi_{yy}+\alpha y \varphi_y+rx\varphi_x.$$ The existence and uniqueness of a viscosity solution of (\[Chp3HJBecu1\])-(\[Chp3HJBecu2\]) have been proved in [@Davis]. There, it is proved that at any time $t$, the spatial domain is divided into three regions, namely, in financial terms, the Buying Region $\text{BR}(t)=\{ (x,y)|(1+\lambda)\varphi_x-\varphi_y=0\}$, the Selling Region $\text{SR}(t)= \{(x,y)|-(1-\mu)\varphi_x+\varphi_y=0\}$ and the No Transactions Region $\text{NT}(t)=\{ (x,y)| -\varphi_t-\mathcal{L}\varphi=0\}$. The Selling and Buying Regions do not intersect. For simplicity of exposition, we suppose that $\alpha>r$. With this hypothesis, short-selling is always a suboptimal strategy [@Cvitanic], [@Merton], [@Shreve]. This means that the optimal trading strategy is always to have a nonnegative amount of money invested in the stock. Reformulation of the problem. {#Ch3DPR} ============================= As remarked in [@Davis], the choice of the Potential Utility function is interesting since it leads to the homothetic property in the optimal value function, $$\label{homothetic} \varphi(\rho x, \rho y,t)=\rho^{\gamma} \varphi (x,y,t), \quad \rho>0.$$ This property is used in [@DaiYi] to reduce the dimensionality of the problem.
Setting $z=\frac{x}{y}, \ z \in \Omega=(-(1-\mu), \ \infty)$, a new function $G(z,t)=\varphi(z,1,t)$ is introduced in [@DaiYi], so that: $$\label{Checamborig} \varphi(x,y,t)=y^{\gamma}G\left(z,t\right), \quad w=\frac{1}{\gamma}\log(\gamma G), \quad v(z,t)=w_z(z,t).$$ In [@DaiYi], the authors prove that $v(z,t)$ is the solution of a one-dimensional parabolic double obstacle problem with two free boundaries equivalent to (\[Chp3HJBecu1\]). Furthermore, it is also proved in [@DaiYi], that there exist two continuous monotonically increasing functions $$\label{Ch3FunfronC} \text{BR}^c_F,\text{SR}^c_F:[0,T]\rightarrow(-(1-\mu),+\infty],$$ such that $\text{BR}^c_F(t)>\text{SR}^c_F(t), \ \forall t\geq0$. The Buying and Selling Regions are characterized by $$\begin{aligned} \text{\textbf{SR}} &= \left\{(z,t)\in\Omega\times[0,T] \ \mid z\leq\text{SR}^c_F(t), t\in[0, \ T] \right\}, \\ \text{\textbf{BR}} &= \left\{(z,t)\in\Omega\times[0,T] \ \mid z\geq\text{BR}^c_F(t), t\in[0, \ T] \right\}. \\ \end{aligned}$$ Although other properties and explicit formulas are obtained in [@DaiYi], a complete analytical solution is still missing and numerical procedures have to be used, see, for example, [@Arregui]. Here, inspired by [@DaiYi], we take advantage of (\[homothetic\]) by working in polar coordinates $x=b \cos(\theta)$, $ y=b \sin(\theta)$.
It is not difficult to show that (\[Chp3HJBecu1\])-(\[Chp3HJBecu3\]) are equivalent to $$\label{Ch3ecuinicpol} \min \left\{ -\varphi_t-\mathcal{L}\varphi,\ -(1-\mu)\mathcal{L}_1\varphi+ \mathcal{L}_2\varphi , (1+\lambda)\mathcal{L}_1\varphi -\mathcal{L}_2\varphi \right\}=0,$$ subject to : $$\label{finalcondition1} \varphi(b,\theta,T)=\begin{cases} U(b\cos(\theta)+(1-\mu)b\sin(\theta)), \quad \text{if} \ \theta>0, \\ U(b\cos(\theta)+(1+\lambda)b \sin(\theta)), \quad \text{if} \ \theta\le 0, \end{cases}$$ where $$\mathcal{L}_1\varphi=\cos(\theta)\varphi_b-\frac{\sin(\theta)}{b}\varphi_{\theta},\quad \mathcal{L}_2\varphi=\sin(\theta)\varphi_b+\frac{\cos(\theta)}{b}\varphi_{\theta}$$ and $$\begin{aligned} \mathcal{L}\varphi=& \frac{1}{2}\sigma^2\bigl (b\sin(\theta)\bigr)^2 \bigl(\sin^2(\theta)\varphi_{bb}+\frac{2\sin(\theta)\cos(\theta)}{b}\varphi_{b\theta}\\ &\phantom{\frac{1}{2}\sigma^2\bigl(b\sin(\theta)\bigr)^2} +\frac{\cos^2(\theta)}{b^2}\varphi_{\theta\theta} +\frac{\cos^2(\theta)}{b}\varphi_b-\frac{2\sin(\theta)\cos(\theta)}{b^2}\varphi_{\theta}\bigr) \\ & +\alpha b \sin(\theta)\bigl(\sin(\theta)\varphi_b+\frac{\cos(\theta)}{b}\varphi_{\theta}\bigr) +rb \cos(\theta)\bigl(\cos(\theta)\varphi_b-\frac{\sin(\theta)}{b}\varphi_{\theta}\bigr). 
\end{aligned}$$ Based on (\[homothetic\]), we conjecture a solution to (\[Ch3ecuinicpol\]) of the form: $$\varphi(b,\theta,t)=b^{\gamma} V(\theta,t).$$ Taking into account that $$\begin{aligned} \varphi_b &=\gamma b^{\gamma-1} V,&\varphi_{bb}&=\gamma(\gamma-1)b^{\gamma-2}V,&\varphi_{b\theta}&=\gamma b^{\gamma-1}V_{\theta},& \\ \varphi_{\theta} &=b^{\gamma} V_{\theta},&\varphi_{\theta \theta}&=b^{\gamma}V_{\theta \theta}, \quad &\varphi_t&=b^{\gamma} V_{t},& \\ \end{aligned}$$ and substituting in (\[Ch3ecuinicpol\]), (\[finalcondition1\]), we see that $V(\theta,t)$ satisfies, $$\begin{aligned} \label{Ch3enunprobpolar} \min & \left\{-V_t-g_2(\theta)V_{\theta \theta}-g_1(\theta)V_{\theta}-g_0(\theta) V, -V_{\theta}+ \gamma\frac{(1+\lambda)\cos(\theta)-\sin(\theta)}{(1+\lambda)\sin(\theta)+\cos(\theta)}V, \right. \nonumber\\ & \left. \ V_{\theta}- \gamma\frac{(1-\mu)\cos(\theta)-\sin(\theta)}{(1-\mu)\sin(\theta)+\cos(\theta)}V \right\}=0, \ \theta \in(\beta_1, \beta_2),\ t\in[0,T).\end{aligned}$$ subject to: $$\label{Ch3enunprobpolar2} V(\theta,T)= \begin{cases} \frac{1}{\gamma}\left(\cos(\theta)+(1-\mu)\sin(\theta)\right)^{\gamma}, \quad \text{if} \ \theta>0, \\ \frac{1}{\gamma}\left(\cos(\theta)+(1+\lambda)\sin(\theta)\right)^{\gamma}, \quad \text{if} \ \theta\le 0. \end{cases}$$ The functions $g_i$, $i=0,1,2$, are given by $$\begin{aligned} g_0(\theta) &= \gamma \Bigl(\bigl(\frac{1}{2}\sigma^2\sin^2(\theta)(\gamma-1)\sin^2(\theta)+\cos^2(\theta)\bigr)+\alpha\sin^2(\theta)+r\cos^2(\theta) \Bigr),\\ g_1(\theta) &= (\gamma-1)\sigma^2\cos(\theta)\sin^3(\theta)+(\alpha-r)\sin(\theta)\cos(\theta), \\ g_2(\theta) &= \frac{1}{2}\sigma^2\sin^2(\theta)\cos^2(\theta). \\ \end{aligned}$$ The Solvency Region in the new coordinates is given by: $$\label{Ch3changpolcoord} b \in [0,\ \infty), \quad \theta \in(\beta_1, \ \beta_2)$$ where $$\beta_1 = \arctan\left(\frac{-1}{1+\lambda}\right), \quad \beta_2 = \arctan\left(\frac{-1}{1-\mu}\right)+\pi. 
\\$$ This formulation has several advantages over the formulation of [@DaiYi]. As in [@DaiYi], the problem is one dimensional ((\[Ch3enunprobpolar\])-(\[Ch3enunprobpolar2\]) do not depend on $b$), but in our case the domain is bounded ($\theta\in(\beta_1, \ \beta_2)$). Furthermore, the operators involved in (\[Ch3enunprobpolar\]) are linear in $V$, whereas in [@DaiYi], the equations contain a nonlinear term. Next, we characterize the buying and selling regions in terms of the polar coordinates. First, let us observe that $$\label{Ch3formularelacionpolarorig} \begin{aligned} v(z,t) &=-\left(\frac{V_{\theta}(\theta,t)\sin^2(\theta)-\gamma\sin(\theta)\cos(\theta)V(\theta,t)}{\gamma V(\theta,t)}\right), \\ z &=\cot(\theta), \end{aligned}$$ where $v(z,t)$ is the function defined in (\[Checamborig\]). Let us define the functions $\text{BR}_F$ and $\text{SR}_F$ by $$\text{BR}^c_F(t)=\cot\left(\text{BR}_F(t)\right), \quad \text{SR}^c_F(t)=\cot\left(\text{SR}_F(t)\right), \quad t\in [0,T],$$ where $\text{BR}^c_F$ and $\text{SR}^c_F$ are the boundaries of the buying and selling regions in Cartesian coordinates defined in (\[Ch3FunfronC\]). The following proposition is an immediate consequence of the results in [@DaiYi]. \[Ch3tradpropertapolar\] The functions $\text{SR}_F$ and $\text{BR}_F$ are monotonically decreasing. It holds that $\text{BR}_F(t)<\text{SR}_F(t)$ and that $$\text{BR}_F(t)=0, \quad t\in[\hat{t}_0, \ T],\quad \hat{t}_0=T-\frac{1}{\alpha-r}\log\frac{1+\lambda}{1-\mu}.$$ If $\alpha-r-(1-\gamma)\sigma^2>0$, then $\text{BR}_F(\hat{t}_1)=\frac{\pi}{2}$, with $$\hat{t}_1=T-\frac{1}{\alpha-r-(1-\gamma)\sigma^2}\log\frac{1+\lambda}{1-\mu}.$$ It holds that $\underset{t\rightarrow T}{\lim} \cot\left(\text{SR}_F(t)\right)=(1-\mu)x_{M}$, where $$x_M=-\frac{\alpha-r-(1-\gamma)\sigma^2}{\alpha-r}$$ is the Merton line.
If $T\rightarrow\infty$, there exist two values $\text{BR}_s, \text{SR}_s \in (\beta_1, \beta_2)$, such that $$\begin{aligned} \underset{t\rightarrow 0^{+}}{\lim} \text{BR}_F(t) &= \text{BR}_s, \\ \underset{t\rightarrow 0^{+}}{\lim} \text{SR}_F(t) &= \text{SR}_s. \end{aligned}$$ The limit values $\text{BR}_s$ and $\text{SR}_s$ are defined by $$\cot(\text{BR}_s)=-\frac{a}{a+\frac{k}{k-1}}(1+\lambda),\quad \cot(\text{SR}_s)=-\frac{a}{a+k}(1-\mu),$$ where $a$ and $k$ are the constants defined in [@DaiYi Theorem 6.1]. The functions $\text{SR}_F$, $\text{BR}_F$ satisfy (see also [@Gaton Proposition 3.4.2]), $$\beta_1<0\leq \text{BR}_F(t) \leq \text{SR}_F(t) \leq \text{SR}_s<\beta_2, \quad t\in[0,T].$$ It is now easy to see that, for $t\in [0,T]$, the buying, selling and no transaction regions can be described as follows: **1.** The buying region is defined by $ \text{BR}=(\beta_1,\ \text{BR}_F(t)]$. In $\text{BR}$ the value function satisfies $$\label{Chp3BRexpformula} V_{\theta}= \gamma\frac{(1+\lambda)\cos(\theta)-\sin(\theta)}{(1+\lambda)\sin(\theta)+\cos(\theta)}V,$$ **2.** The Selling region is defined by $\text{SR}=[\text{SR}_F(t), \beta_2)$. In $\text{SR}$ the value function satisfies $$\label{Chp3SRexpformula} V_{\theta}= \gamma\frac{(1-\mu)\cos(\theta)-\sin(\theta)}{(1-\mu)\sin(\theta)+\cos(\theta)}V.$$ **3.** The No Transaction Region is defined by $\text{NT}=(\text{BR}_F(t), \text{SR}_F(t))$. In $\text{NT}$, $V$ satisfies the following partial differential equation $$\label{Notranstaction} V_t+g_2(\theta)V_{\theta \theta}+g_1(\theta)V_{\theta}+g_0(\theta) V=0.$$ We remark that if the Buying ($\text{BR}_F(t)$) and Selling ($\text{SR}_F(t)$) frontiers are known, we can compute the value function $V(\theta,t)$ in $\text{BR}$ and $\text{SR}$ explicitly by a simple integration of equations (\[Chp3BRexpformula\]) and (\[Chp3SRexpformula\]) respectively.
For $\beta_1<\theta<\text{BR}_F(t)$, we have $$\label{Ch3exfsrbr1} V(\theta,t)=V(\text{BR}_F(t),t) \left(\frac{(1+\lambda)\sin(\theta)+\cos(\theta)}{(1+\lambda)\sin(\text{BR}_F(t))+\cos(\text{BR}_F(t))}\right)^{\gamma},$$ and for $\text{SR}_F(t)<\theta<\beta_2$, $$\label{Ch3exfsrbr2} V(\theta,t)=V(\text{SR}_F(t),t) \left(\frac{(1-\mu)\sin(\theta)+\cos(\theta)}{(1-\mu)\sin(\text{SR}_F(t))+\cos(\text{SR}_F(t))}\right)^{\gamma}.$$ ![\[Ch3stationaryCN2\] Value function (numerical solution) for $(\theta,t)\in[\beta_1,\beta_2]\times[0,30]$. The colour code is blue if $(\theta,t)$ is in the buying region, green in the no transactions region and red in the selling region.](Ch3graf30anos2-eps-converted-to.pdf){width="12cm" height="5cm"} Figure \[Ch3stationaryCN2\] represents the value function in perspective (left) and from above (right) in $[\beta_1, \beta_2]$, for a maturity of $T=30$ years. The figure shows the numerical values obtained for the function $V(\theta,t)$ with the method described in Section \[Ch3NM\]. We have coloured the function depending on whether $(\theta,t)$ is in the Buying, Selling or No Transactions region. We can visually check the expected monotonicity of the Buying and Selling frontiers by studying, in the view from above (right), the two curves which divide the different colours (red-green and green-blue). The Buying frontier remains constant ($\text{BR}_F(t)=0$) for a certain period near maturity, and the stationary value of both frontiers as we move away from maturity is also observable. Numerical Method {#Ch3NM} ================ The numerical method described in this section is constructed upon the following strategy. Let $\pi\in A_{0}(x,y)$ denote an admissible trading strategy where $x$ and $y$ are the amount of money in the bank and stock accounts at $t=0$.
Let $\alpha_1\in(\beta_1,\text{BR}_s)$ and $\alpha_2\in(\text{SR}_s,\beta_2)$ and define $$A^{\alpha_1, \alpha_2}_0(x,y)=\left\{\pi\in A_0(x,y) \ \mid \ \text{arccot}\left({x^\pi}/{y^\pi}\right)\in (\alpha_1, \alpha_2)\right\}$$ where $x^\pi$, $y^{\pi}$ are the amounts in the bank and stock accounts if strategy $\pi$ is followed. Proposition \[Ch3tradpropertapolar\] implies that $\pi^{o}\in A^{\alpha_1, \alpha_2}_0(x,y)$ where $\pi^{o}$ denotes the optimal trading strategy solving (\[Ch3defvarpphi\]). Therefore, the optimal value function can be computed as the solution of (\[Ch3enunprobpolar\])-(\[Ch3enunprobpolar2\]) in $(\alpha_1,\alpha_2)\times[0,T]$ subject to the boundary conditions: $$\label{Ch3theorboundcond} \begin{aligned} V_{\theta}\left(\alpha_1,t\right)&=V\left(\alpha_1,t\right)\gamma \frac{(1+\lambda)\cos(\alpha_1)-\sin(\alpha_1)}{(1+\lambda)\sin(\alpha_1)+\cos(\alpha_1)}, \\ V_{\theta}\left(\alpha_2,t\right)&=V\left(\alpha_2,t\right)\gamma\frac{(1-\mu)\cos(\alpha_2)-\sin(\alpha_2)}{(1-\mu)\sin(\alpha_2)+\cos(\alpha_2)}. \end{aligned}$$ These conditions are equivalent to mandatorily buying or selling the stock if $\theta$ reaches $\alpha_1$ or $\alpha_2$, respectively (see formulas (\[Chp3BRexpformula\])-(\[Chp3SRexpformula\])). The solution can be extended to $(\beta_1,\beta_2)\times[0,T]$ taking into account that for $t\in[0,T]$: $(\beta_1,\alpha_1)\subset \text{BR}$ and $(\alpha_2,\beta_2)\subset \text{SR}$, so that we can compute $V(\theta,t)$ with (\[Ch3exfsrbr1\]) in $\text{BR}$ and (\[Ch3exfsrbr2\]) in $\text{SR}$. The adaptive mesh {#Ch3MACCMAM} ----------------- Let $N_t$ be a positive integer and let us define the time mesh $\left\{t_l\right\}_{l=0}^{N_t}$ by $$\label{Ch3timediscret} t_l=l\Delta t, \quad l=0,1,...,N_t, \quad \Delta t = \frac{T}{N_t}.$$ The spatial mesh will depend on the time step.
The main idea is to adapt the mesh in such a way that it evolves through time following the approximate location of the buying and selling frontiers (i.e. evolving as the green zone in Figure \[Ch3stationaryCN2\]). To this end, let $\delta\in(0, 1/2)$ be a control parameter. We define $$\label{Ch3paramk} \begin{aligned} k_1 &=\frac{\beta_2-\text{SR}_s}{\beta_2-\text{BR}_F(T)}, \\ K & =\min\{\delta, k_1, \ \text{BR}_F(T)-\beta_1\}, \end{aligned}$$ where $\text{SR}_s=\text{arccot}\left(\text{SR}^c_s\right)\in[\beta_1,\beta_2]$ is the stationary state of the Selling frontier (see Proposition \[Ch3tradpropertapolar\]). For $N\in \mathbb{N}$, let us consider the $N+1$ Chebyshev nodes in $[-1, 1]$, $$\label{Ch3chebynodes} \tilde{\theta}_j=\cos\left(\frac{\pi j}{N}\right), \quad j=0,1,...,N.$$ We define the integer $j_K\in\{0,1,2,...,N\}$ as the unique integer such that $$\label{Ch3consjk} \left|\tilde{\theta}_{N-j_K}-\tilde{\theta}_{N}\right|\leq 2K < \left|\tilde{\theta}_{N-(j_{K}+1)}-\tilde{\theta}_{N}\right|.$$ Note that $j_K$ is well defined because $0<K\leq\delta<1/2$. From the definition of the Chebyshev nodes, it is easy to check that there exists $N_0$ such that for all $N\geq N_0$, $j_K\geq 1$. Let us suppose that at time $t=t_l$, we know the locations of the buying and selling frontiers, $BR_F(t_l)$ and $SR_F(t_l)$. Given $N_{\theta}\in\mathbb{N}$, $N_{\theta}>N_0$, we define an interval $I(t_l)$ by: $$\label{Ch3forint1} I(t_l)=\left[0, \ \frac{2}{\tilde{\theta}_{j_K}-\tilde{\theta}_{N_{\theta}}}\text{SR}_F(t_l) \right],$$ if $\text{BR}_F(t_l)=0$, or, in case $\text{BR}_F(t_l)>0$, $$\label{Ch3forint2} I(t_l)=\left[\text{BR}_F(t_l)-M\frac{\tilde{\theta}_{N_{\theta}-j_K}-\tilde{\theta}_{N_{\theta}}}{\tilde{\theta}_{j_K}-\tilde{\theta}_{N_{\theta}-j_K}}, \ SR_F(t_l)+M\frac{\tilde{\theta}_0-\tilde{\theta}_{j_K}}{\tilde{\theta}_{j_K}-\tilde{\theta}_{N_{\theta}-j_K}} \right].$$ Here $M=\text{SR}_F(t_l)-\text{BR}_F(t_l)$.
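The construction above can be transcribed almost literally. The following Python sketch (the helper names are ours) implements the nodes (\[Ch3chebynodes\]), the index $j_K$ of (\[Ch3consjk\]) and the interval $I(t_l)$ of (\[Ch3forint1\])-(\[Ch3forint2\]):

```python
import numpy as np

def chebyshev_nodes(N):
    """The N+1 Chebyshev nodes cos(pi*j/N), j=0,...,N, ordered from 1 down to -1."""
    return np.cos(np.pi * np.arange(N + 1) / N)

def index_jK(N, K):
    """The unique j with |theta_{N-j} - theta_N| <= 2K < |theta_{N-(j+1)} - theta_N|."""
    theta = chebyshev_nodes(N)
    gaps = np.abs(theta[::-1] - theta[-1])   # gaps[j] = |theta_{N-j} - theta_N|, increasing
    return int(np.searchsorted(gaps, 2.0 * K, side="right") - 1)

def interval(BR, SR, N, K):
    """The adapted interval I(t_l); the one-sided form is used when BR = 0."""
    theta = chebyshev_nodes(N)
    jK = index_jK(N, K)
    if BR == 0.0:
        return 0.0, 2.0 * SR / (theta[jK] - theta[N])
    M = SR - BR
    left = BR - M * (theta[N - jK] - theta[N]) / (theta[jK] - theta[N - jK])
    right = SR + M * (theta[0] - theta[jK]) / (theta[jK] - theta[N - jK])
    return left, right
```

By construction, the returned interval always contains the no transaction region $[\text{BR}_F(t_l),\text{SR}_F(t_l)]$, since a nonnegative margin is subtracted on the left and added on the right.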
We remark that, with this definition, the interval $I(t_l)$ always contains the no transaction region $[\text{BR}_F(t_l),\text{SR}_F(t_l)]$ and it is contained in the solvency region $[\beta_1,\beta_2]$. Furthermore, $\text{BR}_F(t_l)$ and $\text{SR}_F(t_l)$ are always one of the $N_{\theta}+1$ Chebyshev nodes in the interval $I(t_l)$, while restriction $N_{\theta}>N_0$ implies that $\text{SR}_F(t_l)$ is an interior point of $I(t_l)$. The control parameter $\delta$ guarantees that a maximum of $100\delta$% of the interval $I(t_l)$ is contained in the Selling Region, another maximum of $100\delta$% of $I(t_l)$ in the Buying Region, whereas a minimum $100(1-2\delta)$% of $I(t_l)$ is contained in the No Transactions Region. Note also that for $t_{N_t}=T$, the values $\text{BR}_F(T)=0$ and $\text{SR}_F(T)$ are, of course, known data (see Proposition \[Ch3tradpropertapolar\]), whereas for $t_l<T$, we have to substitute $BR_F(t_l)$ and $SR_F(t_l)$ by some approximation that we will denote $BR^{\textbf{N}}_F(t_l)$ and $SR^{\textbf{N}}_F(t_l)$ where $\textbf{N}=(N_{\theta},N_t)$. We will describe in Subsection \[Ch3MACCMCC\] how to compute them prior to the construction of the interval $I(t_l)$. The next proposition proves that, for $N_t$ large enough, $\text{BR}_F(t_{l-1}), \text{SR}_F(t_{l-1})\in I(t_l)$, so that we can compute recursively the intervals $I(t_j)$ for $j=N_t,N_t-1,\dots, 0$. \[Ch3propoinclusnteninter\] Let $NT(t)=[BR_F(t), \ SR_F(t)]$ where $BR_F(t)$ and $SR_F(t)$ are the exact location of the Buying and Selling frontiers. For any $N_{\theta}>N_0$, where $N_0$ is the restriction which guarantees that $\text{SR}_F(t)$ will be in the interior of $I(t)$, compute $I(t)$ with (\[Ch3forint1\]) or (\[Ch3forint2\]). There exists $N_1>0$ such that for any time mesh $\left\{t_l\right\}_{l=0}^{N_t}, \ N_t>N_1$ given by (\[Ch3timediscret\]), it holds $$NT(t_{l_0-1}), \ NT(t_{l_0})\subset I(t_{l_0})$$ for any $t_{l_0}\in\left\{t_l\right\}_{l=0}^{N_t}$.
From [@DaiYi], we know that $\text{SR}^c_F(t)\in \mathscr{C}^{\infty}[0,T)$. Therefore, $\text{SR}_F(t)=\text{arccot}\left(\text{SR}^c_F(t)\right)\in(\beta_1, \ \beta_2)$ is $\mathscr{C}^{\infty}[0,T)$. Let $K$ from (\[Ch3paramk\]) be fixed. Since $\text{SR}_F(t)$ is in the interior of $I(t)$, there exists $\Delta t_{K}$ such that for all $\Delta t<\Delta t_{K}$: $$\label{Ch3condicionencontrarfron} SR_F(t-\Delta t)\in I(t), \quad t \in [0, T).$$ This guarantees that for any equally spaced time mesh $\{t_l\}_{l=0}^{N_t}$, with $N_t>1/\Delta t_{K}$, $SR_F(t_{l-1})\in I(t_l)$. To finish the proof, note that from Proposition \[Ch3tradpropertapolar\], $\text{BR}_F(t_{l})\leq\text{BR}_F(t_{l-1})$ and that $\text{BR}_F(t_{l-1})\leq \text{SR}_F(t_{l-1})$, so the result follows directly from the definition of $I(t_l)$. Chebyshev collocation Method. {#Ch3MACCMCC} ----------------------------- Let us suppose that we know an approximation of the function value $V^{\textbf{N}}(\theta,t_{l})$, $\theta\in (\beta_1, \beta_2)$ and approximate values of $\text{BR}^{\textbf{N}}_F(t_l)$ and $\text{SR}^{\textbf{N}}_F(t_l)$ at time $t=t_l$. For $N_{\theta}$ large enough (Proposition \[Ch3propoinclusnteninter\]), we can compute $I(t_l)=[\alpha^{t_l}_1, \alpha^{t_l}_2]$ defined as in (\[Ch3forint1\]) if $\text{BR}^{\textbf{N}}_F(t_l)=0$, or with (\[Ch3forint2\]) otherwise. For $t\in[t_{l-1},t_{l}]$, we define the function $\hat{V}$ as the function value which gives the expected terminal value when the trading strategy is to perform no transactions if $\theta\in(\alpha^{t_l}_1, \alpha^{t_l}_2)$, to buy the stock if $\theta=\alpha^{t_l}_1$ and to sell the stock if $\theta=\alpha^{t_l}_2$, subject to $\hat{V}(\theta,t_{l})=V^{\textbf{N}}(\theta,t_{l})$.
Therefore, $\hat{V}$ is the solution of the equation $$\label{Ch3ecudifnotranbis} \hat{V}_t+g_2(\theta)\hat{V}_{\theta \theta}+g_1(\theta)\hat{V}_{\theta}+g_0(\theta) \hat{V}=0,$$ subject to $$\label{Ch3ecudifnotranboundcondbis} \begin{aligned} \hat{V}_{\theta}\left(\alpha^{t_l}_1,t\right)&=\hat{V}\left(\alpha^{t_l}_1,t\right)\gamma \frac{(1+\lambda)\cos(\alpha^{t_l}_1)-\sin(\alpha^{t_l}_1)}{(1+\lambda)\sin(\alpha^{t_l}_1)+\cos(\alpha^{t_l}_1)}, \\ \hat{V}_{\theta}\left(\alpha^{t_l}_2,t\right)& =\hat{V}\left(\alpha^{t_l}_2,t\right)\gamma\frac{(1-\mu)\cos(\alpha^{t_l}_2) -\sin(\alpha^{t_l}_2)}{(1-\mu)\sin(\alpha^{t_l}_2)+\cos(\alpha^{t_l}_2)}, \\ \hat{V}(\theta,t_l)&=V^{\textbf{N}}(\theta,t_{l}). \end{aligned}$$ Let us consider the $N_{\theta}+1$ Chebyshev nodes in $I(t_l)$ $$\label{Ch3cambvariable} \theta_j=\frac{\alpha^{t_l}_2-\alpha^{t_l}_1}{2}\tilde{\theta}_j+\frac{\alpha^{t_l}_2+\alpha^{t_l}_1}{2}, \ j=0,1,...,N_{\theta},$$ where $\tilde{\theta}_j$ are the Chebyshev points (\[Ch3chebynodes\]).
The numerical approximation $\hat{V}^{\textbf{N}}(\theta,t_{l-1})$, $\theta\in(\theta_{N_{\theta}},\theta_0)$ to the function $\hat{V}$ is the collocation polynomial [@Canuto] of degree $N_{\theta}$ defined for $j=1,...,N_{\theta}-1$ by: $$\label{Ch3collocationmethod} \frac{\hat{V}^{\textbf{N}}(\theta_j,t_{l-1})-\hat{V}^{\textbf{N}}(\theta_j,t_l)}{\Delta t} =L\left(\frac{\hat{V}^{\textbf{N}}(\theta_j,t_{l-1})+\hat{V}^{\textbf{N}}(\theta_j,t_l)}{2}\right),$$ subject to $$\label{Ch3valorcondvencheby} \hat{V}^{\textbf{N}}(\theta_j,t_l)=V^{\textbf{N}}(\theta_j,t_l), \quad j=1,2,...,N_{\theta}-1,$$ with (Neumann) boundary conditions $$\label{Ch3collocationmethodcons} \begin{aligned} \hat{V}^{\textbf{N}}_{\theta}(\theta_{N_{\theta}},t_{l-1}) &=V^{\textbf{N}}(\theta_{N_{\theta}},t_{l})\gamma \frac{(1+\lambda)\cos(\theta_{N_{\theta}}) -\sin(\theta_{N_{\theta}})}{(1+\lambda)\sin(\theta_{N_{\theta}})+\cos(\theta_{N_{\theta}})}, \\ \hat{V}^{\textbf{N}}_{\theta}(\theta_0,t_{l-1}) &=V^{\textbf{N}}(\theta_0,t_l)\gamma\frac{(1-\mu)\cos(\theta_0) -\sin(\theta_0)}{(1-\mu)\sin(\theta_0)+\cos(\theta_0)}. \end{aligned}$$ where $$\label{Ch3operatorl} L\bigl(\hat{V}^{\textbf{N}}(\theta)\bigr)= g_2(\theta)\frac{\partial^2 \hat{V}^{\textbf{N}}}{\partial \theta^2}+ g_1(\theta)\frac{\partial \hat{V}^{\textbf{N}}}{\partial \theta}+g_0(\theta) \hat{V}^{\textbf{N}}.$$ The equations (\[Ch3collocationmethod\])-(\[Ch3collocationmethodcons\]) define a dense system of linear equations to find the values $\hat{V}^{\textbf{N}}(\theta_j,t_{l-1})$, $j=0,\dots, N_\theta$. However, since a very good precision can be achieved with relatively few spatial nodes, the method is competitive with a finite differences method, see Section \[Ch3numresults\].
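One step of the scheme can be sketched with the standard Chebyshev differentiation matrix (we use Trefethen's formula). The code below is our own illustration, not the authors' implementation; in particular, for simplicity it imposes the boundary rows on the unknowns at $t_{l-1}$ in Robin form $\hat{V}_{\theta}=b\,\hat{V}$, whereas (\[Ch3collocationmethodcons\]) uses the known values $V^{\textbf{N}}(\cdot,t_l)$ on the right-hand side:

```python
import numpy as np

def cheb_diff(N):
    """Chebyshev differentiation matrix on the N+1 nodes cos(pi*j/N) of [-1,1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal
    return D, x

def cn_step(Vl, a1, a2, dt, g0, g1, g2, b_lo, b_hi):
    """One backward Crank-Nicolson step of V_t + g2 V'' + g1 V' + g0 V = 0 on
    [a1, a2]; the first and last collocation rows are replaced by the
    derivative conditions V_theta = b * V at the endpoints."""
    N = len(Vl) - 1
    D, xi = cheb_diff(N)
    s = 2.0 / (a2 - a1)                  # chain rule for the affine map to [a1, a2]
    theta = 0.5 * (a2 - a1) * xi + 0.5 * (a2 + a1)
    D1 = s * D
    L = np.diag(g2(theta)) @ (D1 @ D1) + np.diag(g1(theta)) @ D1 + np.diag(g0(theta))
    I = np.eye(N + 1)
    A = I - 0.5 * dt * L                 # (V^{l-1}-V^l)/dt = L (V^{l-1}+V^l)/2
    rhs = (I + 0.5 * dt * L) @ Vl
    A[0, :] = D1[0, :];  A[0, 0] -= b_hi;  rhs[0] = 0.0     # theta[0] = a2
    A[-1, :] = D1[-1, :]; A[-1, -1] -= b_lo; rhs[-1] = 0.0  # theta[-1] = a1
    return np.linalg.solve(A, rhs), theta
```

With $g_1=g_2=0$ and $g_0\equiv c$, each interior node reproduces the scalar Crank-Nicolson relation $V^{l-1}=V^{l}(1+c\Delta t/2)/(1-c\Delta t/2)$, which is a convenient sanity check.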
Let us define $$\label{Ch3auxencheby} \begin{aligned} P^{(\textbf{N},l-1)}_1(\theta) &= \hat{V}^{\textbf{N}}_{\theta}(\theta,t_{l-1}) - \hat{V}^{\textbf{N}}(\theta,t_{l-1})\cdotp \gamma\frac{(1+\lambda)\cos(\theta)-\sin(\theta)}{(1+\lambda)\sin(\theta)+\cos(\theta)}, \\ P^{(\textbf{N},l-1)}_2(\theta) &=\hat{V}^{\textbf{N}}_{\theta}(\theta,t_{l-1}) -\hat{V}^{\textbf{N}}(\theta,t_{l-1})\cdotp \gamma\frac{(1-\mu)\cos(\theta)-\sin(\theta)}{(1-\mu)\sin(\theta)+\cos(\theta)}, \end{aligned}$$ which are explicit functions because $\hat{V}^{\textbf{N}}$ is a known polynomial in $\theta$. In (\[Ch3auxencheby\]), we compare (see [@Gaton Subsection 3.5.3]) whether it is better not to perform transactions or to buy the stock (resp. sell the stock). If $P^{(\textbf{N},l-1)}_1>0$ (resp. $P^{(\textbf{N},l-1)}_2>0$), it is better not to perform transactions than to buy (resp. sell) the stock. The numerical approximation to the Buying and Selling frontiers is given by: $$\label{frontiers} \begin{aligned} \text{BR}^{\textbf{N}}_F(t_{l-1}) &= \min\left\{\beta: P^{(\textbf{N},l-1)}_1(\theta)\geq 0, \theta\in[\beta,\alpha^{t_l}_2) \right\}, \\ \text{SR}^{\textbf{N}}_F(t_{l-1}) &= \max\left\{\beta: P^{(\textbf{N},l-1)}_2(\theta)\geq 0, \theta\in(\alpha^{t_l}_1,\beta] \right\}, \\ \end{aligned}$$ Once we know the location of the frontiers and the function value at those points, we can compute the approximate function value through the following explicit formulas where we have used the notation $B_{l}=\text{BR}^{\textbf{N}}_F(t_{l})$, $S_{l}=\text{SR}^{\textbf{N}}_F(t_{l})$ $$\label{VN1} V^{\textbf{N}}(\theta^{t_{l-1}}_j,t_{l-1}) =\hat{V}^{\textbf{N}}(B_{l-1},t_{l-1}) \left[\frac{(1+\lambda)\sin(\theta^{t_{l-1}}_j) +\cos(\theta^{t_{l-1}}_j)}{(1+\lambda)\sin(B_{l-1})+\cos(B_{l-1})}\right]^{\gamma},$$ if $ \theta^{t_{l-1}}_j<B_{l-1}$, $$\label{VN2} V^{\textbf{N}}(\theta^{t_{l-1}}_j,t_{l-1})=\hat{V}^{\textbf{N}}(\theta^{t_{l-1}}_j,t_{l-1}), \quad B_{l-1}\leq\theta^{t_{l-1}}_j\leq
S_{l-1},$$ $$\label{VN3} V^{\textbf{N}}(\theta^{t_{l-1}}_j,t_{l-1})= \hat{V}^{\textbf{N}}(S_{l-1},t_{l-1}) \left[\frac{(1-\mu)\sin(\theta^{t_{l-1}}_j) +\cos(\theta^{t_{l-1}}_j)}{(1-\mu)\sin(S_{l-1})+\cos(S_{l-1})}\right]^{\gamma},$$ if $\theta^{t_{l-1}}_j>S_{l-1}$. Then the complete algorithm reads as follows: - Fix a number $N_t$ and a number $N_{\theta}$ large enough such that Proposition \[Ch3propoinclusnteninter\] holds. Compute $\Delta t = \frac{T}{N_t}$ and $\{t_l\}_{l=0}^{N_t}$ as in (\[Ch3timediscret\]). Define $\textbf{N}=(N_{\theta},N_t)$. Set $l=N_{t}$ and compute $I(t_{N_t})$ with formula (\[Ch3forint1\]). Compute $V^{\textbf{N}}(\theta,T)$, $\theta\in I(t_{N_t})$, as the Chebyshev interpolation polynomial in $\{\theta^{T}_j\}_{j=0}^{N_{\theta}}$ of function $V(\theta,T)$, given by (\[Ch3enunprobpolar2\]), where $\{\theta^{T}_j\}_{j=0}^{N_{\theta}}$ denote the Chebyshev nodes in $I(t_{N_t})$. - Compute the polynomial $\hat{V}^{\textbf{N}}(\theta,t_{l-1})$ solving the collocation equations (\[Ch3collocationmethod\]) with final condition (\[Ch3valorcondvencheby\]) and boundary conditions (\[Ch3collocationmethodcons\]). - Locate the buying and selling frontiers $\text{BR}^{\textbf{N}}_F(t_{l-1})$ and $\text{SR}^{\textbf{N}}_F(t_{l-1}) $ using (\[frontiers\]). - Compute the interval $I(t_{l-1})$ with (\[Ch3forint1\]) if $\text{BR}^{\textbf{N}}_F(t_{l-1})=0$ or with (\[Ch3forint2\]) otherwise. Compute the numerical approximation $V^{\textbf{N}}$ at time $t_{l-1}$ with formulae (\[VN1\]), (\[VN2\]) and (\[VN3\]). - Set $l=l-1$ and stop if $l=0$ or, otherwise, proceed to **Step 1**. Numerical Results {#Ch3numresults} ================= We consider the parameter values as in the first experiment in [@Arregui]. For $t\in[0, 4]$ let: $$\sigma=0\ldotp25, \ \ r=0\ldotp03, \ \ \alpha=0\ldotp10, \ \ \gamma=0\ldotp5, \ \ \lambda=0\ldotp08, \ \ \mu=0\ldotp02.$$ The following figure shows the numerical solution $V^{\textbf{N}}(\theta,t)$.
\[Ch3v1Valfun4anos\] ![Value of $V^{\textbf{N}}(\theta,t)$ for $t\in{[0, 4]}$. The colour code is blue if $(\theta,t)$ is in the buying, green in the no transactions and red in the selling region.](Ch3graf4anos2-eps-converted-to.pdf "fig:"){width="12cm" height="5cm"} We have coloured the function depending on whether $(\theta,t)$ is in the Buying, Selling or No Transactions region. As in Figure \[Ch3stationaryCN2\], we can visually check the properties from Proposition \[Ch3tradpropertapolar\]. First, we establish the criteria employed in the experiments to build the spatial mesh. We have fixed the control parameter $\delta=0\ldotp1$, so that at least $80$% of the interval corresponds to the No Transactions Region. The particular choice of $\delta$ does not affect the rate of convergence of the error. In order to compare the performance of the spectral method with other numerical methods, we have also implemented a Central Differences (CD) based method in order to solve the PDE in Step 2 (see Subsection \[Ch3MACCMCC\]). The formal study of the error will be conducted for the cases where explicit formulas are available, comparing the results of the Central Differences and Chebyshev methods. The remaining properties given in [@DaiYi] were also checked, although they are not included here. Value of the function $v(0,t)$ {#Ch3numresultsvalpimed} --------------------------------- We consider $v(z,t)$ defined in (\[Checamborig\]). For $z=0$, we can explicitly compute $v(0,t)$ with [@DaiYi (3.9)]. In Figure \[Ch3analiticalsolv0t\] we plot the value of $v(0,t)$ for $t\in[0,4]$. ![\[Ch3analiticalsolv0t\] Analytical solution of $v(0,t), \ t\in[0,4]$.](Ch3analiticalsolv0t-eps-converted-to.pdf){width="6.5cm" height="5cm"} The value $z=0$ corresponds in polar coordinates to $\theta=\frac{\pi}{2}$.
A numerical solution $v^{\textbf{N}}(0,t_i)$ can be computed explicitly using $V^{\textbf{N}}\left(\frac{\pi}{2},t_i\right)$ and formula (\[Ch3formularelacionpolarorig\]), which relates the function in polar coordinates to the function in the original variables: $$\begin{aligned} v^{\textbf{N}}(z,t) &=-\left(\frac{V^{\textbf{N}}_{\theta}(\theta,t)\sin^2(\theta)-\gamma\sin(\theta)\cos(\theta)V^{\textbf{N}}(\theta,t)}{\gamma V^{\textbf{N}}(\theta,t)}\right), \\ z &=\cot(\theta). \end{aligned}$$ The following figure shows the difference between the analytical solution $v(0,t), \ t\in[0,4]$, and the numerical solution obtained with the Chebyshev method for $N_{\theta}=256$ (left) and $N_{\theta}=2048$ (right). Both pictures are on the same scale, and we can observe that the error decreases as $N_{\theta}$ increases. ![\[Ch3valpimedcomcheb\] Value $v(0,t)-v^{\textbf{N}}(0,t), \ t\in[0,4]$ where $v^{\textbf{N}}$ was computed with the Chebyshev method with $N_{\theta}=256$ (left) and $N_{\theta}=2048$ (right).](Ch3valpimedcomcheb-eps-converted-to.pdf){width="14cm" height="5.5cm"} In both pictures of Figure \[Ch3valpimedcomcheb\] we can observe an error discontinuity at time $\hat{t}_1$. From [@DaiYi (3.9)], we know that the function $v(0,t)$ is not differentiable (with respect to time) at the instant $\hat{t}_1$. The same phenomenon can be observed in the numerical experiments in [@Arregui]. We can also see that some oscillations appear at time $\hat{t}_0$, where we change the kind of adaptive mesh $I(t_i)$ (see Subsection \[Ch3MACCMAM\]). We proceed to check the rate of error convergence. We define the Root Mean Square Error as $$\label{Ch3errrorcuadmed} \text{RMSE}_{\{N_{\theta},N_t\}}\left(v^{\textbf{N}}\right)=\sqrt{\frac{1}{N_t+1}\sum_{l=0}^{N_t} \left(v^{\textbf{N}}(0,t_l)-{v}(0,t_l)\right)^2}.$$ Figure \[Ch3converrtesptempvalpimed\] shows the convergence of the spatial error (left) for $\Delta t=3\ldotp9 \cdotp 10^{-4}$ and different numbers of spatial nodes $N_{\theta}$.
The right side shows the convergence of the temporal error for $N_{\theta}$ fixed and different values of $N_t$. ![\[Ch3converrtesptempvalpimed\] Spatial (left) and Temporal (right) Error convergence of $v^{\textbf{N}}$ in logarithmic scale of the Central Differences (blue) and Chebyshev (red) methods.](Ch3converrtesptempvalpimed-eps-converted-to.pdf){width="12.5cm" height="5.3cm"} On the left side of Figure \[Ch3converrtesptempvalpimed\] we have plotted, in logarithmic scale, the number $N_{\theta}$ of spatial nodes versus the value $\text{RMSE}_{\{N_{\theta},N_t\}}\left(v^{\textbf{N}}\right)$. The slope of the regression line of the CD method (plotted in blue) is $-1\ldotp 80$, and that of the Chebyshev method (plotted in red) is $-1\ldotp85$. The spectral convergence that we could expect from the Chebyshev method does not occur, due to the limited regularity of the solution. On the right-hand side of Figure \[Ch3converrtesptempvalpimed\], we have plotted, in logarithmic scale, the number $N_{t}$ of time steps versus the value $\text{RMSE}_{\{N_{\theta},N_t\}}\left(v^{\textbf{N}}\right)$. The slope of the regression line of the CD method (solid-blue) is $-2\ldotp 31$, as expected from a second-order method. The slope of the Chebyshev method (solid-red) is $-1\ldotp4$. We note that for large values of $N_t$ the error quickly saturates at the limit set by the size of $N_{\theta}$. We carried out a second experiment doubling the value of $N_{\theta}$ (right-dashed-blue/red) to check that the lowest value reached by the temporal error is indeed determined by the size of the spatial mesh. Depending on the error tolerance, we might need a large value of $N_{\theta}$ in the CD method but a much smaller one in the Chebyshev method. This means that, depending on the required precision, the Chebyshev method can have a lower computational cost than CD. This will be studied below.
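The RMSE of (\[Ch3errrorcuadmed\]) and the regression slopes quoted above can be computed as follows; the synthetic error data here merely stands in for the actual numerical results:

```python
import numpy as np

def rmse(v_num, v_exact):
    """Root Mean Square Error over the time grid, as defined above."""
    d = np.asarray(v_num) - np.asarray(v_exact)
    return np.sqrt(np.mean(d ** 2))

def convergence_slope(n_values, errors):
    """Slope of the regression line of log(error) versus log(n); its
    magnitude estimates the observed order of convergence."""
    slope, _intercept = np.polyfit(np.log(n_values), np.log(errors), 1)
    return slope

# Synthetic data following error ~ C * n^(-2), i.e. a second-order method
n = np.array([64.0, 128.0, 256.0, 512.0, 1024.0])
slope = convergence_slope(n, 3.0 * n ** -2.0)       # close to -2
```

Applying `convergence_slope` to the measured RMSE values over the sequence of meshes reproduces the slopes reported in the text.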
Location of the Buying Region frontier at time $\hat{t}_1$ {#Ch3numresultsfronpimed}
----------------------------------------------------------

From Proposition \[Ch3tradpropertapolar\], we know that in polar coordinates $\text{BR}_F(\hat{t}_1)=\frac{\pi}{2}$. Given a number of time steps $N_t$, we look for the $t_{l_1}\in\{t_l\}_{l=0}^{N_t}$ nearest to $\hat{t}_1$ and define the Absolute Error (just for this experiment) as: $$\text{Absolute Error}_{\textbf{N}}(\hat{t}_1)=\left|\text{BR}^{\textbf{N}}_F(t_{l_1})-\frac{\pi}{2}\right|.$$ The next figure shows the convergence of the spatial error (left) for $\Delta t=3\ldotp9 \cdotp 10^{-4}$ and different numbers of spatial nodes $N_{\theta}$. The right side shows the convergence of the temporal error for $N_{\theta}=2048$ (Chebyshev) and $4960$ (CD) and different values of $N_t$. ![\[Ch3converrtesptempfronpimed\] Spatial (left) and Temporal (right) Error (semilogarithmic scale) of instant when $BR_F=\frac{\pi}{2}$ with the CD (blue) and Chebyshev (red) methods.](Ch3converrtesptempfronpimed-eps-converted-to.pdf){width="12.5cm" height="5.3cm"} The spatial error (left) reduces as we increase the value of $N_{\theta}$. For an equal number of nodes, the Chebyshev method gives much smaller errors than the CD method. Concerning the temporal error, the results are step-shaped because of the definition of the Absolute Error and the way the time partition changes when $\Delta t$ is halved. Each time partition is included in the following one, and $t_{l_1}$ sometimes changes and sometimes does not. The temporal error reduces as we increase the value of $N_{t}$. As with the spatial error, the Chebyshev method outperforms the CD method.

First instant when it is optimal to hold a positive amount of the stock {#Ch3numresultsfronnocero}
-----------------------------------------------------------------------

From Proposition \[Ch3tradpropertapolar\], we know that $\text{BR}_F(t)=0$ for $t\geq \hat{t}_0$, where $\hat{t}_0$ is explicitly computable.
Given a number of time steps $N_t$, we look for $t_{l_0}\in\left\{t_l\right\}_{l=0}^{N_t}$ such that $$t_{l_0}\geq \hat{t}_0 > t_{l_0+1}.$$ For the Chebyshev method, $\text{BR}^{\textbf{N}}_F$ may be greater than 0 for a few time steps prior to $l_0$. We note that in the Chebyshev method, the lower limit of $I(t_l), \ t_l\in[\hat{t}_0,T]$, is the Buying Frontier. In the left picture of Figure \[Ch3v1BuyingfrontierCheby\], we have plotted the numerical estimation of the Buying Frontier with the Chebyshev method for $N_{\theta}=256$ (blue), $N_{\theta}=512$ (red), $N_{\theta}=1024$ (green) and $N_{\theta}=2048$ (black). In the right picture we zoom in around $\hat{t}_0$. ![\[Ch3v1BuyingfrontierCheby\] Numerically computed Buying Frontier with the Chebyshev method for $t\in[0,4]$ (left) and zoom around $\hat{t}_0$ (right).](Ch3v1BuyingfrontierCheby-eps-converted-to.pdf){width="13cm" height="5.5cm"} Let $k\geq 0$ be the largest value such that $$\text{BR}^{\textbf{N}}_F(t_{l_0+k})> 0.$$ If $k>0$, the location of the Buying Frontier oscillates around 0 for $t_l\in\{t_{l_0+k}, ..., t_{l_0+1}\}$, while for $t_{l}< t_{l_0}$ it behaves as we would expect from Proposition \[Ch3tradpropertapolar\]. Numerical experiments show that it is better to let $\text{BR}^{\textbf{N}}_F(t_l)$ oscillate around 0 rather than imposing $\text{BR}^{\textbf{N}}_F(t_l)=\max\{\text{BR}^{\textbf{N}}_F(t_l), \ 0\}$. The oscillations observed in Figure \[Ch3v1BuyingfrontierCheby\] are generated by the imposition of the Neumann conditions. The boundary error is controlled by $N_t$ and $N_{\theta}$, but the spatial error is dominant in this experiment. The instant when the numerical solution begins to oscillate is always very close to $\hat{t}_0$ $\left(\left|t_{l_0+k}-\hat{t}_0\right|\leq 1\ldotp5\cdotp10^{-3}\right)$ and the size of the oscillations reduces as $N_{\theta}$ increases. These oscillations are the error that we are going to study.
They include all the negative values (since the Buying Frontier must always be nonnegative) and any positive value for discrete times larger than $\hat{t}_0$. Thus, we define, for this method and experiment, the absolute error (AE) as $$\text{AE}^{Ch}=\max\left\{\left|\underset{l=0,\ldots,N_t}{\min}\left\{\text{BR}^{\textbf{N}}_F(t_l)\right\}\right| \ , \left|\underset{l=l_0+1,\ldots,N_t}{\max}\left\{\text{BR}^{\textbf{N}}_F(t_l) \right\}\right|\right\}.$$ We fix $\Delta t=3\ldotp9 \cdotp 10^{-4}$ and compute the absolute error for several values of $N_{\theta}$. In Figure \[Ch3v1ConvespfronnoceroCheby\] we plot, in logarithmic scale, the value of $N_{\theta}$ versus the absolute error. As we can see, the error is rapidly reduced by increasing $N_\theta$. ![\[Ch3v1ConvespfronnoceroCheby\] Spatial error convergence of the first instant when it is optimal to have a positive amount of stock (Chebyshev method).[]{data-label="figura"}](Ch3v1ConvespfronnoceroCheby-eps-converted-to.pdf){width="6cm" height="5cm"}

Stationary state
----------------

$BR_F$ and $SR_F$ tend to a stationary state as $T\rightarrow \infty$ that can also be computed explicitly (see Proposition \[Ch3tradpropertapolar\]). Computed with the same model parameters as before but for $T=30$ years (see Figure \[Ch3stationaryCN2\]), the frontiers have stabilized, a few years before reaching $t=0$, at: $$\begin{aligned} & \text{Buying Frontier:} \ 1\ldotp8626 \ \ (1\ldotp8622 \ \text{exact value}) \\ & \text{Selling Frontier:} \ 2\ldotp1559 \ \ (2\ldotp1561 \ \text{exact value}) \end{aligned}$$ computed with the Chebyshev method ($\Delta t=10^{-4}$, $N_{\theta}=512$). We define the absolute error (for this experiment) as $$\text{Absolute Error}=\left|\text{BR}^{\textbf{N}}_F(0)-BR_s\right|,$$ where $BR_s$ denotes the stationary value of the Buying Frontier. We study the spatial ($\Delta t=10^{-3}$ and several values of $N_{\theta}$) and temporal ($N_{\theta}=4960$ for the CD method, $N_{\theta}=512$ for the Chebyshev method, and several values of $N_t$) error convergence.
In Figure \[Ch3v1estacconv\] we plot, in logarithmic scale, the value of $N_\theta$ (left) versus the absolute value of the error, and the value of $N_t$ (right) versus the absolute value of the error, for both methods. ![\[Ch3v1estacconv\] Spatial (left) and Temporal (right) error convergence, in logarithmic scale, of the Stationary State of the Buying Frontier for the CD (blue) and Chebyshev (red) methods.](Ch3v1estacconv-eps-converted-to.pdf){width="13.5cm" height="6cm"} In this experiment, the temporal error dominates the spatial error in the Chebyshev method. In the case of the CD method, the error depends on both the spatial and temporal discretizations. In the left-side picture, we can see that the Chebyshev method reaches the error level set by the time discretization with the smallest number of nodes. Therefore, if high precision is required, Chebyshev will perform better than the Central Differences method. The error behaviour of the Selling Frontier is similar to that of the Buying Frontier.

Performance Analysis {#Ch3numresultsperformance}
--------------------

In this section we compare the relative performance of the pseudospectral and finite difference methods. First of all, we fix several time and spatial discretization parameters: (i) $\Delta t\in[3\ldotp9\cdotp10^{-4}, \ 0\ldotp02]$ (ii) $N_{\theta}\in[141, \ 1024]$ (Chebyshev) (iii) $N_{\theta}\in[300, \ 6000]$ (Central Differences) and solve the problem with all the combinations of the different discretizations for both methods. The lower and upper bounds of $N_{\theta}$ in the Central Differences method could be taken smaller or bigger. The criterion we have employed is that the numerical error varies between $10^{-4}$ and $10^{-8}$. The same applies to the upper bound of $N_{\theta}$ in the Chebyshev method.
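The error-versus-cost clouds and their lower envelopes can be produced with a loop of the following shape; `solve` is a hypothetical stand-in for either the CD or the Chebyshev solver, returning the values $v^{\textbf{N}}(0,t_l)$:

```python
import time
import numpy as np

def benchmark(solve, v_exact, dt_grid, n_theta_grid):
    """Record (wall-clock time, RMSE) for every combination of the
    time and spatial discretizations; `solve` is a placeholder for
    the CD or the Chebyshev solver."""
    results = []
    for dt in dt_grid:
        for n_theta in n_theta_grid:
            start = time.perf_counter()
            v_num = solve(n_theta, dt)
            cost = time.perf_counter() - start
            err = np.sqrt(np.mean((np.asarray(v_num) - v_exact) ** 2))
            results.append((cost, err, n_theta, dt))
    return results

def lower_envelope(results):
    """Points not dominated by any cheaper-and-more-accurate point:
    the lower enveloping curve of the (cost, error) cloud."""
    envelope, best = [], np.inf
    for cost, err, *_ in sorted(results):       # by increasing cost
        if err < best:
            envelope.append((cost, err))
            best = err
    return envelope
```

Plotting the envelopes of both methods on log-log axes gives the comparison discussed next.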
We point out that, during the implementation of the method, we observed that if $N_{\theta}$ is not large enough, the location of the frontiers may oscillate (due to the Gibbs effect or to the fact that the polynomials are not accurate enough), complicating the location of $BR^{\textbf{N}}_F$ and $SR^{\textbf{N}}_F$ in (\[frontiers\]). The Chebyshev spectral method is effective once enough resolution has been reached; this behaviour is typical of high order methods, see [@Frutos]. The employment of the adaptive interval $I(t_l)$ and a sufficient number of interpolation nodes avoids the oscillations and allows us to obtain a unique numerical approximation of $BR^{\textbf{N}}_F$ and $SR^{\textbf{N}}_F$ in (\[frontiers\]). The oscillations may appear if the following (empirical) bounds are violated: $$\label{Ch3empirconstr} \Delta t > 0.1, \quad \Delta t < \frac{C}{N_{\theta}^{C_1}},$$ where $C_1 \geq 1$, and numerical experiments suggest that $C_1$ might be a growing function of $N_{\theta}$. The lowest value of $N_{\theta}$ in the Chebyshev method was chosen so that no oscillations appear. If a smaller number of interpolation nodes is chosen, the solution oscillates and the error worsens. We plot the value of $\text{RMSE}_{\{N_{\theta},N_t\}}\left(v^{\textbf{N}}(0,t)\right)$ (\[Ch3errrorcuadmed\]) versus the computational time employed in computing $v^{\textbf{N}}$, for each combination of spatial and temporal meshes, in logarithmic scale. ![\[Ch3v1performanceerror\] Performance comparison of the Error at $v^{\textbf{N}}(0,t)$. In logarithmic scale, we plot (left) the value of RMSE versus the total computational costs of the CD (blue) and Chebyshev (red) methods, and (right) their respective lower enveloping curves.](Ch3v1performanceerror-eps-converted-to.pdf){width="13.5cm" height="6cm"} The left-side picture of Figure \[Ch3v1performanceerror\] represents the cloud of results for the different discretizations of each method.
The right-side picture, which is easier to read, represents the lower convex enveloping curve. From it, we can obtain the approximate behaviour of the error versus the computational time required to reach that precision. We fix the error tolerance that we require for our problem and find which method, and which spatial and time discretization, reaches it first. As we can see, the CD method (blue in Figure \[Ch3v1performanceerror\]) performs better if we do not require high precision. If higher precision is required, Chebyshev (red in Figure \[Ch3v1performanceerror\]) performs better than CD. A similar behaviour can be observed if we compare the errors in the rest of the cases where we have explicit formulas.

Conclusions {#Ch3Conclus}
===========

The homothetic property of the Power Utility function has been used to restate the investment problem in polar coordinates. This has allowed us to give an equivalent formulation of the problem in a bounded spatial domain. Although some of the numerical difficulties that appear with the parabolic double obstacle problem are avoided, other problems may appear if we employ spectral methods. The Gibbs effect, which comes from the fact that the objective function is continuous but not differentiable at maturity, can complicate the location of the frontiers, but this issue can be circumvented by the employment of a time-adapted spatial mesh (Subsection \[Ch3MACCMAM\]). Simpler methods, such as Central Differences, are not affected by the Gibbs effect and are easier to implement. Nevertheless, they require more computational work if high precision is needed. Further work may include extending the model to include a consumption term, or designing spectral methods for optimal investment problems with other Utility functions, like the Exponential Utility. Furthermore, through the Indifference Pricing technique (see [@Carmona] and [@Davis2]), these kinds of models can be applied to option valuation.
[99]{}

Achdou Y., Pironneau O., [*Finite Element Method for Option Pricing*]{}, Université Pierre et Marie Curie, 2007.

Arregui I., Vázquez C., [*Numerical solution of an optimal investment problem with proportional transaction costs*]{}, Journal of Computational and Applied Mathematics, 236 (2012), 2923-2937.

Ben-Ameur H., de Frutos J., Fakhfakh T., Diaby V., [*Upper and Lower Bounds for Convex Value Functions of Derivative Contracts*]{}, Economic Modelling, 34 (2013), 69-75.

Breton M., de Frutos J., [*Option Pricing under GARCH Processes by PDE Methods*]{}, Operations Research, 58 (2010), 1148-1157.

Breton M., de Frutos J., [*Approximation of Dynamic Programs*]{}, in: Duan J.C., Gentle J.E., Härdle W. (eds), Handbook of Computational Finance, Springer, 2012, 633-649.

Canuto C., Hussaini M.Y., Quarteroni A., Zang T.A., [*Spectral Methods. Fundamentals in Single Domains*]{}, Springer, Berlin, 2006.

Carmona R., [*Indifference Pricing*]{}, Princeton University Press, Princeton, 2009.

Chiarella C., El-Hassan N., Kucera A., [*Evaluation of American option prices in a path integral framework using Fourier-Hermite series expansion*]{}, Journal of Economic Dynamics and Control, 23 (1999), 1387-1424.

Cvitanić J., Karatzas I., [*Hedging and Portfolio Optimization under Transaction Costs: A Martingale Approach*]{}, Mathematical Finance, 6 (1996), 113-165.

Davis M.H.A., Norman A.R., [*Portfolio selection with transaction costs*]{}, Mathematics of Operations Research, 15 (1990), 676-713.

Davis M.H.A., Panas V.G., Zariphopoulou T., [*European Option Pricing with transaction costs*]{}, SIAM Journal on Control and Optimization, 31 (1993), 470-493.

Dai M., Yi F., [*Finite-Horizon Optimal Investment with Transaction Costs: A Parabolic Double Obstacle Problem*]{}, Journal of Differential Equations, 246 (2009), 1445-1469.

Duan J.C., Gentle J.E., Härdle W. (eds), [*Handbook of Computational Finance*]{}, Springer, 2012.
Duan J.C., Simonato J.G., [*Empirical Martingale Simulation*]{}, Management Science, 44 (1998), 1218-1233.

de Frutos J., [*A Spectral Method for Bonds*]{}, Computers and Operations Research, 35 (2008), 64-75.

Gatón V., [*Cuatro ensayos sobre valoración de derivados y estrategias de inversión*]{}, Ph.D. thesis, University of Valladolid, Valladolid, 2016.

Lyuu Y., Wu C., [*On Accurate and Provably Efficient GARCH Option Pricing Algorithms*]{}, Quantitative Finance, 2 (2005), 181-198.

Magill M.J.P., Constantinides G.M., [*Portfolio selection with transaction costs*]{}, Journal of Economic Theory, 13 (1976), 245-263.

Merton R.C., [*Optimal consumption and portfolio rules in a continuous time model*]{}, Journal of Economic Theory, 3 (1971), 373-413.

Zhang B., Oosterlee C.W., [*Pricing of early-exercise Asian options under Lévy processes based on Fourier cosine expansions*]{}, Applied Numerical Mathematics, 78 (2014), 14-30.

Ritchken P., Trevor R., [*Pricing Options under Generalized GARCH and Stochastic Volatility Processes*]{}, The Journal of Finance, 54 (1999), 377-402.

Shreve S.E., Soner H.M., [*Optimal investment and consumption with transaction costs*]{}, Annals of Applied Probability, 4 (1994), 609-692.

Stentoft L., [*Pricing American Options when the Underlying Asset follows GARCH Processes*]{}, Journal of Empirical Finance, 12 (2004), 576-611.

[^1]: Instituto de Matemáticas (IMUVA), Universidad de Valladolid, Paseo de Belén 7, Valladolid, Spain. e-mail: frutos@mac.uva.es

[^2]: Instituto de Matemáticas (IMUVA), Universidad de Valladolid, Paseo de Belén 7, Valladolid, Spain. e-mail: vgaton@mac.uva.es

[^3]: Research supported by Spanish MINECO under grants MTM2013-42538-P and MTM2016-78995-P. The first author acknowledges the support of European Cooperation in Science and Technology through COST Action IS1104.
---
author:
- |
  \
  Dept. of Physics and Astronomy,\
  York University, Toronto, Ontario, M3J 1P3, Canada\
  E-mail:
- |
  N. Garron\
  Centre for Mathematical Sciences,\
  School of Computing, Electronics and Mathematics,\
  Plymouth University, Plymouth, PL4 8AA, United Kingdom\
  E-mail:
- |
  A. T. Lytle\
  SUPA, School of Physics and Astronomy,\
  University of Glasgow, Glasgow, G12 8QQ, United Kingdom\
  E-mail:
- The RBC and UKQCD collaborations
bibliography:
- 'KKbar2015.bib'
title: Neutral Kaon mixing beyond the Standard Model
---

Introduction
============

Neutral kaon mixing provides a description of indirect CP violation in the Standard Model (SM), which was discovered in the decay $K_L\rightarrow \pi\pi$ in 1964 [@PhysRevLett.13.138]. In the SM this mixing is mediated by the W-boson, and one of the two leading order contributions is given by the box diagram on the left of Diagram \[pic\]. In lattice simulations we cannot directly measure the contribution of this diagram, but the operator product expansion (OPE) allows us to separate the low and high energy scales and compute the low energy, non-perturbative matrix element from the effective vertex shown on the right. In the SM there is only one effective vertex; if we generalise to different mediating particles we can have a larger basis of operators, as more Dirac-color structures become possible. Measuring these new operators can give a model-independent insight into how beyond the Standard Model (BSM) theories could interact with QCD and help constrain the scale of new physics [@Bona:2007vi].
  Collaboration               $B_K$     $B_2$     $B_3$     $B_4$     $B_5$
  --------------------------- --------- --------- --------- --------- ---------
  RBC-UKQCD [@Boyle:2012qb]   0.53(2)   0.43(5)   0.75(9)   0.69(7)   0.47(6)
  ETM [@Bertone:2012cu]       0.51(2)   0.47(2)   0.78(4)   0.75(3)   0.60(3)
  ETM [@Carrasco:2015pra]     0.51(2)   0.46(3)   0.79(5)   0.78(5)   0.49(4)
  SWME [@Jang:2014aea]        0.52(2)   0.53(2)   0.77(6)   0.98(6)   0.75(8)

  : Previous collaboration results for the bag parameters in $\overline{\text{MS}}$ renormalised at $\mu=3\,\text{GeV}$; statistical and systematic errors have been added in quadrature.[]{data-label="tab:prev_evals"}

Recently, several groups have measured a dimensionless prescription of these BSM operators (their bag parameters) in modern, dynamical fermion simulations. Their results are shown in Tab.\[tab:prev\_evals\]. There is some tension between these measurements; our previous work [@Boyle:2012qb] used a single lattice spacing and $n_f=2+1$ flavours, and renormalised the operators non-perturbatively using the intermediate, exceptional, RI-MOM scheme. The works [@Bertone:2012cu; @Carrasco:2015pra] also used the RI-MOM scheme, have $n_f=2$ and $n_f=2+1+1$ flavours respectively, and performed an $a^2\rightarrow0$ extrapolation. The authors of [@Jang:2014aea] have $n_f=2+1$ flavours, renormalised their operators perturbatively, and also performed an $a^2\rightarrow0$ extrapolation. We use the domain wall fermion (DWF) prescription with $n_f=2+1$ dynamical fermion flavours. This prescription has good chiral symmetry properties and leading $\mathcal{O}(a^2)$ scaling. We extend our previous work [@Boyle:2012qb] by adding a second lattice spacing to quantify discretisation effects and by using both exceptional and non-exceptional kinematics for our intermediate renormalisation schemes.
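The size of this tension can be quantified by assuming uncorrelated Gaussian errors, e.g. for the $B_4$ and $B_5$ entries of the table:

```python
import math

def tension_sigma(x1, e1, x2, e2):
    """Discrepancy between two measurements in units of the combined
    standard error, assuming uncorrelated Gaussian uncertainties."""
    return abs(x1 - x2) / math.hypot(e1, e2)

# B_4: RBC-UKQCD 0.69(7) vs SWME 0.98(6); B_5: RBC-UKQCD 0.47(6) vs SWME 0.75(8)
t_B4 = tension_sigma(0.69, 0.07, 0.98, 0.06)   # about 3.1 sigma
t_B5 = tension_sigma(0.47, 0.06, 0.75, 0.08)   # 2.8 sigma
```

Note that this simple estimate ignores correlated systematics shared between analyses, so it should be read as indicative only.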
Background
==========

Operators
---------

The $\Delta S=2$ operators we consider are defined in the SUSY basis [@Gabbiani:1996hi] ($a$ and $b$ are color indices and Dirac indices have been suppressed), $$\label{eq:operators} \begin{gathered} O_1 = {\bigl[ \bar{s}_a\gamma_\mu(1-\gamma_5)d_a \bigr] \, \bigl[ \bar{s}_b\gamma_\mu(1-\gamma_5)d_b \bigr]}, \\ O_{2} = {\bigl[ \bar{s}_a(1-\gamma_5)d_a \bigr] \, \bigl[ \bar{s}_b(1-\gamma_5)d_b \bigr]},\quad O_{3} = {\bigl[ \bar{s}_a(1-\gamma_5)d_b \bigr] \, \bigl[ \bar{s}_b(1-\gamma_5)d_a \bigr]}, \\ O_{4} = {\bigl[ \bar{s}_a(1+\gamma_5)d_a \bigr] \, \bigl[ \bar{s}_b(1+\gamma_5)d_b \bigr]},\quad O_{5} = {\bigl[ \bar{s}_a(1+\gamma_5)d_b \bigr] \, \bigl[ \bar{s}_b(1+\gamma_5)d_a \bigr]}. \end{gathered}$$

NPR
---

Our operators need to be renormalised; we perform this non-perturbatively in a scheme accessible to the lattice, using both exceptional (RI-MOM) and non-exceptional (RI-SMOM) kinematics [@Martinelli:1994ty; @Aoki:2010pe]. The operators mix under renormalisation, and for DWF the mixing pattern is that of the continuum, i.e. $O_1$ belongs to a $(27,1)$ irreducible representation of $\text{SU}(3)_L\times \text{SU}(3)_R$, whereas $O_2$ and $O_3$ transform like $(6,\bar{6})$, and $O_4$ and $O_5$ like $(8,8)$ [@Garron:2012ex]. Schematically, we compute a renormalisation matrix that mixes the operators, with the renormalisation condition that at the scale $\mu$ our Landau-gauge-fixed vertex function matches its tree-level perturbation theory result, $$\begin{gathered} O_i^{\overline{\text{MS}}}(\mu) = C_{ij}^{\overline{\text{MS}}\leftarrow \text{MOM}}(\mu)\left(\lim_{a^2\rightarrow0}\frac{Z_{jk}^{\text{MOM}}(\mu)}{Z_q^2}O_k(a)\right),\\ Z^{\text{MOM}}(\mu) P(\Lambda(p^2))|_{p^2=\mu^2} = \text{tree}. \end{gathered}$$ As discussed in [@Lytle:2014tsa], the RI-MOM scheme suffers from pion pole contamination [@Giusti:2000jr].
We must subtract these un-physical poles from our renormalisation matrix to obtain physical results; this is a non-trivial procedure and can introduce systematics that are difficult to quantify. The RI-SMOM scheme does not suffer from this problem.

Measurement types {#sec::meases}
-----------------

We intend to measure various dimensionless quantities based upon the effective operators of Eq.\[eq:operators\]. In [@Donini:1999nn] and [@Babich:2006bh] ratios were suggested that directly give the BSM to SM contribution at the physical point, $$R_i(\mu) = \left[ \frac{f_K^2}{m_K^2}\right]_{\text{Expt.}}\left[ \frac{m_K^2}{f_K^2} \frac{\langle \bar{K}^0| O_i(\mu) | K^0 \rangle }{ \langle \bar{K}^0 | O_1(\mu) | K^0 \rangle }\right]_{\text{Latt.}}.$$ Groups also measure the bag parameters, which give the ratio of the matrix element to its vacuum saturation approximation (VSA), $$B_K(\mu) = \frac{\langle \bar{K}^0 | O_1(\mu) | K^0 \rangle}{\frac{8}{3}m_K^2 f_K^2},\quad B_i(\mu) = \frac{\langle \bar{K}^0 | O_i(\mu) | K^0 \rangle}{N_i m_K^2 f_K^2 \left( \frac{m_K}{m_u(\mu)+m_s(\mu)}\right)^2},$$ with normalisation factors $N_i=\left(-\frac{5}{3},\frac{1}{3},2,\frac{2}{3}\right)$ for $i=2,\ldots,5$. In [@Becirevic:2004qd] and [@Bae:2013tca] combinations of bag parameters were suggested such that the leading chiral logarithms of these quantities in chiral perturbation theory cancel; we call these the golden combinations, $$\begin{gathered} G_{23}(\mu)=\frac{3B_2(\mu)}{5B_2(\mu)-2B_3(\mu)},\quad G_{45}(\mu)=\frac{B_4(\mu)}{B_5(\mu)},\\ G_{24}(\mu)=B_2(\mu)B_4(\mu),\quad G_{21}(\mu)=\frac{B_2(\mu)}{B_{K}(\mu)}. \end{gathered}$$ The intention of this work is not only to assess the intermediate-scheme dependence but also to measure the various chiral and discretisation effects of these dimensionless quantities.
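The golden combinations are simple algebra on the bag parameters; a minimal sketch (the inputs below are round placeholder values, not measured results):

```python
def golden_combinations(B_K, B2, B3, B4, B5):
    """Evaluate the golden combinations defined above."""
    return {
        "G23": 3.0 * B2 / (5.0 * B2 - 2.0 * B3),
        "G45": B4 / B5,
        "G24": B2 * B4,
        "G21": B2 / B_K,
    }

# Placeholder inputs for illustration only
g = golden_combinations(B_K=0.5, B2=0.5, B3=0.75, B4=0.9, B5=0.6)
```

In a real analysis the same combinations would of course be formed configuration by configuration (or on bootstrap/jackknife samples) so that correlations propagate into the errors.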
Methodology
===========

  Volume   $a^{-1}\:[\text{GeV}]$   $am^{\text{sea}}_{ud} \, (= am^{\text{val}}_{ud})$   $m_\pi\:[\text{MeV}]$
  -------- ------------------------ ---------------------------------------------------- ------------------------
           1.785(5)                 0.005, 0.01, 0.02                                    340, 430, 560
           $am^{\text{sea}}_{s}$    $am^{\text{val}}_{s}$                                $am^{\text{phys}}_{s}$
           0.04                     0.04, 0.035, 0.03                                    0.03224(18)
           $a^{-1}\:[\text{GeV}]$   $am^{\text{sea}}_{ud} \, (= am^{\text{val}}_{ud})$   $m_\pi\:[\text{MeV}]$
           2.383(9)                 0.004, 0.006, 0.008                                  300, 360, 410
           $am^{\text{sea}}_{s}$    $am^{\text{val}}_{s}$                                $am^{\text{phys}}_{s}$
           0.03                     0.03, 0.025                                          0.02477(18)

  : Summary of our lattice ensembles; more details can be found in [@Aoki:2010dy]. For the coarse lattice ($a^{-1}=1.785\text{ GeV}$) we use $155,152$ and $146$ measurements for the $am=0.005,0.01$ and $0.02$ ensembles respectively, although in the final analysis the $am=0.02$ ensemble is omitted. For the fine lattice we perform $129, 186$ and $208$ measurements for the $am=0.004, 0.006$ and $0.008$ ensembles respectively. The most recent values of $a^{-1}$ and the physical light and strange quark masses can be found in [@Blum:2014tka].[]{data-label="tab:lattparam"}

We use Coulomb gauge fixed wall sources; the coarse ensemble was fixed to this gauge using the time-slice by time-slice FASD algorithm of [@Hudspith:2014oja], while the fine ensemble data were generated as part of the analysis of [@Aoki:2010pe]. For the non-perturbative renormalisation we use momentum sources [@Gockeler:1998ye] and partially twisted boundary conditions [@Martinelli:1994ty]. The ensembles considered in this work are at heavier pion masses than the physical point; we use unitary light valence quarks to extrapolate to the physical pion mass and partially quenched strange quarks to interpolate to the physical strange mass.
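The chiral extrapolation step can be sketched with a simple linear ansatz in $m_\pi^2$; this is illustrative only and is not necessarily the fit form used in the actual analysis:

```python
import numpy as np

def extrapolate_chiral(m_pi, B, m_pi_phys=135.0):
    """Fit B = c0 + c1 * m_pi^2 and evaluate at the physical pion mass.
    Masses in MeV; a deliberately simple stand-in for the chiral fit."""
    c1, c0 = np.polyfit(np.asarray(m_pi, dtype=float) ** 2, B, 1)
    return c0 + c1 * m_pi_phys ** 2

# Coarse-ensemble pion masses from the table above; the B values are made up
B_phys = extrapolate_chiral([340.0, 430.0, 560.0], [0.52, 0.54, 0.58])
```

A combined chiral-continuum fit additionally includes an $a^2$ term shared across the two lattice spacings.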
Results
=======

Fig.\[fig:chiral\_continuum\] illustrates the chiral and continuum behaviour of the various quantities from Sec.\[sec::meases\], renormalised at $3\text{ GeV}$ in the RI-SMOM scheme. The coarse ensemble’s data and chiral fit are shown in red, the fine ensemble’s in black, and the combined chiral-continuum result in blue, with the physical point marked by a filled blue symbol. For the bag parameters we see reasonably linear behaviour in $m_\pi^2$ as we approach the chiral limit (Figs.\[fig::cc\_b145\] and \[fig::cc\_b23\]) and large discretisation effects upon taking the $a^2\rightarrow0$ limit. For the ratios (Fig.\[fig::cc\_r\]) we see behaviour consistent with our previous work [@Boyle:2012qb], i.e. large ratios of BSM to SM matrix elements. We note that upon taking the $a^2\rightarrow0$ limit these ratios are larger than those of our fine ensemble. As expected, the approach to the chiral limit for the golden combinations is particularly flat (Figs.\[fig::cc\_G1\] and \[fig::cc\_G3\]), but we do measure large discretisation effects, particularly for the quantity $G_{23}$. More pronounced discretisation effects are measured for these quantities in the RI-MOM scheme, but these are not shown in Fig.\[fig:chiral\_continuum\].

  Collaboration                  $B_K$     $B_2$     $B_3$     $B_4$     $B_5$
  ------------------------------ --------- --------- --------- --------- ---------
  RBC-UKQCD [@Boyle:2012qb]      0.53(2)   0.43(5)   0.75(9)   0.69(7)   0.47(6)
  ETM [@Bertone:2012cu]          0.51(2)   0.47(2)   0.78(4)   0.75(3)   0.60(3)
  ETM [@Carrasco:2015pra]        0.51(2)   0.46(3)   0.79(5)   0.78(5)   0.49(4)
  SWME [@Jang:2014aea]           0.52(2)   0.53(2)   0.77(6)   0.98(6)   0.75(8)
  RBC-UKQCD ($\text{RI-MOM}$)    0.53(1)   0.42(1)   0.66(5)   0.75(3)   0.56(5)
  RBC-UKQCD ($\text{RI-SMOM}$)   0.53(1)   0.49(2)   0.74(7)   0.92(2)   0.71(4)

  : **Preliminary** results for the bag parameters matched to $\overline{\text{MS}}$ at $\mu=3\,\text{GeV}$ via the intermediate $\text{RI-MOM}$ and $\text{RI-SMOM}$ schemes.
Statistical and systematic errors have been added in quadrature; we have not included the perturbative matching systematic in our error. Our collaboration’s most up-to-date and accurate evaluation of $B_K$ should be taken from [@Blum:2014tka].[]{data-label="tab:comparison"} In Tab.\[tab:comparison\] we compare the results of this work (the two rows at the bottom of the table) with the results of other collaborations from Tab.\[tab:prev\_evals\]. For the RI-MOM intermediate scheme we are consistent with our previous evaluation at a single lattice spacing, and in decent agreement with the most recent ETM evaluation, apart from some slight tension in $B_3$. For the RI-SMOM intermediate scheme[^1] we are in considerably better agreement with the most recent results of the SWME collaboration.

Conclusions
===========

We have attempted to address a tension between different evaluations of the various $\Delta S=2$ matrix elements required for $K^0 -\bar{K}^0$ mixing in and beyond the Standard Model. With the addition of a second lattice spacing we were able to evaluate our discretisation errors, and with the use of non-exceptional kinematics in our renormalisation procedure we appear to have an answer as to where this tension originates. It seems that the RI-MOM non-perturbative renormalisation procedure, and the pion pole subtraction this scheme requires, induces a systematic that was previously not well understood.

Acknowledgements
================

R. J. H. is supported by NSERC of Canada. N. G. is funded by the Leverhulme Trust, research grant RPG-2014-11. A. T. L. is supported by the STFC. The coarse ensemble propagator inversions were performed on the STFC funded DiRAC BG/Q system in the Advanced Computing Facility at the University of Edinburgh.

[^1]: We would like to thank C. Lehner for computing the conversion factors in $\overline{\text{MS}}$ for the $(6,\bar{6})$ operators.
---
abstract: |
  Type Ia supernovae (SNe Ia) can be calibrated to be good standard candles at cosmological distances. We propose a supernova pencil beam survey that could yield from dozens to hundreds of SNe Ia in redshift bins of 0.1 up to $z=1.5$, which would complement space-based SN searches, and enable the proper consideration of the systematic uncertainties of SNe Ia as standard candles, in particular, luminosity evolution and gravitational lensing. We simulate SN Ia luminosities by adding weak lensing noise (using empirical fitting formulae) and scatter in SN Ia absolute magnitudes to standard candles placed at random redshifts. We show that flux-averaging is powerful in reducing the combined noise due to gravitational lensing and scatter in SN Ia absolute magnitudes. The SN number count is not sensitive to the matter distribution in the universe; it can be used to test models of cosmology or to measure the SN rate. The SN pencil beam survey can yield a wealth of data which should enable accurate determination of the cosmological parameters and the SN rate, and provide valuable information on the formation and evolution of galaxies. The SN pencil beam survey can be accomplished on a dedicated 4 meter telescope with a square degree field of view. This telescope can be used to conduct other important observational projects compatible with the SN pencil beam survey, such as studies of QSOs and Kuiper belt objects and, in particular, weak lensing measurements of field galaxies and the search for gamma-ray burst afterglows.
author:
- 'Yun Wang[^1]'
title: Supernova pencil beam survey
---

astro-ph/9806185\
September 10, 1999\
to appear in ApJ, 531, \#2 (March 10, 2000)

Introduction {#sec:introduction}
============

Toward the end of the millennium, cosmology has matured into a phenomenological science. Observational data now dominate aesthetics in the evaluation of cosmological models.
Of fundamental importance is the determination of cosmological parameters, in particular, the Hubble constant $H_0$, the matter density fraction $\Omega_m$, and the density fraction contributed by the cosmological constant $\Omega_{\Lambda}$. Observation of distant Type Ia supernovae (SNe Ia) has become an increasingly powerful means of measuring cosmological parameters ([@Perl97], 1998; [@Riess98]; [@Schmidt98]), because SNe Ia can be calibrated to be good standard candles at cosmological distances. ([@Riess95]) A type Ia SN is the thermonuclear explosion of a carbon-oxygen white dwarf in a binary when the rate of the mass transfer from the companion star is high. The SN explosion blows the white dwarf completely apart. The radioactive decay of the isotopes $^{56}$Ni and $^{56}$Co is responsible for much of the light emitted. The SN lightcurve reaches a maximum about 15 days after the explosion and then declines slowly over years. A SN can outshine the galaxy in which it lies. Two independent groups ([@Riess98; @Perl99]) have made systematic searches for SNe Ia for the purpose of measuring cosmological parameters. Their preliminary results seem to indicate a low matter density universe, possibly with a sizable cosmological constant. Even though the uncertainty in these results is large, they clearly demonstrate that the observation of SNe Ia can potentially become a reliable probe of cosmology. However, there are important systematic uncertainties of SNe Ia as standard candles, in particular, luminosity evolution and gravitational lensing. To constrain the evolution of SN Ia peak absolute luminosities, we need a large number of SNe Ia at significantly different redshifts (low $z$ and $z>1$), which is not available at present. Both groups have assumed a smooth universe in their data analysis, although they include lensing in their error budgets.
Since we live in a clumpy universe, the effect of gravitational lensing must be taken into account adequately for the proper interpretation of SN data. At present, the small number of observed high $z$ SNe Ia prevents adequate modeling of gravitational lensing effects. In this paper, we propose a pencil beam survey of SNe Ia that could yield from dozens to hundreds of SNe Ia in redshift bins of 0.1 up to $z=1.5$ (see §3), which would allow the proper modeling of gravitational lensing, as well as a quantitative understanding of luminosity evolution. Such a survey would yield a wealth of data which can be used to make accurate measurements of cosmological parameters and the SN rate, and provide powerful constraints on various aspects of the cosmological model. In §2, we consider the weak lensing of SNe Ia. We simulate SN Ia luminosities by adding weak lensing noise and scatter in SN Ia absolute magnitudes to standard candles placed at random redshifts. We show how flux-averaging reduces the combined noise of gravitational lensing and scatter in SN Ia absolute magnitudes. In §3, we show that the number count of SNe from a pencil beam survey is not sensitive to matter distribution in the universe; it can be used as a test of models of cosmology and SN progenitors, or to measure the SN rate accurately. In §4, we discuss the observational feasibility of the SN pencil beam survey. §5 contains conclusions. Weak lensing of supernovae ========================== In a SN Ia Hubble diagram, one must use distance-redshift relations to make theoretical predictions. Unlike angular separations and flux densities, distances are not directly measurable, but they are indispensable theoretical intermediaries. The distance-redshift relations depend on the distribution of matter in the universe.
In a smooth Friedmann-Robertson-Walker (FRW) universe, the metric is given by $ds^2=dt^2-a^2(t)[dr^2/(1-kr^2)+r^2 (d\theta^2 +\sin^2\theta \,d\phi^2)]$, where $a(t)$ is the cosmic scale factor, and $k$ is the global curvature parameter ($\Omega_k =1-\Omega_m-\Omega_{\Lambda}=-k/H_0^2$). The comoving distance $r$ is given by $$\label{eq:r(z)} r(z)= \frac{cH_0^{-1}}{\sqrt{|\Omega_k|}}\, {\rm sinn}\left\{ |\Omega_k|^{1/2} \int_0^z dz' \left[ \Omega_m (1+z')^3 + \Omega_k (1+z')^2 + \Omega_{\Lambda} \right]^{-1/2} \right\},$$ where “sinn” is defined as sinh if $\Omega_k>0$, and sin if $\Omega_k<0$. If $\Omega_k=0$, the sinn and $\Omega_k$’s disappear from Eq.(\[eq:r(z)\]), leaving only the integral. The angular diameter distance is given by $d_A(z)=r(z)/(1+z)$, and the luminosity distance is given by $d_L(z)=(1+z)^2 d_A(z)$. However, our universe is clumpy rather than smooth. According to the focusing theorem in gravitational lens theory, if there is any shear or matter along a beam connecting a source to an observer, the angular diameter distance of the source from the observer is [*smaller*]{} than that which would occur if the source were seen through an empty, shear-free cone, provided the affine parameter distance (defined such that its element equals the proper distance element at the observer) is the same and the beam has not gone through a caustic. An increase of shear or matter density along the beam decreases the angular diameter distance and consequently increases the observable flux for given $z$. (Schneider, Ehlers, & Falco 1992) The observation of SNe Ia at $z >1$ is important for the determination of cosmological parameters, but the dispersion in SN Ia luminosities due to gravitational lensing can become comparable to the intrinsic dispersion of SNe Ia absolute magnitudes because the optical depth for gravitational lensing increases with redshift.
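Eq.(\[eq:r(z)\]) is easily evaluated numerically. The following is a minimal sketch (the function names and the trapezoidal quadrature are our own choices; distances are in $h^{-1}$Mpc, with $c/H_0=2997.9\,h^{-1}$Mpc):

```python
import math

C_H0 = 2997.9  # Hubble distance c/H0 in h^-1 Mpc

def comoving_distance(z, om, ol, n=2000):
    """Comoving distance r(z) of Eq. (eq:r(z)), in h^-1 Mpc (trapezoid rule)."""
    ok = 1.0 - om - ol
    f = lambda zz: 1.0 / math.sqrt(om * (1 + zz)**3 + ok * (1 + zz)**2 + ol)
    h = z / n
    integral = 0.5 * (f(0.0) + f(z)) * h + sum(f(i * h) for i in range(1, n)) * h
    if abs(ok) < 1e-8:          # flat: sinn and Omega_k drop out
        chi = integral
    elif ok > 0:                # open: sinn = sinh
        chi = math.sinh(math.sqrt(ok) * integral) / math.sqrt(ok)
    else:                       # closed: sinn = sin
        chi = math.sin(math.sqrt(-ok) * integral) / math.sqrt(-ok)
    return C_H0 * chi

def luminosity_distance(z, om, ol):
    # d_L = (1+z)^2 d_A = (1+z) r
    return (1 + z) * comoving_distance(z, om, ol)
```

For $\Omega_m=1$, $\Omega_\Lambda=0$ this reproduces the analytic result $r(z)=2(c/H_0)\,[1-(1+z)^{-1/2}]$, which is a convenient check of the quadrature.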
Direction dependent smoothness parameter ---------------------------------------- If only a fraction $\tilde{\alpha}$ (known as the smoothness parameter) of the matter density is smoothly distributed, the largest possible distance (for given redshift) for light bundles which have not passed through a caustic is given by the appropriate solution to the following equation: $$\label{eq:DR} g(z)\,\frac{d}{dz}\left[ g(z)\,\frac{dD_A}{dz} \right] + \frac{3}{2}\,\tilde{\alpha}\,\Omega_m (1+z)^5 D_A=0,$$ where $g(z) \equiv (1+z)^3 \sqrt{ 1+ \Omega_m z+ \Omega_{\Lambda} [(1+z)^{-2} -1] }$. ([@Kantow98]) The $\Omega_{\Lambda}=0$ form of Eq.(\[eq:DR\]) has been known as the Dyer-Roeder equation. ([@DR73; @Sch92]) Fig.1(a) shows magnitude versus redshift for the three cosmological models considered by Riess et al. (1998), SCDM ($\Omega_m=1$, $\Omega_{\Lambda}=0$), OCDM ($\Omega_m=0.2$, $\Omega_{\Lambda}=0$), and $\Lambda$CDM ($\Omega_m=0.2$, $\Omega_{\Lambda}=0.8$). For each cosmological model, the upper curve represents the completely clumpy universe (empty beam, $\tilde{\alpha}=0$), while the lower curve represents the completely smooth universe (filled beam, $\tilde{\alpha}=1$). Fig.1(b) shows the same models relative to smooth OCDM (filled beam, $\tilde{\alpha}=1$); the middle curve for each model now represents a universe with half of the matter smoothly distributed (half-filled beam, $\tilde{\alpha}=0.5$). Clearly, at $z>1$, there is degeneracy of distances in a flat clumpy universe and an open smooth universe, and also in an open clumpy universe and a flat smooth universe with a sizable cosmological constant, as has been noted by a number of previous authors. ([@Kantow98; @Linder98; @Holz98b]) We can generalize the angular diameter distance $D_A(z)$ by allowing the smoothness parameter $\tilde{\alpha}$ to be [*direction dependent*]{}, i.e., a property of the [*beam*]{} connecting the observer and the standard candle. The smoothness parameter $\tilde{\alpha}$ essentially represents the amount of matter that causes weak lensing of a given source.
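Eq.(\[eq:DR\]) can be integrated numerically with the initial conditions $D_A(0)=0$, $dD_A/dz|_0=c/H_0$. A minimal sketch (our own RK4 integrator, in units of $c/H_0$; the finite-difference derivative of $g$ is an implementation convenience):

```python
import math

def g(z, om, ol):
    # g(z) = (1+z)^3 sqrt(1 + Om z + OL[(1+z)^-2 - 1]) as defined after Eq. (eq:DR)
    return (1 + z)**3 * math.sqrt(1 + om * z + ol * ((1 + z)**-2 - 1))

def D_A(z_max, alpha, om, ol, n=4000):
    """Dimensionless angular diameter distance H0*D_A/c from the generalized
    Dyer-Roeder equation (eq:DR), integrated by classical RK4 from
    D(0)=0, D'(0)=1."""
    def deriv(z, D, Dp):
        gz = g(z, om, ol)
        eps = 1e-6
        gp = (g(z + eps, om, ol) - g(z - eps, om, ol)) / (2 * eps)
        return -(1.5 * alpha * om * (1 + z)**5 * D / gz**2 + gp / gz * Dp)
    h = z_max / n
    z, D, Dp = 0.0, 0.0, 1.0
    for _ in range(n):
        k1d, k1p = Dp, deriv(z, D, Dp)
        k2d = Dp + 0.5 * h * k1p
        k2p = deriv(z + 0.5 * h, D + 0.5 * h * k1d, Dp + 0.5 * h * k1p)
        k3d = Dp + 0.5 * h * k2p
        k3p = deriv(z + 0.5 * h, D + 0.5 * h * k2d, Dp + 0.5 * h * k2p)
        k4d = Dp + h * k3p
        k4p = deriv(z + h, D + h * k3d, Dp + h * k3p)
        D += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        Dp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        z += h
    return D
```

For $\Omega_m=1$, $\Omega_\Lambda=0$ the filled beam ($\tilde{\alpha}=1$) solution reduces to $D_A=2[1-(1+z)^{-1/2}]/(1+z)$ and the empty beam ($\tilde{\alpha}=0$) solution to $D_A=\frac{2}{5}[1-(1+z)^{-5/2}]$, so the integrator can be checked against both; the empty beam distance is larger, as the focusing theorem requires.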
Since matter distribution in our universe is inhomogeneous, we can think of our universe as a mosaic of cones centered on the observer, each with a different value of $\tilde{\alpha}$. This reinterpretation of $\tilde{\alpha}$ implies that we have $\tilde{\alpha}>1$ in regions of the universe in which there are above average amounts of matter which can cause magnification of a source. ([@Wang99a]) In order to derive a unique mapping between the distribution in distances and the distribution in the direction dependent smoothness parameter for given redshift $z$, we [*define*]{} the direction dependent smoothness parameter $\tilde{\alpha}$ to be the solution of Eq.(\[eq:DR\]) for given distance $D_A(z)$. At given redshift $z$, the magnification of a source can be expressed in terms of the apparent brightness of the source $f(\tilde{\alpha}|z)$, or in terms of the angular diameter distance to the source $D_A(\tilde{\alpha}|z)$: $$\label{eq:mu} \mu = \frac{f(\tilde{\alpha}|z)}{f(\tilde{\alpha}=1|z)} = \left[ \frac{D_A(\tilde{\alpha}=1|z)}{D_A(\tilde{\alpha}|z)} \right]^2,$$ where $f (\tilde{\alpha}=1|z)$ and $D_A(\tilde{\alpha}=1|z)$ are the flux of the source and angular diameter distance to the source in a completely smooth universe (filled beam), and $\tilde{\alpha}$ is the direction dependent smoothness parameter. Since distances are not directly measurable, we should interpret Eq.(\[eq:mu\]) as defining a unique mapping between the magnification of a standard candle at redshift $z$ and the direction dependent smoothness parameter $\tilde{\alpha}$ at $z$; $\tilde{\alpha}$ parametrizes the direction dependent matter distribution in a well-defined manner. From the magnification distributions of standard candles at various redshifts, $p(\mu|z)$, with $z=$0.5, 1, 1.5, 2, 2.5, 3, 5, found numerically by Wambsganss et al.
for $\Omega_m=0.4$, $\Omega_{\Lambda}=0.6$ ([@Wamb97; @Wamb99]), Wang (1999a) has obtained simple empirical fitting formulae for the distribution of $\tilde{\alpha}$: $$\label{eq:p(alpha)} p(\tilde{\alpha}|z)=C_{norm}\, \exp\left\{ -\left[ \frac{\tilde{\alpha}-\tilde{\alpha}_{peak}}{w\,\tilde{\alpha}^{\,q}} \right]^2 \right\},$$ where $C_{norm}$, $\tilde{\alpha}_{peak}$, $w$, and $q$ depend on $z$ and are independent of $\tilde{\alpha}$. They are given by \[eq:aq\] C\_[norm]{}(z) &=& 10\^[-2]{} ,\ \_[peak]{}(z) &=& 1.01350 -1.07857 ( ) +2.05019 ()\^2 -2.14520 ()\^3,\ w(z) &=& 0.06375 + 1.75355 ( ) - 4.99383 ()\^2 + 5.95852 ()\^3,\ q(z) &=& 0.75045 +1.85924 ( ) -2.91830 ()\^2 +1.59266 ()\^3. $C_{norm}(z)$ is the normalization constant for given $z$. The parameter $\tilde{\alpha}_{peak}(z)$ indicates the average smoothness of the universe at redshift $z$; it increases with $z$ and approaches $\tilde{\alpha}_{peak}(z)=1$ (filled beam) at $z=5$. The parameter $w(z)$ indicates the width of the distribution in the direction dependent smoothness parameter $\tilde{\alpha}$; it decreases with $z$. The $z$ dependences of $\tilde{\alpha}_{peak}(z)$ and $w(z)$ are as expected because as we look back to earlier times, lines of sight become more filled in with matter, and the universe becomes smoother on the average. The parameter $q(z)$ indicates the deviation of $p(\tilde{\alpha}|z)$ from Gaussianity (which corresponds to $q=0$). Models with different cosmological parameters should lead to somewhat different matter distributions $p(\tilde{\alpha}|z)$. In the context of weak lensing of standard candles, we expect the cosmological parameter dependence to enter primarily through the magnification $\mu$ to direction dependent smoothness parameter $\tilde{\alpha}$ mapping at given $z$ (the same $\tilde{\alpha}$ corresponds to very different $\mu$ in different cosmologies). Flux averaging of SN luminosities --------------------------------- Gravitational lensing noise in the Hubble diagram can be reduced by appropriate flux averaging of SNe Ia in each redshift bin.
Because of flux conservation, the average flux of a sufficient number of SNe Ia at the same $z$ from the same field should be the same as the true flux of the SNe Ia without gravitational lensing if the sample is complete. It is convenient to compare the distance modulus of SNe Ia, $\mu_0$, with the theoretical prediction $$\mu_0^p= 5\,\log_{10}\left( \frac{d_L(z)}{\rm Mpc} \right)+25,$$ where $d_L(z)$ is the luminosity distance. Before flux-averaging, we convert the distance modulus $\mu_0(z_i)$ of SNe Ia into “fluxes”, $f(z_i)=10^{-\mu_0(z_i)/2.5}$. We then obtain “absolute luminosities”, {${\cal L}(z_i)$}, by removing the redshift dependence of the “fluxes”, i.e., $${\cal L}(z_i) = 4\pi\, d_L^2(z_i|H_0,\Omega_m, \Omega_{\Lambda})\,f(z_i),$$ where $(H_0,\Omega_m, \Omega_{\Lambda})$ are the best-fit cosmological parameters derived from the unbinned data set {$f(z_i)$}. We then flux-average over the “absolute luminosities” {${\cal L}_i$} in each redshift bin. The set of best-fit cosmological parameters derived from the binned data is applied to the unbinned data {$f(z_i)$} to obtain a new set of “absolute luminosities” {${\cal L}_i$}, which is then flux-averaged in each redshift bin, and the new binned data is used to derive a new set of best-fit cosmological parameters. This procedure is repeated until convergence is achieved. This iteration should lead to the optimal removal of gravitational lensing noise and the accurate determination of the cosmological parameters. Wang (1999b) has applied this method to analyze the combined data from the two groups ([@Riess98; @Perl99]). To illustrate how flux-averaging can reduce the dispersion in SN Ia luminosities caused by weak lensing, let us simulate the data by drawing $N_{SN}$ random points from the redshift interval $[z_1^0,z_2^0]$, each point represents a standard candle.
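One pass of this procedure can be sketched as follows (a simplified illustration, not the full iterative fit: the fiducial $d_L(z)$ is passed in as a callable, whereas a complete analysis would refit the cosmological parameters from the unbinned {$f(z_i)$} and iterate to convergence; the function name is our own):

```python
import math

def flux_average(mu0, z, d_L, z_bins):
    """One flux-averaging pass: distance moduli -> "fluxes" f = 10^(-mu0/2.5)
    -> "absolute luminosities" L = 4 pi d_L^2 f -> mean luminosity per bin.
    d_L is a callable giving the fiducial luminosity distance at redshift z."""
    f = [10 ** (-m / 2.5) for m in mu0]
    L = [4 * math.pi * d_L(zi) ** 2 * fi for zi, fi in zip(z, f)]
    binned = []
    for zlo, zhi in z_bins:
        Ls = [Li for zi, Li in zip(z, L) if zlo <= zi < zhi]
        binned.append(sum(Ls) / len(Ls) if Ls else None)
    return binned
```

As a consistency check, if the $\mu_0(z_i)$ are generated exactly from the fiducial $d_L(z)$, every binned "absolute luminosity" comes out equal to the same constant, $4\pi\times10^{-10}$, independent of redshift.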
We add weak lensing noise by giving each standard candle a direction dependent smoothness parameter $\tilde{\alpha}$ (corresponding to $\mu= \left[ D_A(\tilde{\alpha}=1)/D_A(\tilde{\alpha}) \right]^2$) drawn at random from the distribution $p(\tilde{\alpha}|z)$ given in §2.1. The scatter in SN Ia absolute magnitudes, $\Delta m_{abm}$, can be written as $$\Delta m_{abm}=\Delta m_{int}+\Delta m_{obs},$$ where $\Delta m_{int}$ is the intrinsic scatter and $\Delta m_{obs}$ is the observational noise. We assume that both intrinsic scatter and observational noise are Gaussian distributed in magnitude, with dispersions $\sigma_{int}$ and $\sigma_{obs}$ respectively. Then the total scatter in SN Ia absolute magnitudes is also Gaussian distributed, i.e., $$p(\Delta m_{abm})= \frac{1}{\sqrt{2\pi}\,\sigma_{abm}}\, \exp\left( -\frac{\Delta m_{abm}^2}{2\sigma_{abm}^2} \right),$$ with $\sigma_{abm}=\sqrt{\sigma_{int}^2+\sigma_{obs}^2}$. We take $\sigma_{abm}=0.20$. The absolute luminosity of each SN Ia extracted from the data is $$\begin{aligned} {\cal L}(z)&=&10^{-\Delta m_{obs}/2.5}\,\mu(\tilde{\alpha}|z)\,{\cal L}_{int}(z),\\ &=& 10^{-\Delta m_{obs}/2.5}\,\mu(\tilde{\alpha}|z) \left\{ 10^{-\Delta m_{int}/2.5}\, {\cal L}(\tilde{\alpha}=1|z)\right\},\\ &=& {\cal L}(\tilde{\alpha}=1|z)\,\mu(\tilde{\alpha}|z)\, 10^{-\Delta m_{abm}/2.5}. \end{aligned}$$ We have used ${\cal L}_{int}(z)=10^{-\Delta m_{int}/2.5}\, {\cal L}(\tilde{\alpha}=1|z)$. For each SN Ia, the total noise is $$\Delta m = -2.5\, \log_{10}\left( \frac{{\cal L}(z)}{{\cal L}(\tilde{\alpha}=1|z)} \right).$$ Let us average the fluxes of all SNe Ia in the redshift bin $[z_1^0,z_2^0]$: $$\overline{\cal L} = \frac{1}{N_{SN}}\sum_{i=1}^{N_{SN}} {\cal L}(z_i).$$ The flux averaged noise is $$(\Delta m )_{avg}= -2.5\,\log_{10}\left( \frac{\overline{\cal L}}{{\cal L}(\tilde{\alpha}=1|\bar{z})} \right), \qquad {\rm where}\quad \bar{z}=\frac{1}{N_{SN}}\sum_{i=1}^{N_{SN}} z_i.$$ Table 1 lists the means and dispersions (in the form of mean$\pm$dispersion) of $\Delta m$ (which are $\langle \Delta m\rangle$ and $\sigma=\sqrt{ \langle \left[\Delta m-\langle \Delta m\rangle\right]^2\rangle}\,$), and $(\Delta m )_{avg}$ (which are $\langle(\Delta m )_{avg}\rangle$ and $\sigma_{avg}=\sqrt{\langle \left[(\Delta m)_{avg}-\langle (\Delta m)_{avg} \rangle\right]^2\rangle}\,$) for various redshift bins with $N_{SN}$=2, 4, and 9 SNe Ia in each bin, for $10^4$ random samples. We have taken $\Omega_m=0.4$, $\Omega_{\Lambda}=0.6$.
$z$ bin & $\langle \Delta m\rangle\pm \sigma$ ($N_{SN}$=1) & $N_{SN}$=2 & $N_{SN}$=4 & $N_{SN}$=9\
$[0.5, 0.6]$ & $0.001 \pm 0.200$ & $-0.007\pm 0.141$ & $-0.012\pm 0.101$ & $-0.015\pm 0.067$\
$[1, 1.1]$ & $0.002 \pm 0.204$ & $-0.007\pm 0.145$ & $-0.013\pm 0.103$ & $-0.015\pm 0.069$\
$[1.5, 1.6]$ & $0.003 \pm 0.210$ & $-0.006\pm 0.149$ & $-0.012\pm 0.107$ & $-0.015\pm 0.071$\
$[1.5, 2]$ & $0.003 \pm 0.213$ & $-0.006\pm 0.151$ & $-0.012\pm 0.108$ & $-0.015\pm 0.072$\
$[2, 2.5]$ & $0.005 \pm 0.218$ & $-0.005\pm 0.155$ & $-0.012\pm 0.111$ & $-0.015\pm 0.074$\
$[2.5, 3]$ & $0.006 \pm 0.223$ & $-0.005\pm 0.159$ & $-0.012\pm 0.114$ & $-0.015\pm 0.076$\
$[3, 3.5]$ & $0.007 \pm 0.228$ & $-0.004\pm 0.162$ & $-0.012\pm 0.117$ & $-0.015\pm 0.078$\
$[3.5, 4]$ & $0.007 \pm 0.233$ & $-0.004\pm 0.166$ & $-0.012\pm 0.119$ & $-0.015\pm 0.080$\
$[4, 4.5]$ & $0.008 \pm 0.238$ & $-0.004\pm 0.170$ & $-0.012\pm 0.122$ & $-0.015\pm 0.082$\
$[4.5, 5]$ & $0.008 \pm 0.243$ & $-0.004\pm 0.174$ & $-0.012\pm 0.125$ & $-0.016\pm 0.084$\
The dispersion decreases roughly as $1/\sqrt{N_{SN}}$: it is reduced by 30% if each sample contains 2 SNe Ia, and by 50% if each sample contains 4 SNe Ia. Even though gravitational lensing noise increases with redshift, the combined gravitational lensing and SN Ia absolute magnitude scatter noise in the redshift interval $z=[4.5,5]$ can be reduced to the same level as in the absence of lensing by flux averaging over two SNe Ia. Note that the flux averaged luminosities are biased towards slightly higher luminosities. This is as expected, because we have assumed that the intrinsic scatter and observational noise in the SN Ia absolute magnitude are Gaussian in [*magnitude*]{}. It is straightforward to show that the mean of $10^{-\Delta m_{abm}/2.5}$ is $\exp\left[(\ln 10/2.5)^2\,\sigma_{abm}^2 /2\right]$, which corresponds to a bias of $-\sigma_{abm}^2\, \ln 10 /5\simeq -0.018$ for $\sigma_{abm}=0.2$.
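Both the $1/\sqrt{N_{SN}}$ scaling and the lognormal bias can be checked with a short Monte Carlo. The sketch below (our own `dm_avg` helper and sample sizes) keeps only the Gaussian magnitude scatter and drops the lensing term, so it isolates the bias described above:

```python
import math
import random

random.seed(1)
SIG = 0.20          # sigma_abm, as in the text
N_SN = 9            # SNe Ia flux-averaged per sample
N_SAMPLES = 100000

def dm_avg(n_sn):
    """Flux-average n_sn SNe whose magnitude offsets are N(0, SIG); returns
    the flux-averaged noise (Delta m)_avg, with no lensing term."""
    flux = [10 ** (-random.gauss(0.0, SIG) / 2.5) for _ in range(n_sn)]
    return -2.5 * math.log10(sum(flux) / n_sn)

samples = [dm_avg(N_SN) for _ in range(N_SAMPLES)]
mean = sum(samples) / N_SAMPLES
sd = (sum((s - mean) ** 2 for s in samples) / N_SAMPLES) ** 0.5
```

With $N_{SN}=9$ the mean comes out close to the asymptotic bias $-\sigma_{abm}^2\ln 10/5\simeq -0.018$ and the dispersion close to $0.20/\sqrt{9}\simeq 0.067$, in line with the last column of Table 1 even though lensing is omitted here.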
If we assume that the intrinsic scatter and observational noise are Gaussian in luminosity, the flux averaged luminosities become unbiased. Supernova number count ====================== The SN number count is most sensitive to the SN rate as a function of $z$, which depends on the specific rate of SNe, as well as the number density of galaxies and their luminosity distribution. The frequency of SNe is a key parameter for describing the formation and the evolution of galaxies; the winds driven by SNe tune the energetics, and their production of metals determines the chemical evolution of galaxies and of clusters of galaxies ([@Ferrini93; @Renzini93]). The SN II rate is related (for a given initial mass function) to the instantaneous stellar birthrate of massive stars because SNe II have short-lived progenitors; the SN Ia rate follows a slower evolutionary track, and can be used to probe the past history of star formation in galaxies. Accurate measurements of the SN rates at intermediate redshifts are important for understanding galaxy evolution, cosmic star formation rate and the nature of SN Ia progenitors. ([@Madau98a; @Ruiz98; @Sadat98; @Yung98; @Madau98b]) The SN rates are very uncertain at present due to the small number of SNe discovered in systematic searches. ([@Van94; @Pain96]) Kolatt & Bartelmann (1998) have estimated the SN Ia average rate per proper time unit per comoving volume to be $$\label{eq:SNrate} n_{SN}(z)=\left(A + B\,z\right) (100\,{\rm yr})^{-1}\, (h^{-1}\,{\rm Mpc})^{-3}, \qquad A=0.0136,\quad B=0.067,$$ for $q_0=0.5$. Changing $q_0$ to lower values leads to lower comoving densities but higher luminosities of galaxies at high $z$ ([@Lilly95]).
Eq.(\[eq:SNrate\]) has been derived assuming that all SNe reside in galaxies, neglecting the redshift dependence of the specific SN rate (number per unit luminosity per unit time), and using the number density of galaxies (as function of redshift) and the Schechter function parameters derived from the Canada France redshift survey ([@Lilly95]), the APM survey ([@Loveday92]), and the AUTOFIB survey ([@Ellis96]). We use Eq.(\[eq:SNrate\]) for all cosmological models considered in this paper, because it is a very crude and conservative estimate, and we use it for the purpose of illustration only. The expected number of SNe Ia in a field of angular area $\theta^2$ for an effective observation duration of $\Delta t$ up to redshift $z$ is $$N(z)= \theta^2\,\Delta t \int^z_0 n_{SN}(z')\, r^2(z')\, \frac{dr_p}{dz'}\, dz',$$ where $dr_p=-cdt$ is the proper distance interval, $r$ is comoving distance, and the factor $(1+z')^{-1}$ in $dr_p/dz' = c\,H^{-1}(z')\,(1+z')^{-1}$ accounts for the cosmological time dilation; numerically, $N(z)$ equals $22.4\,(\theta/1\,{\rm deg})^2\,(\Delta t/1\,{\rm yr})$ times a dimensionless redshift integral. Note that the number counts fundamentally probe a different aspect of the global geometry of the universe than do the distance measures ([@Carroll92]). Fig.2 shows the number of SNe Ia expected per 0.1 redshift interval as a function of redshift for the same three cosmological models as in Fig.1, SCDM ($\Omega_m=1$, $\Omega_{\Lambda}=0$), OCDM ($\Omega_m=0.2$, $\Omega_{\Lambda}=0$), and $\Lambda$CDM ($\Omega_m=0.2$, $\Omega_{\Lambda}=0.8$). For a one square degree field, and an effective observation duration of one year, the total numbers of expected SNe Ia are 464, 899, and 1705 for SCDM, OCDM, and $\Lambda$CDM respectively for $z$ up to 1.5. The number of SNe which are strongly lensed by galaxies is given by $$N_{lensed}(z)= \int^z_0 dz'\, \tau(z')\, \frac{dN(z')}{dz'},$$ which numerically equals $3.81\times 10^{-2}\,(\theta/1\,{\rm deg})^2\,(\Delta t/1\,{\rm yr})$ times a dimensionless redshift integral proportional to $F$. We have used the optical depth for gravitational lensing by galaxies ([@Turner90; @Fuku91]) $$\tau(z) \simeq \frac{F}{30} \left[ \frac{H_0\, r(z)}{c} \right]^3,$$ where $F$ parametrizes the gravitational lensing effectiveness of galaxies (as singular isothermal spheres).
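The number count integral above can be evaluated directly. The sketch below integrates the definitional form (not the quoted numerical prefactor) for a one square degree field and a one year effective duration, with the rate of Eq.(\[eq:SNrate\]); the midpoint/trapezoid scheme and function name are our own:

```python
import math

C_H0 = 2997.9                                        # c/H0 in h^-1 Mpc
PREF = (math.pi / 180) ** 2 * C_H0 ** 3 / 100.0      # theta^2 (c/H0)^3 / (100 yr)

def expected_counts(om, ol, z_max=1.5, n=3000, A=0.0136, B=0.067):
    """N(z_max) for theta = 1 deg, Dt = 1 yr, from
    N = theta^2 Dt Int n_SN(z) r^2(z) (dr_p/dz) dz  with
    dr_p/dz = (c/H0) / [(1+z) E(z)]  and  n_SN = (A + B z)/(100 yr (h^-1 Mpc)^3)."""
    ok = 1.0 - om - ol
    E = lambda z: math.sqrt(om * (1 + z)**3 + ok * (1 + z)**2 + ol)
    h = z_max / n
    chi, total = 0.0, 0.0
    for i in range(n):
        zl, zm, zr = i * h, (i + 0.5) * h, (i + 1) * h
        chi_mid = chi + 0.25 * h * (1 / E(zl) + 1 / E(zm))  # int dz/E up to zm
        if ok > 1e-8:
            x = math.sinh(ok ** 0.5 * chi_mid) / ok ** 0.5   # open: sinn = sinh
        elif ok < -1e-8:
            x = math.sin((-ok) ** 0.5 * chi_mid) / (-ok) ** 0.5
        else:
            x = chi_mid                                      # flat
        total += h * (A + B * zm) * x * x / ((1 + zm) * E(zm))
        chi += 0.5 * h * (1 / E(zl) + 1 / E(zr))             # advance to zr
    return PREF * total
```

Run for the three models of Fig.2, this gives totals of roughly 420, 800, and 1500 for SCDM, OCDM, and $\Lambda$CDM, i.e. the same ordering and within $\sim$10–15% of the totals quoted above (the residual offset presumably reflects details of the original normalisation that we have not reproduced).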
Fig.3 shows the number of strongly lensed SNe as a function of survey depth $z$, for the same cosmological models as in Fig.1 and Fig.2. For a one square degree field, and an effective observation duration of one year, the total numbers of strongly lensed SNe Ia are 0.2, 0.6, and 1.8 for SCDM, OCDM, and $\Lambda$CDM respectively for $z$ up to 1.5. Fig.2 and Fig.3 show that the SN number counts from a pencil beam survey can be used to measure the SN rate at high redshifts and perhaps to probe cosmology. Because of the large number of SNe in each redshift bin and the smallness of gravitational lensing optical depth, the SN number count should be insensitive to matter distribution in the universe, and should therefore provide a robust probe of the cosmological model. Note that the strongly lensed SNe can be easily removed from the survey sample, because they appear as unusually bright SNe. The number of strongly lensed SNe is very sensitive to the cosmological model and might be used to further constrain the cosmological model. But given the small number of strongly lensed SNe Ia expected in any realistic observational program, their usefulness may be limited. The SN number counts provide a combined measure of the cosmological parameters and the SN rate. Fig. 4 shows the parameter dependence of the SN Ia number count per 0.1 redshift interval ($z-0.05,z+0.05$) as a function of redshift $z$. Note that the dependences of the SN number count on the SN rate (parametrized by $A$ and $B$) and the cosmological parameters are not degenerate and can be differentiated in principle. In practice, we are ignorant of the functional form of the SN rate as a function of $z$. Hence, we may apply the measurements of the cosmological parameters as priors to the SN number counts to obtain a direct measure of the SN rate in the universe for $z$ up to 1.5, which can be used as a powerful constraint on the cosmological model.
For a one square degree field, and an effective observation duration of one year, the SN rate per 0.1 redshift interval can be determined to 14-16%, 9-11%, 7-8% at $1<z<1.5$ for SCDM, OCDM, and $\Lambda$CDM respectively. Observational feasibility ========================= A pencil beam survey would be efficient in the discovery of SNe at $z\ga 1$ through the combination of data from successive nights and the comparison of the latest frame of images with all previous frames. SNe Ia are quite faint at $z\sim 1.5$, with AB magnitude in the I band of $I_{AB}\sim 26$. Because of the UV suppression due to line blanketing and the apparent IR suppression in the rest frame SN Ia spectrum, one should use a passband that corresponds to the wavelength range of 3000$\AA-10000\AA$ in the SN rest frame; this means using the I, J, H, or K passbands to observe the $z>1$ SNe. SN searches from space (HST and NGST) are limited by small fields of view; a large scale SN search is only possible from the ground, where one is limited to the I band by the atmosphere. Here we discuss the observational feasibility of a pencil beam survey of SNe Ia up to $z\sim 1.5$ in terms of exposure times in the I band; although multiple band photometry should be obtained to constrain extinction and evolution of the SNe. Using values for the photometric parameters from the Sloan Digital Sky Survey (1 arcsec seeing, effective sky brightness of 20.3 mag/arcsec$^2$ in the I band, seeing-dominated PSF, etc), we find that the exposure time for a point source with AB magnitude of $I_{AB}$ can be written as $$t = 13.94\, {\rm hours}\, \left( \frac{S/N}{10} \right)^2 \left( \frac{4\,{\rm m}}{D} \right)^2 10^{0.8 (I_{AB}-26)},$$ where $S/N$ is the signal-to-noise ratio, $D$ is the aperture of the telescope.
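The background-limited scalings in this formula, $t\propto (S/N)^2\, D^{-2}\, 10^{0.8(I_{AB}-26)}$, can be wrapped in a small helper. The reference points used below ($S/N=10$ and $D=4$ m at 13.94 hours) are our reading of the formula, chosen to match the 4 meter telescope discussed in the text:

```python
def exposure_hours(i_ab, sn=10.0, aperture_m=4.0):
    """Background-limited I-band imaging exposure time (hours):
    t ~ (S/N)^2 D^-2 10^{0.8(I_AB - 26)}, normalised to 13.94 h at
    I_AB = 26, S/N = 10, D = 4 m (assumed reference points)."""
    return (13.94
            * (sn / 10.0) ** 2
            * (4.0 / aperture_m) ** 2
            * 10 ** (0.8 * (i_ab - 26.0)))
```

The scalings themselves are independent of the reference choices: one magnitude brighter costs a factor $10^{-0.8}\simeq 1/6.3$ in time, and doubling the aperture a factor of 4.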
The above equation shows that the supernova pencil beam survey can be accomplished from a modest dedicated 4 meter telescope, which can image the same fields (two pencil beams should be observed to keep the fields close to zenith) every night, which can lead to the discovery of SNe Ia up to $z=1.5$ via appropriate combination of data from successive nights, and the light curves of SNe Ia at $z < 1.5$. It should be feasible to monitor the faintest SNe Ia from the Keck 10 meter telescope or the HST. Spectra of SNe are required to determine whether they are type Ia. SNe Ia have a Si absorption line at $\sim 4000\,\AA$ that may be used to identify the $1<z\la 1.5$ SNe Ia in ground-based observations. The follow-up spectroscopy can be attempted on the Keck Low-Resolution Imaging Spectrometer (LRIS). Assuming that we use the 300 grooves/mm grating, which provides a dispersion of 4.99$\,\AA$ per 48$\,\mu$m (2 pixels on the CCD), we find the exposure time for 0.5 arcsec seeing to be $$\begin{aligned} t&=&28\, {\rm hours}\, \left(\frac{S/N}{3}\right)^2 \left(\frac{W}{1''}\right) 10^{0.4 \left[ 2(I_{AB}-26)-(I^{sky}_{AB}-21)\right]},\\ &=&4.44\, {\rm hours}\, \left(\frac{S/N}{3}\right)^2 \left(\frac{W}{1''}\right) 10^{0.4 \left[ 2(I_{AB}-25)-(I^{sky}_{AB}-21)\right]},\\ &=&12.33\, {\rm hours}\, \left(\frac{S/N}{5}\right)^2 \left(\frac{W}{1''}\right) 10^{0.4 \left[ 2(I_{AB}-25)-(I^{sky}_{AB}-21)\right]}, \end{aligned}$$ where $W$ is the width of the slit, and $I^{sky}_{AB}$ is the sky brightness in the I band in units of mag/arcsec$^2$. Note that the Si absorption feature at $\sim 4000\,\AA$ is not deep enough to be useful in a very noisy ($S/N=3$) spectrum. Clearly, spectroscopic follow-up of the $z\sim 1.5$ ($I_{AB}\sim 26$) SNe Ia will require substantial observational resources; NICMOS or NGST will be more suitable for the spectroscopy of the SNe Ia at the highest redshifts discovered from the ground. The Keck LRIS can be used to obtain the spectra of the SNe Ia at more modest redshifts ($z\sim 1.3$, $I_{AB}\sim 25$).
A dedicated 4 meter telescope with a square degree field of view can be used to conduct other important scientific projects compatible with the SN pencil beam survey, such as surveys of QSOs and Kuiper belt objects and, in particular, weak lensing and the search for gamma-ray burst (GRB) afterglows. Weak lensing is a powerful tool in mapping the mass distribution in the universe. The large field of view and the depth of a pencil beam survey would be ideal for weak lensing measurements of field galaxies, which can be used to constrain the large scale structure in the universe. GRBs are perhaps the most energetic astrophysical events in the universe. Currently, there are many competing theories to explain GRBs. GRB afterglows contain valuable information; a statistically significant sample of GRB afterglows can provide strong constraints on the GRB theory. If beaming is involved in GRBs, we expect most of the GRB afterglows not to be associated with observable bursts. Schmidt et al. (1998) have observed a few optical transients which were too short in duration to be SNe; these could be GRB afterglows. Since present observational data seem to indicate that GRBs have associated host galaxies, the detection of host galaxies associated with short optical transients would support the interpretation of the latter as GRB afterglows. Since the GRB host galaxies are typically fainter than $R=25$, a pencil beam survey would be ideal in detecting these host galaxies of candidate GRB afterglows. Summary and discussion ====================== We have proposed a pencil beam survey of SNe Ia which can yield from tens to hundreds of SNe Ia per 0.1 redshift interval for $z$ up to 1.5, which would enable the quantitative consideration of the systematic uncertainties of SNe Ia as standard candles, in particular, luminosity evolution and gravitational lensing. Using the Perlmutter et al.
“batch” search technique repetitively over the same field in the sky, the pencil beam survey would be efficient in the discovery of SNe at $z\ga 1$ by allowing the comparison of the latest frame of images (which may consist of combined data from successive nights) with all previous frames. The direct product of such a survey is the SN number count as a function of redshift (see §3), which is a combined measure of the cosmological parameters and the SN rate. When the measurements of the cosmological parameters are applied as priors to the number count, we obtain a direct measure of the SN rate, which is a key parameter in the formation and evolution of galaxies. The non-type-Ia SNe discovered by the pencil beam survey may be comparable to the type Ia SNe in number. The most important and straightforward application of the data from the SN pencil beam survey is to reduce the gravitational lensing noise in a SN Ia Hubble diagram via flux averaging. We have simulated SN Ia luminosities by adding weak lensing noise (using empirical fitting formulae given by Wang 1999a) and scatter in SN Ia absolute magnitudes to standard candles placed at random redshifts. We have shown that flux-averaging is powerful in reducing the combined noise of gravitational lensing and SN Ia absolute magnitude scatter (see §2). Because of the non-Gaussian nature of the luminosity distribution of SNe Ia at given $z$ due to weak lensing, the large number of SNe Ia in a given redshift interval at high $z$ is essential for the proper modeling and removal of the gravitational lensing effect. The SN Ia luminosity distribution in each redshift interval can be used to constrain the cosmological model (in particular, the fraction of matter in compact objects) by comparison with predictions of numerical ray-shooting. 
([@Holz98]) The completeness of the SN Ia sample determines the effectiveness of the removal of gravitational lensing noise from the SN Ia Hubble diagram and the amount of information contained in the SN Ia luminosity distribution in each redshift interval. Note that the magnitude limit of the survey can lead to observational bias against the most distant demagnified SNe; therefore, the SNe which are close to the magnitude limit of the survey should not be used to probe cosmology in the manner described in this paper. To ensure the maximum usefulness of the data, scrupulous attention will have to be paid to photometric calibration, uniform treatment of nearby and distant samples, and an effective way to deal with reddening. ([@Riess98]) Note that although we can remove or reduce the effect of gravitational lensing on the SN Ia Hubble diagram, other systematics can affect the observed luminosity of SNe Ia: for example, grey dust, an evolution of the reddening law, or evolution in the peak absolute luminosity of SNe Ia. While it is possible to constrain grey dust and determine dust evolution through multi-band photometry at significantly different redshifts, the dimming with $z$ in peak absolute luminosity of SNe Ia is degenerate with the effect of low matter density. ([@Aguirre99; @Riess99; @DLW99; @Wang99b]) Evolution will remain a caveat in the usage of SNe Ia as cosmological standard candles, unless one can somehow correct for the effect of evolution. It is critical to obtain up to hundreds of SNe Ia at $z>1$ to constrain luminosity evolution, because we expect the luminosity evolution and low matter density to affect the distance modulus of SNe Ia through different functions of $z$, which should become distinguishable at $z>1$. We have proposed a survey cutoff of $z=1.5$ mainly for two reasons.
First, going to higher redshift makes obtaining the spectra of the SNe (which are needed to distinguish different types of SNe) practically impossible from the ground; even obtaining the spectra of $z\sim 1.5$ SNe Ia may prove impractical from the ground (as it would require several clear nights per spectrum on the Keck, see §4). Observers have already demonstrated that SNe Ia up to $z=1$ can be found in the ongoing searches. ([@Goo95; @Garna98; @Perl99]) At $z\sim 1.5$, the SNe Ia should be $\sim 2.5$ magnitudes fainter than at $z\sim 1$; it should still be possible to discover them on the ground through deep imaging, and the follow-up spectroscopy can be done on NICMOS or NGST (see §4). Second, the predicted rest-frame SN Ia rate per comoving volume as a function of redshift seems to peak at $z\sim 1$. ([@Yung98; @Sadat98]) A pencil beam survey of SNe up to $z=1.5$ will enable the accurate determination of the SN rate as a function of redshift in the redshift region important for studying cosmic star formation rate and the SN Ia progenitor models. Although the Next Generation Space Telescope can detect SNe at as high redshifts as they exist ([@Stockman98]), the estimated rate of detection is of order 20 SNe II per 4$\times$4 arcmin$^2$ field per year in the interval $1<z<4$ ([@Madau98b]), and the detection rate of SNe Ia is likely smaller. Thus a ground-based pencil beam survey of SNe is essential to complement the space-based SN searches. The most challenging aspect of the SN pencil beam survey is obtaining spectra for the SNe Ia at redshifts close to 1.5. Instead of waiting for future space instruments, we may find innovative ways of obtaining spectra from the ground. The referee has pointed out that since we can find multiple SNe at the same time on the same one square degree field (the number of SNe depends on cosmology), it may be possible to get multiple spectra at once with fibers.
The nominal numbers we have used for the proposed SN pencil beam survey, a one square degree field, a depth of $z=1.5$, and an effective observation duration of one year (which is equivalent to several years of actual observation), are optimistic but not implausible. The large sky coverage and the long effective observation duration will probably require a large consortium of existing and new SN search teams through, for example, a dedicated 4 meter telescope which can be used at the same time for other important observational projects compatible with the SN pencil beam survey (see §4). The goal of going up to $z=1.5$ in spectroscopy will require support from the Keck LRIS and NICMOS/NGST. We conclude by noting that a SN pencil beam survey can yield enormous scientific return. The observational efforts directed towards a SN pencil beam survey should be very rewarding. It is a pleasure for me to thank Joachim Wambsganss for generously providing unpublished magnification distributions; Zeljko Ivezic and Todd Tripp for explaining technical details concerning photometry and spectroscopy; Saul Perlmutter for communicating details of the current supernova search by the Supernova Cosmology Project; Ed Turner for a careful reading of a draft of the manuscript and for helpful comments, the referee for encouraging and useful comments; Christophe Alard, Jim Gunn, David Hogg, Rocky Kolb, Robert Lupton, Michael Strauss, and Tony Tyson for helpful discussions. Aguirre, A.N. 1999, astro-ph/9904319 Carroll, S.M., Press, W.H., & Turner, E.L. 1992, ARA&A, 30, 499 Drell, P.S.; Loredo, T.J.; & Wasserman, I. 1999, astro-ph/9905027 Dyer, C.C., & Roeder, R.C. 1973, , 180, L31. Ellis, R.S., Colless, M., Broadhurst, T., Heyl, J., & Glazebrook, K. 1996, MNRAS, 280, 235 Ferrini, F., & Poggianti 1993, , 410, 44 Frieman, J.A. 1997, Comments Astrophys., 18, 323, astro-ph/9608068. Fukugita, M., & Turner, E.L. 1991, MNRAS, 253, 99 Garnavich, P.M. et al. 1998, , 493, L53 Goobar, A. 
& Perlmutter, S. 1995, , 450, 14 Holz, D.E., & Wald, R.M. 1998, Phys. Rev. D, 58, 063501. Holz, D.E. 1998, , 506, L1. Lilly, S.J., Tresse, L., Hammer, F., Crampton, D., & Le Fèvre, O. 1995, , 455, 108. Linder, E.V. 1998, , 497, 28 Loveday, J., Peterson, B.A., Efstathiou, G., Maddox, S.J. 1992, , 390, 338 Kantowski, R., Vaughan, T., & Branch, D. 1995, , 447, 35 Kantowski, R. 1998, , 507, 483 Kolatt, T.S., & Bartelmann, M. 1998, MNRAS, 296, 763 Madau, P. 1998, in D’Odorico, S., Fontana, A. & Giallongo, E. eds, PASP, The Young Universe: Galaxy Formation and Evolution at Intermediate and High Redshift Madau, P. 1998, Della Valle, M., & Panagia, N. 1998, MNRAS 297, L17 Metcalf, R.B. 1999, MNRAS, 305, 746 Pain, R. et al. 1996, , 473, 356 Perlmutter, S., et al. 1997, , 483, 565 Perlmutter, S.; Aldering, G.; Goldhaber, G.; Knop, R.A.; Nugent, P.; Castro, P.G.; Deustua, S; Fabbro, S; Goobar, A; Groom, D.E.; Hook, I.M.; Kim, A.G.; Kim, M.Y.; Lee, J.C.; Nunes, N.J.; Pain, R.; Pennypacker, C.R.; Quimby, R.; Lidman, C.; Ellis, R.S.; Irwin, M.; McMahon, R.G.; Ruiz-Lapuente, P.; Walton, N.; Schaefer, B.; Boyle, B.J.; Filippenko, A.V.; Matheson, T.; Fruchter, A.S.; Panagia, N.; Newberg, H.J.M.; & Couch, W.J. 1999, , 517, 565 Renzini, A., Ciotti, L., D’Ercole, A., & Pellegrini, S. 1993, , 419, 52 Riess, A.G., Press, W.H., & Kirshner, R.P. 1995, , 438, L17 Riess, A.G.; Filippenko, A.V.; Challis, P.; Clocchiatti, A.; Diercks, A.; Garnavich, P.M.; Gilliland, R.L.; Hogan, C.J.; Jha, S.; Kirshner, R.P.; Leibundgut, B.; Phillips, M.M.; Reiss, D.; Schmidt, B.P.; Schommer, R.A.; Smith, R.C.; Spyromilio, J.; Stubbs, C.; Suntzeff, N.B.; & Tonry, J 1998, AJ, 116, 1009 Riess, A.G., et al. 1999, astro-ph/9907037 Ruiz-Lapuente, P. & Canal, R. 1998, , 497, L57 Sadat, R.; Blanchard, A.; Guiderdoni, B.; & Silk, J. 1998, Astron. Astrophys., 331, L69 Schneider, P., Ehlers, J., & Falco, E.E. 1992, Gravitational Lenses, Springer-Verlag, Berlin Schmidt, B.P., et al. 
1998, , 507, 46 Stockman, H.S.; Stiavelli, M.; Im, M.; & Mather, J. 1998, in Smith, E.P.,& Koratkar, A. eds, ASP Conf. Ser. Vol. 133, Science with the Next Generation Space Telescope, Astron. Soc. Pac., San Francisco Turner, E.L. 1990, , 365, L43 van den Bergh, S., & McClure, R.D. 1994, , 425, 205 Wambsganss, J., Cen, R., Xu, G., & Ostriker, J.P. 1997, , 475, L81 Wambsganss, J. 1999, private communication. Wang, Y. 1999a, astro-ph/9901212, , in press Wang, Y. 1999b, astro-ph/9907405, , accepted Yungelson, L. & Livio, M. 1998, , 497, 168 [^1]: Present address: Dept. of Physics, 225 Nieuwland Science Hall, University of Notre Dame, Notre Dame, IN 46556-5670. email: Yun.Wang.92@nd.edu
--- abstract: 'We show that E0 emission in $\alpha$ + $^{12}$C fusion at astrophysically interesting energies is negligible compared to E1 and E2 emission.' author: - 'G. Baur' - 'K. A. Snover' - 'S. Typel' title: 'E0 emission in $\alpha$ + $^{12}$C fusion at astrophysical energies' --- The $^{12}$C + $\alpha \rightarrow ^{16}$O capture reaction, sometimes called the “Holy Grail” of nuclear astrophysics, determines the ratio of $^{16}$O to $^{12}$C at the end of helium burning in stars, which is very important for stellar evolution and nucleosynthesis [@rolfs]. Nucleosynthesis requires [@weaver] a total S-factor for this reaction of about 170 keV b at a center-of-mass energy $E_{c.m.}$ = 0.3 MeV, the center of the Gamow window. The results of many experiments over more than 3 decades, extrapolated to the Gamow window, show that single-photon emission is dominated by E1 and E2 decay to the $^{16}$O ground-state, with approximately equal intensity and a combined S-factor S(0.3) approaching the value quoted above [@hammer]. The corresponding cross sections are $\sigma_{E1}$(0.3) $\approx \sigma_{E2}$(0.3) $\approx$ 1.4 x 10$^{-17}$ b. In this paper we examine the possible role of E0 emission, which has not, to our knowledge, been addressed previously. We note that if E0 emission were important, it would have escaped observation in $^{12}$C + $\alpha \rightarrow ^{16}$O capture measurements since they were made by detecting the emitted $\gamma$-rays, and the e$^+$e$^-$ pairs produced by E0 emission would not result in a sharp gamma line near the transition energy. First, we estimate the ratio of direct E0 and direct E2 emission, following Snover and Hurd [@snover]. There, a general relation for direct E0 emission was derived, and for $^3$He + $^4$He fusion at low energies a simple relation was obtained for the direct cross section ratio $\sigma_{E0} / \sigma_{E2}$, which was shown to be negligibly small.
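The quoted cross sections follow from the S-factors through the standard definition $\sigma(E) = S(E)\,E^{-1}\exp(-2\pi\eta)$, with Sommerfeld parameter $\eta = Z_1 Z_2 \alpha \sqrt{\mu c^2/(2E)}$. The short sketch below checks this numerically; the reduced mass uses integer mass numbers, an approximation made here for illustration.

```python
import math

ALPHA = 1.0 / 137.035999  # fine structure constant
AMU = 931.494             # MeV, atomic mass unit (approximate)

def sommerfeld_eta(z1, z2, mu_c2, e_cm):
    """Sommerfeld parameter eta = Z1 Z2 alpha sqrt(mu c^2 / (2 E))."""
    return z1 * z2 * ALPHA * math.sqrt(mu_c2 / (2.0 * e_cm))

def cross_section_from_s(s_mev_b, z1, z2, mu_c2, e_cm):
    """sigma(E) = S(E)/E * exp(-2 pi eta); S in MeV b, E in MeV, sigma in b."""
    eta = sommerfeld_eta(z1, z2, mu_c2, e_cm)
    return (s_mev_b / e_cm) * math.exp(-2.0 * math.pi * eta)

# 12C + alpha: Z1*Z2 = 6*2, reduced mass ~ 12*4/16 amu = 3 amu
mu_c2 = 3.0 * AMU
# S_E2(0.3 MeV) ~ 85 keV b = 0.085 MeV b (roughly half of the ~170 keV b total)
sigma = cross_section_from_s(0.085, 6, 2, mu_c2, 0.3)
print(f"sigma_E2(0.3 MeV) ~ {sigma:.2e} b")  # ~1.4e-17 b, as quoted
```

This reproduces the quoted $\sigma \approx 1.4 \times 10^{-17}$ b, confirming the internal consistency of the numbers in the text.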
This occurs primarily because E0 emission is suppressed by an additional power of $\alpha$, the fine structure constant, relative to E2 emission. However, in $^{12}$C + $\alpha \rightarrow ^{16}$O$_{g.s.}$ there are several factors that enhance the relative importance of E0 emission: 1) E0 emission occurs by s-wave capture, whereas E1 and E2 emission arise from p-wave and d-wave capture, respectively; 2) E1 emission is isospin-inhibited; and 3) the higher transition energy results in larger E0/E1 and E0/E2 phase-space factor ratios. In low-energy $^3$He + $^4$He fusion, E0 and E2 direct capture occur between the same initial and final states (p-waves), and as a result the direct capture radial matrix elements cancel in the cross section ratio. In $^{12}$C + $\alpha \rightarrow ^{16}$O$_{g.s.}$, however, the radial matrix elements are different since the initial states are different. In analogy with Eq. 11 of  [@snover] we obtain $$\frac{\sigma_{E0}}{\sigma_{E2}} = \frac{4\pi}{5} \frac{f_{E0}}{f_{E2}} \frac{|R_{00}|^2}{|R_{02}|^2}, \label{ratio}$$ where $R_{l_f l_i}$ is the magnitude of the radial integral of $r^2$ between the initial continuum state with orbital angular momentum $l_i$ and the final bound state with $l_f = 0$. The quantities $f_{EL}$ are given by [@snover] $$f_{E0}(E) = \frac{e^4}{27(\hbar c)^6}b(S)(E-2mc^2)^3(E+2mc^2)^2, \label{fE0}$$ and $$f_{E2}(E) = \frac{4\pi e^2}{75(\hbar c)^5}E^5 \label{fE2}$$ where $E = E_{c.m.} + Q$ is the transition energy, $Q$ = 7.16 MeV, $$b(S) = \frac{3\pi}{8}\left(1-\frac{S}{4}-\frac{S^2}{8}+\frac{S^3}{16}-\frac{S^4}{64}+\frac{5S^5}{512}\right)$$ and $S = (E-2mc^2)/(E+2mc^2)$. We estimate $|R_{00}|^2/|R_{02}|^2 = P_0/P_2 = 18$ at $E_{c.m.}$ = 0.3 MeV, where $P_{l_i}$ is the penetrability due to the Coulomb and angular momentum barriers evaluated at the radius $R$ = 1.3(A$_1^{1/3}$+A$_2^{1/3}$) fm = 5 fm. This yields 4.3 x 10$^{-3}$ for the direct (i.e., nonresonant) E0/E2 cross section ratio at 0.3 MeV. 
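The kinematic quantities entering $f_{E0}$ can be evaluated directly at the Gamow-window transition energy $E = E_{c.m.} + Q = 0.3 + 7.16$ MeV; the sketch below simply transcribes the formulas above.

```python
import math

MC2 = 0.510999          # electron mass, MeV
Q = 7.16                # MeV
E = 0.3 + Q             # transition energy E = E_cm + Q

# S = (E - 2 m c^2) / (E + 2 m c^2), as defined above
S = (E - 2.0 * MC2) / (E + 2.0 * MC2)

# b(S) from the expression above
b = (3.0 * math.pi / 8.0) * (1.0 - S / 4.0 - S**2 / 8.0
                             + S**3 / 16.0 - S**4 / 64.0
                             + 5.0 * S**5 / 512.0)

print(f"S = {S:.3f}, b(S) = {b:.3f}")  # S ~ 0.759, b(S) ~ 0.899
```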
This estimate for $|R_{00}|^2/|R_{02}|^2$ assumes the capture takes place at the nuclear radius and is not affected by the nuclear interaction between ${}^{12}$C and the $\alpha$ particle in the continuum. However, at low collision energies the effective radius may be larger, due to the importance of extranuclear capture, which would reduce $|R_{00}|^2/|R_{02}|^2$. In addition, the total E2 capture cross section in the Gamow window is dominated by the tail of the subthreshold 6.92 MeV 2$^+$ state, and this effect is also not included above. We have improved on the above estimate by carrying out potential model calculations of E0 and E2 emission in $^{12}$C + $\alpha \rightarrow ^{16}$O$_{g.s.}$. Using a real Woods-Saxon potential with radius parameter $r_0$ = 1.25 fm and diffuseness $a$ = 0.65 fm, we find $V$ = 63.87 MeV to bind the $N$ = 2, $L$ = 0 $0_1^+$ ground-state at the measured energy. Here $N$ and $L$ are determined from the relation $2N + L = \Sigma (2n_j + l_j)$ where $n_j$ and $l_j$ are the shell model quantum numbers of the 4 nucleons (0p or 1s0d) that make up the alpha particle state with quantum numbers $N,L$ in the $\alpha$-nucleus potential. Since the 6.05 MeV $0_2^+$ state and the 6.92 MeV $2_1^+$ state are members of the same 4p-4h rotational band, with the particles in the 1s0d shell, they should both have $2N + L = 8$ and hence $N$ = 4 for the $0_2^+$ state and 3 for the $2_1^+$ state. We find $V$ = 122.74 MeV (122.03 MeV) to bind the $0_2^+$ ($2_1^+$) states with these node numbers at the correct energy, and thus we use $V(l_i=0)$ = 122.74 MeV and $V(l_i=2)$ = 122.03 MeV for the $l_i$ = 0 and 2 scattering states, respectively, and $V(l_f=0)$ = 63.87 MeV for the final state. We note that these scattering potentials are similar to the real Woods-Saxon potential that fits the rainbow scattering region in intermediate energy $\alpha$ - $^{12}$C elastic scattering [@goldberg].
![Dashed curve and left scale: E2 S-factor; solid curve and right scale: E0/E2 radial matrix element ratio; vs. $E_{c.m.}$.[]{data-label="figure"}](fig_1.eps){width="48.00000%"} With these potentials, we obtain the E2 S-factor shown in Fig. \[figure\]. This curve is within a factor of 2 of the measured E2 S-factors below E$_{c.m.}$ = 2 MeV, and has $S_{E2}$(0.3) = 85 keV b, in agreement with the value 81 $\pm$ 22 keV b obtained by Hammer et al. [@hammer] from an extrapolated R-matrix fit to E2 data (other modern E2 fits that we are aware of yield $S_{E2}$(0.3) values within a factor of 2 of these values). Our potential model results for $|R_{00}|^2/|R_{02}|^2$ are also shown in Fig. \[figure\]. We obtain a value of 1.1 for the ratio at 0.3 MeV. This may be compared to the value 3.2 calculated with a pure $l_i$ = 0 Coulomb scattering wave, indicating that the interior and exterior contributions to the E0 matrix element interfere destructively. A calculation with $V(l_i=0)$ = 122.03 MeV, which artificially enhances the contribution of the subthreshold $0_2^+$ state by moving it 0.2 MeV closer to threshold, yields a ratio of 2.0 at 0.3 MeV. With $|R_{00}|^2/|R_{02}|^2$ = 1.1, our calculated E0/E2 cross section ratio is 2.6 x 10$^{-4}$. Taking S$_{E2}$(0.3) = 80 keV b, this corresponds to $$S_{E0}(0.3) = 0.02 \mbox{ keV b}. \label{se0}$$

| E$_x$ (MeV) | $\theta_{\alpha_0}^2$ | M (fm$^2$) | $\sigma_{E0}$(0.3) (b) | Ratio |
|-----------------|-----------------------|------------|------------------------|----------------------|
| 6.05 | $\leq$ 0.7 | 3.55 | $\leq$ 1.6x10$^{-21}$ | $\leq$ 1.2x10$^{-4}$ |
| 12.05 | 0.0036 | 4.03 | 1.0x10$^{-25}$ | 7.8x10$^{-9}$ |
| 14.03 | 0.031 | 3.3 | 2.9x10$^{-24}$ | 2.2x10$^{-7}$ |
| 25 | $\leq$ 1.0 | 9.0 | $\leq$ 1.0x10$^{-22}$ | $\leq$ 7.3x10$^{-6}$ |
| potential model | | | | 2.6x10$^{-4}$ |

: 0$^+$ resonance tail and potential model contributions to E0 emission at 0.3 MeV.[]{data-label="data"}

Tails of higher lying 0$^+$ resonances may also contribute to the E0 cross section.
In Table  \[data\] we show the 0$^+$ excited states of $^{16}$O with known ground-state monopole decay strengths [@tilley]. Also shown for each state is the reduced $\alpha_0$ width in units of the Wigner Limit, the monopole decay matrix element, the E0 cross section at 0.3 MeV based on a Breit-Wigner extrapolation using the s-wave penetrability, and the ratio of the E0 cross section to the total E2 cross section at 0.3 MeV. We show an estimate for the 6.05 MeV $0_2^+$ state for completeness, even though its effect on the cross section is included in the potential model calculations. We also show an upper limit for the contribution of the tail of an isoscalar Giant Monopole Resonance located at E$_x$ = 25 MeV with 83% of the isoscalar energy weighted sum rule [@bohr] (the remaining 17% resides in the other 0$^+$ states shown in Table  \[data\]). None of the resonance tail contributions from states above 6.05 MeV are significant compared to the E0 cross section calculated in the potential model. E0 emission to excited final states in $^{16}$O is negligible due to the small phase space factor. Hence our best estimate for the E0 contribution to the astrophysical S-factor for $^{12}$C + $\alpha$ capture is given by Eq. \[se0\] above. Two-photon emission is also negligible, based on the measured branching ratio for this process in the decay of the 6.05 MeV 0$^+$ state [@watson]. We conclude that electromagnetic processes other than single-photon emission do not contribute significantly to the astrophysical rate for $^{12}$C + $\alpha$ fusion. We thank C. Rolfs for bringing this problem to our attention, and the U.S. DOE, Grant \# DE-FG02-97ER41020 for financial support. See e.g. C. E. Rolfs and W. E. Rodney, Cauldrons in the Cosmos, (U. Chicago Press, 1988). T. A. Weaver and S. E. Woosley, Phys. Rep. [**227**]{}, 65 (1993). See also T. Rauscher et al., Astrophys. J. [**576**]{}, 323 (2002); S. E. Woosley et al., Nucl. Phys. A [**718**]{}, 3c (2003). J. W. 
Hammer et al., Nucl. Phys. A [**768**]{}, 353c (2005). K. A. Snover and A. E. Hurd, Phys. Rev. C [**67**]{}, 055801 (2003). P. Descouvemont, D. Baye and P.-H. Heenen, Nucl. Phys. A [**430**]{}, 426 (1984). D. A. Goldberg, Phys. Lett. [**55**]{} B, 59 (1975). D. R. Tilley, H. R. Weller and C. M. Cheves, Nucl. Phys. A [**564**]{}, 1 (1993). A. Bohr and B. R. Mottelson, Nuclear Structure (Benjamin, Reading MA, 1975) Vol. II, Eqs. (6-178). The IS sum rule is 1/2 of the combined IS + IV sum rule given here. B. A. Watson et al., Phys. Rev. Lett. [**35**]{}, 1333 (1975).
--- abstract: 'We introduce a structural approach to study Lagrangian submanifolds of the complex hyperquadric in arbitrary dimension by using its family of non-integrable almost product structures. In particular, we define local angle functions encoding the geometry of the Lagrangian submanifold at hand. We prove that these functions are constant in the special case that the Lagrangian immersion is the Gauss map of an isoparametric hypersurface of a sphere and give the relation with the constant principal curvatures of the hypersurface. We also use our techniques to classify all minimal Lagrangian submanifolds of the complex hyperquadric which have constant sectional curvatures and all minimal Lagrangian submanifolds for which all, respectively all but one, local angle functions coincide.' address: - 'H. Li and H. Ma, Department of Mathematical Sciences, Tsinghua University, Beijing, 100084, P.R. China' - 'J. Van der Veken and L. Vrancken, KU Leuven, Department of Mathematics, Celestijnenlaan 200B – Box 2400, 3001 Leuven, Belgium' - 'L. Vrancken, Université Polytechnique Hauts-de-France, Campus du Mont Houy, 59313 Valenciennes Cedex 9, France' - 'X. Wang, School of Mathematical Sciences and LPMC, Nankai University, Tianjin, 300071, P. R. China; Mathematical Sciences Institute, Australian National University, Canberra, ACT 2601 Australia' author: - Haizhong Li - Hui Ma - Joeri Van der Veken - Luc Vrancken - Xianfeng Wang title: 'Minimal Lagrangian submanifolds of the complex hyperquadric' --- Introduction ============ In this paper, we investigate the geometry of Lagrangian submanifolds of the complex hyperquadric $Q^n$, which is a homogeneous complex $n$-dimensional Kähler manifold. The study of Lagrangian submanifolds originates from symplectic geometry and classical mechanics.
Let $(N,\omega)$ be a $2n$-dimensional symplectic manifold with symplectic form $\omega$. A submanifold $f:M\to(N,\omega)$ is called Lagrangian if $f^{*}\omega=0$ and the dimension of $M$ is half the dimension of $N$. In particular, if $N$ is a Kähler manifold, then $N$ admits complex, Riemannian and symplectic structures which are compatible with each other, and the condition $f^{*}\omega=0$ is equivalent to the complex structure $J$ of $N$ interchanging the tangent and the normal spaces. The study of Lagrangian submanifolds of Kähler manifolds is a classic topic and was initiated in the 1970’s by Chen and Ogiue [@CO1974]. For a review on Riemannian geometry of Lagrangian submanifolds we refer to [@Chen2001; @Chen2011] and the references therein. The simplest examples of Kähler manifolds are complex space forms, which have constant holomorphic sectional curvatures, and the geometry of Lagrangian submanifolds of complex space forms has been widely studied and is well understood in some sense. Meanwhile, Lagrangian submanifolds of other Kähler manifolds have not been deeply understood. The complex hyperquadric $Q^n$ is a compact complex hypersurface of the complex projective space $\mathbb C P^{n+1}$ defined by the homogeneous quadratic equation $z_0^2 +z_1^2 + \ldots + z_{n+1}^2 = 0$. It can be identified with the Grassmann manifold of oriented $2$-planes, is a compact Hermitian symmetric space of rank $2$ and provides a very good example of a Kähler-Einstein manifold. A fundamental fact relates the Lagrangian geometry of the complex hyperquadric to the hypersurface geometry of the unit sphere: the Gauss map of any oriented hypersurface of the unit sphere $S^{n+1}$ is always a Lagrangian submanifold of the complex hyperquadric $Q^n$. In particular, given an isoparametric hypersurface of the unit sphere, one obtains a minimal Lagrangian submanifold of the complex hyperquadric via its Gauss map.
The minimality can be proved by applying Palmer’s nice formula involving the mean curvature form of the Lagrangian submanifold of the complex hyperquadric and the principal curvatures of the hypersurface of the unit sphere (see [@Palmer1994; @Palmer]). The geometry of Lagrangian submanifolds of $Q^n$ obtained by the Gauss map of isoparametric hypersurfaces of the unit sphere has been systematically studied from the point of view of Lie theory and Hamiltonian deformation theory by the second author and Ohnita (see [@MaOhnita1; @MaOhnita2; @MaOhnita3]), where they obtain a classification of all compact homogeneous Lagrangian submanifolds of the complex hyperquadric and determine the Hamiltonian stability for the Gauss maps of all the known homogeneous isoparametric hypersurfaces. Nevertheless, the geometry of Lagrangian submanifolds of the complex hyperquadric is far from being well understood, especially from the geometric point of view. This motivates our study, and the aim of this paper is to understand the geometry of Lagrangian submanifolds of the complex hyperquadric in a more geometric way. It is well known that $Q^n$ carries a family of almost product structures, see for example [@R1]. In this paper, we introduce a new structural approach to study Lagrangian submanifolds of $Q^n$. By using one of the almost product structures, we define two symmetric operators and $n$ local *angle functions* on every such Lagrangian submanifold. It turns out that these angle functions have very nice relations with the second fundamental form and can determine most of the geometry of the Lagrangian submanifold. Using our new approach, we obtain a correspondence theorem between minimal Lagrangian submanifolds of $Q^n$ with constant angle functions and the Gauss maps of isoparametric hypersurfaces of $S^{n+1}$. \[theo1\] Let $a: M^n \to S^{n+1}(1)$ be an isoparametric hypersurface with unit normal $b$ and principal curvatures $\lambda_1,\ldots,\lambda_n$.
Then the Gauss map $G: M^n \to Q^n$ is a minimal Lagrangian immersion and the difference between any two local angle functions is constant. Moreover, - if the almost product structure $A \in \mathcal A$ is chosen as in Example \[ex1\] for the canonical horizontal lift $\hat G$, then all the angle functions $\theta_1,\ldots,\theta_n$ are constant and, when put in the right order, they are given by $$\label{relation_lambdaj_thetaj_1} \lambda_j = \cot \theta_j;$$ - if the almost product structure $A \in \mathcal A$ is chosen as in Example \[ex2\], then again all the angle functions are constant and, when put in the right order, $$\label{relation_lambdaj_thetaj_2} \lambda_j = \cot(\theta_j+c)$$ for a real constant $c$ which is independent of the index $j$. Conversely, consider a minimal Lagrangian immersion $f: M^n \to Q^n$ of a simply connected manifold with constant angle functions $\theta_1,\ldots,\theta_n$. Then for every real constant $c$ with $\sin(\theta_j+c) \neq 0$ for all $j=1,\ldots,n$, there is an isoparametric immersion $M^n \to S^{n+1}(1)$ with Gauss map $f$, whose principal curvatures are given by $\lambda_j = \cot(\theta_j+c)$. By applying Theorem \[theo1\], we can find all minimal Lagrangian submanifolds with constant local angle functions starting from isoparametric hypersurfaces of the unit sphere, see Corollary \[cor2\]. We also obtain the classification of Lagrangian submanifolds which are totally geodesic (Theorem \[theoTG\]) and the classification of minimal Lagrangian submanifolds for which all local angle functions are the same (Theorem \[theoSameAngles\]).
Although Theorem \[theo1\] is stated only in the case of constant angle functions, from the proof of this theorem, one can get a similar local correspondence between a general minimal Lagrangian submanifold of $Q^n$ and the Gauss map of a hypersurface of $S^{n+1}$, with the same relation between the angle functions of the Lagrangian submanifold and the principal curvatures of the hypersurface, while the angle functions and the principal curvatures are not necessarily constant. Note that the classification of minimal Lagrangian submanifolds of constant sectional curvature in complex space forms is a classic result proved by Ejiri [@Ejiri1982], using essentially the Gauss and Codazzi equations. In the case of the complex hyperquadric $Q^n$, it is impossible to get a similar classification following Ejiri’s method, as the curvature tensor of $Q^n$ is much more complicated than that of a complex space form and the Gauss equations become very difficult to solve directly. Therefore, we use a quite different approach, by making full use of the angle functions. We obtain the following classification theorem. \[theoCSCconclusion\] Let $f:M^n \to Q^n$, $n \geq 2$, be a minimal Lagrangian immersion such that $M^n$ has constant sectional curvature $c$. Then $f$ is one of the following: - $f$ is the Gauss map of a part of the standard embedding $S^n(r) \to S^{n+1}(1)$; - $n=2$ and $f$ is the Gauss map of a part of the standard embedding $S^1(r_1) \times S^1(r_2) \to S^3(1)$; - $n=3$ and $f$ is the Gauss map of a part of a tube around the Veronese surface in $S^4(1)$. The constant sectional curvatures in these three cases are $c=2$, $c=0$ and $c=1/8$ respectively. As another successful application of our new techniques, we classify all minimal Lagrangian submanifolds for which all but one local angle functions coincide. \[theoAllButOneSameAngles\] Let $f:M^n \to Q^n~(n\geq3)$ be a minimal Lagrangian submanifold for which $n-1$ local angle functions are the same.
If $A \in \mathcal A$ is chosen as in Example \[ex2\], then $\theta_1=(n-1)\alpha \mod\pi$ and $\theta_2=\cdots=\theta_n=-\alpha \mod\pi$ for some local function $\alpha$. - If $\alpha=0\mod\pi$, then $f$ is the Gauss map of a part of the standard embedding $S^{n}(r)\to S^{n+1}(1)$. - If $\alpha$ is a non-zero constant modulo $\pi$, then $f$ is the Gauss map of a part of the standard embedding $S^{1}(r_1)\times S^{n-1}(r_2)\to S^{n+1}(1)$. - If $\alpha$ is not a constant modulo $\pi$, then $M^n$ must be a warped product $I\times_\rho S^{n-1}(1)$ with $\rho(\alpha)=|c_1(\sin n\alpha)^{-\frac{1}{n}}|$ for some positive constant $c_1$, and the angle function $\alpha$ satisfies the following first order ordinary differential equation: $$\label{6.1} (c_1(\sin (n\alpha))^{-\frac{1}{n}})^2(2+(\frac{d \alpha}{ds})^2(\sin{n\alpha})^{-2})=1,$$ where $\frac{d}{d s}=e_1$ is the tangent vector on the base $I$ of the warped product $ I\times_\rho S^{n-1}(1)$. Moreover, $f$ is locally isometric to the Gauss map of a rotational hypersurface of $S^{n+1}(1)$ with the profile curve $\gamma(\theta)\subset S^2(1)$ given by $$\label{6.2} \gamma(\theta)=\big(-\sin{\alpha}\sqrt{1-(\frac{d\alpha}{d\theta})^2},\cos{\alpha}\sin\theta-\sin{\alpha}\cos{\theta} \frac{d\alpha}{d\theta}, -\cos{\alpha}\cos\theta-\sin{\alpha}\sin{\theta} \frac{d\alpha}{d\theta} \big),$$ where $\alpha$ is the angle function and satisfies the following second order ordinary differential equation: $$\label{6.3} \frac{d^2\alpha}{d\theta^2}=(1-(\frac{d\alpha}{d\theta})^2)\cot{(n\alpha)},~|\frac{d\alpha}{d\theta}|<1.$$ As proposed in [@siffert], one can expect that our approach via Lagrangian submanifolds of $Q^n$, will provide some new understanding of the isoparametric theory in the unit sphere. For more results about isoparametric theory and its applications, we refer to [@GT], [@QT], [@TY] and the references therein. The paper is organized as follows. 
In Section 2, we review some basic definitions and properties of the complex hyperquadric $Q^n$. In Section 3, starting from an almost product structure on $Q^n$, we first define two endomorphisms on the tangent spaces of a Lagrangian submanifold $M^n$ of $Q^n$, and then introduce $n$ local angle functions on $M$. We give two examples to show that we can choose appropriate almost product structures such that the angle functions have special properties. We prove some relations among the angle functions, the second fundamental form and the Levi-Civita connection form. We also give the Gauss and Codazzi equations for a Lagrangian submanifold of $Q^n$. In Section 4, we study the correspondence between Lagrangian submanifolds of $Q^n$ and hypersurfaces of $S^{n+1}$, and prove Theorem \[theo1\]. Section 5 is devoted to the proof of Theorem \[theoCSCconclusion\], where Lemma \[lem5.1\] is the key step, which gives us more information about the second fundamental form and the angle functions, under the assumption of constant sectional curvature. In Section 6, we classify all minimal Lagrangian submanifolds with $n-1$ equal local angle functions by proving Theorem \[theoAllButOneSameAngles\]. **Acknowledgments:** This research was supported by the Tsinghua University - KU Leuven Bilateral Scientific Cooperation Fund and by the NSFC-FWO grant No.11961131001. H. Li is supported by NSFC No.11831005 and No.11671224. H. Ma is supported by NSFC No.11831005 and No.11671223. J. Van der Veken and L. Vrancken are supported by project 3E160361 of the KU Leuven Research Fund. J. Van der Veken is supported by the Excellence Of Science project G0H4518N of the Belgian government. X. Wang is supported by NSFC No.11571185 and the Fundamental Research Funds for the Central Universities, and she would also like to express her deep gratitude to the Mathematical Sciences Institute at the Australian National University for its hospitality and to Prof.
Ben Andrews for his encouragement and help during her stay in MSI of ANU as a Visiting Fellow, while part of this work was completed. The authors would also like to thank the referees for their careful reading of this paper and for providing some helpful suggestions. The geometry of the complex hyperquadric ======================================== Let $\mathbb{C} P^{n+1}(4)$ be the complex projective space of complex dimension $n+1$ equipped with the Fubini-Study metric $g_{FS}$ of constant holomorphic sectional curvature $4$. Then the Hopf fibration $$\label{def_pi} \pi : S^{2n+3}(1) \subset \mathbb{C}^{n+2} \to \mathbb{C} P^{n+1}(4): z \mapsto [z]$$ is a Riemannian submersion from the unit sphere of real dimension $2n+3$ to $\mathbb{C} P^{n+1}(4)$. Remark that for any $z \in S^{2n+3}(1)$ we have $\pi^{-1}\{[z]\} = \{e^{\sqrt{-1}t}z \ | \ t\in{\ensuremath{\mathbb{R}}}\}$ and $\ker (d\pi)_z = \mathrm{span} \{\sqrt{-1}z\}$. The complex structure $J$ on $\mathbb{C} P^{n+1}(4)$ is induced from multiplication by $\sqrt{-1}$ on $TS^{2n+3}(1)$ and it is well-known that $(\mathbb{C} P^{n+1}(4), g_{FS}, J)$ is a Kähler manifold. We define the *complex hyperquadric* of complex dimension $n$ as the following complex hypersurface of $\mathbb{C} P^{n+1}(4)$: $$\label{def_Qn} Q^n = \{ [(z_0,z_1,\ldots,z_{n+1})] \in \mathbb{C} P^{n+1}(4) \ | \ z_0^2 +z_1^2+ \cdots + z_{n+1}^2 = 0 \}.$$ If $Q^n$ is equipped with the induced metric $g_{FS}|_{Q^n}$, which we will denote by $g$, and the induced almost complex structure $J|_{Q^n}$, which we will again denote by $J$, then $(Q^n,g,J)$ is of course a Kähler manifold itself. The inverse image of $Q^n$ under the Hopf fibration is the Stiefel manifold $$V_2({\ensuremath{\mathbb{R}}}^{n+2}) = \left\{ u+\sqrt{-1}v \ \left| \ u,v\in{\ensuremath{\mathbb{R}}}^{n+2}, \ \langle u,u \rangle = \langle v,v \rangle = \frac 12, \ \langle u,v \rangle = 0 \right. 
\right\} \subset S^{2n+3}(1)$$ of real dimension $2n+1$, where $\langle \cdot,\cdot \rangle$ denotes the Euclidean inner product on ${\ensuremath{\mathbb{R}}}^{n+2}$. The normal space to $V_2({\ensuremath{\mathbb{R}}}^{n+2})$ in $S^{2n+3}(1)$ at a point $z$ is spanned by $\bar z$ and $\sqrt{-1}\bar z$, which implies that the normal space to $Q^n$ in $\mathbb{C} P^{n+1}(4)$ at a point $[z]$ is spanned by $(d\pi)_z(\bar z)$ and $J(d\pi)_z(\bar z)=(d\pi)_z(\sqrt{-1}\bar z)$, where $z$ is any representative of $[z]$. Remark that these vectors depend on the chosen representative $z$. We denote by $\mathcal A$ the set of all shape operators of $Q^n$ in $\mathbb{C} P^{n+1}(4)$ associated with unit normal vector fields. Then $\mathcal A$ is a collection of $(1,1)$-tensor fields on $Q^n$ and one can deduce the following (see for example [@R1] or [@Smyth1967]). \[lem1\] Any $A \in \mathcal A$ satisfies - $A^2=\text{Id}$, i.e., $A$ is involutive. - $A$ is symmetric. - $A$ anti-commutes with $J$. Properties (i) and (ii) in Lemma \[lem1\] are equivalent to saying that $\mathcal A$ is a family of almost product structures on $Q^n$. However, these almost product structures are not always integrable. In fact, we have the following result. \[[@Smyth1967]\]\[lem2\] Let $\xi$ be a unit normal vector field along $Q^n$ in $\mathbb{C} P^{n+1}(4)$ with corresponding shape operator $A \in \mathcal A$. Then there exists a non-zero one-form $s$ such that $$\begin{aligned} & \nabla^{\mathbb{C} P^{n+1}(4)}_X \xi = - AX + s(X) J\xi, \label{def_s}\\ & \nabla^{Q^n}_X A = s(X) JA \label{nabla_A} \end{aligned}$$ for all $X$ tangent to $Q^n$, where $\nabla^{\mathbb{C} P^{n+1}(4)}$ and $\nabla^{Q^n}$ are the Levi-Civita connections of $\mathbb{C} P^{n+1}(4)$ and $Q^n$ respectively.
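The description of the Hopf preimage of $Q^n$ as the Stiefel manifold $V_2(\mathbb{R}^{n+2})$ can be illustrated numerically: for $z = u + \sqrt{-1}v$ with $\langle u,u\rangle = \langle v,v\rangle = \tfrac12$ and $\langle u,v\rangle = 0$ one has $\sum_j z_j^2 = |u|^2 - |v|^2 + 2\sqrt{-1}\langle u,v\rangle = 0$, so $[z]$ lies on the hyperquadric. The sketch below checks this for the lift $(a+\sqrt{-1}b)/\sqrt 2$ built from the standard embedding $S^2(r) \subset S^3(1)$ with unit normal $b$ (a concrete choice made here for illustration):

```python
import math

def check_point(z, tol=1e-12):
    """Check z in V_2(R^{n+2}): |Re z|^2 = |Im z|^2 = 1/2, <Re z, Im z> = 0,
    and that its Hopf image lies on the hyperquadric: sum z_j^2 = 0."""
    u = [w.real for w in z]
    v = [w.imag for w in z]
    uu = sum(x * x for x in u)
    vv = sum(x * x for x in v)
    uv = sum(x * y for x, y in zip(u, v))
    q = sum(w * w for w in z)  # complex quadric equation
    assert abs(uu - 0.5) < tol and abs(vv - 0.5) < tol
    assert abs(uv) < tol
    assert abs(q) < tol
    return True

# Gauss map lift of the standard S^2(r) in S^3(1), viewed in R^4:
# position a = (r*x, sqrt(1-r^2)), unit normal b = (sqrt(1-r^2)*x, -r)
r = 0.6
x = [1.0 / math.sqrt(3.0)] * 3          # a unit vector in R^3
a = [r * xi for xi in x] + [math.sqrt(1.0 - r * r)]
b = [math.sqrt(1.0 - r * r) * xi for xi in x] + [-r]
z = [complex(ai, bi) / math.sqrt(2.0) for ai, bi in zip(a, b)]
print(check_point(z))  # True: the lift lands over Q^2
```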
The equation of Gauss for $Q^n$ as a submanifold of $\mathbb{C} P^{n+1}(4)$ yields the following expression for the Riemannian curvature tensor of $Q^n$: $$\begin{aligned} \label{R_Qn} R^{Q^n}(X,Y)Z = \ & g(Y,Z)X - g(X,Z)Y + g(X,JZ)JY - g(Y,JZ)JX + 2g(X,JY)JZ \\ & + g(AY,Z)AX - g(AX,Z)AY + g(JAY,Z)JAX - g(JAX,Z)JAY, \nonumber\end{aligned}$$ where $A$ is any element of $\mathcal A$ and $X,~Y,~Z\in TQ^n$. From this expression one can calculate straightforwardly that $Q^n$ is a Kähler-Einstein manifold with Einstein constant $2n$. Lagrangian submanifolds of the complex hyperquadric =================================================== In the following sections, we consider an immersion $f: M^n \to Q^n$ of a manifold of real dimension $n$ into the complex hyperquadric of complex dimension $n$. If no confusion is possible, we will identify $M^n$ with its image and $(df)_p(T_pM^n)$ with $T_pM^n$ for every $p \in M^n$. Moreover, we will denote the metric on $M^n$ induced from the metric $g$ on $Q^n$, constructed above, again by $g$. As usual in complex geometry, we say that $f$ is *Lagrangian* if $J$ maps the tangent space to $M^n$ at any point into the normal space to $M^n$ at that point and vice versa. Fixing an almost product structure $A \in \mathcal A$ on $Q^n$, we can define at any point $p$ of a Lagrangian submanifold $M^n$ of $Q^n$ two endomorphisms $B$ and $C$ of $T_pM^n$ by putting $$\label{def_BC} AX = BX - JCX$$ for all $X \in T_pM$, i.e., $BX$ is the component of $AX$ tangent to $M^n$ and $CX$ is the image under $J$ of the component of $AX$ normal to $M^n$. With these definitions, we have the following. \[lem3\] $B$ and $C$ are symmetric endomorphisms of $T_pM^n$ which commute and satisfy $B^2+C^2=\mathrm{Id}$. Since $g(BX,Y)=g(AX,Y)$ and $g(CX,Y)=g(JAX,Y)$ for all $X,Y \in T_pM^n$, the endomorphisms $B$ and $C$ are symmetric because $A$ and $JA$ are symmetric. Furthermore, we have $X = A^2X = A(BX-JCX) = (B^2+C^2)X + J(BC-CB)X$ for an arbitrary $X \in T_pM^n$.
Since the first term on the right hand side is tangent to $M^n$ and the second term on the right hand side is normal to $M^n$, we must have $(B^2+C^2)X=X$ and $(BC-CB)X=0$, which proves the result. Lemma \[lem3\] implies that $B$ and $C$ are simultaneously diagonalizable and that the sum of the squares of corresponding eigenvalues must be $1$. Therefore, there exist an orthonormal basis $\{e_1,\ldots,e_n\}$ of $T_pM^n$ and real numbers $\theta_1,\ldots,\theta_n$, defined up to an integer multiple of $\pi$, such that $$\label{BC} Be_j = \cos(2\theta_j) e_j, \qquad Ce_j = \sin(2\theta_j) e_j$$ for $j=1,\ldots,n$. The factor $2$ in front of the angles is just a choice for convenience, as it will simplify some of the expressions in the sequel. We can rewrite this as $Ae_j = \cos(2\theta_j) e_j - \sin(2\theta_j) Je_j$. Working locally, we can regard $B$ and $C$ as symmetric $(1,1)$-tensor fields on $M^n$ which define a local orthonormal frame $\{e_1,\ldots,e_n\}$ and local angle functions $\theta_1,\ldots,\theta_n$ in a similar way as above. In general, these functions cannot be extended to global functions on $M^n$ and they are only determined up to an integer multiple of $\pi$. The following result states that changing $A \in \mathcal A$ will change the angle functions $\theta_1,\ldots,\theta_n$, but not the orthonormal frame $\{e_1,\ldots,e_n\}$. \[lem4\] Let $f: M^n \to Q^n$ be a Lagrangian immersion and $A_0,A \in \mathcal A$.
Then there exists a function $\varphi: M^n \to {\ensuremath{\mathbb{R}}}$ such that, along the image of $f$, $$A = \cos\varphi \, A_0 + \sin\varphi \, JA_0.$$ If $\{e_1,\ldots,e_n\}$ is a local orthonormal frame such that $A_0e_j = \cos(2\theta^0_j) e_j - \sin(2\theta^0_j) Je_j$ for $j=1,\ldots,n$, then $Ae_j = \cos(2\theta_j) e_j - \sin(2\theta_j) Je_j$ for $j=1,\ldots,n$, with $$\label{relation_theta_j} \theta_j = \theta^0_j - \frac{\varphi}{2}.$$ Assume that $A_0$ and $A$ are the shape operators associated with unit normal vector fields $\xi_0$ and $\xi$ respectively. Since $Q^n$ is a Kähler submanifold of $\mathbb{C} P^{n+1}(4)$, there is a function $\varphi:M^n \to {\ensuremath{\mathbb{R}}}$ such that, at every point of $M^n$, $\xi = \cos\varphi \, \xi_0 + \sin\varphi \, J\xi_0$, which implies that $A = \cos\varphi \, A_0 + \sin\varphi \, JA_0$ along the image of $f$. Now assume that $A_0e_j = \cos(2\theta^0_j) e_j - \sin(2\theta^0_j) Je_j$ for $j=1,\ldots,n$. Then it follows from a straightforward computation that $Ae_j = \cos(2\theta^0_j-\varphi) e_j - \sin(2\theta^0_j-\varphi) Je_j$ for $j=1,\ldots,n$. There are a few possible choices for the almost product structure $A \in \mathcal A$ on $Q^n$ which are adapted to a given Lagrangian submanifold $f: M^n \to Q^n$. We present two of them in the next examples. \[ex1\] Assume that, apart from a Lagrangian immersion $f: M^n \to Q^n$, also a horizontal lift $\hat f: M^n \to V_2({\ensuremath{\mathbb{R}}}^{n+2})$ of $f$ is given. Remark that it follows from [@Reckziegel] that any Lagrangian immersion into $Q^n$ locally allows such a horizontal lift. If $M^n$ is simply connected, the horizontal lift can be defined globally. 
Since the normal space to $V_2({\ensuremath{\mathbb{R}}}^{n+2})$ in $S^{2n+3}(1) \subset \mathbb{C}^{n+2}$ at a point $z$ is the complex span of $\bar z$, one can take $$\xi_{f(p)}=(d\pi)_{\hat f(p)}\left(\overline{\hat{f}(p)}\right)$$ as a unit normal vector field to $Q^n$ along the image of $f$, and the corresponding shape operator is given by $$\label{A0} AX = -(d\pi)\left(\overline{\hat X}\right),$$ where $X$ is any vector tangent to $Q^n$ at a point $f(p)$ and $\hat X$ is its horizontal lift to $\hat f(p)$. \[ex2\] Given a Lagrangian immersion $f: M^n \to Q^n$, one can choose $A \in \mathcal A$ such that the associated local angle functions satisfy $$\label{sumangleszero} \theta_1 + \cdots + \theta_n = 0 \mod \pi.$$ Indeed, let $A_0 \in \mathcal A$ be an arbitrary almost product structure with associated local angle functions $\theta_1^0, \ldots , \theta_n^0$ and put $\varphi= 2(\theta^0_1 + \cdots + \theta^0_n)/n$. If we choose $A \in \mathcal A$ such that $A = \cos\varphi \, A_0 + \sin\varphi \, JA_0$ along the image of $f$, then it follows from the relation $\theta_j = \theta^0_j - \varphi/2$ that the local angle functions associated with $A$ satisfy this condition. Remark that we will always work modulo $\pi$ for local angle functions, since they are only defined up to an integer multiple of $\pi$. The choice of $\varphi$, and hence of $A \in \mathcal A$, in Example \[ex2\] is not uniquely determined. Indeed, for any $k \in \{0,\ldots,n-1\}$, the function $\varphi= 2(\theta^0_1 + \cdots + \theta^0_n)/n + 2k\pi/n$ gives rise to a different $A \in \mathcal A$ for which the angle functions satisfy the same condition. Let $h$ be the second fundamental form of the Lagrangian immersion $f: M^n \to Q^n$. We define the components of $h$ by $$h_{ij}^k = g(h(e_i,e_j),Je_k)$$ for all $i,j,k=1,\ldots,n$. A fundamental property of Lagrangian submanifolds of Kähler manifolds implies that the components $h_{ij}^k$ are totally symmetric in the three indices (cf. [@Oh1990; @Oh1993]).
Furthermore, let $\nabla$ denote the connection on $M^n$ induced from the Levi-Civita connection $\nabla^{Q^n}$ of $(Q^n,g)$. We define its connection forms by $$\omega_j^k(X) = g(\nabla_X e_j,e_k)$$ for all $j,k=1,\ldots,n$ and all $X$ tangent to $M^n$. Remark that this family of one-forms is anti-symmetric in the indices. The following proposition relates the angle functions, the components of the second fundamental form and the connection forms. \[prop1\] Let $M^n$ be a Lagrangian submanifold of $Q^n$ and assume that an almost product structure $A \in \mathcal A$ on $Q^n$ is fixed. Let $\{e_1,\ldots,e_n\}$ be a local orthonormal frame on $M^n$ constructed as above, then the following relations among the angle functions, the components of the second fundamental form and the connection forms hold: $$\begin{aligned} & e_i(\theta_j) = h_{jj}^i - \frac{s(e_i)}{2}, \label{intcond1} \\ & \sin(\theta_j-\theta_k) \omega_j^k(e_i) = \cos(\theta_j-\theta_k) h_{ij}^k, \label{intcond2} \end{aligned}$$ for all $i,j,k=1,\ldots,n$, with $j \neq k$. Here, $s$ is the one-form associated with $A$ as in Lemma \[lem2\]. Combining the splitting $AX=BX-JCX$ with the formula $\nabla^{Q^n}_X A = s(X) JA$ yields $$\begin{aligned} & (\nabla_X B)Y = s(X) CY + Jh(X,CY) + CJh(X,Y),\label{nablab} \\ & (\nabla_X C)Y = -s(X) BY - Jh(X,BY) - BJh(X,Y)\label{nablac} \end{aligned}$$ for all vector fields $X$ and $Y$ tangent to $M^n$. Evaluating these expressions for $X=e_i$ and $Y=e_j$ gives us two equalities between vectors. Comparing the components in $e_j$ gives respectively $$\begin{aligned} & -2 \sin(2\theta_j)e_i(\theta_j) = s(e_i) \sin(2\theta_j) - 2\sin(2\theta_j)h_{ij}^j, \\ & 2 \cos(2\theta_j)e_i(\theta_j) = -s(e_i) \cos(2\theta_j) + 2\cos(2\theta_j)h_{ij}^j. \end{aligned}$$ Since either $\sin(2\theta_j) \neq 0$ or $\cos(2\theta_j) \neq 0$, we conclude the first relation of the proposition.
On the other hand, comparing the components in $e_k$ for some $k \neq j$ gives respectively $$\begin{aligned} & (\cos(2\theta_j)-\cos(2\theta_k)) \omega_j^k(e_i) = -(\sin(2\theta_j)+\sin(2\theta_k)) h_{ij}^k, \\ & (\sin(2\theta_j)-\sin(2\theta_k)) \omega_j^k(e_i) = (\cos(2\theta_j)+\cos(2\theta_k)) h_{ij}^k, \end{aligned}$$ or, equivalently, $$\begin{aligned} & -\sin(\theta_j+\theta_k)\sin(\theta_j-\theta_k) \omega_j^k(e_i) = -\sin(\theta_j+\theta_k)\cos(\theta_j-\theta_k) h_{ij}^k, \\ & \cos(\theta_j+\theta_k)\sin(\theta_j-\theta_k) \omega_j^k(e_i) = \cos(\theta_j+\theta_k)\cos(\theta_j-\theta_k) h_{ij}^k. \end{aligned}$$ Since either $\sin(\theta_j+\theta_k) \neq 0$ or $\cos(\theta_j+\theta_k) \neq 0$, we conclude the second relation of the proposition. \[cor1\] Let $f: M^n \to Q^n$ be a minimal Lagrangian immersion for which the sum of the local angle functions is constant. This can for example be achieved by choosing $A \in \mathcal A$ as in Example \[ex2\]. Then the one-form $s$ associated with $A$ vanishes on the tangent bundle of $M^n$. In particular, for all $X$ tangent to $M$, one has $\nabla^{Q^n}_X A=0$ and, if $A$ is the shape operator associated with a normal vector field $\xi$ along $Q^n$, also $\nabla^{\perp}_X \xi = 0$, where $\nabla^{\perp}$ is the normal connection of $Q^n$ in $\mathbb{C} P^{n+1}(4)$. Denote the angle functions by $\theta_1,\ldots,\theta_n$. Then $$\label{phi_const} 0 = e_i \left( \theta_1 + \cdots + \theta_n \right) = h_{11}^i + \cdots + h_{nn}^i - n \frac{s(e_i)}{2} = - n \frac{s(e_i)}{2}$$ for any $i=1,\ldots,n$, where we used the relation $e_i(\theta_j) = h_{jj}^i - s(e_i)/2$ and the fact that $h_{11}^i + \cdots + h_{nn}^i$ is $n$ times the $Je_i$-component of the mean curvature vector. Hence $s(e_i)=0$ for $i=1,\ldots,n$ and thus $s$ vanishes on the tangent bundle of $M^n$. The rest of the statement now follows directly from Lemma \[lem2\]. To end this section, we state the equations of Gauss and Codazzi for a Lagrangian submanifold of $Q^n$.
Let $f: M^n \to Q^n$ be a Lagrangian immersion with second fundamental form $h$. Define $B$ and $C$ as above for any choice of $A \in \mathcal A$. Finally, denote by $R$ the Riemannian curvature tensor of $M^n$ and by $\overline\nabla$ the Levi-Civita connection. Then $$\label{gauss} \begin{aligned} g(R(X,Y)Z,W) = \ & g(Y,Z)g(X,W) - g(X,Z)g(Y,W) \\ & + g(BY,Z) g(BX,W) - g(BX,Z) g(BY,W) \\ & + g(CY,Z) g(CX,W) - g(CX,Z) g(CY,W) \\ & + g(h(Y,Z),h(X,W)) - g(h(X,Z),h(Y,W)) \end{aligned}$$ and $$\label{codazzi} \begin{aligned} (\overline\nabla h)(X,Y,Z) - (\overline\nabla h)(Y,X,Z) = \ & g(CY,Z) JBX - g(CX,Z) JBY \\ & - g(BY,Z) JCX + g(BX,Z) JCY \end{aligned}$$ for any vector fields $X$, $Y$, $Z$ and $W$ tangent to $M^n$. These follow immediately from the general forms of the equations of Gauss and Codazzi, $$\begin{aligned} & g(R(X,Y)Z,W) = g(R^{Q^n}(X,Y)Z,W) + g(h(Y,Z),h(X,W)) - g(h(X,Z),h(Y,W)), \\ & (\overline\nabla h)(X,Y,Z) - (\overline\nabla h)(Y,X,Z) = (R^{Q^n}(X,Y)Z)^{\perp}, \end{aligned}$$ where the superscript $\perp$ denotes the component normal to $M^n$, by using the curvature expression for $Q^n$ and the decomposition $AX = BX - JCX$. If $\{e_1,\ldots,e_n\}$ is the local orthonormal frame constructed above and $\theta_1,\ldots,\theta_n$ are the local angle functions, then it follows from the equation of Gauss and the relations of Proposition \[prop1\] that the sectional curvature of the plane spanned by $e_i$ and $e_j$ is given by $$\label{seccurv} \begin{aligned} K_{ij} &= g(R(e_i,e_j)e_j,e_i) \\ &= 2 \cos^2(\theta_i-\theta_j) + g(h(e_i,e_i),h(e_j,e_j)) - g(h(e_i,e_j),h(e_i,e_j)) \\ &= 2 \cos^2(\theta_i-\theta_j) + \sum_{k=1}^n \left( h_{ii}^k h_{jj}^k - (h_{ij}^k)^2 \right) \\ &= 2 \cos^2(\theta_i-\theta_j) + \sum_{k=1}^n \left( \left( e_k(\theta_i)+\frac{s(e_k)}{2} \right) \left(e_k(\theta_j)+\frac{s(e_k)}{2}\right)-(h_{ij}^k)^2 \right) \end{aligned}$$ for any $i,j = 1,\ldots,n$, with $i\neq j$.
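The passage from the $\cos(2\theta)$/$\sin(2\theta)$ form to the factored form in the proof of Proposition \[prop1\] rests on the standard sum-to-product identities; a quick symbolic sketch (using sympy, with symbols named after the angle functions) confirms all four of them:

```python
import sympy as sp

tj, tk = sp.symbols('theta_j theta_k', real=True)

# The four sum-to-product identities used in the proof of Proposition 1.
identities = [
    sp.cos(2*tj) - sp.cos(2*tk) + 2*sp.sin(tj + tk)*sp.sin(tj - tk),
    sp.sin(2*tj) + sp.sin(2*tk) - 2*sp.sin(tj + tk)*sp.cos(tj - tk),
    sp.sin(2*tj) - sp.sin(2*tk) - 2*sp.cos(tj + tk)*sp.sin(tj - tk),
    sp.cos(2*tj) + sp.cos(2*tk) - 2*sp.cos(tj + tk)*sp.cos(tj - tk),
]
for expr in identities:
    assert sp.simplify(sp.expand_trig(expr)) == 0
```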
Lagrangian submanifolds of $Q^n$ and hypersurfaces of $S^{n+1}(1)$ ================================================================== Let $a: M^n \to S^{n+1}(1) \subset {\ensuremath{\mathbb{R}}}^{n+2}$ be an immersion and denote by $b$ a unit normal vector field along this immersion, tangent to $S^{n+1}(1)$. Let $\lambda_1,\ldots,\lambda_n$ be the principal curvatures and denote by $\{e_1,\ldots,e_n\}$ a local orthonormal frame given by principal directions such that $Se_j = \lambda_j e_j$ for $j=1,\ldots,n$, where $S$ is the shape operator associated with $b$. The Gauss map of the hypersurface $a$ is given by $$\label{def_G} G: M^n \to Q^n: p \mapsto [a(p)+\sqrt{-1} b(p)].$$ Remark that $$\label{def_Ghat} \hat G: M^n \to V_2({\ensuremath{\mathbb{R}}}^{n+2}): p \mapsto \frac{1}{\sqrt 2}(a(p)+\sqrt{-1}b(p))$$ is a map into the Stiefel manifold $V_2({\ensuremath{\mathbb{R}}}^{n+2})$ such that $G = \pi \circ \hat G$, which shows that $G$ indeed takes values in $Q^n$. In fact, $\hat G$ is horizontal since $$\label{dGhat} (d\hat G)e_j = \frac{1}{\sqrt 2}(1-\sqrt{-1}\lambda_j) e_j$$ is perpendicular to $\sqrt{-1}\hat G$ for all $j=1,\ldots,n$. It also follows from this formula that $(d\hat G)e_j$ is perpendicular to $\sqrt{-1}(d\hat G)e_k$ for all $j,k=1,\ldots,n$, which implies that $G$ is Lagrangian. The map $\hat G$ is, up to multiplication with a factor $e^{\sqrt{-1}t}$ for some constant $t\in \mathbb R$, the unique horizontal lift of $G$. If the hypersurface $a$ is *isoparametric* in $S^{n+1}(1)$, i.e., if the principal curvatures $\lambda_1,\ldots,\lambda_n$ are constant, then the Gauss map is a minimal Lagrangian immersion. This follows either by a straightforward computation of the second order derivatives of $\hat G$ or from the following elegant formula from [@Palmer]: $$g(J\vec{H},\cdot) = -\frac{1}{n} \ d \left( \mathrm{Im} \left( \log \prod_{j=1}^n (1+\sqrt{-1}\lambda_j) \right) \right),$$ where $\vec{H}$ is the mean curvature vector of the Gauss map.
As remarked in [@MaOhnita1], any Lagrangian immersion $f:M^n \to Q^n$ can locally be seen as the Gauss map of a hypersurface of $S^{n+1}(1)$. Indeed, inspired by the construction of $\hat G$, we can always take a horizontal lift $\hat f: M^n \to V_2({\ensuremath{\mathbb{R}}}^{n+2})$ such that ($\sqrt 2$ times) its real part is locally an immersion into $S^{n+1}(1)$. In the following, we prove Theorem \[theo1\]. **Proof of Theorem \[theo1\]:** As discussed above, it is well-known that the Gauss map $G: M^n \to Q^n: p \mapsto [a(p)+\sqrt{-1}b(p)]$ of an isoparametric hypersurface $a: M^n \to S^{n+1}(1)$ with unit normal $b$ is a minimal Lagrangian immersion. Let $A_0 \in \mathcal A$ be as in Example \[ex1\] for the lift $\hat G$ given in and let $A \in \mathcal A$ be arbitrary. Lemma \[lem4\] implies that, along the image of $G$, we have $A = \cos\varphi \, A_{0} + \sin\varphi \, JA_{0}$ for some function $\varphi: M^n \to {\ensuremath{\mathbb{R}}}$. Let $\{e_1,\ldots,e_n\}$ be a local orthonormal frame of principal directions for the immersion $a$ on $M^n$. Then, using the expressions for $A_0$ and $(d\hat G)e_j$, $$\begin{aligned} A_0 &(dG)e_j = -(d\pi) \left( \overline{(d \hat G)e_j} \right) = -(d\pi) \left( \frac{1}{\sqrt 2}(1+\sqrt{-1}\lambda_j)e_j \right) \\ & = -(d\pi) \left( \frac{1-\lambda_j^2}{1+\lambda_j^2} \, (d \hat G)e_j + \frac{2\lambda_j}{1+\lambda_j^2} \, \sqrt{-1} (d \hat G)e_j \right) = \frac{\lambda_j^2-1}{\lambda_j^2+1} \, (dG)e_j - \frac{2\lambda_j}{\lambda_j^2+1} \, J (dG)e_j.\end{aligned}$$ This implies that the frame $\{e_1, \ldots, e_n \}$ diagonalizes the operators $B_0$ and $C_0$, associated with $A_0$ as explained above, and that the angle functions are determined by $$\label{theta0} \cos(2\theta^0_j) = \frac{\lambda_j^2-1}{\lambda_j^2+1}, \qquad \sin(2\theta^0_j) = \frac{2\lambda_j}{\lambda_j^2+1}.$$ These relations imply that $\lambda_j=\cot\theta^0_j$. In particular, we obtain that $\theta^0_1,\ldots,\theta^0_n$ are constant.
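The identification $\lambda_j = \cot\theta^0_j$ in the step above can be checked numerically; the sample angles below are arbitrary test values chosen for this sketch, not data from the proof:

```python
import numpy as np

# For lambda = cot(theta): cos(2*theta) = (lambda^2 - 1)/(lambda^2 + 1)
# and sin(2*theta) = 2*lambda/(lambda^2 + 1), as in the proof above.
for theta in np.linspace(0.1, np.pi - 0.1, 7):
    lam = 1.0 / np.tan(theta)          # cot(theta)
    assert np.isclose(np.cos(2*theta), (lam**2 - 1)/(lam**2 + 1))
    assert np.isclose(np.sin(2*theta), 2*lam/(lam**2 + 1))
```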
It follows from Lemma \[lem4\] that the angle functions associated with $A$ are given by $\theta_j = \theta_j^0 - \varphi/2$, which implies that the difference between any two of them is constant. Now assume that $A$ is chosen as in Example \[ex2\]. Since $\theta_1 + \cdots + \theta_n = 0 \mod \pi$, we obtain that $\varphi$ is also constant and $\lambda_j = \cot(\theta_j^0) = \cot(\theta_j + \varphi/2)$ for $j=1,\ldots,n$. Conversely, let $f: M^n \to Q^n$ be a minimal Lagrangian immersion with constant angle functions associated with some $A \in \mathcal A$. Since $M^n$ is simply connected, we can take a horizontal lift $\hat f: M^n \to V_2({\ensuremath{\mathbb{R}}}^{n+2})$ of $f$, which can be written as $\hat f = (a+\sqrt{-1}b)/\sqrt 2$ and hence defines two maps $a,b: M^n \to S^{n+1}(1)$ such that $a(p)$ and $b(p)$ are orthogonal for every $p \in M^n$. For every constant $t \in {\ensuremath{\mathbb{R}}}$, there is another horizontal lift of $f$, namely, $$\label{liftf} \hat f_t = \frac{e^{\sqrt{-1}t}}{\sqrt 2}(a+\sqrt{-1}b) = \frac{1}{\sqrt 2}(a_t+\sqrt{-1}b_t),$$ where $a_t= (\cos t) a - (\sin t) b$ and $b_t= (\sin t) a + (\cos t) b$. Now let $\xi$ be a unit normal vector field to $Q^n$ such that $A$ is the shape operator associated with $\xi$. We can lift the restriction of $\xi$ to the image of $f$ to a horizontal vector field along the image of $\hat f$, which can be written as $\hat\xi = e^{\sqrt{-1}\varphi}(a-\sqrt{-1}b)/\sqrt 2$ for some function $\varphi: M^n \to {\ensuremath{\mathbb{R}}}$. We know from Corollary \[cor1\] that, for every vector $X$ tangent to $M^n$, one has $\nabla^{\perp}_X \xi = 0$, where $\nabla^{\perp}$ is the normal connection of $Q^n$ in $\mathbb{C} P^{n+1}(4)$. This implies that also $\nabla^{\perp}_X \hat\xi = 0$, where $\nabla^{\perp}$ is now the normal connection of $V_2({\ensuremath{\mathbb{R}}}^{n+2})$ in $S^{2n+3}(1)$. Combining this with the expression for $\hat\xi$ yields that $\varphi$ is constant. 
If we lift $\xi$ to the immersion $\hat f_t$ rather than to $\hat f$, we obtain $$\label{liftxi} \hat{\xi}_t = e^{\sqrt{-1}t}\hat{\xi} = \frac{e^{\sqrt{-1}(\varphi+t)}}{\sqrt 2}(a-\sqrt{-1}b) = \frac{e^{\sqrt{-1}(\varphi+2t)}}{\sqrt 2}(a_t-\sqrt{-1}b_t).$$ It follows from the expressions for $\hat f_t$ and $\hat\xi_t$ that $$\label{eq_ab} a_t = \frac{1}{\sqrt 2} (\hat f_t + e^{-\sqrt{-1}(\varphi+2t)} \hat{\xi}_t), \qquad b_t = -\frac{\sqrt{-1}}{\sqrt 2}(\hat f_t - e^{-\sqrt{-1}(\varphi+2t)} \hat{\xi}_t).$$ Let us investigate when $a_t$ is an immersion. Let $\{e_1,\ldots,e_n\}$ be an orthonormal frame such that $A(df)e_j = \cos(2\theta_j) (df)e_j - \sin(2\theta_j) J(df)e_j$. Since $(d\pi)(d\hat{\xi})e_j=-A(df)e_j$, we obtain $$\label{eq_da} \begin{aligned} (da_t)e_j &= \frac{1}{\sqrt 2} \left( (d\hat f_t)e_j - e^{-\sqrt{-1}(\varphi+2t)} \left( \cos(2\theta_j) (d\hat f_t)e_j - \sqrt{-1}\sin(2\theta_j) (d \hat f_t)e_j \right) \right) \\ &= \frac{1}{\sqrt 2} \left( 1-e^{-\sqrt{-1}(2\theta_j+\varphi+2t)} \right)(d\hat f_t)e_j, \end{aligned}$$ which means that $a_t$ is an immersion if and only if $2\theta_j + \varphi + 2t$ is not a multiple of $2\pi$ for any $j=1,\ldots,n$. Consequently, the equation for $a_t$ above defines an immersion into $S^{n+1}(1)$ for every choice of constant $t$, and hence of $c=\varphi/2+t$, satisfying $\sin(\theta_j + \varphi/2 + t) \neq 0$ for any $j=1,\ldots,n$. Moreover, $b_t$, as defined above, must be a unit normal to this hypersurface tangent to the sphere, since it is perpendicular to $a_t$ and also to $(da_t)e_j$ for all $j=1,\ldots,n$, as can be seen from the expression for $(da_t)e_j$. This means that $a_t$ is a hypersurface with Gauss map $f$. By comparing this expression with the one for $(d\hat G)e_j$, we see that the principal curvatures of this hypersurface are given by $\lambda_j = \cot(\theta_j + \varphi/2 + t)$. In particular, they are constant and hence $a_t$ is isoparametric.
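The immersion condition obtained at the end of this proof comes from the elementary identity $|1-e^{-\sqrt{-1}x}|^2 = 4\sin^2(x/2)$ applied with $x = 2\theta_j + \varphi + 2t$; a symbolic sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# |1 - exp(-i*x)|^2 = (1 - exp(-i*x))(1 - exp(i*x)) = 4*sin(x/2)^2,
# so (da_t)e_j vanishes exactly when x = 2*theta_j + varphi + 2*t
# is an integer multiple of 2*pi, i.e. when sin(theta_j + varphi/2 + t) = 0.
sq_norm = ((1 - sp.exp(-sp.I*x)) * (1 - sp.exp(sp.I*x))).rewrite(sp.cos)
assert sp.simplify(sq_norm - 4*sp.sin(x/2)**2) == 0
```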
It follows from Theorem \[theo1\] that for a given Lagrangian immersion $f: M^n \to Q^n$, which is minimal and, after choosing $A$ as in Example \[ex2\], has constant angle functions, there are several isoparametric hypersurfaces of $S^{n+1}(1)$ with Gauss map $f$. It follows from the proof that if $a$ is such a hypersurface, with unit normal $b$, then $a_t = (\cos t) a + (\sin t) b$ will define a hypersurface with the same Gauss map for any $t \in \mathbb R$, provided that $a_t$ is an immersion. Indeed, if $a$ is $\sqrt 2$ times the real part of a horizontal lift $\hat f$, then $a_t$ is $\sqrt 2$ times the real part of the horizontal lift $\hat f_t = e^{\sqrt{-1}t} \hat f$. Remark that $a_t$ is a so-called *parallel hypersurface* of $a$ in $S^{n+1}(1)$. The formula $\lambda_j = \cot\theta_j$ appeared before in the theory of isoparametric hypersurfaces, for example in Münzner’s paper [@MunznerI]. Theorem \[theo1\] gives an interpretation for the angles $\theta_j$. We can now translate everything that is known about the classification of isoparametric hypersurfaces of spheres to the theory of minimal Lagrangian submanifolds of $Q^n$. For example, we have the following. \[cor2\] Let $f: M^n \to Q^n$ be a minimal Lagrangian immersion with constant angle functions. If $g$ is the number of different constant angle functions modulo $\pi$, then $g \in \{1,2,3,4,6\}$. Moreover, - if $g=1$, then $f$ is the Gauss map of a part of the standard embedding $S^n(r) \to S^{n+1}(1)$; - if $g=2$, then $f$ is the Gauss map of a part of the standard embedding $S^k(r_1) \times S^{n-k}(r_2) \to S^{n+1}(1)$; - if $g=3$, then $f$ is the Gauss map of a part of a tube around the standard embedding $\mathbb R P^2 \to S^4(1)$, $\mathbb C P^2 \to S^7(1)$, $\mathbb H P^2 \to S^{13}(1)$ or $\mathbb O P^2 \to S^{25}(1)$ (in the first case, the standard embedding is a Veronese embedding), which are known as Cartan’s isoparametric hypersurfaces. 
The first part of the statement follows from Münzner’s theorem [@MunznerI; @MunznerII] on the possible numbers of distinct principal curvatures of an isoparametric hypersurface of $S^{n+1}(1)$. Indeed, we know from Theorem \[theo1\] that $f$ is the Gauss map of an isoparametric hypersurface of $S^{n+1}(1)$ and that the number of different constant angle functions modulo $\pi$ equals the number of distinct principal curvatures of this hypersurface. The second part follows from the classification of isoparametric hypersurfaces of spheres for $g=1$, $g=2$ and $g=3$, known since the work of Cartan [@C]. The corollary above is only a partial result in the sense that we can translate everything that is known about the classification of isoparametric hypersurfaces of spheres to minimal Lagrangian submanifolds of $Q^n$, also for the cases $g=4$ and $g=6$. The following theorem states that the first two examples in Corollary \[cor2\] are the only totally geodesic Lagrangian submanifolds of $Q^n$. \[theoTG\] Let $f: M^n \to Q^n$ be a totally geodesic Lagrangian immersion. Then $f$ is the Gauss map of a part of the standard embedding $S^n(r) \to S^{n+1}(1)$ or $S^k(r_1) \times S^{n-k}(r_2) \to S^{n+1}(1)$. In the former case, the metric induced by $f$ gives $M^n$ constant sectional curvature $2$. In the latter case, the metric induced by $f$ does not give $M^n$ constant sectional curvature, unless $k=1$ and $n=2$, in which case $f$ makes $M^2$ flat. Choose $A \in \mathcal A$ as in Example \[ex2\]. Then it follows from Corollary \[cor1\] that the one-form $s$ vanishes on the tangent bundle of $M^n$; since the immersion is totally geodesic, the relation $e_i(\theta_j) = h_{jj}^i - s(e_i)/2$ then implies that all the angle functions are constant. We then know from Theorem \[theo1\] that $f$ is the Gauss map of an isoparametric hypersurface of $S^{n+1}(1)$ and from Corollary \[cor2\] that the number of different constant angle functions modulo $\pi$ is $1$, $2$, $3$, $4$ or $6$.
The equation of Codazzi for $X=e_i$, $Y=e_j$ and $Z=e_k$ yields $$\sin(2(\theta_i-\theta_j))(\delta_{jk}Je_i + \delta_{ik}Je_j) = 0,$$ which implies that $2\theta_i = 2\theta_j \mod \pi$ for all indices $i$ and $j$. This means that there can be at most two different angle functions modulo $\pi$ and we obtain the result from Corollary \[cor2\]. The claims about the sectional curvature follow from the formula for $K_{ij}$ above. The following theorem states that the first example in Corollary \[cor2\] describes the only family of minimal Lagrangian submanifolds of $Q^n$ for which all angle functions are equal. \[theoSameAngles\] Let $f: M^n \to Q^n$ be a minimal Lagrangian immersion such that all angle functions are equal. Then $f$ is the Gauss map of a part of the standard embedding $S^n(r) \to S^{n+1}(1)$. Assume that the angle functions appearing in the statement of the theorem correspond to $A_0 \in \mathcal A$ and choose the almost product structure $A \in \mathcal A$ as in Example \[ex2\]. By Lemma \[lem4\], the angle functions corresponding to $A$ are still all equal and, since their sum vanishes modulo $\pi$, they are all constant. By Theorem \[theo1\], the immersion is the Gauss map of a totally umbilical hypersurface of $S^{n+1}(1)$, which proves the theorem. The following theorem explicitly describes minimal Lagrangian immersions with constant angle functions in $Q^3$. \[theoCAdim3\] Let $f:M^3 \to Q^3$ be a minimal Lagrangian immersion and choose $A \in \mathcal A$ as in Example \[ex2\]. Assume that the corresponding angle functions are constant and denote by $g$ the number of different constant angle functions modulo $\pi$. Then $f$ is one of the following: - $f$ is the Gauss map of a part of the standard embedding $S^3(r) \to S^4(1)$ if $g=1$; - $f$ is the Gauss map of a part of the standard embedding $S^1(r_1) \times S^2(r_2) \to S^4(1)$ if $g=2$; - $f$ is the Gauss map of a part of one of Cartan’s isoparametric hypersurfaces: a tube around the Veronese surface in $S^4(1)$ if $g=3$.
In the third case, the metric induced by $f$ gives $M^3$ constant sectional curvature $1/8$. Corollary \[cor1\] and the relation $e_i(\theta_j) = h_{jj}^i - s(e_i)/2$ imply that all components of the second fundamental form for which at least two indices are the same, vanish. The only possibly non-zero component of $h$ is thus $h_{12}^3$ and we distinguish two cases. *Case 1: $h_{12}^3 = 0$.* In this case, $f$ is totally geodesic and hence it is the Gauss map of the standard embedding $S^3(r) \to S^4(1)$ or $S^1(r_1) \times S^2(r_2) \to S^4(1)$ by Theorem \[theoTG\]. *Case 2: $h_{12}^3 \neq 0$.* In this case, it follows from the relation $\sin(\theta_j-\theta_k)\omega_j^k(e_i) = \cos(\theta_j-\theta_k)h_{ij}^k$ that the constant angle functions $\theta_1$, $\theta_2$ and $\theta_3$ are mutually different. It then follows from Corollary \[cor2\] that $f$ is the Gauss map of one of Cartan’s isoparametric hypersurfaces of $S^4(1)$. Considering the $Je_1$-component of the Codazzi equation for $X=e_1$ and $Y=Z=e_2$, using the relations of Proposition \[prop1\] and some elementary trigonometric identities, we obtain $$\label{(h123)_1} (h_{12}^3)^2 = -\cos(\theta_1-\theta_2) \sin(\theta_2-\theta_3) \sin(\theta_3-\theta_1).$$ Similarly, the $Je_2$-component of the Codazzi equation for $X=e_2$ and $Y=Z=e_3$ and the $Je_3$-component of the Codazzi equation for $X=e_3$ and $Y=Z=e_1$ yield $$\begin{aligned} & (h_{12}^3)^2 = -\sin(\theta_1-\theta_2) \cos(\theta_2-\theta_3) \sin(\theta_3-\theta_1), \label{(h123)_2} \\ & (h_{12}^3)^2 = -\sin(\theta_1-\theta_2) \sin(\theta_2-\theta_3) \cos(\theta_3-\theta_1). \label{(h123)_3} \end{aligned}$$ Combining the three equations above yields $\{\theta_1,\theta_2,\theta_3\} = \{0,\pi/3,-\pi/3\}$, as always modulo $\pi$, and hence $(h_{12}^3)^2 = 3/8$. Finally, from the formula for $K_{ij}$, we obtain that the sectional curvature of any plane $\mathrm{span}\{e_i,e_j\}$ is given by $K_{ij} = 2\cos^2(\theta_i-\theta_j)-(h_{12}^3)^2 = 1/8$.
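The final numerics of this proof are easy to confirm: with $\{\theta_1,\theta_2,\theta_3\}=\{0,\pi/3,-\pi/3\}$, all three Codazzi expressions for $(h_{12}^3)^2$ agree and equal $3/8$, and every sectional curvature equals $1/8$. A small numerical sketch:

```python
import numpy as np

th = np.array([0.0, np.pi/3, -np.pi/3])   # the constant angle functions

def rhs(i, j, k):
    # Right-hand side pattern of the three Codazzi relations for (h_12^3)^2.
    return -np.cos(th[i] - th[j]) * np.sin(th[j] - th[k]) * np.sin(th[k] - th[i])

# All three expressions agree and equal 3/8.
vals = [rhs(0, 1, 2), rhs(1, 2, 0), rhs(2, 0, 1)]
assert np.allclose(vals, 3/8)

# Sectional curvatures: K_ij = 2 cos^2(theta_i - theta_j) - (h_12^3)^2 = 1/8.
for i in range(3):
    for j in range(3):
        if i != j:
            assert np.isclose(2*np.cos(th[i] - th[j])**2 - 3/8, 1/8)
```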
Minimal Lagrangian submanifolds of $Q^n$ with constant sectional curvature ========================================================================== The classifications in Theorem \[theoTG\] and Theorem \[theoCAdim3\] include examples of minimal Lagrangian submanifolds of $Q^n$ with constant sectional curvature: the Gauss map of a round sphere and the Gauss map of one of Cartan’s examples. In this section, we prove Theorem \[theoCSCconclusion\], i.e., we classify all minimal Lagrangian submanifolds of $Q^n$ with constant sectional curvature, for arbitrary $n \geq 2$. This classification can be regarded as a counterpart of the classic result by Ejiri [@Ejiri1982] for the case of complex space forms, while the proof is completely different from that in [@Ejiri1982]. General results --------------- We first prove some results which are valid for any dimension $n \geq 2$. This is the key step of the proof of Theorem \[theoCSCconclusion\]. \[lem5.1\] Let $f: M^n \to Q^n$ be a Lagrangian submanifold with constant sectional curvature. Assume that an almost product structure $A=\cos\varphi A_{\eta}+\sin\varphi JA_{\eta} \in \mathcal A$ is fixed on $Q^n$ and let $\{e_1,\ldots,e_n\}$ be a local orthonormal frame on $M^n$ diagonalizing the associated operators $B$ and $C$. Denote by $\theta_1,\ldots,\theta_n$ the angle functions as defined above. Then $$\label{CSCeq0} \begin{aligned} & \sin(\theta_i-\theta_j) \sin(\theta_i+\theta_j-2\theta_k) (\delta_{k \ell}h(e_i,e_j) + h_{ij}^{\ell} Je_k) \\ & + \sin(\theta_j-\theta_k) \sin(\theta_j+\theta_k-2\theta_i) (\delta_{i \ell}h(e_j,e_k) + h_{jk}^{\ell} Je_i) \\ & + \sin(\theta_k-\theta_i) \sin(\theta_k+\theta_i-2\theta_j) (\delta_{j \ell}h(e_i,e_k) + h_{ik}^{\ell} Je_j) = 0 \end{aligned}$$ for all $i,j,k,\ell = 1,\ldots,n$.
In particular, $$\begin{aligned} \label{CSCeq1} & h_{ii}^k \sin(\theta_i-\theta_k) \sin(\theta_i+\theta_k-2\theta_j) = h_{jj}^k \sin(\theta_j-\theta_k) \sin(\theta_j+\theta_k-2\theta_i), \\ \label{CSCeq2} & h_{ij}^k \sin(\theta_i-\theta_j) \sin(\theta_i+\theta_j-2\theta_k) = 0 \end{aligned}$$ for $i,j,k = 1,\ldots,n$ mutually different, and $$\begin{aligned} \label{CSCeq3} & h_{ij}^k \sin(\theta_i-\theta_j) \sin(\theta_i+\theta_j-2\theta_{\ell}) = 0 \end{aligned}$$ for $i,j,k,\ell = 1,\ldots,n$ mutually different. We start by taking the covariant derivative of the Codazzi equation along a vector field $W$: $$\label{codazzider} (\overline\nabla^2 h)(W,X,Y,Z) - (\overline\nabla^2 h)(W,Y,X,Z) = (\overline\nabla T)(W,X,Y,Z),$$ where $T$ is the $(1,3)$-tensor field taking values in the normal bundle, given by $$\label{eqT} T(X,Y,Z) = g(CY,Z) JBX - g(CX,Z) JBY - g(BY,Z) JCX + g(BX,Z) JCY$$ for all $X$, $Y$ and $Z$ tangent to $M^n$. If we take a cyclic sum of this identity over $W$, $X$ and $Y$, keeping $Z$ fixed, we see that the left hand side vanishes. Indeed, using the Ricci identity, denoting by $R^{\perp}$ the curvature tensor of the normal connection of $M^n$ in $Q^n$, by $R$ the curvature tensor of $M^n$ and by $c$ the constant sectional curvature of $M^n$, we have $$\begin{aligned} \lefteqn{\sum_{W,X,Y}^{\mathrm{cyclic}} \left( (\overline\nabla^2 h)(W,X,Y,Z) - (\overline\nabla^2 h)(W,Y,X,Z) \right) } \\ &=& \sum_{W,X,Y}^{\mathrm{cyclic}} \left( (\overline\nabla^2 h)(W,X,Y,Z) - (\overline\nabla^2 h)(X,W,Y,Z) \right) \\ &=& \sum_{W,X,Y}^{\mathrm{cyclic}} \left( R^\perp(W,X)h(Y,Z) - h(R(W,X)Y,Z) - h(Y,R(W,X)Z) \right) \\ &=& \sum_{W,X,Y}^{\mathrm{cyclic}} \left( -JR(W,X)Jh(Y,Z) - h(R(W,X)Y,Z) - h(Y,R(W,X)Z) \right) \\ &=& -c \sum_{W,X,Y}^{\mathrm{cyclic}} \left( g(X,Jh(Y,Z))JW - g(W,Jh(Y,Z))JX + g(X,Y)h(W,Z) \right. \\ & & \hspace{1.5cm} \left. - g(W,Y)h(X,Z) + g(X,Z)h(Y,W) - g(W,Z) h(Y,X) \right) \\ &=& 0.
\end{aligned}$$ This implies that $$\label{eqnablaT0} \sum_{W,X,Y}^{\mathrm{cyclic}} (\overline\nabla T)(W,X,Y,Z) = 0.$$ From the definition of $T$, we obtain immediately that $$\begin{aligned} (\overline\nabla T)(W,X,Y,Z) &=& g((\nabla_WC)Y,Z)JBX + g(CY,Z)J(\nabla_WB)X \\ && - g((\nabla_WC)X,Z) JBY - g(CX,Z) J(\nabla_WB)Y \\ && - g((\nabla_WB)Y,Z) JCX - g(BY,Z) J(\nabla_WC)X \\ && + g((\nabla_WB)X,Z) JCY + g(BX,Z) J(\nabla_WC)Y, \end{aligned}$$ which, by the expressions for $\nabla B$ and $\nabla C$ above, is equivalent to $$\begin{aligned} (\overline\nabla T)(W,X,Y,Z) &=& -g(Jh(W,BY),Z)JBX - g(BJh(W,Y),Z)JBX \\ && - g(CY,Z)h(W,CX) + g(CY,Z)JCJh(W,X) \\ && + g(Jh(W,BX),Z)JBY + g(BJh(W,X),Z)JBY \\ && + g(CX,Z)h(W,CY) - g(CX,Z)JCJh(W,Y) \\ && - g(Jh(W,CY),Z)JCX - g(CJh(W,Y),Z)JCX \\ && - g(BY,Z)h(W,BX) + g(BY,Z)JBJh(W,X) \\ && + g(Jh(W,CX),Z)JCY + g(CJh(W,X),Z)JCY \\ && + g(BX,Z)h(W,BY) - g(BX,Z)JBJh(W,Y). \end{aligned}$$ Remark that the terms involving $s(W)$ cancel two by two. When taking the cyclic sum over $W$, $X$ and $Y$, the expression simplifies to $$\label{eqnablaT1} \begin{aligned} \sum_{W,X,Y}^{\mathrm{cyclic}} (\overline\nabla T)(W,X,Y,Z) = \sum_{W,X,Y}^{\mathrm{cyclic}} ( & -g(Jh(W,BY),Z)JBX - g(CY,Z)h(W,CX) \\ & + g(Jh(W,BX),Z)JBY + g(CX,Z)h(W,CY) \\ & - g(Jh(W,CY),Z)JCX - g(BY,Z)h(W,BX) \\ & + g(Jh(W,CX),Z)JCY + g(BX,Z)h(W,BY) ). \\ \end{aligned}$$ By combining the vanishing of the cyclic sum with the last displayed expression for $W=e_i$, $X=e_j$, $Y=e_k$ and $Z=e_{\ell}$, we obtain the first identity of the lemma. By taking $i$, $j$ and $k$ mutually different and $\ell=k$ in this identity, we obtain the next two relations up to renaming the indices, and taking $\ell$ different from $i$, $j$ and $k$ yields the last one. \[propCSC1\] Let $f:M^n \to Q^n$ be a minimal Lagrangian immersion such that $M^n$ has constant sectional curvature and choose $A \in \mathcal A$ as in Example \[ex2\]. Then the local angle functions are either all the same or mutually different modulo $\pi$. In the former case, the immersion is totally geodesic and is the Gauss map of a part of the standard embedding $S^n(r) \to S^{n+1}(1)$. Assume first that all angle functions are the same modulo $\pi$.
Then the last part of the proposition follows from Theorem \[theoSameAngles\]. Now consider the case that at least two angle functions are different modulo $\pi$. We have to prove that this implies that *all* the angle functions are mutually different modulo $\pi$. This is trivial for $n=2$, so we assume from now on that $n \geq 3$. Proceeding by contradiction, we assume that $\theta_1 = \cdots = \theta_m \mod \pi$ for some $m \in \{2,\ldots,n-1\}$ and that $\theta_{\ell} \neq \theta_1 \mod \pi$ for all $\ell > m$. *Step 1: If $X,Y \in \mathrm{span}\{e_1,\ldots,e_m\}$ and $X \perp Y$, then $h(X,Y)=0$.* After changing the orthonormal frame $\{e_1,\ldots,e_m\}$ if necessary, we may assume that $X$ is a scalar multiple of $e_1$ and $Y$ is a scalar multiple of $e_2$. It suffices to prove that $h_{12}^{\ell}=0$ for any $\ell \in \{1,\ldots,n\}$. If $\ell \leq m$, putting $i=\ell$, $j=1$ and $k=2$ in the identity of Lemma \[lem5.1\] gives $h_{12}^{\ell}=0$. If $\ell > m$ on the other hand, taking $i=1$, $j=\ell$ and $k=2$ in the identity of Lemma \[lem5.1\] gives $h_{12}^{\ell}=0$. This proves the claim. *Step 2: If $X,Y \in \mathrm{span}\{e_1,\ldots,e_m\}$ and $\|X\|=\|Y\|$, then $h(X,X)=h(Y,Y)$.* This follows immediately from Step 1 by noting that $X+Y \perp X-Y$ and using the bilinearity and the symmetry of $h$. *Step 3: $M^n$ cannot have constant sectional curvature.* By using the formula for $K_{ij}$, Step 1 and Step 2, we obtain that the sectional curvature of the plane spanned by $e_1$ and $e_2$ satisfies $$K_{12} = 2 + g(h(e_1,e_1),h(e_2,e_2)) - g(h(e_1,e_2),h(e_1,e_2)) = 2 + \|h(e_1,e_1)\|^2 \geq 2.$$ On the other hand, if $\ell > m$, then, again by using this formula, the sectional curvature of the plane spanned by $e_1$ and $e_{\ell}$ satisfies $$K_{1 \ell} = 2 \cos^2(\theta_1-\theta_{\ell}) + g(h(e_1,e_1),h(e_{\ell},e_{\ell})) - g(h(e_1,e_{\ell}),h(e_1,e_{\ell})) < 2 + g(h(e_1,e_1),h(e_{\ell},e_{\ell})).$$ Remark that the inequality is strict since $\theta_1 \neq \theta_{\ell} \mod \pi$.
It is now sufficient to prove that there is at least one $\ell_0 > m$ for which $g(h(e_1,e_1),h(e_{\ell_0},e_{\ell_0})) \leq 0$ since, for this particular index $\ell_0$, one has $K_{1 \ell_0} < 2$ and hence $K_{1 \ell_0} \neq K_{12}$. To prove the existence of such an index $\ell_0$, we remark that, due to minimality and Step 2, $$m \, h(e_1,e_1) + \sum_{\ell = m+1}^n h(e_{\ell},e_{\ell}) = 0.$$ Taking the inner product with $h(e_1,e_1)$ yields $$m \, \|h(e_1,e_1)\|^2 + \sum_{\ell = m+1}^n g(h(e_1,e_1),h(e_{\ell},e_{\ell})) = 0,$$ which shows that indeed one of the terms in the sum must be non-positive. This contradiction completes the proof. \[propCSC2\] Let $f:M^n \to Q^n$ be a minimal Lagrangian immersion such that $M^n$ has constant sectional curvature. If there exist three mutually different indices $i$, $j$ and $k$ such that $h_{ij}^k \neq 0$, then $n=3$ and the immersion is part of the Gauss map of a tube around the Veronese surface in $S^4(1)$. Without loss of generality, we assume that $h_{12}^3 \neq 0$. We will prove that the assumption $n \geq 4$ leads to a contradiction. Remark that, since $f$ is not totally geodesic, it follows from Proposition \[propCSC1\] that all the angle functions are different modulo $\pi$. Applying for $(i,j,k)=(1,2,3)$, we obtain $\sin(\theta_1+\theta_2-2\theta_3)=0$ and applying for $(i,j,k,\ell)=(1,2,3,4)$, we obtain $\sin(\theta_1+\theta_2-2\theta_4)=0$. By combining these two equations, we find that $2\theta_3=2\theta_4 \mod \pi$. Again from , but now for $(i,j,k,\ell)=(1,3,2,4)$, we obtain $\sin(\theta_1+\theta_3-2\theta_4)=0$. By combining this with $2\theta_3=2\theta_4 \mod \pi$, we obtain $\sin(\theta_1-\theta_3)=0$, and hence $\theta_1=\theta_3\mod\pi$, which is a contradiction. Hence, we obtain that $n=3$. To prove the second part of the theorem, it suffices, in view of Theorem \[theoCAdim3\] and its proof, to prove that all the angle functions are constant. 
We choose $A \in \mathcal A$ as in Example \[ex2\], such that $\theta_1+\theta_2+\theta_3=0 \mod\pi$. By taking $(i,j,k)=(1,2,3)$, respectively $(i,j,k)=(1,3,2)$, in and using $h_{12}^3 \neq 0$, we obtain $\theta_1+\theta_2-2\theta_3=0 \mod\pi$, respectively $\theta_1+\theta_3-2\theta_2=0 \mod\pi$. Combining all three relations for $\theta_1$, $\theta_2$ and $\theta_3$ implies that all three functions are constant. Classification in dimension $n=2$ --------------------------------- The complex hyperquadric $Q^2$ is isometric to the product of spheres $S^2(1/2) \times S^2(1/2)$, as can be deduced for example from [@CU] or [@TU2015]. Obviously, the almost product structure related to this splitting is different from the non-integrable almost product structures in $\mathcal A$. It was proven in [@CU] that a minimal Lagrangian surface with constant Gaussian curvature in $S^2(1) \times S^2(1)$ must be totally geodesic. In combination with Theorem \[theoTG\], we obtain the following. \[propCSCn=2\] Let $f:M^2 \to Q^2$ be a minimal Lagrangian immersion such that $M^2$ has constant Gaussian curvature. Then the immersion is totally geodesic and $f$ is the Gauss map of the standard embedding $S^2(r) \to S^3(1)$ or $S^1(r_1) \times S^1(r_2) \to S^3(1)$. In the first case, $M^2$ has constant Gaussian curvature $2$ and in the second case, $M^2$ is flat. Classification in dimension $n=3$ --------------------------------- The following theorem gives a complete classification of minimal Lagrangian submanifolds of $Q^3$ with constant sectional curvature. \[propCSCn=3\] Let $f:M^3\to Q^3$ be a minimal Lagrangian immersion such that $M^3$ has constant sectional curvature. Then either $M^3$ has constant sectional curvature $2$ and $f$ is the Gauss map of a part of the standard embedding $S^3(r) \to S^4(1)$, or $M^3$ has constant sectional curvature $1/8$ and $f$ is the Gauss map of a part of a tube around a Veronese surface in $S^4(1)$.
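Before giving the proof, we record a consistency check for the first alternative: if the immersion is totally geodesic and all angle functions are equal modulo $\pi$, then $h=0$ and the Gauss equation used in the proof of Proposition \[propCSC1\] gives, for all $i \neq j$, $$K_{ij} = 2\cos^2(\theta_i-\theta_j) = 2,$$ in accordance with the constant sectional curvature $2$ of the Gauss map of the standard embedding.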
We choose $A \in \mathcal A$ as in Example \[ex2\], then we have that $\theta_1+\theta_2+\theta_3=0 \mod \pi$. It follows from Proposition \[propCSC1\] that the angle functions are either all equal modulo $\pi$ or mutually different modulo $\pi$ and that, in the former case, we obtain the first case in the proposition. Assume from now on that all the angle functions $\theta_1, \theta_2, \theta_3$ are different modulo $\pi$. If $h_{12}^3 \neq 0$, it follows from Proposition \[propCSC2\] that we obtain the Gauss map of a part of a tube around a Veronese surface in $S^4(1)$. We will assume from now on that $h_{12}^3=0$ and prove that no new examples can occur. We denote $$\begin{aligned} x&=\sin{(\theta_1-\theta_2)}\sin{(\theta_1+\theta_2-2\theta_3)},\\ y&=\sin{(\theta_2-\theta_3)}\sin{(\theta_2+\theta_3-2\theta_1)},\\ z&=\sin{(\theta_3-\theta_1)}\sin{(\theta_3+\theta_1-2\theta_2)}. \end{aligned}$$ By using elementary trigonometric identities, we have that $x+y+z = 0$. Moreover, is equivalent to $$\label{dim3eq1} h_{22}^1 x + h_{33}^1 z = 0, \qquad h_{11}^2 x + h_{33}^2 y = 0, \qquad h_{22}^3 y + h_{11}^3 z = 0.$$ We now distinguish three cases. *Case 1: At least two of the local functions $x$, $y$ and $z$ are identically zero.* Since the sum of the three functions vanishes, we know that all three of them are identically zero. Moreover, since the local angle functions $\theta_1$, $\theta_2$ and $\theta_3$ are mutually different modulo $\pi$, it follows that $\theta_1+\theta_2-2\theta_3$, $\theta_2+\theta_3-2\theta_1$ and $\theta_3+\theta_1-2\theta_2$ are all integer multiples of $\pi$. Together with $\theta_1+\theta_2+\theta_3=0\mod\pi$, this implies that the angle functions are all constants that are different modulo $\pi$ and it follows from Theorem \[theoCAdim3\] that $f$ is the Gauss map of a part of a tube around a Veronese surface in $S^4(1)$, which contradicts $h_{12}^3=0$. 
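For completeness, we verify the identity $x+y+z=0$ used above. By the product-to-sum formula $\sin A \sin B = \tfrac{1}{2}\big(\cos(A-B)-\cos(A+B)\big)$, we have $$\begin{aligned} x&=\tfrac{1}{2}\big(\cos(2(\theta_3-\theta_2))-\cos(2(\theta_1-\theta_3))\big),\\ y&=\tfrac{1}{2}\big(\cos(2(\theta_1-\theta_3))-\cos(2(\theta_2-\theta_1))\big),\\ z&=\tfrac{1}{2}\big(\cos(2(\theta_2-\theta_1))-\cos(2(\theta_3-\theta_2))\big), \end{aligned}$$ so the terms cancel pairwise when the three expressions are summed.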
*Case 2: Exactly one of the local functions $x$, $y$ and $z$ is identically zero.* Without loss of generality, we may assume that $x=0$. Remark that $y=-z \neq 0$. It follows from that $h_{33}^1=h_{33}^2=0$ and $h_{11}^3=h_{22}^3$. Since $x=0$, we have $\theta_1+\theta_2-2\theta_3=0 \mod\pi$ and differentiating this equality, using , yields $h_{11}^i + h_{22}^i - 2h_{33}^i = 0$ for all $i\in\{1,2,3\}$. On the other hand, the minimality condition implies $h_{11}^i + h_{22}^i + h_{33}^i = 0$, so we obtain $h_{11}^i + h_{22}^i=0$ and $h_{33}^i=0$ for all $i\in\{1,2,3\}$. Since we already have $h_{12}^3 = h_{33}^1 = h_{33}^2 = 0$ and $h_{11}^3 = h_{22}^3$, the only possibly non-zero components of the second fundamental form are $h_{11}^1 = -h_{22}^1$ and $h_{11}^2 = -h_{22}^2$. By , the sectional curvatures of the planes spanned by $\{e_1,e_2\}$, $\{e_1,e_3\}$ and $\{e_2,e_3\}$ are $K_{12} = 2\cos^2(\theta_1-\theta_2)-2(h_{11}^1)^2-2(h_{11}^2)^2$, $K_{13} = 2\cos^2(\theta_1-\theta_3)$ and $K_{23} = 2\cos^2(\theta_2-\theta_3)$. It follows from the latter two equalities that $\theta_1-\theta_3$ and $\theta_2-\theta_3$ are constant, and hence, by taking derivatives using and $h_{33}^i=0$, that $h_{11}^i = 0$ and $h_{22}^i = 0$. We conclude that the submanifold is totally geodesic, but by comparing $K_{12}$ and $K_{13}$, we then see that $y=0$, which is a contradiction. *Case 3: None of the local functions $x$, $y$ and $z$ are identically zero.* We work on an open subset of $M^n$ where none of the functions vanish. It follows from and the minimality condition that there are local functions $\alpha_1$, $\alpha_2$ and $\alpha_3$ on this subset such that $$\label{dim3eq2} \begin{aligned} & h^1_{22} = \alpha_1 z, & h^1_{33} = -\alpha_1 x, && h^1_{11} = \alpha_1(x-z), \\ & h^2_{33} = \alpha_2 x, & h^2_{11} = -\alpha_2 y, && h^2_{22} = \alpha_2 (y-x),\\ & h^3_{11} = \alpha_3 y, & h^3_{22} = -\alpha_3 z, && h^3_{33} = \alpha_3(z-y).
\\ \end{aligned}$$ From these equations and , we obtain expressions for the derivatives of the components of $h$ in terms of the $\alpha_1$, $\alpha_2$, $\alpha_3$ and their derivatives. Substituting these into the Codazzi equation for $(X,Y,Z)=(e_1,e_2,e_3)$ and using $\theta_1+\theta_2+\theta_3=0 \mod \pi$ to eliminate $\theta_3$, we obtain $$\begin{aligned} & e_1(\alpha_3) = \frac{1}{4} \, \alpha_1 \, \alpha_3 \csc(2\theta_1+\theta_2) \Big( 5\cos(2\theta_1+\theta_2) - 7\cos(3\theta_2) + 2\cos(4\theta_1-\theta_2) \\ & \hspace{8cm} - \cos(4\theta_1+5\theta_2) + \cos(6\theta_1+3\theta_2) \Big), \\ & e_2(\alpha_3) = \frac{1}{4} \, \alpha_2 \, \alpha_3 \csc(\theta_1+2\theta_2) \Big( 5\cos(\theta_1+2\theta_2) - 7 \cos(3\theta_1)+ 2\cos(\theta_1-4\theta_2) \\ & \hspace{8cm} - \cos(5\theta_1+4\theta_2) + \cos(3\theta_1+6\theta_2) \Big). \end{aligned}$$ Similarly, for $(X,Y,Z)=(e_2,e_3,e_1)$ we obtain $$\begin{aligned} & e_2(\alpha_1) = -\frac{1}{4} \alpha_1 \, \alpha_2 \csc(\theta_1-\theta_2) \Big( 5\cos(\theta_1-\theta_2) - 7 \cos(3\theta_1+3\theta_2)+ 2 \cos(\theta_1+5\theta_2) \\ & \hspace{8cm} - \cos(5\theta_1+\theta_2)+ \cos(3\theta_1-3\theta_2) \Big),\\ & e_3(\alpha_1)= -\frac{1}{4} \alpha_1 \, \alpha_3 \csc(2\theta_1+\theta_2) \Big( 5\cos(2\theta_1+\theta_2) -7\cos(3\theta_2) +2\cos(4\theta_1+5\theta_2) \\ & \hspace{8cm} - \cos(4\theta_1-\theta_2) + \cos(6\theta_1+3\theta_2) \Big) \end{aligned}$$ and for $(X,Y,Z)=(e_3,e_1,e_2)$ we obtain $$\begin{aligned} & e_1(\alpha_2) = \frac{1}{4} \alpha_1 \, \alpha_2 \csc(\theta_1-\theta_2) \Big( 5\cos(\theta_1-\theta_2) - 7 \cos(3\theta_1+3\theta_2) + 2 \cos(5\theta_1+\theta_2) \\ & \hspace{8cm} - \cos(\theta_1+5\theta_2) + \cos(3\theta_1-3\theta_2)\Big), \\ & e_3(\alpha_2) = -\frac{1}{4} \alpha_2 \, \alpha_3 \csc(\theta_1+2\theta_2) \Big( 5\cos(\theta_1+2\theta_2) - 7\cos(3\theta_1)+ 2 \cos(5\theta_1+4\theta_2)\\ & \hspace{8cm} - \cos(\theta_1-4\theta_2) + \cos(3\theta_1+6\theta_2) \Big). 
\end{aligned}$$ Note that we are in the case $xyz \neq 0$. Using the expressions above for the derivatives of $\alpha_1$, $\alpha_2$ and $\alpha_3$, the $Je_1$-component of the Codazzi equation for $(X,Y,Z)=(e_2,e_1,e_1)$ yields $\alpha_1\alpha_2=0$, the $Je_2$-component of the Codazzi equation for $(X,Y,Z)=(e_3,e_2,e_2)$ yields $\alpha_2\alpha_3=0$ and the $Je_3$-component of the Codazzi equation for $(X,Y,Z)=(e_1,e_3,e_3)$ yields $\alpha_1\alpha_3=0$. This implies that at least two of the functions $\{\alpha_1,\alpha_2,\alpha_3\}$ vanish identically. Because of the symmetry of the problem, we may assume without loss of generality that $\alpha_1=\alpha_2=0$. The $Je_2$-component of the Codazzi equation for $(X,Y,Z)=(e_1,e_2,e_1)$ then gives $$\label{eqn=3.1} \alpha_3^2 = -2\csc(3\theta_1)\csc(3\theta_2)\cos(\theta_1-\theta_2),$$ whereas the $Je_1$-component of the Codazzi equation for $(X,Y,Z)=(e_3,e_1,e_3)$ gives $$\label{eqn=3.2} \begin{aligned} e_3(\alpha_3)= & \frac{1}{16} \csc(3\theta_1) \csc(\theta_1+2\theta_2) \csc(2\theta_1+\theta_2) [-32 \sin^2(2\theta_1+\theta_2)\cos(2\theta_1+\theta_2) \\ & - \alpha_3^2[15\cos(2\theta_1+\theta_2) - \cos(8\theta_1+7\theta_2) + \cos(8\theta_1+\theta_2) + 4 \cos(6\theta_1+3\theta_2) \\ & - \cos(10\theta_1+5\theta_2) + 4 \cos(6\theta_1-3\theta_2) - 16 \cos(4\theta_1-\theta_2) + \cos(2\theta_1-5\theta_2) \\ & - 8 \cos(3\theta_2) + \cos(2\theta_1+7\theta_2)]]. \end{aligned}$$ Substituting and into the Codazzi equation for $(X,Y,Z)=(e_3,e_2,e_3)$ gives $$\label{5.13} -5\cos(\theta_1-\theta_2) + 2\cos(3(\theta_1-\theta_2)) + (1+2\cos(2(\theta_1-\theta_2)))\cos(3(\theta_1+\theta_2))=0.$$ On the other hand, the difference between the Gauss equations for $(X,Y,Z,W)=(e_1,e_2,e_2,e_1)$ and $(X,Y,Z,W)=(e_1,e_3,e_3,e_1)$ gives the following equation for $\alpha_3$.
$$\label{5.14} 1+\alpha_3^2\big(\cos(2(\theta_1-\theta_2))-\cos(\theta_1-\theta_2)\cos(3(\theta_1+\theta_2))\big)=0.$$ Substituting into , we obtain $$\label{5.15} -2\cos(\theta_1-\theta_2) -\cos(3(\theta_1-\theta_2)) + (1+2\cos(2(\theta_1-\theta_2)))\cos(3(\theta_1+\theta_2))=0.$$ By combining and , we obtain that all angle functions are constant. Since they are mutually different modulo $\pi$, it follows from Theorem \[theoCAdim3\] that $f$, restricted to the open subset of $M^n$ on which we are working, is the Gauss map of a part of a tube around a Veronese surface in $S^4(1)$, which contradicts $h_{12}^3=0$. Classification in dimension $n=4$ --------------------------------- The following proposition shows that, for $n=4$, no new examples of minimal Lagrangian submanifolds of $Q^4$ with constant sectional curvature occur. \[propCSCn=4\] Let $f:M^4 \to Q^4$ be a minimal Lagrangian immersion such that $M^4$ has constant sectional curvature. Then $M^4$ has constant sectional curvature $2$ and $f$ is the Gauss map of the standard embedding $S^4(r) \to S^5(1)$. Choose $A \in \mathcal A$ as in Example \[ex2\]. By Theorem \[theoTG\], it suffices to show that $f$ is a totally geodesic immersion. We know from Proposition \[propCSC2\] that $h_{ij}^k=0$ for all mutually different indices $i$, $j$ and $k$, so we only have to show that $h^i_{jj}=0$ for all $i,j \in \{1,2,3,4\}$. We will proceed by contradiction and distinguish three cases. *Case 1: There are at least three different indices $i$ for which $h^i_{11}$, $h^i_{22}$, $h^i_{33}$ and $h^i_{44}$ are not all zero.* Without loss of generality, we may assume that these three indices are $1$, $2$ and $3$. Remark that if $h^i_{ii} \neq 0$, there exists also an index $j \neq i$ such that $h^i_{jj} \neq 0$ due to minimality.
For every $i \in \{1,2,3\}$ we consider the following system of equations coming from : $$\label{eqn=4.1} \left\{ \begin{array}{l} \sin(\theta_j-\theta_i) \sin(\theta_j+\theta_i-2\theta_k) h_{jj}^i - \sin(\theta_k-\theta_i) \sin(\theta_k+\theta_i-2\theta_j) h_{kk}^i = 0, \\ \sin(\theta_k-\theta_i) \sin(\theta_k+\theta_i-2\theta_{\ell}) h_{kk}^i - \sin(\theta_{\ell}-\theta_i) \sin(\theta_{\ell}+\theta_i-2\theta_k) h_{\ell\ell}^i = 0, \\ \sin(\theta_{\ell}-\theta_i) \sin(\theta_{\ell}+\theta_i-2\theta_j) h_{\ell\ell}^i - \sin(\theta_j-\theta_i) \sin(\theta_j+\theta_i-2\theta_{\ell}) h_{jj}^i = 0, \end{array} \right.$$ where $\{j,k,\ell\}=\{1,2,3,4\}\setminus\{i\}$. By our assumption, the determinant of this system of linear equations in $h^i_{jj}$, $h^i_{kk}$ and $h^i_{\ell\ell}$ must vanish. A straightforward computation shows that this determinant is $$\begin{gathered} \label{det} 2\sin(\theta_j-\theta_i)\sin(\theta_k-\theta_i)\sin(\theta_{\ell}-\theta_i)\sin(\theta_k-\theta_j)\sin(\theta_{\ell}-\theta_j)\sin(\theta_{\ell}-\theta_k)\\ (\cos(\theta_i+\theta_j-\theta_k-\theta_{\ell})+\cos(\theta_i-\theta_j+\theta_k-\theta_{\ell})+\cos(\theta_i-\theta_j-\theta_k+\theta_{\ell})). 
\end{gathered}$$ Since all angle functions are different modulo $\pi$ by Proposition \[propCSC1\], we obtain $$\label{eqn=4.2} \cos(\theta_i+\theta_j-\theta_k-\theta_{\ell})+\cos(\theta_i-\theta_j+\theta_k-\theta_{\ell})+\cos(\theta_i-\theta_j-\theta_k+\theta_{\ell}) = 0.$$ Taking the derivative of in the direction of $e_i$, using , Corollary \[cor1\], the minimality condition and some elementary trigonometric identities, yields $$\label{eqn=4.3} \sin(\theta_i-\theta_j)\cos(\theta_k-\theta_\ell)h_{jj}^i + \sin(\theta_i-\theta_k)\cos(\theta_j-\theta_{\ell})h_{kk}^i + \sin(\theta_i-\theta_{\ell})\cos(\theta_j-\theta_k)h_{\ell\ell}^i = 0.$$ This is another linear equation in $h_{jj}^i$, $h_{kk}^i$ and $h_{\ell\ell}^i$ and the determinant of the system formed by any two equations from and must be zero. We will denote the determinant of and the first two equations from (which both involve $h_{kk}^i$) by $\Delta^i_{kk}$. A straightforward computation shows that the equations $$\begin{aligned} &(\Delta^1_{33}+\Delta^1_{44})/(\sin{(\theta_1-\theta_3)}\sin{(\theta_1-\theta_4)})+(\Delta^2_{33}+\Delta^2_{44})/(\sin{(\theta_2-\theta_3)}\sin{(\theta_2-\theta_4)})=0,\\ &(\Delta^2_{11}+\Delta^2_{44})/(\sin{(\theta_2-\theta_1)}\sin{(\theta_2-\theta_4)})+(\Delta^3_{11}+\Delta^3_{44})/(\sin{(\theta_3-\theta_1)}\sin{(\theta_3-\theta_4)})=0,\\ &(\Delta^3_{22}+\Delta^3_{44})/(\sin{(\theta_3-\theta_2)}\sin{(\theta_3-\theta_4)})+(\Delta^1_{22}+\Delta^1_{44})/(\sin{(\theta_1-\theta_2)}\sin{(\theta_1-\theta_4)})=0, \end{aligned}$$ are equivalent to $$\begin{aligned} & \sin(\theta_1+\theta_2-\theta_3-\theta_4)(\cos(\theta_4-\theta_3)+\cos(3(\theta_4-\theta_3))) = 0, \nonumber \\ & \sin(\theta_2+\theta_3-\theta_1-\theta_4)(\cos(\theta_4-\theta_1)+\cos(3(\theta_4-\theta_1))) = 0, \label{eqn=4.4} \\ & \sin(\theta_3+\theta_1-\theta_2-\theta_4)(\cos(\theta_4-\theta_2)+\cos(3(\theta_4-\theta_2))) = 0, \nonumber \end{aligned}$$ respectively. We distinguish two subcases. 
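Before treating the subcases, note that the second factors appearing in the equations above can be analyzed explicitly: writing $u$ for the relevant difference of angle functions, the sum-to-product formula gives $$\cos u + \cos 3u = 2\cos u \cos 2u,$$ which vanishes if and only if $u = \pi/2 \mod \pi$ or $u = \pi/4 \mod \pi/2$, that is, if and only if $u = k\pi/4$ for some $k \in \mathbb Z \setminus 4\mathbb Z$.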
*Case 1.1: $\sin(\theta_1+\theta_2-\theta_3-\theta_4)\sin(\theta_2+\theta_3-\theta_1-\theta_4)\sin(\theta_3+\theta_1-\theta_2-\theta_4)=0$.* Assume that $\sin(\theta_1+\theta_2-\theta_3-\theta_4)=0$; the other two cases are analogous. Together with $\theta_1+\theta_2+\theta_3+\theta_4=0 \mod \pi$, we obtain $\theta_1+\theta_2=0 \mod \pi$ and $\theta_3+\theta_4=0 \mod \pi$, so that is equivalent to $$\label{eqn=4.5} 1 + 2 \cos(2\theta_1)\cos(2\theta_3) = 0.$$ Differentiating $\theta_3+\theta_4=0 \mod \pi$ in the direction of $e_1$ gives $h^1_{33}+h^1_{44}=0$, so it follows from the equation involving $h^1_{33}$ and $h^1_{44}$ in for $i=1$ that $(\cos(2\theta_1)\cos(2\theta_3)-\cos(4\theta_3))h^1_{33}=0$, which, in combination with , yields $(1 + 2\cos(4\theta_3))h^1_{33}=0$. If $1 + 2\cos(4\theta_3)=0$, it follows from , $\theta_1+\theta_2=0 \mod \pi$ and $\theta_3+\theta_4=0 \mod \pi$ that all local angle functions are constant. But from we then obtain that all $h^i_{jj}$ are zero, which contradicts the assumption that we made for Case 1. If, on the other hand, $h^1_{33}=0$, and hence also $h^1_{44}=0$, the assumption for Case 1 yields $h^1_{22} \neq 0$ and the equations involving $h^1_{22}$ in for $i=1$ become $\sin(\theta_1+\theta_2-2\theta_3) = \sin(\theta_1+\theta_2-2\theta_4) = 0$. Since $\theta_1+\theta_2=0 \mod \pi$ and $\theta_3+\theta_4=0 \mod \pi$, both equations reduce to $\sin(2\theta_3)=0$. Again, from , $\theta_1+\theta_2=0 \mod \pi$ and $\theta_3+\theta_4=0 \mod \pi$, we obtain that all local angle functions are constant, which is a contradiction. *Case 1.2: $\sin(\theta_1+\theta_2-\theta_3-\theta_4)\sin(\theta_2+\theta_3-\theta_1-\theta_4)\sin(\theta_3+\theta_1-\theta_2-\theta_4) \neq 0$.* It follows from that there exist $k_1,k_2,k_3 \in \mathbb Z \setminus 4\mathbb Z$ such that $\theta_1 = \theta_4 + k_1\pi/4$, $\theta_2 = \theta_4 + k_2\pi/4$ and $\theta_3 = \theta_4 + k_3\pi/4$.
By combining this with $\theta_1+\theta_2+\theta_3+\theta_4=0 \mod\pi$ we obtain that all local angle functions are constant, which is a contradiction as before. *Case 2: There are exactly two different indices $i$ for which $h^i_{11}$, $h^i_{22}$, $h^i_{33}$ and $h^i_{44}$ are not all zero.* Without loss of generality, we may assume that these indices are $1$ and $2$. As before, we can then obtain the first equation of . If $\sin(\theta_1+\theta_2-\theta_3-\theta_4)=0$, we proceed as in Case 1.1 to obtain a contradiction. If $\cos(\theta_4-\theta_3)+\cos(3(\theta_4-\theta_3))=0$, then $\theta_4 = \theta_3 + k\pi/4$ for some $k \in \mathbb Z \setminus 4\mathbb Z$. Differentiating this equation in the direction of $e_i$ gives $h^i_{33}=h^i_{44}$ for all $i\in\{1,2,3,4\}$ and the equations involving $h^1_{33}$ and $h^1_{44}$ in for $i=1$, respectively $h^2_{33}$ and $h^2_{44}$ in for $i=2$, reduce to $$\label{eqn=4.6} \begin{aligned} & \sin(2\theta_3-2\theta_1+k\pi/4)h^1_{33}=0, \\ & \sin(2\theta_3-2\theta_2+k\pi/4)h^2_{33}=0. \end{aligned}$$ We again distinguish two subcases. *Case 2.1: $\sin(2\theta_3-2\theta_1+k\pi/4)\sin(2\theta_3-2\theta_2+k\pi/4)=0$.* Assume that $\sin(2\theta_3-2\theta_1+k\pi/4)=0$; the other case is analogous. Then $\theta_3=\theta_1-k\pi/8+\ell\pi/2$, and hence $\theta_4=\theta_1+k\pi/8+\ell\pi/2$, for some $\ell \in \mathbb Z$ and $k \in \mathbb Z \setminus 4\mathbb Z$. Equation is equivalent to $\cos(\theta_2-\theta_1)=0$, so that $\theta_2=\theta_1+\pi/2+m\pi$ for some $m \in \mathbb Z$. This means that all local angle functions can be written as $\theta_1$ plus a constant and it follows from $\theta_1+\theta_2+\theta_3+\theta_4=0 \mod\pi$ that they are all constant, which gives the desired contradiction. *Case 2.2: $\sin(2\theta_3-2\theta_1+k\pi/4)\sin(2\theta_3-2\theta_2+k\pi/4) \neq 0$.* It then follows from that $h^1_{33}=h^2_{33}=0$ and hence also $h^1_{44}=h^2_{44}=0$. By the assumption that we made for Case 2, $h^1_{22} \neq 0$.
From the equations involving $h^1_{22}$ in for $i=1$ we have $\sin(\theta_1+\theta_2-2\theta_3)=\sin(\theta_1+\theta_2-2\theta_4)=0$. If $k$ is odd, this is a contradiction. If $k$ is even, we can conclude from $\sin(\theta_1+\theta_2-2\theta_3)=0$, $\theta_4=\theta_3+k\pi/4$ and $\theta_1+\theta_2+\theta_3+\theta_4=0 \mod\pi$ that $\theta_1+\theta_2$, $\theta_3$ and $\theta_4$ are constant. To finish the proof in this case, we compute the sectional curvature of the plane spanned by $e_1$ and $e_3$ using . Since $h^1_{33}=h^2_{33}=h^3_{11}=h^4_{11}=h^3_{12}=h^4_{13}=0$, we obtain that $K_{13}=2\cos^2(\theta_3-\theta_1)$ is constant. Again, we see that all the local angle functions are constant, which is a contradiction. *Case 3: There is exactly one index $i$ for which $h^i_{11}$, $h^i_{22}$, $h^i_{33}$ and $h^i_{44}$ are not all zero.* Without loss of generality, we may assume that $i=1$ and hence $h^2_{jj}=h^3_{jj}=h^4_{jj}=0$ for all $j$. Denote by $c$ the constant sectional curvature of $M^n$. A straightforward computation of the sectional curvatures using the definition of the curvature tensor and gives $$\begin{aligned} & c = K_{23} = -h^1_{22} h^1_{33} \cot(\theta_2-\theta_1) \cot(\theta_3-\theta_1), \nonumber \\ & c = K_{24} = -h^1_{22} h^1_{44} \cot(\theta_2-\theta_1) \cot(\theta_4-\theta_1), \label{eqn=4.7} \\ & c = K_{34} = -h^1_{33} h^1_{44} \cot(\theta_3-\theta_1) \cot(\theta_4-\theta_1). \nonumber \end{aligned}$$ On the other hand, from , it follows that $$\begin{aligned} & c = K_{23} = 2 \cos^2(\theta_3-\theta_2) + h^1_{22}h^1_{33}, \nonumber \\ & c = K_{24} = 2 \cos^2(\theta_4-\theta_2) + h^1_{22}h^1_{44}, \label{eqn=4.8} \\ & c = K_{34} = 2 \cos^2(\theta_4-\theta_3) + h^1_{33}h^1_{44}. 
\nonumber \end{aligned}$$ Remark that, for a fixed $i \in \{1,2,3,4\}$, at most one of the functions $\cos(\theta_i-\theta_j)$, with $j \in \{1,2,3,4\} \setminus \{i\}$, can be zero since all the local angle functions are mutually different modulo $\pi$ by Proposition \[propCSC1\]. In particular, this implies that $c \neq 0$. Indeed, for $c=0$, would imply that $h^1_{jj}=0$ for at least one $j \in \{2,3,4\}$ and then would imply that at least two of the functions $\cos(\theta_3-\theta_2)$, $\cos(\theta_4-\theta_3)$ and $\cos(\theta_4-\theta_2)$ are zero, which is impossible. As $c\neq0$, we obtain from that $d:= h^1_{22} \cot(\theta_2-\theta_1) = h^1_{33} \cot(\theta_3-\theta_1) = h^1_{44} \cot(\theta_4-\theta_1)$ is a constant satisfying $c=-d^2$. Without loss of generality, we will assume from now on that $\cos(\theta_3-\theta_2)\cos(\theta_4-\theta_2) \neq 0$. From $c \neq 0$ and , we have that none of the functions $\cos(\theta_j-\theta_1)$ with $j\in\{2,3,4\}$ is zero. When putting $h^1_{jj} = d/\cot(\theta_j-\theta_1)$ in the first two equations of , using the assumption $\cos(\theta_3-\theta_2)\cos(\theta_4-\theta_2) \neq 0$, we obtain $$\begin{aligned} c = 2 \cos(\theta_3-\theta_2) \cos(\theta_2-\theta_1) \cos(\theta_3-\theta_1), \\ c = 2 \cos(\theta_4-\theta_2) \cos(\theta_2-\theta_1) \cos(\theta_4-\theta_1), \end{aligned}$$ from which we get that $\cos(\theta_3-\theta_2)\cos(\theta_3-\theta_1) = \cos(\theta_4-\theta_2)\cos(\theta_4-\theta_1)$ or, equivalently, $\sin(\theta_1+\theta_2-\theta_3-\theta_4)=0$. We can now proceed as in Case 1.1 to obtain a contradiction. Classification in dimension $n\geq 5$ ------------------------------------- The following proposition shows that, also for $n \geq 5$, no new examples of minimal Lagrangian submanifolds of $Q^n$ with constant sectional curvature occur. \[propCSCn&gt;4\] For $n \geq 5$, let $f:M^n \to Q^n$ be a minimal Lagrangian immersion such that $M^n$ has constant sectional curvature. 
Then $M^n$ has constant sectional curvature $2$ and $f$ is the Gauss map of the standard embedding $S^n(r) \to S^{n+1}(1)$. Choose $A \in \mathcal A$ as in Example \[ex2\]. By Proposition \[propCSC1\], we know that the angle functions are either all the same modulo $\pi$ or all mutually different modulo $\pi$ and that in the former case, the immersion is totally geodesic. Hence, assume that all angle functions are mutually different modulo $\pi$. We know from Proposition \[propCSC2\] that all components $h_{ij}^k$ of the second fundamental form, where $i$, $j$ and $k$ are mutually different, vanish. Hence, it suffices to show that the components of the second fundamental form for which at least two indices are the same also vanish. *Step 1: For any fixed $i$, if there exists a $j \neq i$ such that $h_{jj}^i=0$, then $h_{jj}^i=0$ for all $j$.* Without loss of generality, we assume that $i=1$ and $j=2$, so $h_{22}^1=0$. We will prove the claim by contradiction, so suppose that $h_{jj}^1 \neq 0$ for some $j$. Remark that if $h_{11}^1 \neq 0$, there will be a $j \neq 1,2$ for which $h_{jj}^1 \neq 0$. Indeed, this follows from the fact that $h_{11}^1 + h_{22}^1 + h_{33}^1 + \cdots + h_{nn}^1 = 0$ due to minimality. For simplicity, we can hence assume that $h_{33}^1 \neq 0$. From for $i \geq 3$, $j=2$ and $k=1$, we obtain $h_{ii}^1 \sin(\theta_i+\theta_1-2\theta_2) = 0$. For $i=3$, this implies that $\theta_3+\theta_1-2\theta_2 = 0 \mod\pi$ and for $i \geq 4$, this implies that $h_{44}^1 = h_{55}^1 = \cdots = h_{nn}^1 = 0$, since all the angle functions are mutually different modulo $\pi$. A similar argument, but now for $j=4$, respectively $j=5$, instead of $j=2$, yields $\theta_3+\theta_1-2\theta_4 = 0 \mod\pi$, respectively $\theta_3+\theta_1-2\theta_5 = 0 \mod\pi$. Summarizing the relations between the angle functions, we have that $2\theta_2$, $2\theta_4$ and $2\theta_5$ are all equal to $\theta_1+\theta_3$ modulo $\pi$.
This means that at least two of the angle functions $\theta_2$, $\theta_4$ and $\theta_5$ must be equal modulo $\pi$, which is a contradiction. *Step 2: For any fixed $i$, there exists a $j \neq i$ such that $h_{jj}^i=0$.* It follows from that for any choice of mutually different indices $j$, $k$ and $\ell$, all different from $i$, the variables $h_{jj}^i$, $h_{kk}^i$ and $h_{\ell\ell}^i$ satisfy the system . It suffices to show that for some choice of $j$, $k$ and $\ell$, the determinant of this system is nonzero. Indeed, this will imply that for that particular choice of $j$, $k$ and $\ell$, one has $h_{jj}^i = h_{kk}^i = h_{\ell\ell}^i = 0$. The determinant of the system is given by and it is clear that the only factor which could possibly be zero is the last one. We assume that this factor is zero for all choices of mutually different $j$, $k$ and $\ell$, different from $i$, and we will derive a contradiction. Define the function $f_1(\theta):=\cos(\theta_i+\theta_j-\theta_k-\theta)+\cos(\theta_i-\theta_j+\theta_k-\theta)+\cos(\theta_i-\theta_j-\theta_k+\theta)$. Since it satisfies the differential equation $f_1''+f_1=0$, we can write it as $f_1(\theta)=a_1\sin(\theta+b_1)$ for some constants $a_1$ and $b_1$ depending on $\theta_i$, $\theta_j$ and $\theta_k$. By our assumption, $f_1(\theta)=0$ for at least two values of $\theta$ which are different modulo $\pi$. This implies that $a_1=0$ and hence that $f_1=0$ identically. In particular, $f_1(0)=\cos(\theta_i+\theta_j-\theta_k)+\cos(\theta_i-\theta_j+\theta_k)+\cos(\theta_i-\theta_j-\theta_k)=0$ for all mutually different $j$ and $k$, different from $i$. We can basically repeat the same argument by defining the function $f_2(\theta):=\cos(\theta_i+\theta_j-\theta)+\cos(\theta_i-\theta_j+\theta)+\cos(\theta_i-\theta_j-\theta)$. Since $f_2''+f_2=0$, we have $f_2(\theta)=a_2\sin(\theta+b_2)$ for some constants $a_2$ and $b_2$ depending on $\theta_i$ and $\theta_j$.
Since $f_2(\theta)=0$ for at least two values of $\theta$ which are different modulo $\pi$, we conclude that $a_2=0$ and hence $f_2=0$ identically. Since $f_2(\theta)=(\cos(\theta_i+\theta_j)+2\cos(\theta_i-\theta_j))\cos\theta+\sin(\theta_i+\theta_j)\sin\theta$, we obtain that $\sin(\theta_i+\theta_j)=0$ for all $j$ different from $i$ modulo $\pi$. This implies that all angles $\theta_j$, with $j \neq i$, are equal modulo $\pi$, a contradiction. Proof of Theorem \[theoCSCconclusion\] -------------------------------------- Combining the results in Sections 5.2-5.5, we obtain the classification result in Theorem \[theoCSCconclusion\]. Minimal Lagrangian submanifolds of $Q^n$ with $n-1$ equal angle functions ========================================================================= In this section, we give the proof of Theorem \[theoAllButOneSameAngles\], which is a classification theorem for minimal Lagrangian submanifolds of $Q^n$ with $n-1$ equal angle functions. We use the same notations as in Section 3. Case (i) and Case (ii) follow immediately from Corollary \[cor2\]. In Case (iii), we have that $\alpha$ is not a constant modulo $\pi$. We first show that $M^n$ must be a warped product $I\times_\rho S^{n-1}(1)$ with $\rho(\alpha)=|c_1(\sin n\alpha)^{-\frac{1}{n}}|$ for some positive constant $c_1$, and the angle function $\alpha$ satisfies the first order ordinary differential equation . First, as $\theta_1=(n-1)\alpha \mod\pi$ and $\theta_2=\cdots=\theta_n=-\alpha \mod\pi$, we obtain that the one-form $s$ vanishes on $TM^n$ from Corollary \[cor1\]. It follows from that $h_{ij}^k=0$ for any $i$ and any mutually different $j,k\in\{2,\ldots,n\}$. Using , for any $i\in\{2,\ldots,n\}$, as $n\geq 3$, we have that $h_{ii}^i=e_i(\theta_i)=e_i(\theta_k)=h_{kk}^i=0$ and $h_{11}^i=e_i(\theta_1)=-(n-1)e_i(\theta_k)=-(n-1)h_{kk}^i=0$ for any $k\in\{2,\ldots,n\}$, different from $i$. 
Therefore, we obtain that the only non-zero components of the second fundamental form are $h_{11}^1=(n-1)e_1(\alpha)$ and $h_{22}^1=\cdots=h_{nn}^1=-e_1(\alpha)$, and that $e_k(\alpha)=0$ for any $k\in\{2,\ldots,n\}$. By applying again, we obtain $g(\nabla_{e_1}e_1,e_k)=\omega_1^k(e_1)=0$ for any $k\in\{2,\ldots,n\}$, which means that the distribution spanned by $e_1$ is autoparallel. We also have $g(\nabla_{e_k}e_j,e_1)=\omega_j^1(e_k)=-\cot{(n\alpha)}h_{jk}^1=\cot{(n\alpha)}e_1(\alpha)\delta_{jk}$ for any $j,k\in\{2,\ldots,n\}$, and since $e_k(\alpha)=0$ for any $k\in\{2,\ldots,n\}$, we obtain that the distribution spanned by $\{e_2,\ldots,e_n\}$ is spherical. Hence, by applying a theorem of Hiepko [@Hiepko] (see also a general result in [@Nolker]), we conclude that $M^n$ is a warped product $I\times_\rho N^{n-1}$ and the warping function $\rho$ satisfies $\frac{e_1(\rho)}{\rho}=-\cot{(n\alpha)}e_1(\alpha)$. Hence, $\rho=|c_1(\sin n\alpha)^{-\frac{1}{n}}|$ for some positive constant $c_1$. We can further show that $N^{n-1}$ has positive constant sectional curvature. In fact, the sectional curvature $K^N$ of the plane spanned by $e_j$ and $e_k$, for mutually different $j,k\in\{2,\ldots,n\}$, can be calculated as follows: $$\label{6.4} \begin{aligned} K^N_{jk}&=\rho^2(K_{jk}+(\frac{e_1(\rho)}{\rho})^2)\\ &=\rho^2(2\cos^2{(\theta_j-\theta_k)}+e_1(\theta_j)e_1(\theta_k)+(\frac{e_1(\rho)}{\rho})^2)\\ &=\rho^2(2+e_1^2(\alpha)+(-\cot{(n\alpha)}e_1(\alpha))^2)\\ &=(c_1(\sin (n\alpha))^{-\frac{1}{n}})^2(2+e_1^2(\alpha)(\sin{n\alpha})^{-2}), \end{aligned}$$ where we used in the second equality.
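We briefly indicate how the expression for the warping function above is obtained: along an integral curve of $e_1$, the relation $\frac{e_1(\rho)}{\rho}=-\cot{(n\alpha)}e_1(\alpha)$ integrates to $$\log\rho = -\int\cot(n\alpha)\,d\alpha = -\frac{1}{n}\log|\sin(n\alpha)| + \mathrm{const},$$ which yields $\rho=|c_1(\sin n\alpha)^{-\frac{1}{n}}|$ for some positive constant $c_1$.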
For any $k\in\{2,\ldots,n\}$, by considering the $Je_k$-component of the Codazzi equation for $(X,Y,Z)=(e_k,e_1,e_1)$, we get that $$\label{6.5} e_1(e_1(\alpha))-(n+1)\cot{(n\alpha)}(e_1(\alpha))^2-\sin{(2n\alpha)}=0.$$ We take the derivative of $K^N_{jk}$ with respect to $e_1$ and obtain that $$e_1(K^N_{jk})=2c_1^2(\sin (n\alpha))^{-\frac{2}{n}-2}e_1(\alpha)\big(e_1(e_1(\alpha))-(n+1)\cot{(n\alpha)}(e_1(\alpha))^2-\sin{(2n\alpha)}\big)=0,$$ which means that $N^{n-1}$ has constant sectional curvature $c_0$. From the expression , we know that $c_0$ is a positive constant and we can always choose $c_1$ such that $c_0=1$. Therefore, we have proved that $M^n$ is a warped product $I\times_\rho S^{n-1}(1)$ with $\rho(\alpha)=|c_1(\sin n\alpha)^{-\frac{1}{n}}|$ for some positive constant $c_1$, and the angle function $\alpha$ satisfies the first order ordinary differential equation . Secondly, we show that $f$ is locally isometric to the Gauss map of a rotational hypersurface of $S^{n+1}(1)$ with the profile curve $\gamma(\theta)\subset S^2(1)$ given by . As $\theta_1=(n-1)\alpha \mod\pi$ and $\theta_2=\cdots=\theta_n=-\alpha \mod\pi$, from the proof of Theorem \[theo1\], we know that $M$ is the Gauss map of a hypersurface of $S^{n+1}(1)$ with principal curvatures given by $$\label{6.6} \lambda_1=\cot{(\theta_1)}=\cot{((n-1)\alpha)}\neq\lambda_2=\cdots=\lambda_n=\cot{(\theta_2)}=-\cot{(\alpha)},$$ which immediately implies that $M$ is the Gauss map of a rotational hypersurface of $S^{n+1}(1)$ by applying a theorem of do Carmo and Dajczer (see Theorem 4.2 in [@CD1983]). In order to finish the proof of Case (iii), we only need to check that the principal curvatures of a rotational hypersurface of $S^{n+1}(1)$ with the profile curve $\gamma(\theta)\subset S^2(1)$ given by and $\alpha$ satisfying are the same as those in . This can be verified straightforwardly by applying the formulas for the principal curvatures computed in [@LiMaWei]; we omit the details here.
Finally, we note that the differential equation is equivalent to . If we define a new parameter $s$ by $\frac{d s}{d\theta}=-\frac{1}{\sqrt{2}}\sqrt{1-(\frac{d\alpha}{d\theta})^2}(\sin{(n\alpha)})^{-1}$, then $\alpha$ satisfies the following differential equation with respect to the new parameter $s$: $$\frac{d^2\alpha}{ds^2}-(n+1)\cot{(n\alpha)}(\frac{d\alpha}{ds})^2-\sin{(2n\alpha)}=0.$$ We can also calculate the length of $\frac{d}{d s}$ directly by using the induced metric of the Gauss map and find that $\frac{d}{d s}$ is a unit vector. Hence, $\frac{d}{d s}=\pm e_1$, which implies the equivalence of and . This completes the proof of Case (iii). Therefore, we finish the proof of Theorem \[theoAllButOneSameAngles\]. [10]{} Élie Cartan, *Sur quelques familles remarquables d’hypersurfaces*, C.R. Congrès Math. Liège, 1939, 30–41 (see also: Oeuvres Complètes: Partie III, Vol. 2, 1481–1492). Ildefonso Castro and Francisco Urbano, *Minimal [L]{}agrangian surfaces in [$\Bbb S^2\times\Bbb S^2$]{}*, Comm. Anal. Geom. **15** (2007), no. 2, 217–248. Bang-Yen Chen, *Riemannian geometry of [L]{}agrangian submanifolds*, Taiwanese J. Math. **5** (2001), no. 4, 681–723. , *Pseudo-[R]{}iemannian geometry, [$\delta$]{}-invariants and applications*, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2011, With a foreword by Leopold Verstraelen. Bang-Yen Chen and Koichi Ogiue, *On totally real submanifolds*, Trans. Amer. Math. Soc. **193** (1974), 257–266. Manfredo do Carmo and Marcos Dajczer, *Rotation hypersurfaces in spaces of constant curvature*, Trans. Amer. Math. Soc. **277** (1983), no. 2, 685–709. Norio Ejiri, *Totally real minimal immersions of [$n$]{}-dimensional real space forms into [$n$]{}-dimensional complex space forms*, Proc. Amer. Math. Soc. **84** (1982), no. 2, 243–246. Jianquan Ge and Zizhou Tang, *Geometry of isoparametric hypersurfaces in [R]{}iemannian manifolds*, Asian J. Math. **18** (2014), no. 1, 117–125. 
Sönke Hiepko, *Eine innere [K]{}ennzeichnung der verzerrten [P]{}rodukte*, Math. Ann. **241** (1979), no. 3, 209–215. Haizhong Li, Hui Ma, and Guoxin Wei, *A class of minimal [L]{}agrangian submanifolds in complex hyperquadrics*, Geom. Dedicata **158** (2012), 137–148. Hui Ma and Yoshihiro Ohnita, *On [L]{}agrangian submanifolds in complex hyperquadrics and isoparametric hypersurfaces in spheres*, Math. Z. **261** (2009), no. 4, 749–785. , *Hamiltonian stability of the [G]{}auss images of homogeneous isoparametric hypersurfaces. [I]{}*, J. Differential Geom. **97** (2014), no. 2, 275–348. , *Hamiltonian stability of the [G]{}auss images of homogeneous isoparametric hypersurfaces [II]{}*, Tohoku Math. J. (2) **67** (2015), no. 2, 195–246. Hans Friedrich Münzner, *Isoparametrische [H]{}yperflächen in [S]{}phären*, Math. Ann. **251** (1980), no. 1, 57–71. , *Isoparametrische [H]{}yperflächen in [S]{}phären. [II]{}. Über die [Z]{}erlegung der [S]{}phäre in [B]{}allbündel*, Math. Ann. **256** (1981), no. 2, 215–232. Stefan Nölker, *Isometric immersions of warped products*, Differential Geom. Appl. **6** (1996), no. 1, 1–30. Yong-Geun Oh, *Second variation and stabilities of minimal [L]{}agrangian submanifolds in [K]{}ähler manifolds*, Invent. Math. **101** (1990), no. 2, 501–519. , *Volume minimization of [L]{}agrangian submanifolds under [H]{}amiltonian deformations*, Math. Z. **212** (1993), no. 2, 175–192. Bennett Palmer, *Buckling eigenvalues, [G]{}auss maps and [L]{}agrangian submanifolds*, Differential Geom. Appl. **4** (1994), no. 4, 391–403. , *Hamiltonian minimality and [H]{}amiltonian stability of [G]{}auss maps*, Differential Geom. Appl. **7** (1997), no. 1, 51–58. Chao Qian and Zizhou Tang, *Recent progress in isoparametric functions and isoparametric hypersurfaces*, Real and complex submanifolds, Springer Proc. Math. Stat., vol. 106, Springer, Tokyo, 2014, pp. 65–76. 
Helmut Reckziegel, *Horizontal lifts of isometric immersions into the bundle space of a pseudo-[R]{}iemannian submersion*, Global differential geometry and global analysis 1984 ([B]{}erlin, 1984), Lecture Notes in Math., vol. 1156, Springer, Berlin, 1985, pp. 264–279. , *On the geometry of the complex quadric*, Geometry and topology of submanifolds, [VIII]{} ([B]{}russels, 1995/[N]{}ordfjordeid, 1995), World Sci. Publ., River Edge, NJ, 1996, pp. 302–315. Anna Siffert, *A new structural approach to isoparametric hypersurfaces in spheres*, Ann. Global Anal. Geom. **52** (2017), no. 4, 425–456. Brian Smyth, *Differential geometry of complex hypersurfaces*, Ann. of Math. (2) **85** (1967), 246–266. Zizhou Tang and Wenjiao Yan, *Isoparametric theory and its applications*, arXiv:1709.07235. Francisco Torralbo and Francisco Urbano, *Minimal surfaces in [$\Bbb{S}^2\times\Bbb{S}^2$]{}*, J. Geom. Anal. **25** (2015), no. 2, 1132–1156.
--- abstract: 'An explicit expression of the $k$-th derivative of the Bessel function $J_\nu(z)$, with respect to its order $\nu$, is given. Particularizations for the cases of positive or negative integer $\nu$ are considered.' author: - | J. Sesma [^1]\ [*Departamento de Física Teórica, Facultad de Ciencias,*]{}\ [*50009, Zaragoza, Spain*]{} title: | Derivatives with respect to the order\ of the Bessel function of the first kind --- Introduction ============ Throughout this paper we use the notation $$\mathcal{G}^{(k)}(t) \equiv \frac{d^k}{dt^k}\,\frac{1}{\Gamma(t)}\,, \quad \mathcal{P}_m^{(k)}(t) \equiv \frac{1}{k!}\,\frac{d^k}{dt^k}\,(t)_m\,, \quad \mathcal{Q}_{m}^{(k)}(t) \equiv \frac{1}{k!}\,\frac{d^k}{dt^k}\,\frac{1}{(t)_m}\,, \label{i1}$$ to refer to the derivatives of the reciprocal gamma function and of the Pochhammer and reciprocal Pochhammer symbols. Our purpose is to provide a closed expression for the $k$-th derivative of the Bessel function $J_\nu(z)$ with respect to its order $\nu$, which we assume to be real. From the ascending series definition [@nist Eq. 10.2.2] $$J_\nu(z)=(z/2)^\nu\,\sum_{m=0}^\infty \frac{(-\,z^2/4)^m}{m!\,\Gamma(\nu\! +\! 1\! +\! m)}, \label{i3}$$ one obtains immediately, with the notation introduced in (\[i1\]), $$\frac{\partial}{\partial \nu}\,J_\nu(z) = J_\nu(z)\,\ln(z/2)+(z/2)^\nu\,\sum_{m=0}^\infty \mathcal{G}^{(1)}(\nu\! +\! 1\! +\! m)\, \frac{(-\,z^2/4)^m}{m!}, \label{i4}$$ an expression that can be found in all treatises dealing with Bessel functions. (See, for instance, [@nist Eq. 10.15.1].) Differentiation, $k\!-\!1$ times, with respect to $\nu$ gives a recurrence relation $$\begin{aligned} \frac{\partial^k}{\partial \nu^k}\,J_\nu(z) &=& \ln(z/2)\left(\dfrac{\partial^{k-1}}{\partial \nu^{k-1}}\,J_\nu(z)\right)+(z/2)^\nu\,\sum_{m=0}^\infty \Bigg[ \dfrac{(-z^2/4)^m}{m!} \nonumber \\ & &\hspace{40pt}\times\,\sum_{l=1}^k {k\! -\! 1 \choose l\! -\! 1}\mathcal{G}^{(l)}(\nu\! +\! 1\! +\! 
m)\,(\ln(z/2))^{k-l}\Bigg], \label{i5}\end{aligned}$$ which would allow one to compute the successive derivatives to get the $k$-th one. We propose, however, a procedure to obtain such a $k$-th derivative directly, without the need to compute the lower-order ones. As auxiliary results, we consider, in Sections 2, 3, and 4 respectively, explicit expressions for the symbols defined in Eqs. (\[i1\]). Then, in Section 5, the $k$-th derivative of $J_{\nu}(z)$ with respect to $\nu$ is discussed. The possibility of extending the resulting expressions to the case of complex $\nu$ is discussed in Section 6. Derivatives of the reciprocal gamma function ============================================ We start with the series expansion [@nist Eq. 5.7.1] $$\frac{1}{\Gamma(t)}=\sum_{j=1}^\infty c_j\,t^j, \label{ii2}$$ convergent in the whole complex $t$-plane. Term-by-term differentiation gives $$\mathcal{G}^{(k)}(t)=\sum_{j=k}^\infty c_j\,\frac{j!}{(j-k)!}\,t^{j-k}, \label{ii3}$$ an expansion also convergent in the whole plane. Nevertheless, its convergence becomes slower and slower as $k$ or $|t|$ increase. It is not recommended for numerical computation unless $|t|<1$. For large values of $|t|$ it is preferable to use the asymptotic expansion obtained, in an earlier paper [@abad Appendix B], by application of the saddle point method [@temm Sec. 3.6.3] to the Hankel contour representation of the reciprocal gamma function. The expressions of the derivatives of the Bessel function, to be given below, contain $\mathcal{G}^{(k)}(1+\varepsilon)$, which can be calculated by using $$\mathcal{G}^{(k)}(1+\varepsilon)=\sum_{j=0}^\infty c_{j+k+1}\,(j+1)_k\,\varepsilon^{j}, \label{ii4}$$ whenever $|\varepsilon|<1$. 
Derivatives of the Pochhammer symbol $(t)_m$ with respect to its argument ========================================================================= As $(t)_m$ is a polynomial of degree $m$ in $t$, derivatives of order greater than $m$ vanish, $$\mathcal{P}_m^{(k)}(t) = 0 \qquad\mbox{for}\qquad k>m\,. \label{iii2}$$ For the nontrivial case of $k\leq m$, the $\mathcal{P}_m^{(k)}(t)$ can be computed by means of the recurrence relation $$\mathcal{P}_{m+1}^{(k)}(t)=(t+m)\,\mathcal{P}_m^{(k)}(t)+\mathcal{P}_m^{(k-1)}(t), \qquad k>0\,, \label{iii9}$$ with starting values $$\mathcal{P}_{0}^{(k)}(t)=\delta_{k,0}\,, \qquad \mathcal{P}_m^{(0)}(t)=(t)_m\,. \label{iii10}$$ Explicit expressions for $\mathcal{P}_m^{(k)}(t)$ can be found easily. From the generating function of the Pochhammer symbols [@luke Sec. 6.2.1, Eq. (2)] $$\sum_{m=0}^\infty \,(t)_m \,(-z)^m/m! \equiv \ _1\!F_0(t;;-z) = (1+z)^{-t}, \qquad |z|<1, \label{iii3}$$ one obtains, by differentiation $k$ times with respect to $t$, $$\sum_{m=0}^\infty k!\, \mathcal{P}_m^{(k)}(t)\, (-z)^m/m! = (-1)^k\,(1+z)^{-t}\, \left[\ln(1+z)\right]^k, \qquad |z|<1. \label{iii4}$$ The term $\mathcal{P}_m^{(k)}(t)$ can be isolated in this way $$\begin{aligned} \hspace{-1cm}\mathcal{P}_m^{(k)}(t) &=& \frac{(-1)^{k-m}}{k!}\,\left.\frac{\partial^m}{\partial z^m}\left((1+z)^{-t}\, \left[\ln(1+z)\right]^k\right)\right|_{z=0} \nonumber \\ &=& \frac{(-1)^{k-m}}{k!}\,\sum_{l=0}^m {m \choose l}\left.\left(\frac{\partial^l}{\partial z^l}(1+z)^{-t}\right)\left(\frac{d^{m-l}}{d z^{m-l}}\left[\ln(1+z)\right]^k\right)\right|_{z=0}\hspace{-12pt}. \label{iii5}\end{aligned}$$ Now we make use of the trivial result $$\left.\frac{\partial^l}{\partial z^l}(1+z)^{-t}\right|_{z=0} = (-1)^l\,(t)_l \label{iii6}$$ and of the generating relation of the Stirling numbers of the first kind [@nist Eq. 26.8.8], $$[\ln (1+z)]^k = k! 
\sum_{n=k}^\infty s(n, k)\, z^n/n!\,, \qquad |z|<1, \label{iii7}$$ to obtain the explicit expression $$\mathcal{P}_m^{(k)}(t) = (-1)^{m-k}\sum_{l=0}^{m-k}(-1)^l{m \choose l}\,s(m\!-\!l, k)\,(t)_l \qquad\mbox{for}\quad m\geq k\,. \label{iii8}$$ An alternative expression, in terms of generalized Bernoulli polynomials [@bry1; @bry2; @sriv], $$\mathcal{P}_m^{(k)}(t) = (-1)^{m-k}\,{m \choose k}\,B_{m-k}^{(m+1)}(1\! -\! t)\,, \label{A19}$$ can be obtained from a recent paper by Coffey [@coff Eq. (2.5)]. For the particular case of $t=0$, Eq. (\[iii8\]) gives $$\begin{aligned} \mathcal{P}_0^{(k)}(0)&=& \delta_{k,0}\,, \qquad \mathcal{P}_m^{(0)}(0)= \delta_{m,0}\,, \label{A5} \\ \mathcal{P}_m^{(k)}(0)&=& (-1)^{m-k}\,s(m, k) \qquad \mbox{for}\quad m\geq k>0\,. \label{A6}\end{aligned}$$ In the case of $t=1$, use of the property [@nist Eq. 26.8.20] $$s(n\!+\!1,k\!+\!1)=n!\sum_{j=k}^n \frac{(-1)^{n-j}}{j!}\,s(j, k) \label{A8}$$ allows one to obtain, from Eq. (\[iii8\]), $$\mathcal{P}_m^{(k)}(1) = (-1)^{m-k}\,s(m\!+\!1, k\!+\!1)\,. \label{A9}$$ Derivatives of the reciprocal Pochhammer symbol $1/(t)_m$ with respect to its argument ====================================================================================== For numerical implementation of the derivatives of the reciprocal Pochhammer symbol with respect to its variable, one may use the recurrence relation $$\mathcal{Q}_{m+1}^{(k)}(t)=\left(\mathcal{Q}_{m}^{(k)}(t) - \mathcal{Q}_{m+1}^{(k-1)}(t)\right)/(t+m), \label{iv4}$$ with initial values $$\mathcal{Q}_0^{(k)}(t)=\delta_{k,0}\,, \qquad \mathcal{Q}_{m}^{(0)}(t)=1/(t)_m\,. \label{iv5}$$ Very simple explicit expressions of the $\mathcal{Q}_{m}^{(k)}(t)$ can be easily obtained from the relation [@prud Eq. 4.2.2.45] $$\frac{1}{(t)_m} = \sum_{l=0}^{m-1}\frac{(-1)^l}{l!\,(m-1-l)!}\,\frac{1}{t+l}\,, \qquad m>0\,, \label{iv2}$$ provided $t$ is different from a nonpositive integer, $-n$, such that $0\leq n<m$. 
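The recurrence (\[iii9\])-(\[iii10\]) and the vanishing property (\[iii2\]) are straightforward to implement, and they can be cross-checked against an exact polynomial differentiation of $(t)_m$ in integer arithmetic. The following Python sketch is illustrative only and is not part of the paper; the function names are ours.

```python
from math import comb

def pochhammer_coeffs(m):
    # Coefficients (ascending powers of t) of the rising factorial
    # (t)_m = t (t+1) ... (t+m-1).
    coeffs = [1]  # (t)_0 = 1
    for j in range(m):
        new = [0] * (len(coeffs) + 1)  # multiply by (t + j)
        for i, c in enumerate(coeffs):
            new[i] += c * j
            new[i + 1] += c
        coeffs = new
    return coeffs

def P_direct(m, k, t):
    # P_m^{(k)}(t) = (1/k!) d^k/dt^k (t)_m, via exact differentiation:
    # (1/k!) d^k t^i = C(i, k) t^{i-k}.
    coeffs = pochhammer_coeffs(m)
    return sum(coeffs[i] * comb(i, k) * t ** (i - k)
               for i in range(k, len(coeffs)))

def P_recurrence(m, k, t):
    # Recurrence (iii9): P_{m+1}^{(k)} = (t+m) P_m^{(k)} + P_m^{(k-1)},
    # with starting values (iii10): P_0^{(k)} = delta_{k,0}.
    table = {(0, j): (1 if j == 0 else 0) for j in range(k + 1)}
    for mm in range(m):
        for kk in range(k + 1):
            prev = table[(mm, kk - 1)] if kk > 0 else 0
            table[(mm + 1, kk)] = (t + mm) * table[(mm, kk)] + prev
    return table[(m, k)]

# Both routes agree, and derivatives of order k > m vanish as in (iii2):
assert P_recurrence(4, 2, 3) == P_direct(4, 2, 3)
assert P_direct(3, 5, 1) == 0
```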
Direct differentiation with respect to $t$ in this equation gives $$\mathcal{Q}_0^{(k)}(t)=\delta_{k,0}, \qquad \mathcal{Q}_{m}^{(k)}(t)= (-1)^k \sum_{l=0}^{m-1}\frac{(-1)^l}{l!\,(m-1-l)!}\,\frac{1}{(t+l)^{k+1}}\,. \label{iv3}$$ For the particular case of $t=1$, this expression admits a more concise form in terms of [*modified*]{} generalized harmonic numbers, $\hat{H}_m^{(k)}$, defined by $$\hat{H}_0^{(k)}\equiv\delta_{k,0}\,,\qquad \hat{H}_m^{(k)} \equiv \sum_{j=1}^m \,(-1)^{j-1}\,{m \choose j}\,\frac{1}{j^k}\,, \qquad m\geq 1\,, \label{A11}$$ not to be confused with the generalized harmonic numbers, $$H_m^{(k)} \equiv \sum_{j=1}^m \frac{1}{j^k}\,, \qquad m\geq 1\,, \label{A10}$$ although $$\hat{H}_m^{(1)} = {H}_m^{(1)} \equiv H_m \qquad {\rm for}\quad m\geq 1\,. \label{H1}$$ Besides the explicit expression (\[A11\]), the recurrence relation $$\hat{H}_{m+1}^{(k)}=\hat{H}_{m}^{(k)}+\frac{1}{m\!+\!1}\, \hat{H}_{m}^{(k-1)}\,, \qquad m\geq 0\,, \quad k\geq 1\,, \label{H2}$$ with the starting values $$\hat{H}_{0}^{(k)}=\delta_{k,0}\,, \qquad \hat{H}_{m}^{(0)}=1\,, \label{H3}$$ may be used to calculate the $\hat{H}_m^{(k)}$. With that notation, Eq. (\[iv3\]) gives $$\mathcal{Q}_m^{(k)}(1)=\frac{(-1)^k}{m!}\,\hat{H}_m^{(k)}\,. \label{A12}$$ Derivatives of $J_\nu(z)$ with respect to $\nu$ =============================================== We proceed to obtain our expression for the $k$-th derivative of $J_\nu(z)$ with respect to $\nu$. To avoid unnecessary complications in the resulting formulas, we assume $k\neq 0$, i.e., $k= 1, 2, \ldots$. Let us denote by $N$ the nearest integer to $\nu$, and define $\varepsilon$ by $$\nu=N+\varepsilon, \qquad |\varepsilon|\leq 1/2. \label{v1}$$ We distinguish two possible ranges of values of $N$. $N\geq 0$ --------- The ascending series in Eq. (\[i3\]) can be written in the form $$J_\nu(z)=(z/2)^\nu\,\frac{1}{\Gamma(1+\varepsilon)}\sum_{m=0}^\infty \frac{(-\,z^2/4)^m}{m!\,(1+\varepsilon)_{m+N}}\,. 
\label{v2}$$ Differentiation, $k$ times, with respect to $\nu$ gives, with the notation introduced in (\[i1\]), $$\begin{aligned} \frac{\partial^k}{\partial \nu^k}\,J_\nu(z) &=& k!\,(z/2)^\nu\,\sum_{m=0}^\infty \frac{(-z^2/4)^m}{m!} \nonumber \\ & & \times\,\sum_{k_1=0}^k \frac{\left[\ln(z/2)\right]^{k_1}}{k_1!}\sum_{k_2=0}^{k-k_1} \frac{\mathcal{G}^{(k_2)}(1\!+\!\varepsilon)}{k_2!}\,\mathcal{Q}_{m+N}^{(k-k_1-k_2)}(1\!+\!\varepsilon)\,, \label{v3}\end{aligned}$$ where $\mathcal{G}^{(k_2)}(1\!+\!\varepsilon)$ is given in Eq. (\[ii4\]) and, according to Eq. (\[iv3\]), $$\mathcal{Q}_0^{(k)}(1\!+\!\varepsilon)= \delta_{k,0}\,,\quad \mathcal{Q}_{m+N}^{(k)}(1\!+\!\varepsilon)=\sum_{j=1}^{m+N}\frac{(-1)^{k+j-1}}{(j\!-\!1)!\,(m\!+\!N\!-\!j)!}\,\frac{1}{(\varepsilon\!+\!j)^{k+1}}\,. \label{v4}$$ In the particular case of $\nu$ being a nonnegative integer, $\nu=n\geq 0$, Eq. (\[v3\]) becomes, in terms of the modified generalized harmonic numbers defined in (\[A11\]), $$\begin{aligned} \left.\frac{\partial^k}{\partial \nu^k}\,J_\nu(z)\right|_{\nu=n} &=& k!\,(z/2)^n\,\sum_{m=0}^\infty \frac{(-z^2/4)^m}{m!\,(m\!+\!n)!} \nonumber \\ && \hspace{-20pt}\times\sum_{k_1=0}^k \frac{\left[\ln(z/2)\right]^{k_1}}{k_1!}\sum_{k_2=0}^{k-k_1} (-1)^{k-k_1-k_2}\,c_{k_2+1}\,\hat{H}_{m+n}^{(k-k_1-k_2)}. \label{v5}\end{aligned}$$ Expressions for the first derivative can be found in the literature. Besides the familiar expressions given in, for instance, Sect. 10.15 of Ref. [@nist], alternative closed forms can be found in a paper by Brychkov and Geddes [@bry3]. Our Eqs. 
(\[v3\]) and (\[v5\]) become, for $k=1$, $$\begin{aligned} \frac{\partial}{\partial \nu}\,J_\nu(z) &=& \left(\ln (z/2)-\psi(1\!+\!\varepsilon)\right)\,J_\nu(z) \nonumber \\ && \hspace{-20pt}+\,\frac{(z/2)^\nu}{\Gamma(1\!+\!\varepsilon)}\,\sum_{m=0}^\infty \frac{(-z^2/4)^m}{m!} \sum_{j=1}^{m+N}\frac{(-1)^{j}}{(j\!-\!1)!\,(m\!+\!N\!-\!j)!}\,\frac{1}{(\varepsilon\!+\!j)^2}, \label{v7}\end{aligned}$$ where $\psi$ represents the digamma function and the last sum is understood to be zero if $m+N=0$. In the case of integer $\nu=n\geq 0$ we have $$\left.\frac{\partial}{\partial \nu}\,J_\nu(z)\right|_{\nu=n} = \left(\ln (z/2)+\gamma\right)\,J_n(z) - (z/2)^n \sum_{m=0}^\infty \frac{(-z^2/4)^m}{m!\,(m\!+\!n)!}\,\hat{H}_{m+n}^{(1)}\,, \label{v8}$$ where $\gamma$ represents the well-known Euler-Mascheroni constant. $N<0$ ----- Instead of Eq. (\[v2\]) we have now $$\begin{aligned} J_\nu(z)&=&(z/2)^\nu\,\frac{1}{\Gamma(1+\varepsilon)}\Bigg[\sum_{m=0}^{-N-1} \frac{(-\,z^2/4)^m}{m!}\,(-1)^{-N-m}\,(-\varepsilon)_{-N-m} \nonumber \\ & & \hspace{100pt}+\, \sum_{m=-N}^\infty \frac{(-\,z^2/4)^m}{m!}\,\frac{1}{(1+\varepsilon)_{m+N}}\Bigg]. \label{v9}\end{aligned}$$ Differentiation with respect to $\nu$ gives $$\begin{aligned} \frac{\partial^k}{\partial\nu^k}J_\nu(z)&=&k!\,(z/2)^\nu\sum_{k_1=0}^k \frac{\left[\ln (z/2)\right]^{k_1}}{k_1!} \sum_{k_2=0}^{k-k_1}\frac{\mathcal{G}^{(k_2)}(1+\varepsilon)}{k_2!} \nonumber \\ & & \hspace{10pt} \times\,\Bigg[\sum_{m=0}^{-N-1} \frac{(-z^2/4)^m}{m!}\,(-1)^{-N-m+k-k_1-k_2}\,\mathcal{P}_{-N-m}^{(k-k_1-k_2)}(-\varepsilon) \nonumber \\ & & \hspace{60pt}+\, \sum_{m=-N}^\infty \frac{(-\,z^2/4)^m}{m!}\,\mathcal{Q}_{m+N}^{(k-k_1-k_2)}(1+\varepsilon)\Bigg], \label{v10}\end{aligned}$$ with $\mathcal{P}_{-N-m}^{(k)}(-\varepsilon)$ given by Eqs. (\[iii8\]) or (\[A19\]) and $\mathcal{Q}_{m+N}^{(k)}(1+\varepsilon)$ by Eq. (\[v4\]). 
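Formula (\[v8\]) lends itself to a quick numerical sanity check against a finite difference of the ascending series (\[i3\]). The following Python sketch is illustrative and not part of the paper; the truncation order and the test values are assumptions, and $\hat{H}_m^{(1)}=H_m$ is used as in (\[H1\]).

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def bessel_j(nu, z, terms=40):
    # Ascending series (i3): J_nu(z) = (z/2)^nu sum_m (-z^2/4)^m / (m! Gamma(nu+1+m)).
    s = sum((-z * z / 4) ** m / (math.factorial(m) * math.gamma(nu + 1 + m))
            for m in range(terms))
    return (z / 2) ** nu * s

def dJ_dnu_at_integer(n, z, terms=40):
    # Eq. (v8): derivative with respect to the order at nu = n >= 0,
    # with hat H_m^{(1)} = H_m, the ordinary harmonic numbers.
    H, harmonic = 0.0, [0.0]
    for j in range(1, terms + n + 1):
        H += 1.0 / j
        harmonic.append(H)
    s = sum((-z * z / 4) ** m / (math.factorial(m) * math.factorial(m + n))
            * harmonic[m + n] for m in range(terms))
    return (math.log(z / 2) + EULER_GAMMA) * bessel_j(n, z) - (z / 2) ** n * s

# Cross-check against a central finite difference in the order nu:
z, n, h = 1.3, 2, 1e-6
finite_diff = (bessel_j(n + h, z) - bessel_j(n - h, z)) / (2 * h)
assert abs(dJ_dnu_at_integer(n, z) - finite_diff) < 1e-6
```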
In the particular case of $\nu$ being a negative integer, $\nu=-n$, $n>0$, this equation turns into $$\begin{aligned} \left.\frac{\partial^k}{\partial\nu^k}J_\nu(z)\right|_{\nu=-n}&=&k!\,(z/2)^{-n}\sum_{k_1=0}^k \frac{\left[\ln (z/2)\right]^{k_1}}{k_1!} \sum_{k_2=0}^{k-k_1}\,c_{k_2+1} \nonumber \\ & & \times\,\Bigg[\sum_{m=0}^{n-1} \frac{(-z^2/4)^m}{m!}\,s(n\!-\!m,\,k\!-\!k_1\!-\!k_2) \nonumber \\ & & \hspace{10pt}+\, (-1)^{k-k_1-k_2}\,\sum_{m=n}^\infty \frac{(-\,z^2/4)^m}{m!\,(m-n)!}\,\hat{H}_{m-n}^{(k-k_1-k_2)}\Bigg]. \label{v11}\end{aligned}$$ For $k=1$, Eq. (\[v10\]) becomes $$\begin{aligned} \frac{\partial}{\partial \nu}\,J_\nu(z) &=& \left(\ln (z/2)-\psi(1+\varepsilon)\right)\,J_\nu(z) \nonumber \\ & & \hspace{-10pt}+\,\frac{(z/2)^\nu}{\Gamma(1\!+\!\varepsilon)} \Bigg[\sum_{m=0}^{-N-1} \frac{(-z^2/4)^m}{m!}\,(-N\!-\!m)\,B_{-N-m-1}^{(-N-m+1)}(1\!+\!\varepsilon) \nonumber \\ & & +\sum_{m=-N}^\infty \frac{(-z^2/4)^m}{m!}\, \sum_{j=1}^{m+N}\frac{(-1)^{j}}{(j\!-\!1)!\,(m\!+\!N\!-\!j)!}\,\frac{1}{(\varepsilon\!+\!j)^2}\Bigg], \label{v12}\end{aligned}$$ where $B_n^{(\alpha)}(x)$ represents the generalized Bernoulli polynomial [@bry1; @bry2; @sriv]. It may be written in terms of Stirling numbers of the first kind by using the relation $$(-N\!-\!m)\,B_{-N-m-1}^{(-N-m+1)}(1\!+\!\varepsilon) = \sum_{j=0}^{-N-m-1}\,(j\!+\!1)\,s(-N\!-\!m, j\!+\!1)\,\varepsilon^j\,. \label{v13}$$ In the case of $\nu$ being a negative integer, Eq. (\[v12\]) gives $$\begin{aligned} \left.\frac{\partial}{\partial\nu}J_{\nu}(z)\right|_{\nu=-n} &=&\left(\ln (z/2)+\gamma\right)\,J_{-n}(z) \nonumber \\ &&\hspace{-90pt}-\,(z/2)^{-n}\Bigg[(-1)^{n}\sum_{m=0}^{n-1} \frac{(z^2/4)^m}{m!}\,(n\!-\!m\!-\!1)! + \sum_{m=n}^\infty \frac{(-\,z^2/4)^m}{m!\,(m-n)!}\,\hat{H}_{m-n}^{(1)}\Bigg]. 
\label{v14}\end{aligned}$$ Extension to complex values of $\nu$ ==================================== The expressions of the derivatives of the reciprocal gamma function and of the Pochhammer and reciprocal Pochhammer symbols given in Sections 2 to 4 remain valid for complex values of their argument $t$. Therefore, our Eqs. (\[v3\]), (\[v7\]), (\[v10\]) and (\[v12\]) may be used safely for complex $\varepsilon$, i.e., complex $\nu$, whenever $|\Im \nu|\lesssim 1/2$. As the auxiliary integer $N$ one should again take the integer nearest to $\nu$, in such a way that, instead of (\[v1\]), one would have $$\nu=N+\varepsilon, \qquad |\Re \varepsilon|\leq 1/2. \label{vi1}$$ For large values of $\Im \nu$, the given expressions are correct, but they are not useful from a computational point of view. The reason, as pointed out in Sect. 2, is the slow convergence of the series on the right-hand side of (\[ii3\]) for large values of $t$. Acknowledgements {#acknowledgements .unnumbered} ================ This work has been supported by Departamento de Ciencia, Tecnología y Universidad del Gobierno de Aragón (Project E24/1) and Ministerio de Ciencia e Innovación (Project MTM2009-11154) [99]{} J. Abad and J. Sesma, [*Successive derivatives of Whittaker functions with respect to the first parameter*]{}, Comput. Phys. Comm. 156 (2003), pp. 13–21. Yu.A. Brychkov, [*On multiple sums of special functions*]{}, Integral Transforms Spec. Funct. 21 (2010), pp. 877–884. Yu.A. Brychkov, [*On some properties of the generalized Bernoulli and Euler polynomials*]{}, Integral Transforms Spec. Funct. 23 (2012), pp. 723–735. Yu.A. Brychkov and K.O. Geddes, [*On the derivatives of the Bessel and Struve functions with respect to the order*]{}, Integral Transforms Spec. Funct. 16 (2005) 187–198. M.W. Coffey, [*Series representations of the Riemann and Hurwitz zeta functions and series and integral representations of the first Stieltjes constant*]{} \[arXiv:1106.5146\]. Y.L. 
Luke, [*The Special Functions and Their Approximations*]{}, Academic Press, New York, 1969, Vol. I. F.W.J. Olver, D.W. Lozier, R. Boisvert, and C.W. Clark, eds., [*NIST Handbook of Mathematical Functions*]{}, Cambridge Univ. Press, Cambridge, 2010. Available at http://dlmf.nist.gov/. A.P. Prudnikov, Yu.A. Brychkov and O.I. Marichev, [*Integrals and Series*]{}, Gordon and Breach, New York, 1986, Vol. 1. H.M. Srivastava and P.G. Todorov, [*An explicit formula for the generalized Bernoulli polynomials*]{}, J. Math. Anal. Appl. 130 (1988), pp. 509–513. N.M. Temme, [*Special Functions*]{}, John Wiley & Sons, New York, 1996. [^1]: Email: javier@unizar.es
--- abstract: 'We study the class of $1$-perfectly orientable graphs, that is, graphs having an orientation in which every out-neighborhood induces a tournament. $1$-perfectly orientable graphs form a common generalization of chordal graphs and circular arc graphs. Even though they can be recognized in polynomial time, little is known about their structure. In this paper, we develop several results on $1$-perfectly orientable graphs. In particular, we: (i) give a characterization of $1$-perfectly orientable graphs in terms of edge clique covers, (ii) identify several graph transformations preserving the class of $1$-perfectly orientable graphs, (iii) exhibit an infinite family of minimal forbidden induced minors for the class of $1$-perfectly orientable graphs, and (iv) characterize the class of $1$-perfectly orientable graphs within the classes of cographs and of co-bipartite graphs. The class of $1$-perfectly orientable co-bipartite graphs coincides with the class of co-bipartite circular arc graphs.' author: - | Tatiana Romina Hartinger\ University of Primorska, UP IAM, Muzejski trg 2, SI6000 Koper, Slovenia\ `tatiana.hartinger@iam.upr.si`\ - | Martin Milanič\ University of Primorska, UP IAM, Muzejski trg 2, SI6000 Koper, Slovenia\ University of Primorska, UP FAMNIT, Glagoljaška 8, SI6000 Koper, Slovenia\ `martin.milanic@upr.si` bibliography: - '1-perfectly-orientable-bib.bib' title: 'Partial characterizations of $1$-perfectly orientable graphs' --- **Keywords:** $1$-perfectly orientable graph, fraternally orientable graph, in-tournament digraph, structural characterization of families of graphs, cograph, co-bipartite graph, circular arc graph\ **MSC (2010):** 05C20, 05C75, 05C05, 05C62, 05C69 Introduction ============ Many graph classes studied in the literature can be defined with the existence of orientations satisfying certain properties. 
In this paper we study graphs having an orientation that is an [*out-tournament*]{}, that is, a digraph in which the out-neighborhood of every vertex induces an orientation of a complete graph. (An [*in-tournament*]{} is defined similarly.) Following the terminology of Kammer and Tholey [@MR3152051], we say that an orientation of a graph is [*$1$-perfect*]{} if the out-neighborhood of every vertex induces a tournament, and that a graph is [*$1$-perfectly orientable*]{} ([*$1$-p.o.*]{} for short) if it has a $1$-perfect orientation.[^1] In [@MR3152051], the authors introduced a hierarchy of graph classes, with [$1$-p.o.]{} graphs being the smallest member of the family. Namely, they defined a graph to be [*$k$-perfectly orientable*]{} if in some orientation of it every out-neighborhood induces a disjoint union of at most $k$ tournaments. In [@MR3152051], several approximation algorithms were given for optimization problems on $k$-perfectly orientable graphs and related classes, and it was shown that recognizing $k$-perfectly orientable graphs is NP-complete for all $k\ge 3$. While the recognition complexity of $2$-perfectly orientable graphs seems to be unknown, [$1$-p.o.]{} graphs can be recognized in polynomial time via a reduction to $2$-SAT [@MR1244934 Theorem 5.1]. A polynomial time algorithm for recognizing [$1$-p.o.]{} graphs that works directly on the graph was given by Urrutia and Gavril [@MR1161986]. The notion of [$1$-p.o.]{} graphs was introduced in 1982 by Skrien [@MR666799] (under the name $\{B_2\}$-graphs), where the problem of characterizing [$1$-p.o.]{} graphs was posed. By definition, [$1$-p.o.]{} graphs are exactly the graphs that admit an orientation that is an out-tournament. A simple arc reversal argument shows that [$1$-p.o.]{} graphs are exactly the graphs that admit an orientation that is an in-tournament. 
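The definition can be checked mechanically on small graphs. The following Python sketch (illustrative only, not one of the paper's algorithms; vertices are assumed to be labeled $0,\dots,n-1$) searches all $2^{|E(G)|}$ orientations for a $1$-perfect one, so it is practical only for tiny examples. Orienting a cycle cyclically gives every vertex out-degree $1$, so cycles are trivially $1$-p.o.; in contrast, the complement of $C_6$, which is the triangular prism, admits no $1$-perfect orientation.

```python
from itertools import combinations, product

def has_one_perfect_orientation(n, edges):
    # Brute force: does the graph on vertices 0..n-1 have an orientation
    # in which every out-neighborhood is a clique (induces a tournament)?
    edge_set = {frozenset(e) for e in edges}
    for bits in product((0, 1), repeat=len(edges)):
        out = {v: [] for v in range(n)}
        for (u, v), b in zip(edges, bits):
            (out[u] if b == 0 else out[v]).append(v if b == 0 else u)
        if all(frozenset((x, y)) in edge_set
               for v in range(n)
               for x, y in combinations(out[v], 2)):
            return True
    return False

# C4 is 1-p.o. (orient it cyclically):
assert has_one_perfect_orientation(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
# The complement of C6 (edges between vertices at distance 2 or 3 on the
# 6-cycle, i.e. the triangular prism) is not:
prism = [(0, 2), (2, 4), (4, 0), (1, 3), (3, 5), (5, 1), (0, 3), (1, 4), (2, 5)]
assert not has_one_perfect_orientation(6, prism)
```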
Such orientations were called [*fraternal orientations*]{} in several papers [@MR1161986; @MR1287025; @MR1292980; @MR1449722; @MR1246675; @MR2323998; @MR2548660]. [$1$-p.o.]{} graphs are also exactly the underlying graphs of the so-called locally in-semicomplete digraphs (which are defined similarly to in-tournaments, except that pairs of oppositely oriented arcs are allowed), see [@MR2472389]. While a structural understanding of 1-p.o. graphs is still an open question, partial results are known. Bang-Jensen et al. [@MR1244934] (see also [@prisner1988familien]) proved a topological property of [$1$-p.o.]{} graphs (stating that every [$1$-p.o.]{} graph is $1$-homotopic), gave characterizations of [$1$-p.o.]{} line graphs and of [$1$-p.o.]{} triangle-free graphs, and showed that every graph representable as the intersection graph of connected subgraphs of unicyclic graphs is $1$-p.o. This implies that all chordal graphs and all circular arc graphs are [$1$-p.o.]{}, as observed already in [@MR1161986] and in [@MR1161986; @MR666799], respectively. Moreover, Bang-Jensen et al. showed in [@MR1244934] that every graph having a unique induced cycle of order at least $4$ is [$1$-p.o.]{}, and conjectured that every [$1$-p.o.]{} graph can be represented as the intersection graph of connected subgraphs of a cactus graph. The subclass of [$1$-p.o.]{} graphs consisting of graphs that admit an orientation that is both an in-tournament and an out-tournament was characterized in [@MR666799] (see also [@MR1081957]) as exactly the class of proper circular arc graphs. We now briefly discuss the characterizations of [$1$-p.o.]{} graphs as intersection graphs and with forbidden substructures given by Urrutia and Gavril in [@MR1161986]. 
The characterizations state that a graph is [$1$-p.o.]{} if and only if it is the vertex-intersection graph of a family of mutually graftable subtrees in a graph (where two rooted subtrees of a graph are said to be [*graftable*]{} if their intersection is empty or contains one of their roots), if and only if it does not contain an induced [*bicycle*]{}, where a bicycle is a graph that can be written as a union of two monocycles satisfying a certain condition. A [*monocycle*]{} is a graph $G$ having a closed walk $C = (x_0,x_1,\ldots,x_k)$ with $x_k = x_0$, $k\ge 4$, and a walk $P = (y_1,\ldots, y_\ell)$ with $\ell \ge 2$ such that $V(G) = \{x_0,x_1,\ldots, x_k\}\cup \{y_1,\ldots, y_\ell\}$, for all $i\in \{1,\ldots, k\}$ (indices modulo $k$) vertex $x_i$ is neither equal nor adjacent to vertex $x_{i+2}$, and for all $j\in \{1,\ldots, \ell-2\}$ vertex $y_j$ is neither equal nor adjacent to vertex $y_{j+2}$; moreover, vertex $y_\ell$ is either adjacent to two consecutive vertices $s,t$ of $C$, or it is in $C$ and its immediate neighbours in $C$ are denoted $s,t$, and, finally, $y_{\ell-1}$ is distinct from and not adjacent to $s, t$. Unfortunately, the above characterization is not constructive and does not lead to an explicit list of minimal forbidden induced subgraphs for the class of [$1$-p.o.]{} graphs. Bicycles can be recognized in polynomial time using the polynomial time recognition algorithm for [$1$-p.o.]{} graphs, but are not structurally understood. It is also not clear how to detect an induced bicycle in a given non-$1$-perfectly orientable graph, except by running the algorithm by Urrutia and Gavril and then applying the arguments given in the proof . In this paper, we prove several further results regarding $1$-perfectly orientable graphs. 
Our results can be summarized as follows: - We give a characterization of [$1$-p.o.]{} graphs in terms of edge clique covers similar to a known characterization of squared graphs due to Mukhopadhyay (Theorem \[thm:characterization-edge-clique-covers\]). - We identify several graph transformations preserving the class of [$1$-p.o.]{} graphs (Theorem \[prop:operations\]). In particular, we show that the class of [$1$-p.o.]{} graphs is closed under taking induced minors. We also study the behavior of [$1$-p.o.]{} graphs under the join operation (Theorem \[prop:co-bipartite-join\]), which motivates the study of [$1$-p.o.]{} co-bipartite graphs. - We show that within the class of co-bipartite graphs, [$1$-p.o.]{} graphs coincide with circular arc graphs (Theorem \[thm:co-bipartite-circular-arc\]). This adds to the list of the many characterizations of co-bipartite circular arc graphs. - We identify several minimal forbidden induced minors for the class of [$1$-p.o.]{} graphs, including ten small specific graphs and two infinite families: the complements of even cycles of length at least six and the complements of the graphs obtained from odd cycles by adding a component consisting of a single edge (Theorem \[thm:non-1-po\]). - Finally, we characterize [$1$-p.o.]{} cographs, obtaining characterizations both in terms of forbidden induced subgraphs and in terms of structural properties (Theorem \[thm:cograph\]). The paper is structured as follows. In Section \[sec:notation\] we fix the notation. In Section \[sec:edge-clique-covers\] we give a characterization of [$1$-p.o.]{} graphs in terms of edge clique covers. In Section \[sec:operations\], we identify several graph transformations preserving the class of [$1$-p.o.]{} graphs. In Section \[sec:co-bipartite-graphs\] we characterize [$1$-p.o.]{} graphs within the class of complements of bipartite graphs. Minimal forbidden induced minors for the class of [$1$-p.o.]{} graphs are discussed in Section \[sec:minors\]. 
We conclude the paper with characterizations of [$1$-p.o.]{} cographs in Section \[sec:cographs\]. Preliminaries {#sec:notation} ============= In this section, we recall the definitions of some of the most frequently used notions in this paper. All graphs in this paper are simple and finite, but may be either directed (in which case we will refer to them as [*digraphs*]{}) or undirected (in which case we will refer to them as [*graphs*]{}). We use standard graph and digraph terminology. In particular, the vertex and edge sets of a graph $G$ will be denoted by $V(G)$ and $E(G)$, respectively, and the vertex and arc sets of a digraph $D$ will be denoted by $V(D)$ and $A(D)$. An edge in a graph connecting vertices $u$ and $v$ will be denoted by $\{u,v\}$ or simply $uv$. An arc in a digraph connecting vertices $u$ and $v$ will be denoted by $(u,v)$. We will also use the notation $u\rightarrow v$ to denote the fact that an edge $uv$ of a graph $G$ is oriented from $u$ to $v$ in an orientation of $G$. The set of all vertices adjacent to a vertex $v$ in a graph $G$ will be denoted by $N_{G}(v)$, and its cardinality, the [*degree of $v$ in $G$*]{}, by $d_G(v)$. The [*closed neighborhood*]{} of $v$ in $G$ is the set $N_G(v)\cup \{v\}$, denoted by $N_G[v]$. An *orientation* of a graph $G = (V, E)$ is a digraph $D= (V,A)$ obtained by assigning a direction to each edge of $G$. A [*tournament*]{} is an orientation of a complete graph. Given a digraph $D$, the *in-neighborhood* of a vertex $v$ in $D$, denoted by $N^{-}_{D}(v)$, is the set of all vertices $w$ such that $(w,v) \in A(D)$. Similarly, the *out-neighborhood* of $v$ in $D$ is the set of all vertices $w$ such that $(v,w) \in A(D)$. The cardinalities of the in- and the out-neighborhood of $v$ are the [*in-degree*]{} and the [*out-degree of $v$*]{} and are denoted by $d^{-}_{D}(v)$ and $d^{+}_{D}(v)$, respectively. 
Given two graphs $G$ and $H$, their [*disjoint union*]{} is the graph $G+H$ with vertex set $V(G)\cup V(H)$ (disjoint union) and edge set $E(G)\cup E(H)$. We write $2G$ for $G+G$. The [*join*]{} of two graphs $G$ and $H$ is the graph denoted by $G\ast H$ and obtained from the disjoint union of $G$ and $H$ by adding to it all edges joining a vertex of $G$ with a vertex of $H$. Given a graph $G$ and a subset $S$ of its vertices, we denote by $G[S]$ the [*subgraph of $G$ induced by $S$*]{}, that is, the graph with vertex set $S$ and edge set $\{uv\in E(G)\mid u,v\in S\}$. By $G-S$ we denote the subgraph of $G$ induced by $V(G)\setminus S$, and when $S = \{v\}$ for a vertex $v$, we also write $G-v$. The graph $G/e$ obtained from $G$ by *contracting* an edge $e=uv$ is defined as $G/e=(V,E)$ where $V=(V(G)\setminus \{u,v\}) \cup \{w\}$ with $w$ a new vertex and $E=E(G-\{u,v\}) \cup \{wx \mid x\in (N_{G}(u) \cup N_{G}(v))\setminus\{u,v\}\}$. A [*clique*]{} (resp., [*independent set*]{}) in a graph $G$ is a set of pairwise adjacent (resp., non-adjacent) vertices of $G$. The [*complement*]{} of a graph $G$ is the graph $\overline{G}$ with the same vertex set as $G$ in which two distinct vertices are adjacent if and only if they are not adjacent in $G$. The fact that two graphs $G$ and $H$ are isomorphic to each other will be denoted by $G\cong H$. In this paper we will often not distinguish between isomorphic graphs. The path, the cycle, and the complete graph on $n$ vertices will be denoted by $P_n$, $C_n$, and $K_n$, respectively, and the complete bipartite graph with parts of size $m$ and $n$ by $K_{m,n}$. Given two graphs $G$ and $H$, we say that $G$ is [*$H$-free*]{} if no induced subgraph of $G$ is isomorphic to $H$. For further background on graphs, we refer to [@MR2744811], on graph classes, to [@MR2063679; @MR1686154], and on digraphs, to [@MR2472389]. A connected graph is said to be [*unicyclic*]{} if it has exactly one cycle.
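Since edge contraction plays a central role later, here is an illustrative Python sketch of the definition above. It is not from the paper; the dict-of-sets representation and the choice of `(u, v)` as the name of the merged vertex are our own.

```python
def contract(adj, u, v):
    """Return the graph obtained by contracting the edge uv: u and v are
    replaced by a new vertex w adjacent to every other neighbor of u or v."""
    assert v in adj[u], "uv must be an edge"
    w = (u, v)  # arbitrary name for the merged vertex
    new_adj = {x: {y for y in nbrs if y not in (u, v)}
               for x, nbrs in adj.items() if x not in (u, v)}
    new_adj[w] = (adj[u] | adj[v]) - {u, v}
    for x in new_adj[w]:
        new_adj[x].add(w)
    return new_adj


# Contracting an edge of the path a-b-c yields a single edge.
P3 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
```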
The following consequence of [@MR1244934 Corollary 5.7] will be used in some of our proofs. \[prop:unicyclic\] Every unicyclic graph is $1$-perfectly orientable. A characterization in terms of edge clique covers {#sec:edge-clique-covers} ================================================= A graph $G$ is said to [*have a square root*]{} if there exists a graph $H$ with $V(H) = V(G)$ such that for all $u,v\in V(G)$, we have $uv\in E(G)$ if and only if the distance in $H$ between $u$ and $v$ is either $1$ or $2$. An [*edge clique cover*]{} in a graph $G$ is a collection of cliques $\{C_1,\ldots, C_k\}$ in $G$ such that every edge of $G$ belongs to some clique $C_i$. In this section, we characterize [$1$-p.o.]{} graphs in terms of edge clique covers, in a spirit similar to the well known Mukhopadhyay’s characterization of graphs admitting a square root, which we now recall. \[thm:squares\] A graph $G$ with $V(G) = \{v_1,\ldots, v_n\}$ has a square root if and only if $G$ has an edge clique cover $\{C_1,\ldots, C_n\}$ such that the following two conditions hold: (a) $v_i\in C_i$ for all $i$, (b) for every edge $v_iv_j\in E(G)$, we have $v_i\in C_j$ if and only if $v_j\in C_i$. In the original statement of the theorem, the second condition is required for all $i\neq j$, but since $v_iv_j\not\in E(G)$ clearly implies $v_i\not \in C_j$ and $v_j\not \in C_i$, the equivalence in condition 2 automatically holds for all non-adjacent vertex pairs. \[thm:characterization-edge-clique-covers\] For every graph $G$ with $V(G) = \{v_1,\ldots, v_n\}$, the following conditions are equivalent: 1. $G$ is $1$-perfectly orientable. 2. $G$ has an edge clique cover $\{C_1,\ldots, C_n\}$ such that the following two conditions hold: 1. $v_i\in C_i$ for all $i$, 2. for every edge $v_iv_j\in E(G)$, we have $v_i\in C_j$ or $v_j\in C_i$, but not both. 3. $G$ has an edge clique cover $\{C_1,\ldots, C_n\}$ such that the following two conditions hold: 1. $v_i\in C_i$ for all $i$, 2. 
for every edge $v_iv_j\in E(G)$, we have $v_i\in C_j$ or $v_j\in C_i$. Before proving Theorem \[thm:characterization-edge-clique-covers\], we give two remarks. First, note that the difference between Theorem \[thm:squares\] and the equivalence of conditions 1 and 3 in Theorem \[thm:characterization-edge-clique-covers\] consists in replacing the equivalence in condition (b) of Theorem \[thm:squares\] with disjunction. This seemingly minor difference is in sharp contrast with the fact that recognizing graphs admitting a square root is NP-complete [@MR1293386], while [$1$-p.o.]{} graphs can be recognized in polynomial time. Second, recall that a [*pointed set*]{} is a pair $(S,v)$ where $S$ is a nonempty set and $v\in S$. To every family ${\cal S} = \{(S_1,v_1),\ldots, (S_n,v_n)\}$ of pointed sets, one can associate a graph, the so called [*catch graph*]{} of ${\cal S}$ by setting $V(G) = \{v_1,\ldots, v_n\}$ and joining two distinct vertices $v_i$ and $v_j$ if and only if $v_i\in S_j$ or $v_j\in S_i$ (see, e.g. [@MR1672910]). The equivalence between conditions 1 and 3 in the above theorem gives another proof of the fact that every [$1$-p.o.]{} graph is the catch graph of a family of pointed sets, which also follows from the characterization of [$1$-p.o.]{} graphs due to Urrutia and Gavril (stating that a graph is [$1$-p.o.]{} if and only if it is the vertex-intersection graph of a family of mutually graftable subtrees in a graph) [@MR1161986]. First, we show the implication $1\Rightarrow 2$. Given a $1$-perfect orientation $D$ of a [$1$-p.o.]{} graph $G$ with $V(G) = \{v_1,\ldots, v_n\}$, we define an edge clique cover $\{C_1,\ldots, C_n\}$ of $G$ by setting $C_i = \{v_i\}\cup N_D^+(v_i)$. By definition, each $C_i$ contains $v_i$, and, since $D$ is $1$-perfect, is a clique in $G$. Note that for all $i\neq j$, we have $v_j\in C_i$ if and only if $(v_i,v_j)\in A(D)$. In particular $v_j\in C_i$ and $v_i\in C_j$ cannot happen simultaneously. 
Since for every edge $v_iv_j\in E(G)$, we have either $(v_i,v_j)\in A(D)$ or $(v_j,v_i)\in A(D)$ but not both, condition 2(b) follows. The implication $2\Rightarrow 3$ is trivial. Finally, we show the implication $3\Rightarrow 1$. Suppose that $G$ has an edge clique cover $\{C_1,\ldots, C_n\}$ such that $v_i\in C_i$ for all $i$, and for every edge $v_iv_j\in E(G)$, $v_i\in C_j$ or $v_j\in C_i$. Define an orientation $D$ of $G$ as follows: for $1\le i<j\le n$ such that $v_iv_j\in E(G)$, set $(v_i,v_j)\in A(D)$ if $v_j\in C_i$, and $(v_j,v_i)\in A(D)$, otherwise. By definition, for every vertex $v_i\in V(G)$ we have $$\begin{aligned} N_D^+(v_i) &=& \{v_j\mid j<i~\wedge~v_i\not\in C_j\}\cup \{v_j\mid j>i~\wedge~v_j\in C_i \}\\ & \subseteq & \{v_j\mid j<i~\wedge~v_j\in C_i\}\cup \{v_j\mid j>i~\wedge~v_j\in C_i\}~~\subseteq~~ C_i\,,\end{aligned}$$ where the first inclusion relation holds due to condition 3(b). Hence, $D$ is a $1$-perfect orientation of $G$, and $G$ is [$1$-p.o.]{} For later use, we also record the following immediate consequences of Theorem \[thm:characterization-edge-clique-covers\]. \[cor:characterization-1po-complements\] For every graph $G$ with $V(G) = \{v_1,\ldots, v_n\}$, the following conditions are equivalent: 1. $\overline{G}$ is $1$-perfectly orientable. 2. $G$ has a collection of independent sets $\{I_1,\ldots, I_n\}$ such that the following two conditions hold: 1. $v_i\in I_i$ for all $i$, 2. for every non-adjacent vertex pair $v_iv_j\in E(\overline{G})$, we have $v_i\in I_j$ or $v_j\in I_i$. \[cor:upper-bound-on-theta\] The edges of every $1$-perfectly orientable graph with $n$ vertices can be covered by $n$ cliques. Note that the converse of Corollary \[cor:upper-bound-on-theta\] does not hold. For example, the complement of the graph $G_1$ (see Fig. \[fig:1\]) is not [$1$-p.o.]{} (see Theorem \[thm:non-1-po\]), but can be edge-covered with (at most) $9$ cliques.
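The constructive step $3\Rightarrow 1$ of the proof of Theorem \[thm:characterization-edge-clique-covers\] translates directly into code. The following Python fragment is an illustrative sketch and not part of the paper; the dict-of-sets representation and the function names are our own.

```python
def orientation_from_cover(adj, C, order):
    """Given an edge clique cover C (C[v] is a clique containing v) with the
    property that for every edge uv, u in C[v] or v in C[u], orient each edge
    v_i v_j with i < j towards v_j if v_j in C[v_i], and towards v_i
    otherwise.  Returns the orientation as vertex -> set of out-neighbors."""
    idx = {v: k for k, v in enumerate(order)}
    out = {v: set() for v in adj}
    for u in adj:
        for v in adj[u]:
            if idx[u] < idx[v]:
                if v in C[u]:
                    out[u].add(v)
                else:
                    out[v].add(u)
    return out


# On the triangle with C[v] = {a, b, c} for every v, all edges end up
# oriented forward along the chosen vertex order.
K3 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
cover = {v: {'a', 'b', 'c'} for v in K3}
```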
(Determining if the edges of a given $n$-vertex graph can be covered by $n$ cliques is NP-complete [@MR679638]; see also [@MR712930].) Operations preserving $1$-perfectly orientable graphs {#sec:operations} ===================================================== In this section, we identify several operations preserving [$1$-p.o.]{} graphs and characterize when the join of two graphs is $1$-p.o. Two distinct vertices $u$ and $v$ in a graph $G$ are said to be *true twins* if $N_{G}[u] = N_{G}[v]$, and *false twins* if $N_{G}(u) = N_{G}(v)$. A vertex $v$ is *simplicial* if its neighborhood forms a clique, and *universal* if it is adjacent to every other vertex of the graph. The operations of adding a true twin, a simplicial vertex or a universal vertex to a given graph are defined in the obvious way. The operation of [*duplicating a $2$-branch in the complement*]{} of a graph $G$ is defined as follows. A [*$2$-branch*]{} in a graph $G$ is a path $(a,b,c)$ such that $d_G(b) = 2$ and $d_G(c) = 1$. We say that such a $2$-branch is [*rooted at $a$*]{}. [*Duplicating a $2$-branch*]{} $(a,b,c)$ in $G$ results in a graph $H$ such that $V({H}) = V({G})\cup\{b',c'\}$, where $b'$ and $c'$ are new vertices, ${H}-\{b',c'\}= {G}$, and $(a,b',c')$ is a $2$-branch in $H$. Finally, the result of [*duplicating a $2$-branch in the complement*]{} of a graph $G$ is the complement of the graph obtained by duplicating a $2$-branch in $\overline{G}$. \[prop:operations\] The class of $1$-perfectly orientable graphs is closed under each of the following operations: 1. Disjoint union. 2. Adding a universal vertex (that is, join with $K_1$). 3. Adding a true twin. 4. Adding a simplicial vertex. 5. Duplicating a $2$-branch in the complement. 6. Vertex deletion. 7. Edge contraction. For a [$1$-p.o.]{} graph $G$, let us denote by $D(G)$ an arbitrary (but fixed) $1$-perfect orientation of $G$. 1\.
If $G=G_1 +G_2$ is the disjoint union of two [$1$-p.o.]{} graphs $G_1$ and $G_2$, then the disjoint union of $D(G_1)$ and $D(G_2)$ is a $1$-perfect orientation of $G$. 2\. Suppose we have a [$1$-p.o.]{} graph $G$ with orientation $D(G)$ and we add a universal vertex $v$ to $G$, thus obtaining a graph $G'$. A $1$-perfect orientation $D'$ of $G'$ can be obtained by orienting an edge $xy\in E(G)$ from $x$ to $y$ if the edge is oriented from $x$ to $y$ in $D(G)$, and orienting the edges of the form $uv$ from $u$ to $v$. It is easy to see that $D'$ is indeed $1$-perfect. 3\. Let $w$ be a vertex in a [$1$-p.o.]{} graph $G$, and let $G'$ be the graph obtained from $G$ by adding to it a true twin of $w$, say $v$. We obtain a $1$-perfect orientation $D'$ of $G'$ by maintaining the same orientation as in $D(G)$ for the edges in $G$ and orienting the new edges (incident with $v$) as $v\rightarrow{u}$ if $u \in N_{D(G)}^{+}(w)$, and $u\rightarrow{v}$ if $u \in N_{D(G)}^{-}(w)$. We also orient the edge between $w$ and $v$ as $w\rightarrow{v}$. It is a matter of routine verification to check that the so obtained orientation of $G'$ is $1$-perfect. 4\. If we add a simplicial vertex $v$ to a [$1$-p.o.]{} graph $G$, then extending $D(G)$ by orienting all edges incident with $v$ away from $v$ results in an orientation $D'$ of the new graph, say $G'$, such that $N_{D'}^{+}(v)$ is a clique in $G'$. The other out-neighborhoods were not changed, so they are cliques in $G'$ as well. 5\. Let $V(G) = \{v_1,\ldots, v_n\}$. If $G$ is [$1$-p.o.]{}, then Corollary \[cor:characterization-1po-complements\] applies to $\overline{G}$. Hence, $\overline G$ has a collection of independent sets $\{I_1,\ldots, I_n\}$ such that $v_i\in I_i$ for all $i$, and for every edge $v_iv_j\in E(G)$, we have $v_i\in I_j$ or $v_j\in I_i$. 
Let $H$ be the graph resulting from duplicating a $2$-branch $(a,b,c)$ in $\overline{G}$; without loss of generality, we may assume that $(a,b,c) = (v_1,v_2,v_3)$; furthermore, let the two new vertices $b'$ and $c'$ be labeled as $v_{n+1}$ and $v_{n+2}$, respectively. It suffices to prove that $H$ has a collection of independent sets $\{J_1,\ldots, J_{n+2}\}$ such that $v_k\in J_k$ for all $k$, and for every edge $v_iv_k\in E(\overline{H})$, we have $v_i\in J_k$ or $v_k\in J_i$. We may assume without loss of generality that the sets $I_j$ are maximal independent sets in $\overline{G}$, which in particular implies that each $I_j$ contains exactly one of the vertices $b$ and $c$. We define the sets $J_k$ for $k\in \{1,\ldots, n+2\}$ with the following rules: - For all $v_k\in V(G)$, set $$J_k = \left\{ \begin{array}{ll} I_k\cup\{b'\}, & \hbox{if $b\in I_k$;} \\ I_k\cup \{c'\}, & \hbox{if $c\in I_k$.} \end{array} \right.$$ - For $k = n+1$ (that is, $v_k = b'$), set $J_k = (I_2\setminus \{b\})\cup\{b',c\}$. - For $k = n+2$ (that is, $v_k = c'$), set $J_k = (I_3\setminus \{a,c\})\cup\{b,c'\}$. Clearly, each $J_k$ is an independent set in $H$. Let $v_iv_k\in E(\overline{H})$. Since $b'c'\not\in E(\overline{H})$, we may assume that $v_i\in V(G)$. We analyze three cases according to where $v_k$ lies. If $v_k\in V(G)$, then $v_iv_k\in E(G)$ and hence $v_i\in I_k$ or $v_k\in I_i$, implying $v_i\in J_k$ or $v_k\in J_i$. If $v_k= b'$, then either $v_i\in J_k$ (in which case we are done), or $v_i\not \in J_k = (I_2\setminus \{b\})\cup\{b',c\}$, in which case either $v_i = b$ or $v_i\not\in I_2$. In the former case, we have $i = 2$ and $v_k = b'\in J_2$, while in the latter case, we have $b = v_2\in I_i$, which implies $v_k = b'\in J_i$. If $v_k= c'$, then either $v_i\in J_k$ (in which case we are done), or $v_i\not \in J_k = (I_3\setminus \{a,c\})\cup\{b,c'\}$, in which case either $v_i \in \{a,c\}$ or $v_i\not\in I_3$.
In the former case, we have $c\in I_i$ (if $v_i = a$ this follows from the maximality of $I_i$) and consequently $v_k = c'\in J_i$. In the latter case, we have $c = v_3\in I_i$, which implies $v_k = c'\in J_i$. We have shown that $v_k\in J_k$ for all $k$, and for every edge $v_iv_k\in E(\overline{H})$, we have $v_i\in J_k$ or $v_k\in J_i$. By Corollary \[cor:characterization-1po-complements\], $H$ is the complement of a [$1$-p.o.]{} graph, which establishes item 5. 6\. Closure under vertex deletions follows immediately from the fact that the class of complete graphs is closed under vertex deletions. 7\. Let $e = uv$ be an edge of a [$1$-p.o.]{} graph $G$, and let $D$ be a $1$-perfect orientation of $G$, with (without loss of generality) $u\rightarrow v$. Let $G'=G/e$ be the graph obtained by contracting the edge $e$, and let $w$ be the vertex replacing $u$ and $v$. Set $$\begin{aligned} X &=& N_G(u)\setminus N_G(v)\,,\\ Y &=& \{x\in N_G(u)\cap N_G(v)\mid (x,v) \in A(D)\}\,,\\ U &=& \{x\in N_G(u)\cap N_G(v)\mid (v,x) \in A(D)\}\,,\\ W &=& \{x\in N_G(v)\setminus N_G(u)\mid (x,v) \in A(D)\}\,,\\ Z &=& \{x\in N_G(v)\setminus N_G(u)\mid (v,x) \in A(D)\}\,,\\ R &=& V(G)\setminus(X\cup Y\cup U \cup W\cup Z\cup \{u,v\})\,.\end{aligned}$$ Let $D'$ be an orientation of $G'$ defined as follows: (i) For all edges $e \in E(G')$ whose endpoints are not incident with $w$, orient $e$ the same way as it is oriented in $D$. (ii) For all $x \in X$, orient the edge $xw$ as $x \rightarrow{w}$. (iii) For all $x \in Y$, orient the edge $xw$ as $x \rightarrow{w}$. (iv) For all $x \in U$, orient the edge $xw$ as $w \rightarrow{x}$. (v) For all $x \in W$, orient the edge $xw$ as $x \rightarrow{w}$. (vi) For all $x \in Z$, orient the edge $xw$ as $w \rightarrow{x}$. We complete the proof by showing that $D'$ is a $1$-perfect orientation of $G'$. We do this by directly verifying the defining condition that for every vertex $x$ of $V(G')$, the set $N_{D'}^+(x)$ is a clique in $G'$. 
Note that $X\cup Y\cup U\cup W\cup Z\cup \{w\}\cup R$ is a partition of $V(G')$. We consider seven cases depending on to which part of this partition $x$ belongs. \(1) $x \in X$. In this case, $N_{D'}^{+}(x)=(N_{D}^{+}(x)\setminus \{u\}) \cup \{w\}$. Note that since $(u,v)\in A(D)$ and $D$ is a $1$-perfect orientation of $G$, we have $u\in N^+_D(x)$. Consequently, since $N^+_D(x)$ is a clique in $G$ containing $u$, it contains no vertex from $R\cup Z$, and thus $N_{D'}^{+}(x)=(N_{D}^{+}(x)\setminus \{u\}) \cup \{w\}$ is a clique in $G'$. \(2) $x \in W$. In this case, $v \in N_{D}^{+}(x)$, and a similar reasoning as above shows that $N_{D'}^{+}(x)=(N_{D}^{+}(x) \setminus \{v\}) \cup \{w\}$ is a clique in $G'$. \(3) $x \in Z$. In this case, $N_{D'}^{+}(x)=N_{D}^{+}(x)$ and this set is a clique in $G$ and hence in $G'$. \(4) $x \in Y$. In this case, we have two possibilities, either $u \in N_{D}^{+}(x)$ or not. In the former case, we have $N_{D'}^{+}(x)=(N_{D}^{+}(x) \setminus \{u,v\}) \cup \{w\}$ which is a clique in $G'$, since $N_{D}^{+}(x) $ is a clique in $G$ containing $u$ and $v$, and every neighbor of $w$ in $G'$ is a neighbor of either $u$ or of $v$ in $G$. In the latter case, we have $N_{D'}^{+}(x)=(N_{D}^{+}(x) \setminus \{v\}) \cup \{w\}$, which is again a clique in $G'$ by a similar argument. \(5) $x\in U$. Now, $N_{D'}^{+}(x)=N_{D}^{+}(x) \setminus \{u\}$, which is a clique in $G$ not containing $u$ or $v$, and hence a clique in $G'$. \(6) $x \in R$. Since the edges with endpoints in $R$ have no endpoint in $\{u,v\}$, the edges which have $x$ as an endpoint will maintain the same orientation as in $D$. Therefore, $N_{D'}^{+}(x)=N_{D}^{+}(x)$ is a clique in $G'$. \(7) $x=w$. In this case, we have $N_{D'}^{+}(x)= N_{D}^{+}(v)$, therefore $N_{D'}^{+}(x)$ forms a clique in $G'$. In the study of [$1$-p.o.]{} graphs we may restrict our attention to connected graphs. 
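To illustrate how $1$-perfect orientations extend under the operations of Theorem \[prop:operations\], item 3 (adding a true twin) can be sketched in code as follows. This is an illustrative Python fragment, not part of the paper; the dict-of-sets representation and the function name are our own.

```python
def add_true_twin(adj, out, w, v):
    """Add v as a true twin of w (same closed neighborhood) and extend the
    1-perfect orientation `out` as in the proof: v copies w's out-arcs,
    every in-neighbor of w also points to v, and wv is oriented w -> v."""
    old_nbrs = set(adj[w])
    adj[v] = old_nbrs | {w}
    for x in old_nbrs:
        adj[x].add(v)
    adj[w].add(v)
    out[v] = set(out[w])          # v -> u for every u in N+(w)
    for x in old_nbrs:
        if w in out[x]:           # u -> w gains the companion arc u -> v
            out[x].add(v)
    out[w].add(v)                 # orient the edge wv as w -> v
    return adj, out


# Start from the single edge w-x oriented w -> x and add a true twin v of w.
adj = {'w': {'x'}, 'x': {'w'}}
out = {'w': {'x'}, 'x': set()}
add_true_twin(adj, out, 'w', 'v')
```

In the resulting orientation the out-neighborhood of $w$ is $\{x,v\}$, which is a clique because $v$, as a true twin, is adjacent to $x$.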
It is a natural question whether we may also assume that $G$ is co-connected, that is, that its complement is connected, or, equivalently, that $G$ is not the join of two smaller graphs. The join operation does not generally preserve the class of [$1$-p.o.]{} graphs: the graphs $2K_1$ and $3K_1$ are trivially [$1$-p.o.]{}, but their join, $K_{2,3}$, is not (as can be easily verified; see also Theorem \[thm:non-1-po\]). In the next theorem we characterize when the join of two graphs is $1$-p.o. A graph is said to be [*co-bipartite*]{} if its complement is bipartite. \[prop:co-bipartite-join\] Suppose that a graph $G$ is the join of two graphs $G_1$ and $G_2$. Then, $G$ is $1$-perfectly orientable if and only if one of the following conditions holds: 1. $G_1$ is complete and $G_2$ is [$1$-p.o.]{}, or vice versa. 2. Each of $G_1$ and $G_2$ is a co-bipartite [$1$-p.o.]{} graph. In particular, the class of co-bipartite [$1$-p.o.]{} graphs is closed under join. Suppose first that $G$ is $1$-p.o. Clearly, both $G_1$ and $G_2$ are [$1$-p.o.]{} graphs. If one of $G_1$ or $G_2$ is complete or both are co-bipartite, we are done. So suppose that neither of them is complete and $G_1$, say, is not co-bipartite. Then, $G_1$ contains the complement of an odd cycle, $\overline{C_{2k+1}}$ for some $k\ge 1$, as an induced subgraph. Since $G_2$ is not complete, it contains $2K_1$ as an induced subgraph. Consequently, $G$ contains the join of $\overline{C_{2k+1}}$ and $2K_1$ as an induced subgraph. As this graph is isomorphic to the complement of $C_{2k+1}+K_2$, it is not [$1$-p.o.]{} (see Theorem \[thm:non-1-po\]), and hence neither is $G$, a contradiction. For the converse direction, suppose first that $G_1$ is complete and $G_2$ is [$1$-p.o.]{}, or vice versa. In this case $G$ is [$1$-p.o.]{}, since it can be obtained from a [$1$-p.o.]{} graph by a sequence of universal vertex additions, and Theorem \[prop:operations\] applies.
Suppose now that $G_1$ and $G_2$ are two co-bipartite [$1$-p.o.]{} graphs with bipartitions of their vertex sets into cliques $\{A_1, B_1\}$ and $\{A_2,B_2\}$, respectively (one of the two cliques in each graph can be empty). Fixing a $1$-perfect orientation $D_i$ of each $G_i$ (for $i = 1,2$), we can construct a $1$-perfect orientation, say $D$, of $G = G_1\ast G_2$, as follows. Every edge of $G$ that is an edge of some $G_i$ is oriented as in $D_i$. Orient the remaining edges of the join from $A_1$ to $A_2$, from $B_1$ to $B_2$, from $A_2$ to $B_1$ and from $B_2$ to $A_1$. Let us verify that the out-neighborhood of a vertex $x\in A_1$ with respect to $D$ forms a clique in $G$ (the other cases follow by symmetry). We have $N_D^+(x) = N_{D_1}^+(x)\cup A_2$, and since $N_{D_1}^+(x)$ is a clique in $G_1$, $A_2$ is a clique in $G$, and every vertex of $G_1$ is adjacent in $G$ to every vertex of $A_2$, the set $N_D^+(x)$ is indeed a clique in $G$. This shows that $G$ is [$1$-p.o.]{} Since the class of bipartite graphs is closed under disjoint union, the class of co-bipartite graphs is closed under join. Consequently, the set of co-bipartite [$1$-p.o.]{} graphs is closed under join. $1$-perfectly orientable co-bipartite graphs {#sec:co-bipartite-graphs} ============================================ The behavior of [$1$-p.o.]{} graphs under the join operation motivates the study of [$1$-p.o.]{} co-bipartite graphs. In this section we show that a co-bipartite graph is [$1$-p.o.]{} if and only if it is circular arc. A graph is [*circular arc*]{} if it is the intersection graph of a set of closed arcs on a circle. The class of circular arc graphs forms an important and well studied subclass of [$1$-p.o.]{} graphs; see, e.g., [@MR3159129; @MR2567965].
The equivalence of the classes of [$1$-p.o.]{} graphs and circular arc graphs within co-bipartite graphs will be derived using two ingredients: a necessary condition for the [$1$-p.o.]{} property, which holds in general, and a characterization of co-bipartite circular arc graphs due to Hell and Huang. We say that a chordless cycle $C$ in a graph $G$ is oriented [*cyclically*]{} in an orientation $D$ of $G$ if every vertex of the cycle has exactly one out-neighbor on the cycle (see [@MR2370526; @MR2643278] for results on orientations defined in terms of this property). \[lem:cycles\] In every $1$-perfect orientation $D$ of a [$1$-p.o.]{} graph $G$, every chordless cycle of length at least four is oriented cyclically. Suppose that a chordless cycle $C$ in $G$ is not oriented cyclically in some $1$-perfect orientation $D$ of $G$. Let $C'$ be the orientation of $C$ induced by $D$. By assumption, $C$ contains a vertex $v$ with $d^+_{C'}(v) \neq 1$. Since $\sum_{u\in V(C)}d^+_{C'}(u) = |A(C')| = |E(C)| = |V(C)|$, it is not possible that $d^+_{C'}(u) \le 1$ for all $u\in V(C)$, as this would imply $d^+_{C'}(v) = 0$ and consequently $\sum_{u\in V(C)}d^+_{C'}(u) <|V(C)|$. Thus, $C$ contains a vertex $v$ with $d^+_{C'}(v) = 2$. Since $C$ is of length at least $4$ and chordless, the out-neighborhood of $v$ in $C'$, and hence in $D$, is not a clique in $G$, contradicting the fact that $D$ is a $1$-perfect orientation of $G$. We now describe the characterization of co-bipartite circular arc graphs due to Hell and Huang. Let $G$ be a co-bipartite graph with a bipartition $\{U, U'\}$ of its vertex set into two cliques. An edge of $G$ connecting a vertex from $U$ with a vertex of $U'$ is said to be a *crossing edge* of $G$. A coloring of the crossing edges of $G$ with colors red and blue is said to be *good* (with respect to $\{U, U'\}$) if for every induced $4$-cycle in $G$, the two crossing edges in it are of the opposite color. 
The following characterization of co-bipartite circular arc graphs is a reformulation of [@MR1435657 Corollary 2.3]. \[thm:co-bip-circ-arc\] Let $G$ be a co-bipartite graph with a bipartition $\{U, U'\}$ of its vertex set into two cliques. Then $G$ is a circular arc graph if and only if it has a good coloring with respect to $\{U, U'\}$. \[thm:co-bipartite-circular-arc\] For every co-bipartite graph $G$, the following properties are equivalent: 1. $G$ is $1$-perfectly orientable. 2. $G$ has an orientation in which every induced $4$-cycle is oriented cyclically. 3. $G$ is circular arc. As shown by Skrien [@MR666799], implication $3\Rightarrow 1$ holds for general (not necessarily co-bipartite) graphs. Similarly, the implication $1\Rightarrow 2$ holds in general, as follows from Lemma \[lem:cycles\]. It remains to prove that if $G$ is co-bipartite, then condition 2 implies condition 3. Let $D$ be an orientation of $G$ in which every induced $4$-cycle of $G$ is oriented cyclically. Fix a partition $\{U,U'\}$ of $V(G)$ into two cliques. We will now show that $G$ admits a good coloring (with respect to $\{U,U'\}$), and Theorem \[thm:co-bip-circ-arc\] will imply that $G$ is circular arc. We obtain a good coloring of $G$ as follows: for every crossing edge $e$ of $G$, we color $e$ red if the arc of $D$ corresponding to $e$ goes from $U$ to $U'$, and blue if it goes from $U'$ to $U$. To see that this is indeed a good coloring, let $C$ be an arbitrary induced $4$-cycle of $G$. Since $C$ is oriented cyclically in $D$, exactly one of the two crossing edges of $C$ is oriented from $U$ to $U'$ in $D$. This implies that the two crossing edges of $C$ receive different colors in the above coloring. It follows that the obtained coloring is a good coloring, as claimed.
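The coloring used in the proof is easy to state in code. The following Python fragment is an illustrative sketch and not part of the paper; it colors each crossing arc by the side it leaves, shown here on a cyclically oriented $4$-cycle with cliques $U=\{a,b\}$ and $U'=\{c,d\}$.

```python
def crossing_colors(out, U):
    """Color every crossing edge of a co-bipartite graph with clique side U:
    'red' if the arc leaves U, 'blue' if it enters U.  The orientation is
    given as `out` (vertex -> set of out-neighbors)."""
    colors = {}
    for x, succs in out.items():
        for y in succs:
            if (x in U) != (y in U):  # crossing edge
                colors[frozenset((x, y))] = 'red' if x in U else 'blue'
    return colors


# C4 = a-b-c-d with cliques U = {a, b} and U' = {c, d}, oriented cyclically;
# its two crossing edges receive opposite colors, as the proof requires.
cyclic = {'a': {'b'}, 'b': {'c'}, 'c': {'d'}, 'd': {'a'}}
colors = crossing_colors(cyclic, {'a', 'b'})
```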
Many characterizations of circular arc co-bipartite graphs are known, including a characterization in terms of forbidden induced subgraphs due to Trotter and Moore [@MR0450140] and several (at least five) others, see, e.g., [@MR2567965; @MR3159129]. By Theorem \[thm:co-bipartite-circular-arc\], each of these yields a characterization of [$1$-p.o.]{} co-bipartite graphs. Theorem \[thm:co-bipartite-circular-arc\] can also be seen as providing further characterizations of co-bipartite circular arc graphs. A family of minimal forbidden induced minors {#sec:minors} ============================================ Theorem \[prop:operations\] implies that the class of [$1$-p.o.]{} graphs is closed under vertex deletions and edge contractions. Hence, it is also closed under taking induced minors. Recall that a graph $H$ is said to be an [*induced minor*]{} of a graph $G$ if $H$ can be obtained from $G$ by a series of vertex deletions or edge contractions. Graph classes closed under induced minors include all the minor-closed graph classes, as well as many others (see, e.g., [@MR1415290; @MR2901082; @MR1360111; @MR1109419; @MR0276129]). Since the class of [$1$-p.o.]{} graphs is closed under induced minors, it can be characterized in terms of [*minimal forbidden induced minors*]{}. That is, there exists a unique minimal set of graphs $\tilde{\cal F}$ such that (i) a graph $G$ is [$1$-p.o.]{} if and only if $G$ is [*$\tilde{\cal F}$-induced-minor-free*]{} (that is, no induced minor of $G$ is isomorphic to a member of $\tilde{\cal F}$), and (ii) every proper induced minor of every graph in $\tilde{\cal F}$ is [$1$-p.o.]{} In this section we identify an infinite subfamily ${\cal F}\subseteq \tilde{\cal F}$ of minimal forbidden induced minors for the class of [$1$-p.o.]{} graphs. We start with two preliminary observations. The fact that every circular arc graph is [$1$-p.o.]{} [@MR666799] implies the following. \[prop:powers-of-cycles\] The complement of every odd cycle is $1$-perfectly orientable.
Recall that the [*$k$-th power*]{} of a graph $G$ is the graph with the same vertex set as $G$ in which two distinct vertices are adjacent if and only if their distance in $G$ is at most $k$. It is easy to see (and also follows from the fact that the class of circular arc graphs is closed under taking powers [@MR1206558]) that all powers of cycles are circular arc graphs. Therefore, the fact that the complement of every odd cycle is [$1$-p.o.]{} follows from two facts: (i) that the complement of $C_{3}$ is [$1$-p.o.]{}, and (ii) that for every $k\ge 2$, the complement of the odd cycle $C_{2k+1}$ is isomorphic to a power of a cycle, namely to $C_{2k+1}^{k-1}$. Since every disjoint union of paths is an induced subgraph of a sufficiently large odd cycle, Proposition \[prop:powers-of-cycles\] and Theorem \[prop:operations\] yield the following. \[cor:co-linear-forest\] The complement of every disjoint union of paths is $1$-perfectly orientable. The following theorem describes a set of minimal forbidden induced minors for the class of [$1$-p.o.]{} graphs. \[thm:non-1-po\] Let ${\cal F} = \{F_1,F_2,F_5,\ldots,F_{12}\}\cup {\cal F}_3\cup {\cal F}_4$, where: - graphs $F_1$ and $F_2$ are depicted in Fig. \[fig:1\], - ${\cal F}_3 = \{\overline{C_{2k}}\mid k\ge 3\}$, the set of complements of even cycles of length at least $6$, - ${\cal F}_4 = \{\overline{K_2+C_{2k+1}}\mid k\ge 1\}$, the set of complements of the graphs obtained as the disjoint union of $K_2$ and an odd cycle, and - for $i \in \{5,\ldots, 12\}$, graph $F_i$ is the complement of the graph $G_{i-4}$, depicted in Fig. \[fig:1\]. Then, every graph in the set ${\cal F}$ is a minimal forbidden induced minor for the class of $1$-perfectly orientable graphs. ![Four non-[$1$-p.o.]{} graphs and $8$ complements of non-[$1$-p.o.]{} graphs.
Graphs $F_3$ and $F_4$ are the smallest members of families ${\cal F}_3$ and ${\cal F}_4$, respectively.[]{data-label="fig:1"}](forbidden-i-m){width="90.00000%"} We need to show that each $F\in {\cal F}$ is not [$1$-p.o.]{}, but every proper induced minor of $F$ is. We first show that no graph in ${\cal F}$ is $1$-p.o., and will argue minimality for all $F\in {\cal F}$ in the second part of the proof. First consider the graphs $F_1$ and $F_2$. Since they are both triangle-free, every edge clique cover of $F_i$ (for $i\in \{1,2\}$) contains all edges of $F_i$ and hence has at least $|E(F_i)|>|V(F_i)|$ members. Hence, Corollary \[cor:upper-bound-on-theta\] implies that $F_1$ and $F_2$ are not [$1$-p.o.]{} The family ${\cal F}_3$ consists precisely of complements of even cycles of length at least $6$. In particular, every $F\in {\cal F}_3$ is co-bipartite. By Theorem \[thm:co-bipartite-circular-arc\], $F$ is [$1$-p.o.]{} if and only if $F$ is circular arc. Since the family ${\cal F}_3$ is one of the six infinite families of minimal forbidden induced subgraphs for the class of circular arc co-bipartite graphs [@MR0450140], we infer that $F$ is not $1$-p.o. Now let $F\in {\cal F}_4$, that is, $F = \overline{K_2+C_{2k+1}}$ for some $k\ge 1$. We will prove that $F$ is not $1$-p.o. using Lemma \[lem:cycles\]. Let the vertices of the cycle component of $\overline{F}$ be named $u_1,\ldots, u_{2k+1}$, according to a cyclic order of $C_{2k+1}$. Also, let the two vertices of the $K_2$ component of $\overline{F}$ be named $v_1$ and $v_2$. Suppose that $F$ admits a $1$-perfect orientation $D$. For every two consecutive vertices from the cycle we have an induced $C_4$ given by these two vertices together with $v_1$ and $v_2$. By Lemma \[lem:cycles\], every such $C_4$ must be oriented cyclically. Consider the $C_4$ induced by vertices $\{u_1, u_2, v_1, v_2\}$. 
Without loss of generality we may assume that it is oriented as $v_1 \rightarrow{u_1} \rightarrow{v_2} \rightarrow{u_2} \rightarrow{v_1}$. This determines the orientation of the $C_4$ induced by $\{u_2,u_3,v_1,v_2\}$. Since the edge $\{v_{1},u_2\}$ is oriented as $u_2 \rightarrow{v_1}$, the edge $\{v_{1},u_3\}$ must be oriented as $v_1 \rightarrow{u_3}$. Proceeding along the cycle, we infer that $v_1\rightarrow u_i$ for odd $i$ and $u_i\rightarrow v_1$ for even $i$. However, this implies that $v_1\rightarrow u_1$ and $v_1\rightarrow u_{2k+1}$, contrary to the fact that $D$ is a $1$-perfect orientation of $F$. Therefore, $F$ is not [$1$-p.o.]{} Each of the remaining graphs, $F_5$–$F_{12}$, belongs to the list of minimal forbidden induced subgraphs for the class of circular arc co-bipartite graphs [@MR0450140]. By Theorem \[thm:co-bipartite-circular-arc\], none of these graphs is $1$-p.o. It remains to show minimality, that is, that [*every proper induced minor of every graph in ${\cal F}$ is $1$-p.o.*]{} First consider the graphs $F_1$ and $F_2$. Deleting any vertex of either $F_1$ or $F_2$ results in either a chordal graph or in a unicyclic graph, hence in a [$1$-p.o.]{} graph (cf. Proposition \[prop:unicyclic\]). Contracting any edge of $F_1$ results in a graph that is either chordal, or is obtained from a cycle by adding to it a simplicial vertex, hence in either case a [$1$-p.o.]{} graph. Contracting any edge of $F_2$ results in a graph that can be reduced to a cycle by removing true twins and simplicial vertices, hence this graph is also [$1$-p.o.]{} We are left with graphs that are defined in terms of their complements. To argue minimality for them, it will be convenient to understand the effect of performing the operation of edge contraction on a given graph on its complement. 
It can be seen that if $G$ is the graph obtained from a graph $H$ by contracting an edge $uv$, then $\overline{G}$ is the graph obtained from $\overline{H}$ by identifying a pair of non-adjacent vertices (namely, $u$ and $v$) and making the new vertex adjacent exactly to the common neighbors of $u$ and $v$ in $\overline{H}$. We will refer to this operation as [*co-contracting a non-edge*]{}. Let $F\in {\cal F}_3$, that is, $F = \overline{C_{2k}}$ for some $k\ge 3$. Deleting a vertex from $F$ results in the complement of a path, which is [$1$-p.o.]{} by Corollary \[cor:co-linear-forest\]. Similarly, one can verify that co-contracting a non-edge of $\overline{F}$ results in a disjoint union of paths. Thus, every proper induced minor of $F$ is [$1$-p.o.]{} Let $F\in {\cal F}_4$, that is, $F = \overline{C_{2k+1}+K_2}$ for some $k\ge 1$. Deleting a vertex in the cycle component of $\overline{F}$ from $F$ results in the complement of a disjoint union of paths, which is [$1$-p.o.]{} by Corollary \[cor:co-linear-forest\]. Deleting a vertex in the $K_2$ component of $\overline{F}$ from $F$ results in the join of $K_1$ and the complement of an odd cycle, which is [$1$-p.o.]{} by Theorem \[prop:operations\] and Proposition \[prop:powers-of-cycles\]. Furthermore, co-contracting a non-edge of $\overline{F}$ results in a disjoint union of paths, and Corollary \[cor:co-linear-forest\] applies again. Thus, every proper induced minor of $F$ is [$1$-p.o.]{} We recall that each of the remaining graphs, $F_5$–$F_{12}$, is a minimal forbidden induced subgraph for the class of circular arc co-bipartite graphs. Deleting a vertex from any of them results in a circular arc co-bipartite graph, hence in a [$1$-p.o.]{} graph. Note that $F_{9}$ has $9$ vertices, each of $F_5, F_6, F_7, F_8, F_{10}$ has $10$ vertices, and $F_{11}$ and $F_{12}$ have $12$ vertices. 
Also note that since co-bipartite graphs are closed under edge contractions, in order to show that every graph obtained from one of the graphs $F_5$–$F_{12}$ by contracting an edge is [$1$-p.o.]{}, it suffices to argue that it is circular arc, which (since it is co-bipartite) is equivalent to verifying that it does not contain any of the minimal forbidden induced subgraphs for the class of circular arc co-bipartite graphs [@MR0450140]. The only graphs with at most $10$ vertices on this list are $\overline{C_6}$, $\overline{C_8}$, $\overline{C_{10}}$, and graphs $F_5$–$F_{10}$. The list also contains a unique graph of order $11$; let $G_9$ denote its complement. Let $G\in \{\overline{F_5}, \ldots, \overline{F_{12}}\} = \{G_1,\ldots, G_8\}$. A direct inspection of the possible graphs resulting from co-contracting a non-edge of $G$ shows that every such graph has either at most $10$ vertices, in which case its complement is $\{C_6,C_8,C_{10}, G_1,\ldots, G_6\}$-free, or it has $11$ vertices, in which case its complement either has an isolated vertex and the rest is $\{C_6,C_8,C_{10}, G_1,\ldots, G_6\}$-free, or it is connected, of order $11$, and $\{C_6,C_8,C_{10}, G_1,\ldots, G_6,G_9\}$-free. Thus, in all cases contracting an edge of a graph in $\{F_5, \ldots, F_{12}\}$ results in a circular arc graph, hence in a [$1$-p.o.]{} graph. This completes the proof. Theorem \[thm:non-1-po\] implies that ${\cal F} \subseteq \tilde{\cal F}$, where $\tilde{\cal F}$ is the set of minimal forbidden induced minors for the class of [$1$-p.o.]{} graphs. However, the complete set $\tilde{\cal F}$ is unknown. It is conceivable that one can obtain further graphs in $\tilde{\cal F}$ by computing the minimal elements with respect to the induced minor relation of the list of forbidden induced subgraphs for the class of circular arc co-bipartite graphs due to Trotter and Moore [@MR0450140]. 
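Since the case analysis above relies on repeatedly co-contracting non-edges, the operation is easy to mechanize. The following sketch (the adjacency-dict representation and the function name `co_contract` are ours) implements co-contraction exactly as defined above:

```python
def co_contract(adj, u, v):
    """Co-contract the non-edge {u, v} of the graph given as an adjacency
    dict {vertex: set of neighbours}: identify u and v into a new vertex
    that is adjacent exactly to their common neighbours."""
    assert v not in adj[u], "{u, v} must be a non-edge"
    w = (u, v)                        # label for the merged vertex
    common = adj[u] & adj[v]          # the merged vertex keeps only these
    new = {x: (nbrs - {u, v}) | ({w} if x in common else set())
           for x, nbrs in adj.items() if x not in (u, v)}
    new[w] = set(common)
    return new
```

For instance, co-contracting the non-edge $\{a,c\}$ of the path $a$–$b$–$c$–$d$ yields $K_2+K_1$: the merged vertex is joined to $b$ (their unique common neighbour), while $d$ becomes isolated.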
Besides the three small graphs $F_5,F_6,F_7$ and the family ${\cal F}_3$ of complements of even cycles of length at least $6$, the list contains five other infinite families, the smallest members of which are graphs $F_8,\ldots, F_{12}$, respectively. (In [@MR0450140], the lists represent the complementary property and are denoted by ${\cal T}_i$, ${\cal W}_i$, ${\cal D}_i$, ${\cal M}_i$, and ${\cal N}_i$, respectively.) Determine the set of minimal forbidden induced minors for the class of [$1$-perfectly orientable graphs]{}. $1$-perfectly orientable cographs {#sec:cographs} ================================= Bang-Jensen et al. [@MR1244934] characterized [$1$-p.o.]{} line graphs and  triangle-free graphs, and in Section \[sec:co-bipartite-graphs\] we characterized [$1$-p.o.]{} co-bipartite graphs. We conclude the paper by characterizing [$1$-p.o.]{} cographs, obtaining characterizations both in terms of forbidden induced subgraphs and in terms of structural properties. The class of [*cographs*]{} is defined recursively by stating that $K_1$ is a cograph, the disjoint union of two cographs is a cograph, the join of two cographs is a cograph, and there are no other cographs. It is well known (see, e.g., [@MR1686154]) that cographs can be characterized in terms of forbidden induced subgraphs by a single obstruction, namely the $4$-vertex path $P_4$. \[thm:cograph\] For every cograph $G$, the following conditions are equivalent: 1. $G$ is $1$-perfectly orientable. 2. $G$ is $K_{2,3}$-free. 3. One of the following conditions holds: - $G\cong K_1$. - $G\cong \overline{mK_2}$ for some $m\ge 2$. - $G$ is the disjoint union of two smaller [$1$-p.o.]{} cographs. - $G$ is obtained from a [$1$-p.o.]{} cograph by adding to it a universal vertex. - $G$ is obtained from a [$1$-p.o.]{} cograph by adding to it a true twin. The implication $1\Rightarrow 2$ follows from Theorems \[prop:operations\] and \[thm:non-1-po\]. 
To show the implication $2\Rightarrow 3$, suppose that $G$ is a $K_{2,3}$-free cograph on at least two vertices that is not disconnected and does not have a universal vertex or a pair of true twins. We want to show that $G=\overline{mK_2}$. Since $G$ is not disconnected and $G\neq K_1$, its complement $\overline{G}$ is disconnected. Let $m\ge 2$ denote the number of [*co-components*]{} of $G$ (subgraphs of $G$ induced by the vertex sets of components of $\overline{G}$). If one of the co-components has exactly one vertex, then that vertex is universal in $G$, which is a contradiction. Therefore, each co-component has at least two vertices. The recursive structure of cographs implies that each co-component of $G$ is disconnected. In particular, it has independence number at least $2$. On the other hand, since $G$ is $K_{2,3}$-free, each co-component of $G$ has independence number at most $2$. This implies that each co-component is the disjoint union of two complete graphs. Since $G$ has no true twins, each co-component is isomorphic to $2K_1$, that is, $G\cong \overline{mK_2}$ for some $m\ge 2$, as claimed. Finally, we show the implication $3\Rightarrow 1$. Suppose that $G$ is a cograph such that one of the five conditions in item $3$ holds. An inductive argument shows that $G$ is [$1$-p.o.]{}, using Theorem \[prop:operations\] and the fact that $K_1$ and all graphs of the form $\overline{mK_2}$ are [$1$-p.o.]{} (which follows, e.g., from Corollary \[cor:co-linear-forest\]). Acknowledgement {#acknowledgement .unnumbered} --------------- We are grateful to Aistis Atminas, Marcin J. Kamiński, and Nicolas Trotignon for useful discussions on the topic and to Pavol Hell for helpful comments on an earlier draft. [^1]: This notion is not to be confused with the notion of [*perfectly orientable graphs*]{}, defined as graphs that admit a linear vertex ordering (called a [*perfect order*]{}) $<$ such that $G$ contains no $P_4$ $a,b,c,d$ with $a < b$ and $d < c$.
--- abstract: 'Superluminous supernova (SLSN) lightcurves exhibit a far greater diversity than their regular–luminosity counterparts in terms of rise and decline timescales, peak luminosities and overall shapes. It remains unclear whether this striking variety arises due to a dominant power input mechanism involving many underlying parameters, or due to contributions by different progenitor channels. In this work, we propose that a systematic quantitative study of SLSN lightcurve timescales and shape properties, such as symmetry around peak luminosity, can be used to characterize these enthralling stellar explosions. We find that applying clustering analysis on the properties of model SLSN lightcurves, powered by either a magnetar spin–down or a supernova ejecta–circumstellar interaction mechanism, can yield a distinction between the two, especially in terms of lightcurve symmetry. We show that most events in the observed SLSN sample with well–constrained lightcurves and early detections strongly associate with clusters dominated by circumstellar interaction models. Magnetar spin–down models also show association at a lower degree but have difficulty in reproducing fast–evolving and fully symmetric lightcurves. We believe this is due to the truncated nature of the circumstellar interaction shock energy input as compared to decreasing but continuous power input sources like magnetar spin–down and radioactive $^{56}$Ni decay. Our study demonstrates the importance of clustering analysis in characterizing SLSNe based on high–cadence photometric observations that will be made available in the near future by surveys like [*LSST*]{}, [*ZTF*]{} and [*Pan–STARRS*]{}.' author: - 'E. 
Chatzopoulos' - Richard Tuminello bibliography: - 'refs.bib' title: A SYSTEMATIC STUDY OF SUPERLUMINOUS SUPERNOVA LIGHTCURVE MODELS USING CLUSTERING --- Introduction {#intro} ============ Superluminous supernovae (SLSNe; @2012Sci...337..927G [@2018arXiv181201428G; @2018SSRv..214...59M]) possess a striking diversity in terms of photometric and spectroscopic properties. SLSNe are often divided into two classes based on the presence of hydrogen (H) in their spectra: H–poor (SLSN–I) and H–rich (SLSN–II) events. In terms of photometry, SLSNe are characterized by reaching very high peak luminosities ($\gtrapprox 10^{44}$ erg s$^{-1}$) over timescales ranging from a few days to several months. The overall evolution and shape of SLSN lightcurves (LCs) can significantly vary from one event to another. Some SLSN LCs appear to have a symmetric, bell–like shape around peak luminosity [@2009ApJ...690.1358B; @2011Natur.474..487Q] while others are highly skewed with a fast rise followed by a slow, long–term decline [@2011ApJ...735..106D; @2016ApJ...831..144L]. Most SLSNe appear to be hosted in low–metallicity dwarf galaxies similar to those of long–duration Gamma–ray bursts (LGRBs) [@2011ApJ...727...15N; @2014ApJ...787..138L]. Several power input mechanisms have been proposed to interpret the extreme peak luminosities and diverse observational properties of SLSNe. Most SLSN–II show robust signs of circumstellar interaction with a hydrogen–rich medium in their spectra, indicating that effective conversion of shock heating to luminosity can reproduce their LCs [@2007ApJ...671L..17S; @2013ApJ...773...76C]. SLSN–I, on the other hand, do not show the usual signatures of circumstellar interaction and are often modelled by magneto–rotational energy release due to the spin–down of a newly–born magnetar following a core–collapse supernova (CCSN) explosion [@2010ApJ...717..245K; @2010ApJ...719L.204W; @2013ApJ...770..128I]. 
Nonetheless, the association between power input mechanism and SLSN type is still ambiguous. The magnetar spin–down model is occasionally invoked as an explanation for SLSN–II that exhibit P–Cygni H$\alpha$ line profiles, like SN 2008es. On the other hand, circumstellar interaction cannot be completely ruled out for SLSN–I events because H lines may be hidden due to complicated circumstellar matter geometries [@2018ApJ...856...29M; @2018MNRAS.475.3152K], details of non–local thermal equilibrium line transfer physics in non–homologously expanding shocked, dense regions yet unexplored by numerical radiation transport models [@2013ApJ...773...76C; @2015MNRAS.449.4304D] or, simply, interaction with a H–deficient medium [@2012ApJ...760..154C; @2016ApJ...828...94C; @2016ApJ...829...17S]. A sub–class of SLSNe is found to transition from SLSN–I at early times to SLSN–II of Type IIn at late times, indicating late–time interaction and adding to the complexity of the problem [@2017ApJ...848....6Y]. Breaking the degeneracy between SLSNe powered by magnetar spin–down, circumstellar interaction and other mechanisms will help address a variety of important questions surrounding massive stellar evolution and explosive stellar death: the link between LGRBs and SLSNe, the formation of extremely magnetized stars following CCSN and their effect on the dynamics of the expansion of the supernova (SN) ejecta, the mass–loss history of massive stars in the days to years prior to their explosion and how their environments affect the radiative properties of their explosion, to name a few. 
The advent of automated, wide–field, high–cadence transient surveys like the [*Panoramic Survey Telescope and Rapid Response System; Pan–STARRS*]{} [@2002SPIE.4836..154K], the [*Zwicky Transient Facility; ZTF*]{} [@2019PASP..131a8002B] and, of course, the [*Large Synoptic Survey Telescope (LSST)*]{} [@2008SerAJ.176....1I] will significantly enhance the SLSN discovery rate and equip us with more complete photometric coverage that includes detections shortly after the SN explosion tightly constraining the LCs of these events. This work aims to illustrate how well–sampled LCs can be used to unveil the power input mechanism of SLSNe. This is done by quantitatively characterizing several key properties of SLSN LCs such as rise and decline timescales [@2015MNRAS.452.3869N] and LC symmetry around peak luminosity. Using the power of machine learning and $k$–means clustering analysis we are able to distinguish between groups of LC shape parameters corresponding to different power input mechanisms, and calculate their association with the properties of observed SLSN LCs. Our paper is organized as follows: Section \[obs\] presents the observed SLSN LC sample that we use in this work and introduces the LC shape properties that are utilized in our analysis. Section \[mod\] introduces the SLSN power input models adopted to obtain large grids of semi–analytic LCs across the associated parameter spaces. Section \[cluster\] introduces the $k$–means clustering analysis method that we employ to characterize observed and model SLSN LCs and Section \[results\] details the results of this analysis. Finally, Section \[disc\] summarizes our discussion. Observed SLSN Lightcurve Sample {#obs} =============================== We use the [*Open Supernova Catalog*]{} (OSC; @2017ApJ...835...64G) to access publicly available photometric data on a sample of 126 events that are spectroscopically classified as SLSN–I (68% of the sample) or SLSN–II (32% of the sample). 
For events with available redshift measurements, we compute pseudo–bolometric LCs using the [*SuperBol*]{}[^1] code [@2018RNAAS...2d.230N]. [*SuperBol*]{} is a user–friendly [*Python*]{} software package that uses the available observed fluxes in different filters to fit blackbodies to the Spectral Energy Distribution (SED) of a SN. The resulting pseudo–bolometric SN LCs can also be corrected for time dilation, distance and converted to the rest frame (K–correction). Using extrapolation techniques, missing near–infrared (NIR) and ultraviolet (UV) flux can also be accounted for. Subsequently, all rest–frame LCs are translated in time so that $t =$ 0 is coincident with the time corresponding to peak luminosity ($t_{\rm 0} = t_{\rm max}$), and scaled by the peak luminosity ($L_{\rm max}$). For the purposes of our study, we select a sub–sample of SLSNe defined by rest–frame LCs with near–complete temporal photometric coverage, that we define as including observed data in the range $L_{\rm max}/{\rm e}<L(t)<L_{\rm max}$ (or $1/{\rm e} < L(t) < 1$ in the scaled form). Thus, we only focus on SLSN LCs with observed evolution within one ${\rm e}$–folding timescale from the peak luminosity, ensuring that our analysis relies only on real data and not approximate, often model–based, extrapolations to explosion time (see Section \[lcshape\]). In this regard, our sample selection criterion for LC coverage is similar to that used in (@2015MNRAS.452.3869N; hereafter referred to as N15) but our SLSN sample is larger than their “gold” sample by 8 events due to our inclusion of SLSN–II events and the availability of more SLSN discoveries since their publication. This process leaves us with a reduced sample of 25 SLSNe with well–covered LCs: 21 SLSN–I and 4 SLSN–II events. Table \[T1\] presents the details of the SLSN sample used in our analysis including the photometric band with the longest (in time) LC coverage that was used in generating their pseudo–bolometric LC. 
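As a rough illustration of this SED–fitting step (this is our own minimal grid–search sketch, not the actual [*SuperBol*]{} implementation; the function names are ours), a blackbody temperature and normalization can be recovered from a handful of broadband fluxes, with the normalization absorbing $(R/d)^2$ and the physical constants:

```python
import numpy as np

H_OVER_K = 6.626e-27 / 1.381e-16          # h / k_B in cgs (units of K s)

def bb_shape(nu, T):
    """Blackbody spectral shape in frequency: nu^3 / (exp(h nu / k T) - 1)."""
    return nu**3 / np.expm1(H_OVER_K * nu / T)

def fit_blackbody(nu, flux, T_grid=np.linspace(3e3, 5e4, 2000)):
    """Least-squares blackbody fit to broadband fluxes; for each trial
    temperature the optimal amplitude is solved analytically, and the
    temperature minimizing the residual is returned."""
    best = (np.inf, None, None)
    for T in T_grid:
        s = bb_shape(nu, T)
        amp = np.dot(flux, s) / np.dot(s, s)   # linear least-squares amplitude
        resid = np.sum((flux - amp * s) ** 2)
        if resid < best[0]:
            best = (resid, T, amp)
    return best[1], best[2]                    # best-fit (T, amplitude)
```

Integrating the fitted blackbody over frequency then yields the bolometric luminosity at each epoch; a real pipeline would additionally handle filter transmission curves, K–corrections and extinction.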
[*Quantitative properties of SLSN LC shapes*]{} {#lcshape} ------------------------------------------------ In order to quantitatively constrain the shapes of SLSN LCs, we define the following scaled luminosity thresholds: - [Primary luminosity threshold: $L_{\rm 1} = 1.0/{\rm e}$ or 36.79% of the peak luminosity.]{} - [Secondary luminosity threshold: $L_{\rm 2} = 1.0/(0.5{\rm e})$ or 73.58% of the peak luminosity.]{} - [Tertiary luminosity threshold: $L_{\rm 3} = 1.0/(0.4{\rm e})$ or 91.97% of the peak luminosity.]{} At each luminosity threshold we can compute a “rise–time” to peak luminosity and a “decline–time” from peak. As such, we accordingly define the primary, secondary and tertiary rise ($tr_{\rm 1}$, $tr_{\rm 2}$, $tr_{\rm 3}$) and decline ($td_{\rm 1}$, $td_{\rm 2}$, $td_{\rm 3}$) timescales. It is evident that $t[d,r]_{\rm 3} < t[d,r]_{\rm 2} < t[d,r]_{\rm 1}$ and that all of the SLSNe in our selected LC sample have observations that include these timescales. We note that our choice for the primary luminosity threshold and corresponding rise and decline timescales is the same as the one used in N15 to study how closely these timescales correlate with different power input models. Next, for the sake of quantifying how symmetric a LC is around peak luminosity, we define three corresponding “LC symmetry” parameters: $s_{\rm 1,2,3} = tr_{\rm 1,2,3}/td_{\rm 1,2,3}$. The closer these parameters are to unity, the more symmetric the LC is at the corresponding luminosity threshold. Obviously, to consider a LC as “fully symmetric” all of the three LC symmetry parameters need to be close to unity. For the purposes of this study we define a symmetric LC as one that satisfies the criterion $0.9 < s_{\rm 1,2,3} < 1.1$. For the remainder of this paper we refer to the nine ($tr_{\rm 1,2,3}$, $td_{\rm 1,2,3}$, $s_{\rm 1,2,3}$) LC parameters as “LC shape parameters”. 
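These definitions translate directly into code. The following minimal [*Python*]{} sketch (function and variable names are ours, not part of any released pipeline) measures the nine LC shape parameters from a densely sampled, peak–scaled LC by linear interpolation, assuming the LC is monotonic on each side of the peak:

```python
import numpy as np

E = np.e

def lc_shape_parameters(t, L):
    """Rise/decline timescales and symmetry ratios at the thresholds
    L1 = 1/e, L2 = 1/(0.5 e) and L3 = 1/(0.4 e) of a densely sampled LC."""
    t = np.asarray(t, float)
    L = np.asarray(L, float) / np.max(L)      # scale the peak to 1
    t = t - t[np.argmax(L)]                   # t = 0 at peak luminosity
    out = {}
    for i, thr in ((1, 1 / E), (2, 1 / (0.5 * E)), (3, 1 / (0.4 * E))):
        rise_t, rise_L = t[t <= 0], L[t <= 0]             # L increases with t here
        dec_t, dec_L = t[t >= 0][::-1], L[t >= 0][::-1]   # reversed so L increases
        tr = -np.interp(thr, rise_L, rise_t)              # rise timescale
        td = np.interp(thr, dec_L, dec_t)                 # decline timescale
        out[f"tr{i}"], out[f"td{i}"], out[f"s{i}"] = tr, td, tr / td
    return out
```

For a perfectly symmetric (e.g. Gaussian) LC this returns $s_{\rm 1,2,3} = 1$; for sparsely sampled data the interpolation step should be replaced by a smooth fit to the photometry.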
[lccccccccccccc]{} & & & [*SLSN–I*]{} & & & & & & & & &\ PTF09cnd & @2011Natur.474..487Q & 0.258 & UBgRi & 29.5 & 56.3 & 0.52 & 18.9 & 26.9 & 0.7 & 10.6 & 12.9 & 0.82\ SN2011kg & @2013ApJ...770..128I & 0.192 & UBgrizJ & 20.5 & 30.0 & 0.68 & 12.5 & 15.9 & 0.79 & 6.9 & 7.9 & 0.88\ SN2010md & @2013ApJ...770..128I & 0.098 & UBgriz & 30.4 & 31.9 & 0.95 & 16.1 & 16.6 & 0.97 & 8.4 & 8.4 & 1.0\ SN2213–1745 & @2012Natur.491..228C & 2.046 & g$^{\prime}$r$^{\prime}$i$^{\prime}$ & 10.4 & 25.5 & 0.41 & 6.7 & 8.6 & 0.78 & 3.7 & 4.3 & 0.87\ PTF09atu & @2011Natur.474..487Q & 0.501 & gRi & 48.8 & 50.9 & 0.96 & 29.9 & 30.2 & 0.99 & 16.4 & 16.0 & 1.02\ iPTF13ajg & @2014ApJ...797...24V & 0.740 & uBgR$_{\rm s}$iz & 21.9 & 28.8 & 0.76 & 14.3 & 16.4 & 0.87 & 8.0 & 8.6 & 0.93\ PS1–10pm & @2015MNRAS.448.1206M & 1.206 & griz & 27.9 & 25.4 & 1.1 & 14.9 & 15.0 & 0.99 & 7.9 & 7.9 & 1.0\ PS1–14bj & @2016ApJ...831..144L & 0.522 & grizJ & 81.6 & 138.2 & 0.59 & 49.2 & 64.9 & 0.76 & 27.2 & 32.4 & 0.84\ SN2013dg & @2014MNRAS.444.2096N & 0.265 & griz & 15.6 & 29.7 & 0.52 & 10.4 & 14.0 & 0.74 & 5.9 & 6.8 & 0.87\ iPTF13ehe & @2015ApJ...814..108Y [@2017ApJ...848....6Y] & 0.343 & gri & 53.4 & 62.1 & 0.86 & 32.2 & 35.4 & 0.91 & 18.1 & 18.1 & 1.0\ LSQ14mo & @2015ApJ...815L..10L & 0.253 & Ugri & 16.2 & 25.3 & 0.64 & 10.9 & 14.0 & 0.78 & 6.2 & 7.1 & 0.87\ PS1–10bzj & @2013ApJ...771...97L & 0.650 & griz & 14.6 & 22.5 & 0.65 & 10.3 & 13.8 & 0.75 & 6.1 & 7.2 & 0.84\ DES14X3taz & @2016ApJ...818L...8S & 0.608 & griz & 31.9 & 41.8 & 0.76 & 19.9 & 23.0 & 0.87 & 11.0 & 11.7 & 0.94\ LSQ14bdq & @2015ApJ...807L..18N & 0.345 & griz & 54.6 & 90.2 & 0.61 & 37.1 & 48.8 & 0.76 & 21.7 & 24.4 & 0.89\ SNLS 07D2bv & @2013ApJ...779...98H & 1.500 & griz & 18.9 & 17.7 & 1.07 & 12.5 & 12.8 & 0.98 & 7.1 & 7.0 & 1.01\ SNLS 06D4eu & @2013ApJ...779...98H & 1.588 & griz & 15.0 & 17.6 & 0.85 & 9.4 & 10.6 & 0.89 & 5.3 & 5.7 & 0.92\ PTF12dam & @2018ApJ...860..100D & 0.107 & UBgVrizJHK & 46.2 & 75.0 & 0.62 & 28.8 & 37.5 & 0.77 & 16.6 & 18.3 
& 0.91\ SN2011ke & @2018ApJ...860..100D & 0.143 & UBgVriz & 22.1 & 26.6 & 0.83 & 12.3 & 13.8 & 0.97 & 6.8 & 7.0 & 0.97\ PTF12gty & @2018ApJ...860..100D & 0.177 & gri & 46.4 & 65.9 & 0.70 & 24.9 & 27.0 & 0.92 & 14.0 & 15.2 & 0.92\ PS1–11ap & @2018ApJ...852...81L & 0.524 & grizy & 26.7 & 52.5 & 0.51 & 18.5 & 26.3 & 0.71 & 11.0 & 12.9 & 0.85\ SCP 06F6 & @2009ApJ...690.1358B & 1.189 & iz & 31.8 & 32.7 & 0.97 & 19.5 & 19.5 & 1.0 & 10.6 & 10.4 & 1.02\ & & & [*SLSN–II*]{} & & & & & & & & &\ SN2006gy & @2007ApJ...666.1116S & 0.019 & BVR & 41.0 & 54.3 & 0.76 & 24.4 & 27.8 & 0.88 & 13.3 & 14.1 & 0.94\ CSS121015:004244+132827 & @2014MNRAS.441..289B & 0.287 & UBVRGI & 20.3 & 30.9 & 0.66 & 12.5 & 15.2 & 0.82 & 7.0 & 7.6 & 0.92\ SN2016jhn & @2018arXiv180108240M & 1.965 & GI2zY & 12.4 & 27.0 & 0.46 & 10.3 & 20.7 & 0.5 & 6.3 & 10.6 & 0.6\ SDSSII SN2538 &@2018PASP..130f4002S & 0.530 & u$^{\prime}$g$^{\prime}$r$^{\prime}$i$^{\prime}$z$^{\prime}$ & 31.6 & 37.8 & 0.84 & 19.0 & 19.2 & 0.99 & 10.0 & 10.0 & 1.0\ [lccccc|cccccccc]{} &&& SLSN–I & & & & & SLSN–II & &\ $tr_{\rm 1}$ & 31.6 & 27.9 & 17.3 & 81.6 & 10.4 & 26.3 & 25.9 & 10.9 & 41.0 & 12.4\ $td_{\rm 1}$ & 45.1 & 31.9 & 28.5 & 138.2 & 17.6 & 37.5 & 34.3 & 10.4 & 54.3 & 27.0\ $s_{\rm 1}$ & 0.74 & 0.70 & 0.19 & 1.10 & 0.41 & 0.68 & 0.71 & 0.14 & 0.84 & 0.46\ $tr_{\rm 2}$ & 19.5 & 16.1 & 10.5 & 49.2 & 6.7 & 16.6 & 15.8 & 5.6 & 24.4 & 10.3\ $td_{\rm 2}$& 23.4 & 16.6 & 13.6 & 64.9 & 8.6 & 20.7 & 19.9 & 4.5 & 27.8 & 15.3\ $s_{\rm 2}$ & 0.85 & 0.87 & 0.10 & 1.00 & 0.70 & 0.80 & 0.85 & 0.18 & 0.99 & 0.50\ $tr_{\rm 3}$ & 10.9 & 8.4 & 5.9 & 27.2 & 3.7 & 9.1 & 8.5 & 2.8 & 13.3 & 6.2\ $td_{\rm 3}$ & 11.9 & 8.6 & 6.7 & 32.4 & 4.3 & 10.6 & 10.3 & 2.3 & 14.1 & 7.6\ $s_{\rm 3}$ & 0.93 & 0.92 & 0.06 & 1.02 & 0.82 & 0.86 & 0.93 & 0.16 & 1.00 & 0.60\ We have developed a [*Python*]{} script that fits a high–degree polynomial to the scaled observed LCs of the SLSN in our sample. 
This provides interpolation between missing photometric datapoints and an accurate measurement of the LC shape parameters discussed above. An example of such a fit is shown in Figure \[Fig:06gypoly\] for SN2006gy, arguably one of the most well–observed SLSN–II of Type IIn [@2007ApJ...666.1116S]. In this figure, the light blue horizontal lines show the three luminosity thresholds that were introduced earlier. Based on these thresholds, we find $tr_{\rm 1} =$ 41.0 days and $td_{\rm 1} =$ 54.3 days for this SN, implying primary symmetry, $s_{\rm 1} =$ 0.76. The rest of the LC shape parameters for SN2006gy are given in Table \[T1\]. Table \[T2\] lists the main LC shape statistical properties of the observed SLSN–I and SLSN–II in our sample. The SLSN–II sample only includes 4 events, preventing us from performing an accurate statistical comparison against the SLSN–I sample to look for potential systematic differences in the two distributions. Our sample overlaps with that presented in Table 3 of N15 for 11 SLSNe: SN2011ke, SN2013dg, LSQ14mo, LSQ14bdq, PTF12dam, CSS121015:004244+132827, PS1–11ap, SCP 06F6, PTF09cnd, PS1–10bzj and iPTF13ajg. This is due to the fact that for the purposes of our study we decided to include only events with real detections shortly after the explosion and a good coverage of the LC in order to tightly constrain their LC shape parameters. N15, on the other hand, opted to use polynomial extrapolation to earlier times for some of the SLSNe in their sample in order to obtain estimates for $tr_{\rm 1}$ and $td_{\rm 1}$. For objects where this extrapolation is done only by a few days this may not be a bad approximation; however, for the LCs of cases like SN2007bi [@2009Natur.462..624G], SN2005ap [@2007ApJ...668L..99Q], and PS1-10ky [@2011ApJ...743..114C], $tr_{\rm 1}$ is poorly constrained using this method. 
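A simplified stand–in for this measurement step (our own sketch, not the actual script) fits a polynomial with `numpy.polyfit` and reads the threshold crossings off a dense evaluation of the fit:

```python
import numpy as np

def measure_from_polyfit(t_obs, L_obs, deg=4, thr=1.0 / np.e):
    """Fit a polynomial to a sparsely sampled, peak-scaled LC and read the
    primary rise and decline timescales off the dense fitted curve."""
    coeffs = np.polyfit(t_obs, L_obs, deg)
    t = np.linspace(min(t_obs), max(t_obs), 5001)
    L = np.polyval(coeffs, t)
    ipk = np.argmax(L)
    i_rise = np.argmin(np.abs(L[: ipk + 1] - thr))    # threshold crossing, rise side
    i_dec = ipk + np.argmin(np.abs(L[ipk:] - thr))    # threshold crossing, decline side
    return t[ipk] - t[i_rise], t[i_dec] - t[ipk]
```

The polynomial degree must be chosen with care: too low a degree biases the peak location, while too high a degree oscillates between sparse datapoints; the same caveat applies to extrapolating such fits beyond the first detection.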
For the 11 events that are common between our sample and that of N15, we calculate the mean value of $tr_{\rm 1}$ to be 27.2 days versus 25.7 days in their case, and the mean value of $td_{\rm 1}$ to be 42.8 days compared to 51.6 days in their case. While our results are consistent in terms of $tr_{\rm 1}$, the discrepancy observed in $td_{\rm 1}$ could be due to a variety of reasons including different combinations of filters used to calculate the rest–frame pseudo–bolometric LC of each event. In our work, we have used all available filters with more than 2 data points for each event to construct LCs using [*SuperBol*]{} as described earlier. We caution that more accurate consideration for near–IR and IR fluxes may lead to flattening of the true bolometric LC at late times and therefore longer primary decline timescales. We note that when comparing the mean $tr_{\rm 1}$ and $td_{\rm 1}$ values of our entire sample ($tr_{\rm 1} =$ 30.8 days, $td_{\rm 1} =$ 43.9 days from Table \[T2\]) against those of the full SLSN sample of N15 (their Table 3; $tr_{\rm 1} =$ 22.9 days, $td_{\rm 1} =$ 46.4 days), the agreement is somewhat better, within uncertainties. We also derive a linear fit for the observed $tr_{\rm 1}$ and $td_{\rm 1}$ values of the form: $$td_{\rm 1} = \gamma_{\rm 0} + \gamma_{\rm 1} \times tr_{\rm 1}\label{Eq1},$$ where $\gamma_{\rm 0} =$ -1.962 and $\gamma_{\rm 1} =$ 1.489 (see also Figure \[Fig:tr1td1fit\]). In contrast, N15 derive a steeper correlation for their “gold” SLSN sample with $\gamma_{\rm 0,N15} =$ -0.10 and $\gamma_{\rm 1,N15} =$ 1.96. An investigation of Table \[T1\] reveals yet another interesting property of our observed SLSN sample: five SLSN–I events (SN2010md, PTF09atu, PS1–10pm, SNLS 07D2bv and SCP 06F6) or, equivalently, 23.81% of the entire SLSN–I sample have fully symmetric LCs around peak luminosity, following the criterion we established earlier for full LC symmetry ($0.9 < s_{\rm 1,2,3} < 1.1$). 
This can be said with more certainty for SN2010md and PTF09atu (with redshifts 0.098 and 0.501, respectively) as compared to the other three events with large redshifts ($>$ 1), because for the latter the observed bands correspond to near–UV fluxes in the rest frame. Bias toward UV fluxes may correspond to a faster post–maximum decline rate and thus steeper, more symmetric LCs. Nevertheless, we have attempted to account for this effect by making use of approximate extrapolations to the IR flux by using the techniques available in [*SuperBol*]{}. The upper left panel of Figure \[Fig:symmLCs\] shows two examples of SLSNe with “fully–symmetric” LCs. Given that symmetric LCs are present in about a quarter of our SLSN–I sample, a considerable fraction of LC models corresponding to the proposed power input mechanisms must be able to reproduce this observation. [*This raises the question of whether LC symmetry is a property shared amongst all the proposed power input mechanisms for different combinations of model parameters or is uniquely tied to one power input mechanism. In the latter case, we can use photometry alone to characterize the nature of SLSNe*]{}. Lastly, another LC shape property that will be interesting to constrain with future, high–cadence photometric follow–up of SLSNe would be the convexity (second derivative) of the bolometric LC during the rise to peak luminosity [@2017ApJ...851L..14W]. Given the low temporal resolution of the observed LCs in our sample, we opt not to provide estimates of the percentages of concave–up and concave–down LCs, yet we briefly discuss the predictions for these parameters coming from semi–analytical models in the following section. 
SLSN Power Input Models {#mod} ======================= A number of models have been proposed to explain both the unprecedented peak luminosities and, more importantly, the striking diversity in the observed properties of SLSNe, both photometrically (LC timescales and shapes) and spectroscopically (SLSN–I versus SLSN–II class events). The three most commonly cited SLSN power input mechanisms are the radioactive decay of several solar masses of $^{56}$Ni produced in a full–fledged Pair–Instability Supernova explosion (PISN; @2009Natur.462..624G [@2012ApJ...748...42C; @2015ApJ...799...18C]), the magneto–rotational energy release from the spin–down of a newly born magnetar following a core–collapse SN (CCSN) [@2010ApJ...717..245K; @2010ApJ...719L.204W] and the interaction between SN ejecta and massive, dense circumstellar shells ejected by the progenitor star prior to the explosion [@2007ApJ...671L..17S; @2008ApJ...686..467S; @2016ApJ...828...94C; @2017ApJ...851L..14W]. We have decided to leave the PISN model outside of our analysis because of several reasons that make it unsuitable for contemporary SLSNe. First, given that the known hosts of SLSNe have metallicities $Z >$ 0.1 [@2013ApJ...771...97L; @2014ApJ...787..138L], very massive stars formed in these environments are likely to suffer strong radiatively–driven mass–loss, preventing them from forming the massive carbon–oxygen cores ($\gtrapprox$ 40–60 $M_{\odot}$, depending on Zero Age Main Sequence rotation rate @2012ApJ...748...42C) required to encounter the pair instability. Second, the majority of PISN models do not yield superluminous LCs. Yet even many of the PISN superluminous LCs require total SN ejecta masses that are comparable to, or in some cases smaller than, the predicted $^{56}$Ni mass needed to explain the high peak luminosity [@2013ApJ...773...76C]. 
Finally, while radiation transport models of PISNe can reproduce superluminous LCs and provide good fits to the LCs of some SLSNe [@2009Natur.462..624G; @2017ApJ...846..100G], the model spectra are too red compared to the observed SLSN spectra at contemporaneous epochs [@2013MNRAS.428.3227D; @2015ApJ...799...18C]. Full–fledged PISNe may, however, still be at play in lower–metallicity environments and in massive, Population III primordial stars. For an alternative perspective on the viability of low–redshift full–fledged PISNe we refer the reader to the literature. We add that a model that is recently gaining popularity is energy input by fallback accretion onto a newly–formed black hole following core collapse [@2013ApJ...772...30D]. One caveat of this model is that, in most cases, unrealistically large accretion masses are needed in order to fit the observed LCs of SLSNe given a fiducial choice for the energy conversion efficiency [@2018ApJ...867..113M]. While the fallback accretion model is a very interesting suggestion that may be relevant to a small fraction of SLSNe, we opt to exclude it from our model LC shape analysis at least until it is further investigated in the literature. This leaves us with the two main channels to power SLSNe most often discussed today: the magnetar spin–down and the circumstellar interaction model. Hereafter, we refer to the magnetar spin–down model as “MAG” and to the SN ejecta–circumstellar interaction model as “CSM”. For both the MAG and the CSM model, we adopt the semi–analytic formalism presented in [@2012ApJ...746..121C; @2013ApJ...773...76C] (hereafter C12, C13) and based on the seminal works of [@1980ApJ...237..541A; @1982ApJ...253..785A] on modeling the LCs of Type Ia and Type II SNe. 
While these models invoke many simplifying assumptions (a centrally concentrated input source in terms of energy density, homologous expansion of the SN ejecta, and constant Thomson scattering opacity for the SN ejecta, to name a few), they remain a powerful tool to study the LC shapes of SNe assuming different power inputs because of their ability to provide reasonable estimates of the associated physical parameters when fit to observed data. In addition, these semi–analytic models are numerically inexpensive to compute, allowing us to generate large grids of LC models throughout the associated multi–dimensional parameter space. As such, they remain a popular SN LC modeling tool, with a few codes that have been made publicly available to compute them such as [*TigerFit*]{} [@2017ApJ...851L..14W] and [*MOSFiT*]{} [@2018ApJS..236....6G]. We caution, however, that comparisons against rigorous, numerical radiation transport models have shown that semi–analytic SLSN LC models have their limitations, especially in regimes where the SN expansion is not homologous (for example due to circumstellar interaction) and due to the assumption of constant opacity in the SN ejecta and constant diffusion timescale [@2013MNRAS.428.1020M; @2018arXiv181206522K]. For this reason, we include some analysis of the LC shape properties of numerically–computed SLSN LCs that are available in the literature for both the MAG and the CSM model. 
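To make the structure of these semi–analytic models concrete, the sketch below implements the standard one–zone diffusion solution with a magnetar spin–down input $L(t)=(E_{\rm p}/t_{\rm p})/(1+t/t_{\rm p})^2$; this is our own minimal version in arbitrary units, not the C12/C13 code, and it omits the $^{56}$Ni and CSM input terms:

```python
import numpy as np

def diffusion_lc(t, L_in, t_d):
    """One-zone (Arnett-type) diffusion solution for a central power input:
    L(t) = exp(-x^2) * integral_0^x L_in(t') 2 x' exp(x'^2) dx', x = t/t_d,
    where t_d is the effective diffusion timescale (same units as t)."""
    x = t / t_d
    f = L_in(t) * 2.0 * x * np.exp(x**2)
    steps = 0.5 * (f[1:] + f[:-1]) * np.diff(x)      # trapezoidal increments
    integral = np.concatenate(([0.0], np.cumsum(steps)))
    return np.exp(-x**2) * integral

def magnetar_input(E_p, t_p):
    """Magnetar spin-down power: L(t) = (E_p / t_p) / (1 + t / t_p)^2."""
    return lambda t: (E_p / t_p) / (1.0 + t / t_p) ** 2
```

At maximum light the output luminosity equals the instantaneous input (“Arnett's law”), which provides a quick consistency check on the integration; a truncated input, as in the CSM case, can be mimicked by multiplying `L_in` by a step function that switches off at the shock termination times.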
[*The SN–ejecta circumstellar interaction model (CSM)*]{} {#csi} --------------------------------------------------------- Massive stars can suffer significant mass–loss episodes, especially during the late stages of their evolution, due to a variety of mechanisms: strong, super–Eddington winds during a Luminous Blue Variable (LBV) stage similar to that of $\eta$ Carinae [@2007ApJ...671L..17S; @2011MNRAS.415..773S; @2018arXiv180910187J; @2018MNRAS.480.1466S], gravity–wave–driven mass loss excited during vigorous Si and O shell burning [@2012MNRAS.423L..92Q; @2014ApJ...780...96S; @2017MNRAS.470.1642F], binary interactions [@1994ApJ...429..300W], or a softer version of a PISN that does not lead to complete disruption of the progenitor star (Pulsational Pair–Instability or PPISN; @2007Natur.450..390W [@2012ApJ...760..154C; @2017ApJ...836..244W]). PPISNe originate from less massive progenitors than full–fledged PISNe and can thus occur in the nearby Universe, offering a channel to produce a sequence of SLSN–like transients from the same progenitor, as successively ejected shells can collide with each other before the final CCSN takes place [@2016ApJ...828...94C; @2017ApJ...836..244W; @2018NatAs.tmp..125L]. Taken together, observational evidence and theoretical modeling suggest that the environments around massive stars can be very complicated, with diverse geometries (circumstellar (CS) spherical or bipolar shells, disks, or clumps) and, in some cases, sufficiently dense and at the right distance from the progenitor star that a violent interaction is imminent following the SN explosion. 
This SN ejecta–circumstellar matter interaction (CSI) leads to the formation of forward and reverse shocks and the efficient conversion of kinetic energy into luminosity [@1994ApJ...420..268C; @2041-8205-729-1-L6], which can produce superluminous transients with immense diversity in their LC shapes and perhaps their spectra [@2012ApJ...747..118M; @2013MNRAS.430.1402M; @2016MNRAS.tmp..117D; @2018MNRAS.475.3152K]. C12 combined the self–similar CSI solutions presented by @1994ApJ...420..268C with the @1980ApJ...237..541A [@1982ApJ...253..785A] LC modeling formalism to compute approximate, semi–analytic CSM models that were then successfully fit to the LCs of several SLSN–I and SLSN–II events in C13. Given a SN explosion energy ($E_{\rm SN}$), SN ejecta mass ($M_{\rm ej}$), the index of the outer (power–law) density profile of the SN ejecta ($n$, related to the progenitor radius), the distance of the CS shell ($R_{\rm CS}$), the mass of the CS shell ($M_{\rm CS}$), the (power–law) density profile of the CS shell ($s$), and the progenitor star mass–loss rate ($\dot{M}$), a semi–analytic CSM model LC can be computed. The energy input originates from the efficient conversion of the kinetic energy of both the forward and the reverse shock to luminosity. As such, forward shock energy input is terminated when the shock breaks out into the optically thin CSM, while reverse shock input is terminated once it sweeps up the bulk of the SN ejecta. 
[*This is a property unique to the CSM model and not present in other, continuous heating sources such as radioactive decay of $^{56}$Ni and magnetar spin–down input: during CSI, energy input terminates abruptly, thus affecting the shape of the LC in a way that can yield a faster decline in luminosity at late times.*]{} While the CSM model can naturally explain the observed diversity of SLSN LCs and is consistent with observations of narrow emission lines in the spectra of SLSN–II events of the IIn class, it has been challenged as a viable explanation for SLSN–I due to the lack of spectroscopic signatures associated with interaction (@2013ApJ...770..128I, N15). There is, however, a “hybrid” class of SLSNe that transition from SLSN–I to SLSN–II at late times, indicating possible interaction with H–poor material early on before the SN ejecta reach the ejected H envelope and interact with it, producing Balmer emission lines [@2015ApJ...814..108Y]. Another concern for the CSM model is the necessity to include many parameters in the model, which can lead to overfitting of observed data and to parameter degeneracy issues [@2013MNRAS.428.1020M]. Detailed radiation hydrodynamics and radiation transport modeling of the CSI process across the relevant parameter space, including cases of H–poor CSI, is still needed in order to resolve whether SLSN–I can be powered by this mechanism. [*The magnetar spin–down model (MAG)*]{} {#mag} ---------------------------------------- The spin–down of a newly born magnetar following a CCSN can release magneto–rotational energy that, if efficiently thermalized in the expanding SN ejecta, can produce a superluminous display [@2010ApJ...717..245K; @2010ApJ...719L.204W]. Given a magnetar with a dipole magnetic field $B_{\rm 14, mag}$ in units of $10^{14}$ G and an initial rotation period $P_{\rm mag}$ in units of 1 ms, the associated SN LC can be computed by making use of Equation 13 of C12. 
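The power input entering Equation 13 of C12 is the familiar vacuum–dipole spin–down luminosity, $L_{\rm p}(t) = (E_{\rm p}/t_{\rm p})\,(1+t/t_{\rm p})^{-2}$. A sketch follows; the fiducial neutron–star moment of inertia and radius are our own assumptions, and C12's exact prefactors may differ.

```python
import numpy as np

I_NS = 1.3e45    # NS moment of inertia [g cm^2] (fiducial assumption)
R_NS = 1.0e6     # NS radius [cm] (fiducial assumption)
C = 2.998e10     # speed of light [cm/s]

def magnetar_power(t_sec, P_ms, B14):
    """Vacuum-dipole spin-down luminosity L_p(t) = (E_p/t_p) / (1 + t/t_p)^2
    for an initial spin period P_ms [ms] and dipole field B14 [10^14 G]."""
    omega0 = 2.0 * np.pi / (P_ms * 1.0e-3)     # initial angular frequency [rad/s]
    E_p = 0.5 * I_NS * omega0**2               # initial rotational energy [erg]
    B = B14 * 1.0e14
    # order-of-magnitude dipole spin-down timescale [s]
    t_p = 6.0 * I_NS * C**3 / (B**2 * R_NS**6 * omega0**2)
    return (E_p / t_p) / (1.0 + t_sec / t_p)**2
```

For $P_{\rm mag} = 1$ ms and $B_{\rm 14,mag} = 1$ this gives $E_{\rm p} \sim 2.6 \times 10^{52}$ erg and $t_{\rm p}$ of order a week, in line with the commonly quoted magnetar scalings.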
This model LC can also provide an estimate of the SN ejecta mass, $M_{\rm ej}$, which is controlled by the diffusion timescale (Equation 10 of C12). Numerical radiation transport simulations of SNe powered by magnetars have yielded additional insights on the efficiency of this model in powering SLSNe, primarily of the hydrogen–poor (SLSN–I) type. Some observational evidence linking the host properties of SLSN–I to those of long–duration Gamma–ray bursts [@2014ApJ...787..138L], and the discovery of double–peaked SLSN LCs, a feature that can be produced by magnetar–driven shock breakout [@2015ApJ...807L..18N; @2016ApJ...821...36K], seem to strongly suggest that most, if not all, SLSN–I are powered by this mechanism. This is strengthened by the fact that many SLSN LCs can be successfully fit by a semi–analytic MAG LC model [@2017ApJ...850...55N; @2018ApJ...860..100D]. There is, however, on–going discussion on whether the MAG model is always efficient in thermalizing the magnetar luminosity in the SN ejecta, or even in converting the magnetar energy to radiated luminosity [@2006MNRAS.368.1717B] rather than to kinetic energy of the inner ejecta [@2016ApJ...821...22W]. Recent 2D simulations of magnetar–powered SNe appear to amplify these concerns [@2016ApJ...832...73C; @2017ApJ...839...85C]. [*Grids of Models with the TigerFit code*]{} {#tigerfit} -------------------------------------------- We have adapted the [*TigerFit*]{} code [@2016ApJ...828...94C; @2017ApJ...851L..14W] to run grids of CSM and MAG models throughout a large parameter space in order to systematically study their statistical LC shape properties and determine their association with the observed SLSN sample presented in Section \[obs\]. 
![Same as Figure \[Fig:tr1td1fit\] but for $s_{\rm 1}$, $s_{\rm 2}$ and $s_{\rm 3}$.[]{data-label="Fig:s1s2s3fit"}](s1_s2_s3_num_analytical_obs.png){width="9cm"} For the CSM model we consider cases with H–poor opacity (CSM–I; $\kappa =$ 0.2 cm$^{2}$ g$^{-1}$) and H–rich opacity (CSM–II; $\kappa =$ 0.4 cm$^{2}$ g$^{-1}$) and run two sets of grids: (a) CSM–I$\kappa$/CSM–II$\kappa$ models, where the parameter grid is identical, and (b) CSM–I/CSM–II models, where the parameter grid is constrained in each case, motivated by assumptions about the nature of the progenitor stars in Type I versus Type II SNe respectively that are further discussed later in this section. For case (a) the ranges used for each parameter are as follows: - [$E_{\rm SN, 51} \in [1.0,1.2,1.5,2.0]$, where $E_{\rm SN} = E_{\rm SN,51} \times 10^{51}$ erg]{} - [$M_{\rm ej} \in [5,8,10,15,20,25,30,40]$, where $M_{\rm ej}$ is in units of $M_{\odot}$]{} - [$n \in [7,8,9,10,11,12]$]{} - [$R_{\rm CS,15} \in [10^{-5},10^{-4},10^{-3},10^{-2},10^{-1}]$, where $R_{\rm CS} = R_{\rm CS,15} \times 10^{15}$ cm]{} - [$M_{\rm CS} \in [0.1,0.2,0.5,1.0,2.0,5.0,8.0]$, where $M_{\rm CS}$ is in units of $M_{\odot}$]{} - [$\dot{M} \in [0.001,0.01,0.05,0.1,0.2,0.5,1]$, where $\dot{M}$ is in units of $M_{\odot}$ yr$^{-1}$.]{} For case (b) and the CSM–I subset, the ranges used are: - [$E_{\rm SN, 51} \in [1,1.2,1.5,1.75,2]$]{} - [$M_{\rm ej} \in [5,8,10,12,15,20,25,30]$]{} - [$n \in [7,8,9]$]{} - [$R_{\rm CS,15} \in [10^{-5},10^{-4},5 \times 10^{-4},10^{-3},5 \times 10^{-3},10^{-2}]$]{} - [$M_{\rm CS} \in [0.1,0.2,0.5,0.7,1.0,2.0,5.0]$]{} - [$\dot{M} \in [10^{-5},10^{-4},10^{-3},0.01,0.1,0.2,0.5,1.0,2.0]$,]{} and accordingly for the CSM–II subset: - [$E_{\rm SN, 51} \in [1,1.2,1.5,1.75,2]$]{} - [$M_{\rm ej} \in [12,15,20,25,30,40,50,60]$]{} - [$n \in [10,11,12]$]{} - [$R_{\rm CS,15} \in [0.01,0.05,0.08,0.10,0.20,0.30]$]{} - [$M_{\rm CS} \in [0.5,1.0,2.0,5.0,8.0,10.0,15.0]$]{} - [$\dot{M} \in 
[10^{-5},10^{-4},10^{-3},0.01,0.1,0.2,0.5,1.0,2.0]$]{} For all CSM models we focus on the $s =$ 0 cases, implying a fiducial, constant–density circumstellar shell. While the $s =$ 2 case is of interest, since it implies a radiatively–driven wind structure that is common around red supergiant stars (RSGs), we omit it in this work because it is inconsistent with episodic mass loss, which is more likely to be the case for luminous SNe. Also, for the vast majority of cases where the $s =$ 2 choice yields luminous LCs, other parameters obtain unrealistic values (for example, $M_{\rm CS}$ values in excess of $\sim$ 100 $M_{\odot}$ are commonly found; C13). As a result, a total of 47,040 models were generated for the CSM–I$\kappa$/CSM–II$\kappa$ cases and 45,360 models for the CSM–I/CSM–II cases. [lccccc|ccccccc]{} &&& CSM–I & & & & & CSM–II & &\ $tr_{\rm 1}$ & 12.2 & 11.0 & 5.9 & 36.1 & 2.3 & 45.1 & 46.6 & 8.8 & 59.5 & 17.3\ $td_{\rm 1}$ & 29.7 & 28.6 & 13.2 & 82.8 & 4.0 & 72.6 & 69.5 & 16.1 & 101.1 & 44.9\ $s_{\rm 1}$ & 0.43 & 0.41 & 0.15 & 0.87 & 0.13 & 0.64 & 0.61 & 0.13 & 1.00 & 0.37\ $tr_{\rm 2}$ & 7.0 & 6.0 & 4.0 & 28.2 & 1.3 & 18.7 & 19.4 & 3.7 & 25.4 & 7.1\ $td_{\rm 2}$ & 9.1 & 8.4 & 5.0 & 33.3 & 1.5 & 24.4 & 25.9 & 4.7 & 31.7 & 11.8\ $s_{\rm 2}$ & 0.78 & 0.77 & 0.15 & 1.15 & 0.52 & 0.77 & 0.74 & 0.14 & 1.14 & 0.60\ $tr_{\rm 3}$ & 2.9 & 2.2 & 2.6 & 18.9 & 0.5 & 6.1 & 6.1 & 1.2 & 8.3 & 2.5\ $td_{\rm 3}$ & 3.1 & 2.4 & 3.1 & 24.4 & 0.5 & 7.0 & 7.2 & 1.4 & 10.2 & 3.0\ $s_{\rm 3}$ & 0.92 & 0.93 & 0.11 & 1.10 & 0.73 & 0.88 & 0.85 & 0.10 & 1.09 & 0.73\ [lccccc|ccccccc]{} &&& CSM–I$\kappa$ & & & & & CSM–II$\kappa$ & &\ $tr_{\rm 1}$ & 15.1 & 12.6 & 8.3 & 50.3 & 2.5 & 11.9 & 11.5 & 3.5 & 22.1 & 3.3\ $td_{\rm 1}$ & 32.2 & 30.5 & 16.0 & 83.2 & 3.3 & 25.3 & 22.9 & 10.7 & 49.7 & 5.0\ $s_{\rm 1}$ & 0.50 & 0.48 & 0.18 & 1.03 & 0.16 & 0.52 & 0.48 & 0.16 & 0.86 & 0.26\ $tr_{\rm 2}$ & 7.7 & 6.1 & 5.4 & 39.3 & 1.7 & 6.8 & 6.3 & 3.5 & 20.8 & 2.2\ $td_{\rm 2}$ & 10.4 & 8.5 & 6.4 & 
38.9 & 1.9 & 8.4 & 7.5 & 4.0 & 22.3 & 1.9\ $s_{\rm 2}$ & 0.75 & 0.72 & 0.26 & 1.16 & 0.53 & 0.82 & 0.80 & 0.16 & 1.15 & 0.55\ $tr_{\rm 3}$ & 3.1 & 2.3 & 3.4 & 26.3 & 0.6 & 2.6 & 2.1 & 2.4 & 15.2 & 0.9\ $td_{\rm 3}$ & 3.6 & 2.5 & 4.2 & 32.5 & 0.6 & 2.9 & 2.5 & 2.9 & 18.6 & 1.0\ $s_{\rm 3}$ & 0.90 & 0.89 & 0.10 & 1.10 & 0.74 & 0.90 & 0.89 & 0.10 & 1.09 & 0.74\ [lcccccc]{} &&& MAG & &\ $tr_{\rm 1}$ & 22.8 & 18.7 & 14.3 & 64.4 & 4.9\ $td_{\rm 1}$ & 50.8 & 43.3 & 28.4 & 123.9 & 10.7\ $s_{\rm 1}$ & 0.44 & 0.46 & 0.08 & 0.54 & 0.20\ $tr_{\rm 2}$ & 15.2 & 12.5 & 9.3 & 41.4 & 3.3\ $td_{\rm 2}$ & 22.2 & 18.4 & 13.0 & 56.4 & 4.7\ $s_{\rm 2}$ & 0.68 & 0.69 & 0.05 & 0.78 & 0.52\ $tr_{\rm 3}$ & 8.8 & 7.2 & 5.3 & 23.5 & 1.9\ $td_{\rm 3}$ & 10.5 & 8.7 & 6.4 & 27.1 & 2.06\ $s_{\rm 3}$ & 0.85 & 0.84 & 0.07 & 1.09 & 0.73\ [lccccccccccccc]{} & @2016MNRAS.tmp..117D & CSM–I & 5.9 & 43.0 & 0.14 & 4.3 & 9.8 & 0.44 & 2.7 & 3.9 & 0.70\ [T130D-b]{} & @2017ApJ...836..244W & CSM–I & 6.9 & 11.9 & 0.59 & 4.3 & 5.8 & 0.75 & 2.3 & 2.9 & 0.80\ [D2]{} & @2013MNRAS.428.1020M & CSM–II & 29.9 & 50.1 & 0.60 & 19.0 & 22.7 & 0.84 & 10.5 & 11.3 & 0.93\ [F1]{} & @2013MNRAS.428.1020M & CSM–II & 33.5 & 82.0 & 0.41 & 23.3 & 43.1 & 0.54 & 13.7 & 18.8 & 0.73\ [R3]{} & @2016MNRAS.tmp..117D & CSM–II & 5.4 & 11.4 & 0.47 & 3.7 & 5.7 & 0.65 & 2.0 & 2.7 & 0.75\ [T20]{} & @2017ApJ...836..244W & CSM–II & 10.7 & 20.0 & 0.53 & 7.0 & 9.8 & 0.71 & 3.7 & 4.7 & 0.80\ (Black curve) & @2010ApJ...717..245K & MAG & 21.4 & 38.5 & 0.56 & 13.7 & 18.8 & 0.73 & 7.7 & 9.4 & 0.82\ [KB 2]{} (Red curve) & @2010ApJ...717..245K & MAG & 38.5 & 117.9 & 0.33 & 25.37 & 40.9 & 0.62 & 14.7 & 18.0 & 0.82\ [Model 2]{} & @2016ApJ...821...36K & MAG & 48.2 & 100.3 & 0.48 & 33.6 & 49.3 & 0.68 & 20.2 & 24.1 & 0.84\ [RE3B1]{} & @dessartaudit & MAG & 58.8 & 96.7 & 0.61 & 46.5 & 43.9 & 1.06 & 31.1 & 19.2 & 1.62\ [RE0p4B3p5]{} & @dessartaudit & MAG & 57.3 & 68.0 & 0.84 & 34.8 & 35.6 & 0.97 & 19.0 & 18.6 & 1.02\ [lccccc|ccccccc]{} &&& CSM–I/CSM–II & & & & & 
MAG & &\ $tr_{\rm 1}$ & 15.4 & 8.8 & 33.5 & 5.4 & 11.7 & 44.8 & 48.2 & 58.8 & 22.4 & 13.8\ $td_{\rm 1}$ & 36.4 & 31.5 & 82.1 & 11.4 & 25.2 & 84.3 & 96.7 & 117.9 & 38.5 & 28.0\ $s_{\rm 1}$ & 0.46 & 0.50 & 0.60 & 0.14 & 0.16 & 0.56 & 0.56 & 0.84 & 0.33 & 0.17\ $tr_{\rm 2}$ & 10.3 & 5.7 & 23.3 & 3.7 & 7.9 & 30.8 & 33.6 & 46.5 & 13.7 & 10.9\ $td_{\rm 2}$ & 16.13 & 9.8 & 43.1 & 5.7 & 13.3 & 37.7 & 40.9 & 49.3 & 18.8 & 10.5\ $s_{\rm 2}$ & 0.66 & 0.68 & 0.84 & 0.44 & 0.133 & 0.81 & 0.73 & 1.06 & 0.62 & 0.17\ $tr_{\rm 3}$ & 5.8 & 3.2 & 13.7 & 2.7 & 4.6 & 18.5 & 19.0 & 31.1 & 7.7 & 7.7\ $td_{\rm 3}$ & 7.4 & 4.3 & 18.8 & 2.7 & 5.9 & 17.9 & 18.6 & 24.1 & 9.4 & 4.8\ $s_{\rm 3}$ & 0.79 & 0.78 & 0.93 & 0.70 & 0.07 & 1.02 & 0.84 & 1.62 & 0.82 & 0.31\ Our motivation for adopting different parameter ranges for the CSM–I and CSM–II models stems from several factors. First, larger $M_{\rm CS}$ values are possible in the CSM–II case, as suggested by spectroscopic observations of SLSN–II of the IIn class [@2010ApJ...709..856S], where stronger mass loss pertains due to LBV–type or PPISN processes. That, in turn, also implies larger progenitor masses (and therefore $M_{\rm ej}$) for CSM–II, as is the case for regular–luminosity SNe, where LC fits imply larger $M_{\rm ej}$, and therefore larger diffusion timescales, for Type II events than for Type I SNe. Finally, lower values of $n$ are more typical of compact, blue supergiant (BSG) progenitors with radiative envelopes, while higher values imply extended, RSG–type convective envelopes that are more appropriate for SLSN–II [@2003LNP...598..171C]. In summary, the CSM–II parameters are associated with RSG–type progenitors with extended H–rich envelopes and the CSM–I parameters with more compact, BSG–type stars. 
We caution that one potential issue with our choices for the model parameter grids is that there are as yet no good observational constraints on the shape of the distribution of SN ejecta and circumstellar shell masses; using these models in a clustering analysis (Section \[cluster\]) might therefore be misleading, as it can create dense clusters of models that are actually very sparsely populated in nature or, conversely, an underdensity of points in regions where more MAG or CSM SNe might lie in reality. Our grid selection for $M_{\rm CS}$ is largely driven by published observations of nebular shells around massive, LBV–type stars indicating $M_{\rm CS} \simeq$ 0.1–20 $M_{\odot}$. The ranges for $M_{\rm ej}$ span typical values for stars massive enough to experience a SN, in agreement with observations of SN progenitor stars in pre–explosion images and supernova remnants ($M_{\rm ej} \simeq$ 8–25 $M_{\odot}$). Higher–mass progenitors cannot be excluded given observations of stars as massive as $>$ 150 $M_{\odot}$ in the Milky Way galaxy [@2010MNRAS.408..731C]. For the MAG model, we investigate a dense grid of models with $10^{12} < B_{\rm MAG} < 10^{15}$ G and $1.0 < P_{\rm MAG} < 50$ ms, where $B_{\rm MAG}$ and $P_{\rm MAG}$ are the magnetic field and the initial rotational period of the magnetar, respectively. We also vary the diffusion timescale, $t_{\rm d}$, which further controls the shape of MAG model LCs (Equation 13 of C12), in the range $3 < t_{\rm d} < 100$ days. The grid resolution we use for these parameters results in a total of 46,656 MAG model LCs. A large fraction of the CSM and MAG models did not produce superluminous LCs, which we take to be those reaching $L_{\rm max} = 10^{44}$ erg s$^{-1}$ or more [@2012Sci...337..927G]. These models are removed from our CSM and MAG model samples before further analysis. 
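As an illustration of the MAG grid construction, 36 points per axis reproduce the quoted total of $36^3 = 46{,}656$ models; the actual grid spacing is not stated in the text, so the log/linear choices below are assumptions, and the peak luminosity is estimated crudely via Arnett's rule ($L_{\rm peak} \approx L_{\rm in}(t_{\rm d})$) as a cheap stand–in for a full LC integration.

```python
import itertools
import numpy as np

# 36 grid points per axis reproduce the quoted 36^3 = 46,656 MAG models;
# log/linear spacing below is an illustrative assumption.
B_grid = np.logspace(12.0, 15.0, 36)     # B_MAG [G]
P_grid = np.linspace(1.0, 50.0, 36)      # P_MAG [ms]
td_grid = np.linspace(3.0, 100.0, 36)    # diffusion timescale t_d [days]

DAY = 86400.0

def peak_luminosity(B, P_ms, t_d_days):
    """Rough peak estimate via Arnett's rule, L_peak ~ L_in(t_d), using the
    vacuum-dipole spin-down input (fiducial NS parameters assumed)."""
    I_ns, R_ns, c = 1.3e45, 1.0e6, 2.998e10
    omega0 = 2.0 * np.pi / (P_ms * 1.0e-3)
    E_p = 0.5 * I_ns * omega0**2
    t_p = 6.0 * I_ns * c**3 / (B**2 * R_ns**6 * omega0**2)
    return (E_p / t_p) / (1.0 + t_d_days * DAY / t_p)**2

grid = list(itertools.product(B_grid, P_grid, td_grid))

# keep only superluminous models, L_peak >= 1e44 erg/s
slsn_models = [(B, P, td) for B, P, td in grid
               if peak_luminosity(B, P, td) >= 1.0e44]
```

With these assumptions only part of the grid survives the $10^{44}$ erg s$^{-1}$ cut, mirroring the statement that a large fraction of the models is not superluminous.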
In addition, we exclude model LCs that result in physically inconsistent parameters, such as combinations of $B_{\rm MAG}$ and $P_{\rm MAG}$ values in the MAG model that are incompatible with the convective dynamo process in magnetars [@1992ApJ...392L...9D], and CSM models that yield $M_{\rm CS}$ too large compared to the associated $M_{\rm ej}$ values, which represent a measure of the total progenitor mass. As a result, our original CSM–I/CSM–II, CSM–I$\kappa$/CSM–II$\kappa$ and MAG model samples are each reduced to smaller subsamples of nearly equal size that are then used in our final LC shape parameter analysis. More specifically, a total of 306 CSM–I/CSM–II, 248 CSM–I$\kappa$/CSM–II$\kappa$ and 304 MAG superluminous LC models are used in this work. The statistical properties of the LC shape parameters of all models are summarized in Tables \[T3\] through \[T5\]. Figures \[Fig:tr1td1\] and \[Fig:s1s2s3\] show the distributions of a few LC shape parameters ($tr_{\rm 1}$, $td_{\rm 1}$, $s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$) for the CSM–I/CSM–II and MAG model samples, and Figure \[Fig:symmLCs\] shows examples of some of the most symmetric LCs in these samples. For comparison against our semi–analytic LCs, we have also included a sample of numerical CSM and MAG LCs available in the literature. Table \[T6\] lists the details of the numerical model LCs and Table \[T7\] summarizes the statistics of their shape parameters. Figure \[Fig:tr1td1fit\] is a scatter plot of $tr_{\rm 1}$ versus $td_{\rm 1}$ for all samples in this work, including the numerical MAG and CSM models. A linear best fit to the observed SLSN–I and SLSN–II data is also shown (see Equation \[Eq1\]). Although we chose not to use different symbols for the CSM models as presented in Figure \[Fig:tr1td1fit\], it is evident from Table \[T4\] that CSM–II models occupy the upper right corner of this plot given their longer primary rise and decline timescales. 
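The tabulated values are consistent with the symmetry parameters being the ratios $s_i = tr_i/td_i$ of the rise and decline timescales measured across fractional luminosity thresholds (the formal definitions are given in Section \[lcshape\], which is not reproduced in this excerpt). A sketch of the extraction from a sampled LC follows; the threshold fractions used below are placeholders, not the paper's values.

```python
import numpy as np

def lc_shape_parameters(t, L, fractions=(0.25, 0.5, 0.75)):
    """Rise/decline timescales and symmetry parameters of a sampled LC.

    tr_i (td_i) is the time spent above fractions[i]*L_max before (after)
    the peak, and s_i = tr_i / td_i (symmetric LCs have s_i ~ 1).  The
    actual luminosity thresholds are defined in Section [lcshape]; the
    defaults here are placeholders."""
    t, L = np.asarray(t, float), np.asarray(L, float)
    i_max = int(np.argmax(L))
    out = {}
    for i, f in enumerate(fractions, start=1):
        above = L >= f * L[i_max]
        # rise: first point above threshold before peak;
        # decline: last point above threshold after peak
        t_rise = t[:i_max + 1][above[:i_max + 1]][0]
        t_decl = t[i_max:][above[i_max:]][-1]
        tr, td = t[i_max] - t_rise, t_decl - t[i_max]
        out[f"tr_{i}"], out[f"td_{i}"], out[f"s_{i}"] = tr, td, tr / td
    return out
```

A symmetric (e.g. Gaussian) LC returns $s_i \approx 1$ for all thresholds, while a fast–rise/slow–decline LC returns $s_i < 1$, matching the sense of the tabulated model values.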
A few SLSN–I thus appear to be associated with the CSM–II models, whose parameters were chosen based on assumptions for the progenitors of H–rich SLSNe. The situation is different when looking at the CSM–I$\kappa$/CSM–II$\kappa$ distribution, however, where the parameter grids are identical and the only difference is due to the different SN ejecta + CS shell opacity. In this case, the primary timescales of the models are consistent with each other. Very slowly evolving H–poor SLSNe may be hard to produce under the assumption of H–poor CSM interaction, given the large, H–deficient CS shell mass needed to account for the long primary rise and decline timescales. Interaction with H–poor CS shells of non–spherical geometry, in combination with viewing–angle effects, may be a way out of this apparent discrepancy [@2018MNRAS.475.3152K]. Accordingly, Figure \[Fig:s1s2s3fit\] shows a 3D scatter plot of the primary, secondary and tertiary LC symmetry parameters for all samples. The superluminous LCs recovered imply the following mean values for the parameters of each model: - [CSM–I: $E_{\rm SN, 51} =$ 1.75, $M_{\rm ej} =$ 10 $M_{\odot}$, $n =$ 8, $R_{\rm CS,15} =$ 0.006, $M_{\rm CS} =$ 1 $M_{\odot}$ and $\dot{M} =$ 0.01 $M_{\odot}$ yr$^{-1}$,]{} - [CSM–II: $E_{\rm SN, 51} =$ 2.00, $M_{\rm ej} =$ 13 $M_{\odot}$, $n =$ 12, $R_{\rm CS,15} =$ 0.2, $M_{\rm CS} =$ 10 $M_{\odot}$ and $\dot{M} =$ 0.01 $M_{\odot}$ yr$^{-1}$,]{} - [CSM–I$\kappa$: $E_{\rm SN, 51} =$ 1.80, $M_{\rm ej} =$ 10 $M_{\odot}$, $n =$ 9, $R_{\rm CS,15} =$ 0.08, $M_{\rm CS} =$ 2 $M_{\odot}$ and $\dot{M} =$ 0.15 $M_{\odot}$ yr$^{-1}$,]{} - [CSM–II$\kappa$: $E_{\rm SN, 51} =$ 2.00, $M_{\rm ej} =$ 7 $M_{\odot}$, $n =$ 9, $R_{\rm CS,15} =$ 0.1, $M_{\rm CS} =$ 0.3 $M_{\odot}$ and $\dot{M} =$ 0.3 $M_{\odot}$ yr$^{-1}$,]{} - [MAG: $B_{\rm MAG} = 1.4 \times 10^{13}$ G and $P_{\rm MAG} =$ 1.3 ms.]{} These parameters are within the range of semi–analytic and numerical fits of the CSM and MAG models to observed SLSN LCs commonly found in the literature. 
A careful examination of the computed LC shape parameter distributions for the CSM and MAG models reveals several interesting trends. First, the primary rise and decline timescales have a bimodal distribution for the CSM models, with CSM–I models typically reaching shorter $tr_{\rm 1}$ and $td_{\rm 1}$ values than CSM–II models. This is due both to the physically motivated choices for the parameter grids discussed earlier and to the opacity difference between H–rich and H–poor models. On the other hand, the MAG models show a more continuous and single–peaked distribution with typical values $tr_{\rm 1} \simeq$ 5–15 days and $td_{\rm 1} \simeq$ 20–30 days. In terms of LC symmetry, the majority of models do not produce symmetric LCs around the primary luminosity threshold, as $0.9 < s_{\rm 1} < 1.1$ values are rarely recovered. In fact, CSM is the only set of models reaching $s_{\rm 1}$ values close to unity, while MAG is unable to produce any models with symmetric LCs in terms of either $s_{\rm 1}$ or $s_{\rm 2}$. Even the most symmetric MAG LCs in our sample appear to have this issue (Figure \[Fig:symmLCs\]). [*This is an important issue for MAG models given that a significant fraction of observed SLSN–I are symmetric around these luminosity thresholds*]{} (Section \[obs\]). This seems to be the case for numerically–computed MAG LC models as well, the most symmetric one being model [RE0p4B3p5]{} [@dessartaudit] with $s_{\rm 1} =$ 0.84. Numerical CSM models tend to yield more rapidly–evolving LCs than their semi–analytic counterparts. The primary source of this difference is the assumption of a constant diffusion timescale in the semi–analytic CSM models [@2013MNRAS.428.1020M; @2018arXiv181206522K]. 
We explore the possibility that gamma–ray leakage produces faster–declining MAG LCs, thereby enhancing symmetry, by adopting the same formalism employed in the case of LCs powered by the radioactive decay of $^{56}$Ni [@1984ApJ...280..282S; @1997ApJ...491..375C; @2008MNRAS.383.1485V; @2013ApJ...773...76C]. Using a fiducial SN ejecta gamma–ray opacity of $\kappa_{\rm \gamma} =$ 0.03 cm$^{2}$ g$^{-1}$ and the implied SN ejecta mass for the two most symmetric MAG models shown in the top right panel of Figure \[Fig:symmLCs\], we adjust the output luminosity as $L^{\prime}(t) = L(t)\,(1-e^{-A t^{-2}})$, where $A t^{-2} = \kappa_{\rm \gamma} \rho R$. The two most symmetric MAG models with high gamma–ray leakage are then plotted as dashed curves. Allowing gamma–rays to escape increases the decline rate of the LC at late times, leading to shorter $td_{\rm 1}$ and slightly higher $s_{\rm 1}$ values. The change, however, still falls short of producing symmetric MAG LCs, since $s_{\rm 1}$ only increases by 14–22% and its maximum value remains $\lesssim$ 0.6. Second, the observed tight $tr_{\rm 1}$–$td_{\rm 1}$ correlation in SLSN LCs is reproduced by both CSM and MAG models. CSM models generally predict faster–evolving LCs at late times than MAG models, consistent with the observations. This is mainly due to the continuous power input in the MAG model, which sustains a flatter LC at late times, while in the CSM model the energy input is terminated abruptly, leading to a rapid decline after peak luminosity (C12). An example of a SLSN with a very flat late–time LC is SN 2015bn [@2018ApJ...866L..24N], indicating that it may be a good candidate for the MAG model. The observed LC symmetry parameter distributions (Figure \[Fig:s1s2s3fit\]) reveal a more distinct dichotomy between CSM and MAG models. 
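The leakage correction described above can be written compactly by noting that, for homologous expansion, $\kappa_\gamma \rho R \propto t^{-2}$ with $A = 3\kappa_\gamma M_{\rm ej}/(4\pi v_{\rm ej}^2)$. A sketch follows; the characteristic ejecta velocity is our own free parameter here (the text infers the ejecta mass from the model fits).

```python
import numpy as np

KAPPA_GAMMA = 0.03   # gamma-ray opacity [cm^2/g], fiducial value from the text
M_SUN = 1.989e33     # solar mass [g]
DAY = 86400.0

def leaky_lc(t_days, L, M_ej_msun, v_ej_kms):
    """Apply gamma-ray leakage, L'(t) = L(t) * (1 - exp(-A t^-2)), where
    A t^-2 = kappa_gamma * rho * R for homologously expanding, uniform
    ejecta, i.e. A = 3 kappa_gamma M_ej / (4 pi v_ej^2).  v_ej is an
    assumed characteristic ejecta velocity.  Requires t > 0."""
    t = np.asarray(t_days, float) * DAY
    v = v_ej_kms * 1.0e5
    A = 3.0 * KAPPA_GAMMA * M_ej_msun * M_SUN / (4.0 * np.pi * v**2)
    return np.asarray(L, float) * (1.0 - np.exp(-A / t**2))
```

The trapping factor approaches unity at early times (full deposition) and falls off as the ejecta dilute, which is what steepens the late–time decline.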
MAG models fail to produce fully symmetric LCs and are clustered in a confined region of the 3D ($s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$) parameter space, while CSM models show more scatter. Finally, we estimate the fraction of CSM and MAG model SLSN LCs that have a concave–up shape during the rise to peak luminosity or, in other words, a positive second derivative for $t<t_{\rm max}$. An example of an observed SLSN with a concave–up LC during the rise is SN 2017egm [@2017ApJ...851L..14W]. Not a single MAG LC is found to be concave–up during the rise. In contrast, $\sim$ 20% of CSM–I, $\sim$ 60% of CSM–II and $\sim$ 50% of CSM–I$\kappa$/CSM–II$\kappa$ models are found to have a concave–up rise to peak luminosity. The implication is that the shape of the rising part of SLSN LCs may also be tied to the nature of the power input mechanism and, specifically, to the functional form of the input luminosity. Continuous, monotonically declining power inputs like $^{56}$Ni decay and magnetar spin–down energy correspond to concave–down SLSN LCs, while the truncated CSM shock luminosity input depends on the details of the SN ejecta and circumstellar density structures and can yield either concave–up or concave–down LCs during the early, rising phase. This further reinforces the need to obtain high–cadence photometric coverage of these events in future transient surveys. 
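The concave–up test can be implemented as a finite–difference second derivative over the pre–peak LC. Since the exact operational criterion is not specified in the text (and the curvature necessarily turns negative right at the peak), the majority–vote threshold below is our own choice.

```python
import numpy as np

def concave_up_rise(t, L):
    """True if the LC is predominantly concave-up before peak, i.e. the
    finite-difference second derivative of L(t) is positive over most
    of the rise (majority-vote criterion; an assumption of this sketch)."""
    t, L = np.asarray(t, float), np.asarray(L, float)
    i_max = int(np.argmax(L))
    tr, Lr = t[:i_max + 1], L[:i_max + 1]
    if len(tr) < 3:
        return False
    d2 = np.gradient(np.gradient(Lr, tr), tr)
    return np.mean(d2 > 0.0) > 0.5
```

An exponential rise (curvature everywhere positive) is classified concave–up, while a square–root–like rise (curvature everywhere negative) is not.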
[lccccccccc]{} CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 2 & 0.62 & 0.66 & 5.95/18.45/75.6 & 59.28/0.68/40.05 & -\ & & & & & & 33.33/25.00 & 66.67/75.00 & -\ CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 3 & 0.46 & 0.58 & 61.11/0.00/38.89 & 27.70/12.16/60.14 & 0.00/19.05/80.95\ & & & & & & 57.14/50.00 & 28.57/50.00 & 14.29/0.00\ CSM–I/CSM–II & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 2 & 0.77 & 0.63 & 99.59/0.41 & 48.44/51.56 & -\ & & & & & & 57.14/75.00 & 42.86/25.00 & -\ CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 2 & 0.68 & 0.65 & 44.61/11.03/44.36 & 11.19/0.00/88.81 & -\ & & & & & & 66.67/75.00 & 33.33/25.00 & -\ CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 3 & 0.49 & 0.56 & 38.89/1.85/59.26 & 42.90/13.53/43.56 & 1.30/0.0/98.70\ & & & & & & 28.57/50.0 & 57.14/50.00 & 14.29/0.00\ CSM–I$\kappa$/CSM–II$\kappa$ & $tr_{\rm 1}$,$td_{\rm 1}$ & 2 & 2 & 0.66 & 0.57 & 77.18/22.82 & 88.76/11.24 & -\ & & & & & & 47.62/50.00 & 52.38/50.00 & -\ CSM–I/CSM–II/MAG & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 2 & $<$0.01 & 0.43 & 34.55/4.07/61.38 & 86.44/11.86/1.69 & -\ & & & & & & 23.81/25.00 & 76.19/75.00 & -\ CSM–I/CSM–II/MAG & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 3 & $<$0.01 & 0.32 & 26.19/4.76/69.05 & 82.35/17.65/0.00 & 71.34/2.44/26.22\ & & & & & & 28.57/25.00 & 71.43/75.00 & 0.00/0.00\ CSM–I/CSM–II & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 2 & $<$0.01 & 0.33 & 82.31/17.69 & 93.75/6.25 & -\ & & & & & & 80.95/75.00 & 19.05/25.00 & -\ CSM–I$\kappa$/CSM–II$\kappa$/MAG & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 2 & $<$0.01 & 0.60 & 31.12/5.81/63.07 & 73.33/26.67/0.00 & -\ & & & & & & 42.86/25.00 & 57.14/75.00 & -\ CSM–I$\kappa$/CSM–II$\kappa$/MAG & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 3 & $<$0.01 & 0.33 & 42.31/7.69/50.00 & 24.67/5.26/70.07 & 75.00/25.00/0.00\ & & & & & & 47.62/25.0 & 52.38/75.00 & 0.00/0.00\ CSM–I$\kappa$/CSM–II$\kappa$ & $s_{\rm 1}$,$s_{\rm 2}$,$s_{\rm 3}$ & 3 & 2 & $<$0.01 & 0.50 & 84.18/15.82 & 
73.77/26.23 & -\ & & & & & & 38.10/25.00 & 61.90/75.00 & -\ CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 2 & 0.71 & 0.66 & 2.44/18.90/78.66 & 60.09/0.67/39.24 & -\ & & & & & & 38.10/25.00 & 61.90/75.00 & -\ CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 3 & 0.54 & 0.56 & 0.00/16.47/83.53 & 61.97/0.00/38.03 & 26.17/13.42/60.41\ & & & & & & 19.05/0.00 &52.38/50.00 & 28.57/50.00\ CSM–I/CSM–II & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 2 & 0.84 & 0.63 & 46.77/53.23 & 99.59/0.41 & -\ & & & & & & 42.86/50.00 & 57.14/50.00 & -\ CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 2 & 0.77 & 0.64 & 44.81/11.14/44.05 & 11.56/0.00/88.44 & -\ & & & & & & 61.90/75.00 & 38.10/25.00 & -\ CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 3 & 0.57 & 0.54 & 38.18/2.42/59.39 & 43.88/13.61/42.52 & 2.41/0.00/97.59\ & & & & & & 33.33/50.0 & 47.62/50.00 & 19.05/0.00\ CSM–I$\kappa$/CSM–II$\kappa$ & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$ & 4 & 2 & 0.76 & 0.55 & 88.51/11.49 & 77.48/22.52 & -\ & & & & & & 61.90/75.00 & 38.10/25.00 & -\ CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 2 & 0.74 & 0.65 & 60.00/0.67/39.33 & 3.03/18.79/78.18 & -\ & & & & & & 61.90/75.00 & 38.10/25.00 & -\ CSM–I/CSM–II/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 3 & 0.57 & 0.55 & 62.11/0.26/37.63 & 0.00/15.48/84.52 & 24.66/13.70/61.64\ & & & & & & 52.38/50.00 & 19.05/0.00 & 28.57/50.00\ CSM–I/CSM–II & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 2 & 0.86 & 0.62 & 46.77/53.23 & 99.59/0.41 & -\ & & & & & & 42.86/50.00 & 57.14/50.00 & -\ CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 2 & 0.80 & 0.64 & 45.11/11.03/43.86 & 9.79/0.00/90.21 
& -\ & & & & & & 61.90/75.00 & 38.10/25.00 & -\ CSM–I$\kappa$/CSM–II$\kappa$/MAG & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 3 & 0.60 & 0.52 & 37.65/2.35/60.00 & 2.38/0.00/97.62 & 44.44/13.89/41.67\ & & & & & & 28.57/50.00 & 23.81/0.00 & 47.62/50.00\ CSM–I$\kappa$/CSM–II$\kappa$ & $tr_{\rm 1}$,$td_{\rm 1}$,$tr_{\rm 2}$,$td_{\rm 2}$,$tr_{\rm 3}$,$td_{\rm 3}$ & 6 & 2 & 0.82 & 0.54 & 77.18/22.82 & 88.76/11.24 & -\ & & & & & & 33.33/25.00 & 66.67/75.00 & -\ ![Same as in Figure \[Fig:cluster2D\] but for the 3D ($s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$) CSM–I/CSM–II/MAG dataset. The computed clusters associate with the underlying model categories better than in the 2D case (see \[cluster\]).[]{data-label="Fig:cluster3D"}](scatt_3D_csmmag_k2.png){width="9cm"} $k$–Means Clustering Analysis {#cluster} ============================= $k$–means clustering is a powerful machine learning algorithm used to categorize data via an iterative method [@cluster1; @cluster2]. The standard version of this algorithm finds the locations and boundaries of “clusters” of data by repeatedly minimizing the Euclidean distances of the data from the cluster centroids. The user can either input the number of clusters, $k$, based on some assumption about the nature of the data, or use a density–based (“DBSCAN”) approach [@Ester] to determine the optimal number of clusters. While $k$–means assumes clusters separated by straight–line boundaries, there exist clustering algorithms that relax that criterion. Since the scope of this work is to quantitatively characterize the LC shape properties of CSM and MAG models and to determine whether they occupy distinct areas of the parameter space, we employ $k$–means clustering analysis. More specifically, we use the [*Python scikit–learn*]{} ([sklearn]{}) package. $k$–means clustering analysis is often used in astronomical applications aiming to classify objects in transient search projects. 
Recently, it has been utilized to classify the properties of SLSNe, based on both LC and spectroscopic features, showcasing the importance it holds for the future of the field. @2018arXiv180800510N presented a $k$–means clustering analysis of SLSN nebular spectra properties. @2018ApJ...854..175I illustrated how the method can be used to identify SLSN–I and probe their observed diversity, identifying two distinct groups, “fast” and “slow” SLSN–I, depending on the evolution of the LC and the implied spectroscopic velocities and SN ejecta velocity gradients. In this work, we use $k$–means clustering to investigate whether the SLSN LC shape properties implied by different power input models (MAG, CSM–I and CSM–II) concentrate in distinct clusters. This may allow us to associate observed SLSNe with proposed power input mechanisms based only on their LC properties and thus provide a framework for SLSN characterization in future, big–data transient searches like [*LSST*]{}. To do so, we focus on different combinations of $k$ values and LC parameter space dimensionality ($N_{\rm D}$). Given our prior knowledge that we are using LC shape parameter data from two categories of models (CSM, MAG), we focus on two cases: $k =$ 2 (CSM models of both I and II type, and MAG) and $k =$ 3 (distinct CSM–I, CSM–II and MAG models). We also look at different values of $N_{\rm D}$: 2D datasets focusing on the primary LC timescales ($tr_{\rm 1}$, $td_{\rm 1}$), 3D datasets focusing on the LC symmetry parameters ($s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$), 4D datasets focusing on the primary and secondary LC timescales ($tr_{\rm 1}$, $td_{\rm 1}$, $tr_{\rm 2}$, $td_{\rm 2}$) and 6D datasets focusing on the primary, secondary and tertiary LC timescales ($tr_{\rm 1}$, $td_{\rm 1}$, $tr_{\rm 2}$, $td_{\rm 2}$, $tr_{\rm 3}$, $td_{\rm 3}$), thus covering all the LC shape parameters defined in this work (since, given the six timescales, the symmetry parameters are fully determined). 
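The clustering step itself is standard Lloyd's iteration (alternating nearest–centroid assignment and centroid updates). A minimal NumPy re–implementation is sketched below, applied to synthetic stand–ins for the $(tr_{\rm 1}, td_{\rm 1})$ samples; the analysis in the text uses sklearn's `KMeans`, and the Gaussian shapes and spreads below are illustrative assumptions loosely following Tables \[T3\] and \[T5\].

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's k-means: alternate assignment of each point to its
    nearest centroid (Euclidean distance) and centroid re-computation until
    the centroids stop moving.  (The paper uses sklearn's KMeans.)"""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Illustrative stand-ins for the (tr_1, td_1) samples (assumed Gaussian,
# with means/spreads loosely following the MAG and CSM-I table values):
rng = np.random.default_rng(1)
mag_like = rng.normal([22.8, 50.8], [10.0, 20.0], size=(300, 2))
csm_like = rng.normal([12.2, 29.7], [5.0, 10.0], size=(300, 2))
X = np.vstack([mag_like, csm_like])
labels, centroids = kmeans(X, k=2)
```

With scikit–learn the equivalent call is `KMeans(n_clusters=2).fit(X)`, whose `labels_` and `cluster_centers_` attributes play the roles of `labels` and `centroids` here.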
Although we only opted to perform clustering analysis for $k =$ 2,3 based on prior knowledge of the number of models used in the datasets, we also estimated the optimal number of clusters in all cases using the “elbow” method [@elbow]. This method is based on plotting the normalized squared error of clustering ($E_{\rm N}$, defined in the next paragraph) as a function of $k$ and finding the value of $k$ that corresponds to the sharpest gradient. This test confirmed that the optimal number of clusters for all datasets is $k =$ 2. While for the 2D and the 3D clustering we can provide visual representations of the clusters, that is impossible for the 4D and the 6D cases. For this reason, and in order to quantify the quality and accuracy of our clustering results, we use silhouette analysis [@silhouette]. Silhouette analysis yields a mean silhouette score, $\bar{S}$, and silhouette diagrams that visualize the sizes of the individual clusters and the $S$ score distribution of the individual data within each cluster. Negative values of $S$ correspond to falsely classified data while values closer to unity indicate stronger cluster association. Silhouette diagrams with clusters of comparable width and with $S$ values above the mean are indicative of accurate clustering. An example silhouette diagram for the $k =$ 2, 3 and $N_{\rm D} =$ 4 case we study in this work is shown in Figure \[Fig:silhouette\]. Figures \[Fig:cluster2D\] and \[Fig:cluster3D\] show the distribution of the computed clusters in the $N_{\rm D} =$ 2 and $N_{\rm D} =$ 3 cases for $k =$ 2 with the SLSN–I/SLSN–II observations overplotted for comparison. The cluster centroids are also marked with black star symbols. 
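The elbow and silhouette diagnostics described above can be sketched as follows (again on illustrative synthetic two-blob data; the normalization of $E_{\rm N}$ shown here — the square root of the [sklearn]{} `inertia_` over the sample size — is one possible reading of the definition used in this work):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([20.0, 40.0], 4.0, size=(100, 2)),
               rng.normal([45.0, 90.0], 4.0, size=(100, 2))])

diagnostics = {}
for k in (2, 3, 4):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # km.inertia_ is the summed squared distance of the samples to their
    # closest centroid; normalizing and taking the square root gives one
    # version of the E_N statistic.
    e_n = np.sqrt(km.inertia_) / len(X)
    s_bar = silhouette_score(X, km.labels_)  # mean silhouette score
    diagnostics[k] = (e_n, s_bar)
```

For data that genuinely contain two groups, the mean silhouette score peaks at $k =$ 2 while $E_{\rm N}$ drops most sharply there, mirroring the elbow test described above.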
Table \[T8\] presents the results of clustering analysis for each $k$–$N_{\rm D}$ combination that we investigated, including the normalized classification error ($E_{\rm N}$; the square–root of the sum of squared distances of samples to their closest cluster center, divided by the cluster size) and $\bar{S}$, as well as the computed cluster compositions (percentage of CSM–I/CSM–II and MAG models within each cluster) and observed SLSN–I/SLSN–II cluster associations.

Results
=======

$N_{\rm D} =$ 2 {#nd2}
---------------

Our clustering analysis on the primary LC timescales ($tr_{\rm 1}$, $td_{\rm 1}$) reveals a clear dichotomy between H–rich and H–poor CSM models in the CSM–I/CSM–II case, where the first cluster ($C_{\rm 0}$) is composed almost entirely ($\sim$ 100%) of CSM–I (respectively, CSM–I$\kappa$) models. The observed SLSN–I and SLSN–II sample is not clearly associated with either cluster in the CSM–I/CSM–II case. For all combinations of model datasets and values of $k$ we find the $k =$ 2 choice to correspond to more accurate clustering (higher $\bar{S}$ scores). This indicates that the value $k =$ 2 may be optimal in distinguishing between CSM models of either type and MAG models. The CSM–I/CSM–II/MAG, $k =$ 2 case has the highest $\bar{S}$ score and yields the first cluster ($C_{\rm 0}$) dominated by MAG models ($\sim$ 76% of the cluster data) and the second cluster ($C_{\rm 1}$) dominated by CSM–I/CSM–II models ($\sim$ 60% of the cluster data). Nearly $\sim$ 75% of observed SLSN–I/SLSN–II are associated with $C_{\rm 1}$, implying that, practically, both CSM and MAG type models can reproduce SLSN LCs in terms of the primary LC timescales. As such, the $N_{\rm D} =$ 2 case does not represent a robust way to distinguish between SLSNe powered by the CSM or the MAG mechanism.

$N_{\rm D} =$ 3 {#nd3}
---------------

In this case we explore clustering for the three main LC symmetry parameters as defined in Section \[lcshape\].
As can be seen in Table \[T8\], the $k =$ 2 cases have, in general, better $\bar{S}$ scores than the $k =$ 3 cases. Another interesting outcome is the very low normalized mean error ($<$ 0.01) for all cases, suggesting that clustering based on the \[$s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$\] dataset yields denser clusters, more concentrated around the computed centroids. Regardless, the most important result in this case is the strong association of observed SLSN symmetries with $C_{\rm 1}$: $\sim$ 75–76% of SLSN–I and SLSN–II are associated with $C_{\rm 1}$ in the CSM–I/CSM–II/MAG, $k =$ 2 case. In addition, $C_{\rm 1}$ is almost entirely composed of CSM models ($\sim$ 98%). This strengthens our previous suggestion (Section \[tigerfit\]) that CSM models are superior to MAG models in reproducing the observed SLSN LC symmetry properties, including some fully symmetric LCs. The same result holds in the CSM–I$\kappa$/CSM–II$\kappa$/MAG, $k =$ 2 case, with more than half of the observed SLSN LCs associated with the cluster that is mostly composed of CSM models. This result also holds up in the $k =$ 3 cases. Overall, CSM and MAG models appear to be clearly distinguished in terms of LC symmetry properties (Figure \[Fig:s1s2s3fit\]). [*This indicates that LC shape symmetry may be critical in identifying the power input mechanism associated with observed SLSNe, based only on photometry.*]{}

$N_{\rm D} =$ 4 {#nd4}
---------------

In this case we investigate $k$–means clustering for the primary and the secondary rise and decline timescales. We elect to focus on the $k =$ 2 cases since, again, they yield higher $\bar{S}$ scores. A clear distinction is recovered between H–poor and H–rich CSM models in the CSM–I/CSM–II and the CSM–I$\kappa$/CSM–II$\kappa$ cases: $\sim$ 100% of H–poor CSM models constitute the $C_{\rm 1}$ data in the CSM–I/CSM–II case and $\sim$ 89% of H–poor CSM models constitute the $C_{\rm 0}$ data in the CSM–I$\kappa$/CSM–II$\kappa$ case.
For the CSM–I/CSM–II/MAG dataset we recover a cluster that is mostly composed of CSM–type models ($C_{\rm 1}$; 60% CSM–I/CSM–II models and 40% MAG models) and a cluster that is dominated by MAG models ($C_{\rm 0}$; $\sim$ 20% CSM–I/CSM–II models and $\sim$ 80% MAG models). The majority ($\sim$ 66–75%) of SLSN–I/SLSN–II are associated with $C_{\rm 1}$, indicating a preference toward CSM models, yet the correlation is not as strong as in the $N_{\rm D} =$ 3 case.

$N_{\rm D} =$ 6 {#nd6}
---------------

The last clustering analysis was performed on a six–dimensional dataset composed of the primary, secondary and tertiary rise and decline timescales. This is the most complete LC shape parameter dataset we investigate since it encapsulates the three LC symmetry values, uniquely defined by their corresponding timescales. Furthermore, the use of all relevant LC shape parameters yields the highest $\bar{S}$ scores ($\sim$ 0.8 in some cases) compared to the lower–dimensionality cases. As with all other cases, we observe that $k =$ 2 clustering leads to more accurate classification, therefore we only focus on these results for our discussion. Our results are consistent with those of the $N_{\rm D} =$ 4 case, yielding a cluster dominated by CSM–type models (60%) and a cluster dominated by MAG models ($\sim$ 80%), with the majority of SLSN–I/SLSN–II associated with the former in the CSM–I/CSM–II/MAG case. In particular, $\sim$ 66–75% of observed SLSN LCs are associated with the CSM–dominated cluster. In summary, we find that clustering of LC shape properties generally favors the CSM power input mechanism, yet the MAG mechanism cannot be ruled out. While clustering on LC timescales supports this result, it is even more robust in clustering of LC symmetry parameters.

Discussion {#disc}
==========

In this paper we explored how high–cadence photometric observations of SLSNe detected shortly after explosion can be used to characterize their power input mechanism.
In particular, we constrained the LC shape properties of a set of observed SLSN–I and SLSN–II, focusing only on events with complete photometric coverage, and searched for possible correlations with semi–analytic model LC shapes assuming either a magnetar spin–down (MAG) or a SN ejecta–circumstellar interaction (CSM) power input [@2012ApJ...746..121C; @2013ApJ...773...76C]. We reiterated that there are a number of simplifying assumptions in using these semi–analytical models, including issues with the approximation of centrally–located heating sources and homologous expansion in cases like shock heating where the power input can occur close to the photosphere, the assumption of constant opacity, and model parameter degeneracy [@2013ApJ...773...76C; @2013MNRAS.428.1020M; @2018arXiv181206522K]. In addition, models predict bolometric LCs while the observed, rest–frame SLSN LCs are pseudo–bolometric LCs computed by fitting the SED of each event based on available observations in different filters. Regardless of all these caveats, semi–analytic models still constitute a powerful tool to study SLSNe, providing us with the potential to investigate LC shape properties across the associated parameter space for each power input by computing a large number of models. Nevertheless, we have supplemented our study with datasets of numerical MAG and CSM model SLSN LCs available in the literature. To quantitatively determine whether the main proposed SLSN power input mechanisms yield model LCs with different shape properties (rise and decline timescales and symmetry around peak luminosity) we applied $k$–means clustering analysis for different combinations of parameters and model datasets and computed cluster associations for the observed SLSN sample. We highlight the main results of our analysis below:

- [SLSNe exhibit a strong correlation between their primary rise ($tr_{\rm 1}$) and decline ($td_{\rm 1}$) timescales.
Although this correlation is reproduced by both MAG and CSM power input models, the larger scatter found in CSM models overlaps better with the SLSN–I/SLSN–II data.]{}
- [CSM models generally correspond to faster evolving LCs, in agreement with observations of some SLSN–I.]{}
- [MAG models fail to produce fully symmetric LCs around peak luminosity. In particular, MAG models are never found to be symmetric around the first luminosity threshold ($s_{\rm 1, max} =$ 0.54), including in cases of high gamma–ray leakage.]{}
- [While the majority of CSM models also fail to produce fully symmetric LC shapes, there is a small fraction of them that do. This is consistent with the $\sim$ 24% of SLSN–I LCs in our sample that are measured to be fully symmetric.]{}
- [Symmetric SLSN LCs favor a truncated power input source that leads to faster LC decline rates past peak luminosity. The CSM model naturally provides such a framework since the forward and reverse shock power inputs are terminated. An alternative truncated input could be energy release by fallback accretion.]{}
- [MAG models fail to produce LCs with positive second derivative during the early rise to peak luminosity (concave–up). CSM models can produce both concave–up and concave–down LCs.]{}
- [$k$–means clustering analysis suggests that most observed SLSN LCs are associated with CSM power input, yet the MAG model cannot be ruled out. A multiple formation channel is therefore possible for SLSNe of both spectroscopic types.]{}
- [The most distinct clustering between MAG and CSM data is found in the 3D LC symmetry parameter space ($s_{\rm 1}$, $s_{\rm 2}$, $s_{\rm 3}$).
In this case, the majority ($>$ 75%) of SLSNe are strongly associated with the CSM–dominated cluster.]{}
- [LC symmetry properties, together with the shape of the LC at early times, may be key in distinguishing between different power input mechanisms in SLSNe.]{}

Our results illustrate the importance of early detection and high–cadence multi–band photometric follow–up in determining the nature of SLSNe. As transient search surveys like [*LSST*]{}, [*ZTF*]{} and [*Pan–STARRS*]{} usher in the new era of big data transient astronomy, a larger number of well–constrained SLSN LCs will become available, providing the opportunity to use photometry to characterize their power input mechanisms. This is of critical importance in the study of luminous and uncharacteristic transients in general, since photometry will be more readily available than spectroscopy in most cases. We have shown that machine learning approaches like $k$–means clustering can be instrumental in helping us characterize SLSNe based on their LC properties, namely rise and decline timescales and LC symmetry. This is made possible by comparing against the LC shape properties of different power input mechanisms using semi–analytic or numerical models. As such, it is of great importance to enhance our numerical modeling efforts for all proposed power input mechanisms and survey a large fraction of the model parameter space. In addition to aiding with SLSN and luminous transient characterization and classification, this will provide us with constraints on the physical domains that enable these extraordinary stellar explosions.

We would like to thank Edward L. Robinson and J. Craig Wheeler for useful discussions and comments. We would also like to thank our anonymous referee for suggestions and comments that improved the quality and presentation of our paper. EC would like to thank the Louisiana State University College of Science and the Department of Physics & Astronomy for their support.
[@Hunter2007], [numpy]{} [@oliphant], [SciPy]{} [@scipy], [Scikit–learn]{} [@scikit-learn], [SuperBol]{} [@2018RNAAS...2d.230N]. [^1]: https://github.com/mnicholl/superbol
---
abstract: 'Recently, it was argued that charged Anti-de Sitter (AdS) black holes admit critical behavior, without extending the phase space, similar to the Van der Waals fluid system in the $Q^2-\Psi$ plane, where $\Psi=1/v$ (the conjugate of $Q^2$) is the inverse of the specific volume [@Dehy]. In this picture, the square of the charge of the black hole, $Q^2$, is treated as a thermodynamic variable and the cosmological constant $\Lambda$ is fixed. In this paper, we would like to examine whether this new approach toward the critical behaviour of AdS black holes can work in other gravity theories such as Gauss-Bonnet (GB) gravity as well as in higher dimensional spacetime. We obtain the equation of state, $Q^2=Q^2(\Psi, T)$, the Gibbs free energy and the critical quantities of the system, and study the effects of the GB coupling $\tilde{\alpha}$ on their behaviour. We find that the critical quantities have reasonable values, provided the GB coupling constant, $\tilde{\alpha}$, is taken small and the horizon topology is assumed to be the $(d-2)$-sphere. Finally, we calculate the critical exponents and show that they are independent of the model parameters and have the same values as for the Van der Waals system, as predicted by mean field theory.'
author:
- 'H. Yazdikarimi$^{1}$, A. Sheykhi$^{1,2}$[^1] and Z. Dayyani$^{1}$'
title: 'Critical behavior of Gauss-Bonnet black holes via an alternative phase space'
---

Introduction
============

The thermodynamics of black holes was initiated around five decades ago, in the 1970's, by the works of Hawking and Bekenstein. Since the discovery of black hole thermodynamics, a lot of investigations have been carried out to disclose the similarity between the laws of black hole mechanics and those of usual thermodynamical systems. The motivation for these investigations is to understand the microscopic structure of black holes and hence shed light on the quantum theory of gravity as well.
Thermodynamics of charged black holes in the background of asymptotically AdS spacetimes is of specific interest, mainly due to the duality between gravity in AdS spacetime and the Conformal Field Theory (CFT) living on its boundary. According to the AdS/CFT correspondence [@Maldacena; @Gubser; @Witten], the thermodynamics of black holes in an AdS space can be recognized by that of the dual strongly coupled CFT on the boundary of the AdS spacetime. Besides, it has been shown that there is a complete analogy between charged black holes in AdS space and the Van der Waals liquid-gas system, with their critical exponents coinciding with those of the Van der Waals system as predicted by mean field theory. In this picture, the phase space of black hole thermodynamics is extended such that the cosmological constant is regarded as the thermodynamic pressure and its conjugate quantity as a thermodynamic volume [@Do1; @Kastor; @Do2; @Do3; @Ce1; @Ur; @Mann1]. Interestingly enough, it has been shown that both systems have extremely similar phase diagrams [@Mann1]. This analogy has been generalized to higher dimensional charged black holes [@Mann2], rotating black holes [@Altam; @Sherkat; @Sherkat1] and dilaton black holes [@Kamrani]. The studies were also extended to the critical behavior of nonlinear black holes [@Nonlinear]. When the gauge field is in the form of Born-Infeld nonlinear electrodynamics, one should extend the phase space and introduce a new thermodynamic quantity conjugate to the Born-Infeld parameter, which is necessary for the consistency of the first law of thermodynamics as well as the corresponding Smarr relation [@MannBI; @Dayyani1]. It is also of great interest to consider higher curvature corrections to Einstein gravity. In these theories the entropy expressions are not proportional to the area of the horizon, and are instead given by a more complicated relation depending on the higher-curvature terms [@Iyer].
One of the most important and useful classes of such theories is Lovelock gravity [@Lov], which leads to second order differential equations for the metric functions. The second-order Lovelock theory of gravity is well-known as GB gravity, which contains higher curvature terms in the action. The phase structure of GB black holes in AdS spaces has been explored in [@Cai1; @Dey]. Motivated by the idea that the cosmological constant can be regarded as a thermodynamic variable, the critical behavior of charged topological GB black holes in $d$-dimensional AdS spacetime has been studied in [@Shao; @Cai2; @Zou]. The thermodynamic analogy between a charged GB-AdS black hole and a Van der Waals liquid gas system has been confirmed, and it was shown that the results drastically depend on $\alpha$ and the dimension of the spacetime. It was shown that when one treats the GB coupling constant as a free thermodynamic variable [@Xu], the Van der Waals behavior occurs, and criticality and reentrant behaviour are observed [@Wei]. Furthermore, the phase structure of asymptotically AdS black holes in Lovelock gravity has also been explored [@Lovelock]. In all the works mentioned above, the cosmological constant is regarded as a thermodynamic pressure which can vary. Although there are some motivations to consider the cosmological constant as a variable, it may be more reasonable to keep it as a constant parameter. For example, in general relativity the cosmological constant is usually considered as a constant related to the zero point energy of the vacuum. Motivated by the argument given in [@Dehy], we want to study the critical behavior of GB black holes from an alternative viewpoint, in which we keep the cosmological constant as a constant parameter and instead treat the charge of the black hole (or, more precisely, $Q^2$) as an external variable which can vary. The advantage of this approach is that it provides more attractive and straightforward results.
The phase structure and critical behavior of BI black holes in an AdS space, where the charge of the system can vary and the cosmological constant (pressure) is fixed, have been investigated in [@Dehy2]. It was shown that the system indeed admits a reentrant phase transition. Recently, it was shown that this method also works for investigating the critical behaviour of Lifshitz dilaton black holes [@Dayani2], which further supports the viability of this new approach. This paper is organized as follows. In the next section we study the critical behavior of $d$-dimensional charged AdS black holes using an alternative phase space. In section \[Structure\] we review the solution of GB black holes and their thermodynamic features. In section \[d5\], we investigate the $(Q^2-\Psi)$ phase space of the GB black holes in five dimensions and obtain the critical quantities. Also, we calculate the critical exponents and the Gibbs free energy of the system. In section \[arbitrary d\] we generalize our study to higher dimensions by investigating the critical behavior of GB black holes via the alternative method. We use the results of section \[Field\] to check the accuracy of our calculations in the limit $\alpha=0$. The last section is devoted to the summary and conclusions.

Critical behaviour of AdS black holes in higher dimensions {#Field}
==========================================================

As mentioned before, an alternative approach towards investigating the critical behaviour of AdS black holes was suggested without extending the phase space [@Dehy]. The authors of [@Dehy] completed the analogy between charged AdS black holes in four dimensions and the Van der Waals fluid system by treating the square of the charge of the black hole, $Q^2$, as the thermodynamic variable and keeping the cosmological constant fixed. Our aim here is to generalize this new approach to higher dimensional charged AdS black holes.
The motivation is to check whether this approach works in higher dimensions or is only valid in four dimensions. Besides, this investigation is of great importance since it provides the background for our calculations in the next section, where we investigate the critical behaviour of GB black holes in all higher dimensions.

Critical behaviour of black holes in $d$-dimensions
---------------------------------------------------

The action of Einstein-Maxwell theory in the background of AdS spacetime in $d$-dimensions is given by $$\begin{aligned} \label{action} S &=&-\frac{1}{16\pi }\int d^{d}x\sqrt{-g}\left( R -2\Lambda-F_{\mu \nu} F^{\mu \nu}\right),\end{aligned}$$ where $R$ is the Ricci scalar, $\Lambda=-(d-1)(d-2)/2l^2$ is the cosmological constant, and $F_{\mu \nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is the electromagnetic field tensor, with the gauge potential $A_{\mu}$ given by [@Bril1; @Cai3] $$\begin{aligned} \label{function} A_{t}=\frac{Q}{(d-3)r^{d-3}}.\end{aligned}$$ The most general $d$-dimensional static metric with constant curvature boundary may be written as $$ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f\left( r\right) }+r^{2} d \Sigma^2_{d-2}, \label{metric}$$ where $d\Sigma^2_{d-2}$ stands for the line element of a $(d-2)$-dimensional hypersurface with constant scalar curvature $\left( d-2\right) \left( d-3\right) k$ and volume $\omega _{d-2}$. Here $k$ is a constant that characterizes the curvature of the hypersurface. Without loss of generality, one can take $k=0, 1, -1$, such that the black hole horizon or cosmological horizon in (\[metric\]) can be a zero (flat), positive (elliptic) or negative (hyperbolic) constant curvature hypersurface.
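As a quick consistency check (a sketch using sympy; the symbol names are ours), the gauge potential (\[function\]) yields a Coulomb-like field strength $F_{rt}=-Q/r^{d-2}$, and the source-free Maxwell equation for this static, purely electric ansatz, $\partial_r\left(r^{d-2}F^{rt}\right)=0$, is satisfied identically:

```python
import sympy as sp

r, Q, d = sp.symbols('r Q d', positive=True)

# Gauge potential A_t of Eq. (\ref{function}): A_t = Q / ((d-3) r^(d-3))
A_t = Q / ((d - 3) * r**(d - 3))

# Field strength F_{rt} = dA_t/dr for the static ansatz
F_rt = sp.diff(A_t, r)
assert sp.simplify(F_rt + Q * r**(2 - d)) == 0   # F_{rt} = -Q / r^(d-2)

# Raising indices with g^{tt} = -1/f, g^{rr} = f gives F^{rt} = -F_{rt},
# so the reduced Maxwell equation d/dr [ r^(d-2) F^{rt} ] = 0 holds:
assert sp.simplify(sp.diff(r**(d - 2) * (-F_rt), r)) == 0
```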
The function $f(r)$ is given by [@Bril1; @Cai3] $$\begin{aligned} \label{f} f(r)=k-\frac{m}{r^{d-3}}+\frac{2Q^2}{(d-2)(d-3)r^{2(d-3)}}+\frac{r^2}{l^2},\end{aligned}$$ where $m$ and $Q$ are, respectively, the mass and the charge parameters, which are related to the total mass and charge of the black hole via $$\begin{aligned} \label{mq} M=\frac{m(d-2)}{16\pi}\omega_{d-2},\ \ \quad \mathcal{Q}=\frac{Q}{4\pi} \omega_{d-2}.\end{aligned}$$ The horizon radius $r_{+}$ of the black hole is the largest real root of the equation $f(r_{+})=0$. In the extended phase space, it was argued that the mass of an AdS black hole, $M$, should be interpreted as the enthalpy $H$ [@Kastor]. It is a matter of calculation to show that in terms of the horizon radius the mass is given by $$\begin{aligned} \label{M} M=\frac{(d-2)\omega_{d-2} r_{+}^{d-1}}{16\pi l^2}+\frac{k(d-2)\omega_{d-2} r_{+}^{(d-3)}}{16\pi}+\frac{\omega_{d-2} Q^2}{8\pi (d-3) r_{+}^{d-3}}.\end{aligned}$$ The Hawking temperature of the black hole on the event horizon $r_{+}$ can be calculated as $$\begin{aligned} \label{T} T=\frac{f'(r_{+})}{4\pi}=\frac{(d-1)r_{+}}{4\pi l^2}+\frac{(d-3)k}{4\pi r_{+}}-\frac{Q^2}{2\pi (d-2) r_{+}^{2d-5}}.\end{aligned}$$ The entropy and electric potential $\Phi$ of the black hole are given by [@MannBI] $$\begin{aligned} &&S=\frac{r_{+}^{d-2}}{4}\omega_{d-2}\\ &&\Phi=\frac{Q}{(d-3)r_+ ^{d-3}}.\end{aligned}$$ According to [@MannBI], in the extended phase space approach to the critical behaviour of AdS black holes, the cosmological constant is interpreted as a thermodynamic pressure $P$, and its conjugate quantity, the thermodynamic volume, is $$\begin{aligned} P=-\frac{\Lambda}{8\pi}=\frac{(d-1)(d-2)}{16\pi l^2}, \ \ \ V= \left (\frac{\partial M}{\partial P}\right)_{Q,S}=\frac{\omega_{d-2} r_{+}^{d-1}}{d-1}.\label{pV}\end{aligned}$$ It was shown that all these quantities satisfy the following Smarr formula [@MannBI] $$M=\frac{d-2}{d-3}T S+\Phi
\mathcal{Q}-\frac{2}{d-3}V P,$$ Besides, the first law of thermodynamics with variable $P$ and fixed $\mathcal{Q}$ is written as $$dM=TdS+\Phi d\mathcal{Q}+ V dP.$$ It was shown that charged AdS black holes represent a critical behavior similar to Van der Waals fluid, if one treat the cosmological constant as a thermodynamic variable [@Mann1; @MannBI]. Although this idea has got a lot of interests in the literatures, it was argued that by keeping the cosmological constant as a fixed parameter and instead considering $Q^2$ as a thermodynamic variable, the critical behavior can be seen in $Q^2-\Psi$ plane [@Dehy]. Following [@Dehy], we replace the term $\Phi dQ$ in the first law with $\Psi dQ^2$, $$\begin{aligned} dM=TdS+\Psi dQ^2+VdP\end{aligned}$$ It is a matter of calculation to show that the Smarr formula takes the form $$M=\frac{d-2}{d-3}T S+\Psi Q^2-\frac{2}{d-3}V P,$$ where we have defined $$\begin{aligned} \Psi=\left (\frac{\partial M}{\partial Q^2}\right)_{P,S}=\frac{\omega_{d-2}}{8\pi(d-3)r_{+}^{d-3}}.\label{Psi}\end{aligned}$$ Critical behavior ----------------- We start by writing the equation of state in the form $Q^2(T,\Psi)$ by using Eq.(\[T\]). For this purpose, we first write $$\begin{aligned} \label{q2} Q^2(T,r_{+}) =-2T\pi (d-2) r_{+}^{2d-5}+\frac{(d-1)(d-2)r_{+}^{2d-4}}{2l^2} +\frac{k (d-3)(d-2)r_{+}^{2d-6}}{2}.\end{aligned}$$ After replacing $r_+$ from Eq. (\[Psi\]), one may rewrite the equation of state as a function of $\Psi$ and $T$, $$\begin{aligned} \label{eq1} Q^2(T,\Psi)=-2T\pi (d-2)Y^{2d-5} \Psi^{\frac{5-2d}{d-3}}+\frac{(d-1)(d-2)}{2l^2} Y^{2d-4}\Psi^{\frac{4-2d}{d-3}} +\frac{k (d-3)(d-2)}{2 \Psi^2}Y^{2d-6},\end{aligned}$$ where $$Y=\left(\frac{\omega_{d-2}}{8\pi (d-3)}\right)^{\frac{1}{d-3}}.$$ In order to investigate the critical behavior of the system and compare with Van der Waals gas, we should plot isotherm diagrams. The isotherm diagrams $Q^2-\Psi$ given in Fig. 
\[fig1\] predict a first-order phase transition in the system, which is in complete analogy with the Van der Waals liquid-gas system. Note that the oscillating part ($\frac{\partial Q^2}{\partial \Psi} >0$) of the isotherm diagrams indicates unstable regions. The critical point can be obtained by solving the following equations, $$\frac{\partial Q^{2}}{\partial \Psi}\Big|_{T_{c}}=0,\quad \frac{\partial ^{2} Q^{2}}{% \partial \Psi^{2}}\Big|_{T_{c}}=0. \label{CritEq}$$ It is a matter of calculation to show that the thermodynamic quantities at the critical point are given by $$\begin{aligned} T_c=\frac{\sqrt{(d-2)(d-1)k}\text{ }(d-3)}{\pi l(2d-5)}.\end{aligned}$$ $$\begin{aligned} \Psi_c= \frac{\omega_{d-2} \left[(d-1)(d-2)k\right]^{\frac{(d-3)}{2}}}{8\pi l^{(d-3)}(d-3)^{(d-2)}}.\end{aligned}$$ $$\begin{aligned} Q^2_c=\frac{l^{(2d-6)} k^{(d-2)}(d-3)^{(2d-5)}}{2(2d-5)[(d-1)(d-2)]^{(d-3)}}.\end{aligned}$$ $$\begin{aligned} \rho_c=T_c\Psi_cQ^2_c=\frac{\omega_{d-2}\text{ } l^{(d-4)}(d-3)^{(d-2)}\text{ } k^{(d/2)}}{16\pi^2(2d-5)^2 [(d-1)(d-2)]^{\frac{(d-4)}{2}}}.\end{aligned}$$ In the limiting case where $d=4$ and $k=1$, the above equation reduces to $\rho_c=1/36\pi$, which is consistent with the result in [@Dehy]. Note that $\rho_c$ is independent of $l$ only in four dimensions. It is clear from the above equations that the critical quantities have reasonable values only in the case $k=1$. This implies that the system admits a critical behaviour only for the spherical horizon topology. The critical behaviour of a system is identified by its partition function. Indeed, the thermodynamic potential, which is proportional to the Euclidean action calculated at fixed $Q$ and $T$, is the Gibbs free energy.
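These closed-form critical quantities can be checked symbolically. The sketch below (sympy, with $k=l=1$) solves the two conditions in terms of $r_+$ — equivalent to the $\Psi$-derivatives since $\Psi$ is a monotonic function of $r_+$ — and reproduces $\rho_c=1/36\pi$ for $d=4$:

```python
import sympy as sp

T, r = sp.symbols('T r', positive=True)
d, k, l = 4, 1, 1
omega = 4 * sp.pi  # omega_{d-2} for d = 4 (area of the unit 2-sphere)

# Equation of state Q^2(T, r_+) of Eq. (\ref{q2})
Q2 = (-2 * sp.pi * T * (d - 2) * r**(2 * d - 5)
      + (d - 1) * (d - 2) * r**(2 * d - 4) / (2 * l**2)
      + k * (d - 3) * (d - 2) * r**(2 * d - 6) / 2)

# Critical point: the first and second derivatives with respect to Psi
# vanish, which is equivalent to the vanishing of the r-derivatives.
T_of_r = sp.solve(sp.diff(Q2, r, 2), T)[0]
r_c = [s for s in sp.solve(sp.diff(Q2, r).subs(T, T_of_r), r)
       if s.is_positive][0]

T_c = sp.simplify(T_of_r.subs(r, r_c))
Q2_c = sp.simplify(Q2.subs({T: T_c, r: r_c}))
Psi_c = omega / (8 * sp.pi * (d - 3) * r_c**(d - 3))
rho_c = sp.simplify(T_c * Psi_c * Q2_c)

assert sp.simplify(rho_c - 1 / (36 * sp.pi)) == 0
```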
To get more information about the phase transition we calculate the Gibbs free energy $G=M-TS$ as [@Cham] $$\begin{aligned} \label{G} G(Q^2,T)=\frac{\omega_{d-2}}{8\pi (d-3)}\left(k (d-2)(d-3) r_{+}^{d-3}- 2\pi (2d-5)T r_{+}^{d-2}+\frac{(d-2)^2 r_{+}^{d-1}}{l^2}\right),\end{aligned}$$ where $r_{+}$ is a function of $T$ and $Q^2$ through Eq. (\[q2\]). The swallowtail behavior of the Gibbs free energy in Fig. \[Fig2\] represents a first-order phase transition in the system. A first-order phase transition occurs when the Gibbs free energy is continuous but its first derivative is discontinuous, just as in the Van der Waals liquid-gas system.

Critical exponents
------------------

Our aim here is to calculate the critical exponents by using the alternative approach for higher dimensional charged AdS black holes. The behavior of thermodynamic functions in the vicinity of the critical point is characterized by the critical exponents. To find the critical exponents, let us define the reduced thermodynamic variables, $$\begin{aligned} \Psi_r\equiv \frac{\Psi}{\Psi_c},\quad Q^2_r\equiv \frac{Q^2}{Q^2_c},\quad T_r\equiv \frac{T}{T_c}.\end{aligned}$$ Since the critical exponents should be studied near the critical point, we write the reduced variables in the form $$\begin{aligned} \Psi_r=1+\psi,\quad Q^2_r=1+\phi,\quad T_r=1+t,\end{aligned}$$ where $t$, $\psi$ and $\phi$ indicate the deviation from the critical point.
One may expand the equation of state (\[eq1\]) near the critical point and write $$\begin{aligned} \label{fi} \phi=-4(d-2)(d-3)t +4 (d-2)(2d-5)\psi t -\frac{2(d-2)(2d-5)}{3(d-3)^2}\psi^3+o(t \psi^2,\psi^4).\end{aligned}$$ Using Maxwell’s equal-area law and differentiating Eq.(\[fi\]) with respect to $\psi$ at a fixed temperature leads to $$\begin{aligned} \phi=-4(d-2)(d-3)t+4 (d-2)(2d-5)\psi_l t -\frac{2(d-2)(2d-5)}{3(d-3)^2}\psi_{l}^3 \nonumber\\=-4(d-2)(d-3)t+4 (d-2)(2d-5)\psi_s t -\frac{2(d-2)(2d-5)}{3(d-3)^2}\psi_{s}^3.\end{aligned}$$ Indeed $$\begin{aligned} 0=\Psi_c \int_{\psi_l}^{ \psi_s} \psi \left(\frac{\partial Q^2}{\partial \psi}\right)d\psi=\Psi_c \int_{\psi_l}^{ \psi_s} \psi \left[ 4 (d-2)(2d-5) t-\frac{6(d-2)(2d-5)}{3(d-3)^2}\psi^2\right] d\psi,\end{aligned}$$ where $\psi_l$ and $\psi_s$ correspond to the large and small black hole phases, respectively. The non-trivial solutions of the above equations are given by $$\begin{aligned} \label{sic} \psi_l=-\psi_s= (d-3)\sqrt{6|t|}.\end{aligned}$$ From Eq.(\[sic\]) we conclude that the critical exponent of the order parameter is $\beta =1/2$. To obtain the critical exponent $\gamma$, the isothermal compressibility can be calculated as follows $$\begin{aligned} \kappa_T=\left(\frac{\partial \Psi}{\partial Q^2}\right)_T=\frac{\Psi_c }{4 (d-2)(2d-5)Q_c^2 t }\Longrightarrow \gamma=1.\end{aligned}$$ In addition, it can be easily seen that $$\begin{aligned} \phi|_{t=0}= -\frac{2(d-2)(2d-5)}{3(d-3)^2}\psi^3 \Longrightarrow \delta=3.\end{aligned}$$ The heat capacity near the critical point is $c_\Psi=T\frac{\partial S}{\partial T}\big|_\Psi=0$, so the critical exponent $\alpha=0$. Therefore, we have shown that the critical exponents of the higher dimensional black holes in the new approach (with fixed $\Lambda$ and variable $Q^2$) are exactly the same as those presented in [@MannBI] (with $\Lambda$ variable) and coincide with those of the Van der Waals fluid system.
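The coefficients of the near-critical expansion can be verified with a short symbolic computation (a sketch in sympy, shown for $d=4$ and $d=5$ with $k=l=1$; it expands the equation of state in the reduced variables and compares with the coefficients $-4(d-2)(d-3)$, $4(d-2)(2d-5)$ and $-\frac{2(d-2)(2d-5)}{3(d-3)^2}$ quoted above):

```python
import sympy as sp

psi, t = sp.symbols('psi t')

for d in (4, 5):
    k = l = 1
    # Closed-form critical temperature and horizon radius for k = l = 1
    r_c = sp.Integer(d - 3) / sp.sqrt((d - 1) * (d - 2))
    T_c = sp.sqrt((d - 1) * (d - 2)) * (d - 3) / (sp.pi * (2 * d - 5))
    # Reduced variables: T = T_c (1+t); Psi ~ r^{-(d-3)} implies
    # r = r_c (1+psi)^{-1/(d-3)}
    r = r_c * (1 + psi)**sp.Rational(-1, d - 3)
    T = T_c * (1 + t)
    Q2 = (-2 * sp.pi * T * (d - 2) * r**(2 * d - 5)
          + (d - 1) * (d - 2) * r**(2 * d - 4) / (2 * l**2)
          + k * (d - 3) * (d - 2) * r**(2 * d - 6) / 2)
    Q2_c = Q2.subs({psi: 0, t: 0})
    phi = sp.expand(sp.series(Q2 / Q2_c - 1, psi, 0, 4).removeO())

    assert sp.simplify(phi.coeff(t, 1).coeff(psi, 0)
                       + 4 * (d - 2) * (d - 3)) == 0
    assert sp.simplify(phi.coeff(t, 1).coeff(psi, 1)
                       - 4 * (d - 2) * (2 * d - 5)) == 0
    assert sp.simplify(phi.coeff(t, 0).coeff(psi, 3)
                       + sp.Rational(2, 3) * (d - 2) * (2 * d - 5) / (d - 3)**2) == 0
```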
It is worthwhile to mention that the new approach is more realistic from the physical and mathematical points of view, because the cosmological constant is not naturally a variable quantity.

Thermodynamics of Gauss-Bonnet black holes in AdS space {#Structure}
=======================================================

We consider the action of $d$-dimensional Einstein-Gauss-Bonnet-Maxwell theory in the presence of a cosmological constant $\Lambda$, which can be written as $$\begin{aligned} S =\frac{1}{16\pi}\int d^dx\sqrt{-g}[{R-2\Lambda-\alpha_{GB}\left(R_{\mu\nu\gamma\delta}R^{\mu\nu\gamma\delta}-4R_{\mu\nu} R^{\mu\nu}+R^2\right) -4\pi F_{\mu\nu} F^{\mu\nu}}], \label{Stress}\end{aligned}$$ where $ \alpha_{GB}$ is the GB coefficient with dimension of \[length\]$^2$, which is proportional to the inverse string tension with positive coefficient [@Bou]. The metric function $f(r)$ of charged GB black holes in AdS space is given by [@Cai2] $$\begin{aligned} f(r)= k+ \frac{r^2}{2\tilde{\alpha}}\Bigg[1-\sqrt{1-\frac{8\tilde{\alpha} Q^2}{(d-2)(d-3)r^{2d-4}} +\frac{64\pi\tilde{\alpha} M}{(d-2)\omega_{d-2} r^{d-1}}-\frac{64\pi\tilde{\alpha} P}{(d-2)(d-1)}} \ \ \Bigg],\end{aligned}$$ where $\tilde{\alpha}=(d-3)(d-4)\alpha_{GB}$, $k$ represents the topology of the horizon, and we have replaced $\Lambda$ with $P$ by using Eq. (\[pV\]). The constant $M$ is the mass, while $Q$ is related to the charge of the black hole. The position of the black hole event horizon is determined as the largest root of $f(r_+)=0$, and hence the mass of the black hole, which is identified with the enthalpy, is calculated as [@Cai2] $$\begin{aligned} \label{mm} H\equiv M =\frac{(d-2)\omega_{d-2}r_{+}^{(d-3)}}{16 \pi}\left(k+\frac{k^2\tilde{\alpha}}{r_{+}^2}+\frac{16\pi P r_{+}^2}{(d-1)(d-2)}+\frac{2Q^2r_{+}^{6-2d}}{(d-2)(d-3)}\right),\end{aligned}$$ where $P$ is the pressure defined in (\[pV\]).
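One can check symbolically that solving $f(r_+)=0$ for $M$ indeed reproduces the enthalpy (\[mm\]); a sketch with sympy (symbol names are ours, and we work with the squared horizon condition so no square root is involved):

```python
import sympy as sp

r, k, a, Q, P, M, w, d = sp.symbols('r k alpha Q P M omega d', positive=True)

# Argument of the square root in the GB metric function f(r)
X = (1 - 8 * a * Q**2 / ((d - 2) * (d - 3) * r**(2 * d - 4))
       + 64 * sp.pi * a * M / ((d - 2) * w * r**(d - 1))
       - 64 * sp.pi * a * P / ((d - 2) * (d - 1)))

# f(r_+) = 0 gives sqrt(X) = 1 + 2 k a / r^2, i.e. X = (1 + 2 k a / r^2)^2;
# this is linear in M and can be solved directly.
M_sol = sp.solve(sp.Eq(X, (1 + 2 * k * a / r**2)**2), M)[0]

# Enthalpy quoted in Eq. (\ref{mm})
M_mm = ((d - 2) * w * r**(d - 3) / (16 * sp.pi)
        * (k + k**2 * a / r**2
           + 16 * sp.pi * P * r**2 / ((d - 1) * (d - 2))
           + 2 * Q**2 * r**(6 - 2 * d) / ((d - 2) * (d - 3))))

assert sp.simplify(M_sol - M_mm) == 0
```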
The temperature and entropy of the black hole are given by $$\begin{aligned} T=\frac{f'( r_{+})}{4 \pi} =\frac{\frac{16\pi P r_{+}^4}{(d-2)}+(d-3)k r_{+}^2+(d-5)k^2\tilde{\alpha}-\frac{2Q^2}{(d-2)r_{+}^{(2d-8)} }}{4\pi r_{+}( r_{+}^2+2k\tilde{\alpha})},\label{Temp}\end{aligned}$$ and $$\begin{aligned} S=\int_{0}^{ r_{+}} T^{-1} \left(\frac{\partial M}{\partial r}\right)_{Q,P}dr=\frac{\omega_{d-2}r_{+}^{(d-2)}}{4}\left[1+\frac{2(d-2)\tilde{\alpha} k}{(d-4) r_{+}^2}\right].\end{aligned}$$ We can also calculate the thermodynamic volume $V$ and the electric potential $\Phi$ as $$\begin{aligned} V= \left (\frac{\partial M}{\partial P}\right)_{Q^2,S}=\frac{\omega_{d-2} r_{+}^{(d-1)}}{d-1},\end{aligned}$$ $$\begin{aligned} \Phi=\frac{Q\omega_{d-2}}{4\pi(d-3)r_{+}^{d-3}}.\end{aligned}$$ These quantities satisfy the first law of black hole thermodynamics [@Cai2] $$\begin{aligned} dM=TdS+\Phi dQ+VdP+\Omega d\tilde{\alpha},\end{aligned}$$ where $$\begin{aligned} \Omega= \left(\frac{\partial M}{\partial \tilde{\alpha}}\right)_{S,Q,P},\end{aligned}$$ is the quantity conjugate to the GB coefficient $\tilde{\alpha}$. Since $\tilde{\alpha}$ is a dimensionful parameter, the corresponding term inevitably appears in the Smarr formula [@Cai2] $$M=\frac{d-2}{d-3}T S+\Phi Q-\frac{2}{d-3}V P + \frac{2}{d-3} \Omega \tilde{\alpha}.$$ As mentioned before, we want to study the phase structure from a different point of view, in which $Q^2$ plays the role of a thermodynamic variable. We therefore define a new variable in our alternative approach, namely $\Psi$, which is $$\begin{aligned} \Psi=\left(\frac{\partial M}{\partial Q^2}\right)_{P,S}=\frac{\omega_{d-2}r_{+}^{(3-d)} }{8\pi (d-3)}.\label{psi}\end{aligned}$$ The new variable $\Psi$, the pressure $P$, and the temperature $T$ are intensive parameters conjugate to $Q^2$, $V$, and $S$, respectively. We also replace the term $\Phi dQ$ by $\Psi dQ^2$ in the first law of thermodynamics.
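As a quick numerical cross-check (an illustration only, with arbitrary test values, not part of the original derivation), one can verify for $d=5$, $k=1$ that the temperature of Eq. (\[Temp\]) coincides with $(\partial M/\partial S)_{Q^2,P}$ computed from Eq. (\[mm\]), and that $\Psi$ of Eq. (\[psi\]) coincides with $(\partial M/\partial Q^2)_{P,S}$:

```python
import math

# Finite-difference check (d = 5, k = 1) that Eq. (Temp) and Eq. (psi)
# follow from the mass/enthalpy of Eq. (mm).  The parameter values are
# arbitrary test numbers, not taken from the paper.
w3 = 2 * math.pi**2                      # omega_{d-2} for d = 5
alpha, P, Q, r = 0.05, 0.1, 0.5, 1.2

def M(r, Q2):
    """Mass/enthalpy, Eq. (mm), for d = 5, k = 1."""
    return (3 * w3 * r**2 / (16 * math.pi)) * (
        1 + alpha / r**2 + 16 * math.pi * P * r**2 / 12 + 2 * Q2 * r**-4 / 6)

def S(r):
    """Entropy for d = 5, k = 1."""
    return (w3 * r**3 / 4) * (1 + 6 * alpha / r**2)

# Temperature from Eq. (Temp), d = 5, k = 1:
T = (16 * math.pi * P * r**4 / 3 + 2 * r**2 - 2 * Q**2 / (3 * r**2)) \
    / (4 * math.pi * r * (r**2 + 2 * alpha))

h = 1e-6
dMdr = (M(r + h, Q**2) - M(r - h, Q**2)) / (2 * h)
dSdr = (S(r + h) - S(r - h)) / (2 * h)
assert abs(dMdr / dSdr - T) < 1e-8       # T = (dM/dS)_{Q^2, P}

Psi = w3 / (16 * math.pi * r**2)         # Eq. (psi) for d = 5
dMdQ2 = (M(r, Q**2 + h) - M(r, Q**2 - h)) / (2 * h)
assert abs(dMdQ2 - Psi) < 1e-8           # Psi = (dM/dQ^2)_{P, S}
```

Since $S$ does not depend on $Q^2$, differentiating $M$ at fixed $r_+$ in the last line is equivalent to differentiating at fixed $S$.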
By using (\[psi\]), one may easily show that the Smarr formula and the first law of thermodynamics are now given by $$M=\frac{d-2}{d-3}T S+\Psi Q^2 -\frac{2}{d-3}V P + \frac{2}{d-3} \Omega \tilde{\alpha}$$ and $$\begin{aligned} dM=TdS+\Psi dQ ^2+VdP+\Omega d\tilde{\alpha}.\end{aligned}$$ Critical behavior of GB black holes in five dimensions {#d5} ===================================================== In order to study the critical behaviour of GB black holes using the alternative approach, we first consider the case of $d=5$ dimensions. In this case, the equation of state is simple enough that we can solve the equations and obtain the critical quantities exactly. Since in this approach the cosmological constant (pressure) is regarded as a fixed quantity, we consider the cases $P=0$ and $P\neq0$ separately. Critical behavior with $P=0$ ---------------------------- Setting $P=0$ and $d=5$, the mass of the GB black hole given in Eq.(\[mm\]) can be written as $$\begin{aligned} M= \frac{3 \tilde{\alpha} k^2 \omega_{3}}{16 \pi }+\frac{3 k r_{+}^2 \omega_{3}}{16 \pi }+\frac{{Q^2} \omega_{3}}{64 \pi r_{+}^2}.\end{aligned}$$ Combining the Hawking temperature given in Eq.(\[Temp\]) with the above condition, the equation of state of the black hole can be written as $$\begin{aligned} \label{qqk} Q^2(\Psi, T)=\frac{3 k {\omega_{3}}^2}{256 \pi ^2 \Psi ^2}-\frac{3 \tilde{\alpha} k T {\omega_{3}}^{3/2}}{16 \sqrt{\pi } \Psi ^{3/2}}-\frac{3 T {\omega_{3}}^{5/2}}{512 \pi ^{3/2} \Psi ^{5/2}}.\end{aligned}$$ Critical points occur at stationary points of inflection in the $Q^2-\Psi$ diagram, where $$\frac{\partial Q^{2}}{\partial \Psi}\Big|_{T_{c}}=0,\quad \frac{\partial ^{2} Q^{2}}{% \partial \Psi^{2}}\Big|_{T_{c}}=0.
\label{crit eq}$$ In the case of a spherical horizon, where $k=1$ and $\omega_{3}=2 \pi^2$, one may obtain the critical quantities of the GB black hole as $$\begin{aligned} T_c=\frac{1}{\pi \sqrt{30 \tilde{\alpha}}},\quad \Psi_c= \frac{5 \pi}{48 \tilde{\alpha}},\quad Q^2_c=-\frac{36 \tilde{\alpha}^2}{125}.\end{aligned}$$ We see that in this case $Q^2_c$ is negative, which is not physically acceptable. Therefore, we conclude that there is no phase transition in this case, and the $Q^2-\Psi$ diagram has no similarity with the isotherm diagrams of the Van der Waals system. Let us check whether there is a phase transition or critical behavior for the other horizon topologies, namely the flat $(k=0)$ and hyperbolic $(k=-1)$ cases. In the hyperbolic $(k=-1)$ case, the equation of state reduces to $$\begin{aligned} \label{qqd5p} Q^2(\Psi, T)=-\frac{3 {\omega_{3}}^2}{256 \pi ^2 \Psi ^2}+\frac{3 \tilde{\alpha} T {\omega_{3}}^{3/2}}{16 \sqrt{\pi } \Psi ^{3/2}}-\frac{3 T {\omega_{3}}^{5/2}}{512 \pi ^{3/2} \Psi ^{5/2}}.\end{aligned}$$ In this equation the positive term is smaller than the two negative terms, because $\tilde{\alpha}$ always takes a small positive value $(0<\tilde{\alpha} <1)$. Therefore $Q^2$ is a monotonic function of $\Psi$, and there is no critical point or phase transition. For the flat horizon, the equation of state reads $$\begin{aligned} Q^2(\Psi, T)= -\frac{3 \pi ^{7/2} T}{64 \sqrt{2} \Psi ^{5/2}}.\end{aligned}$$ It is clear that this is a monotonic function, which cannot give rise to any phase transition. One may conclude that the existence of $\Lambda$ (pressure) is essential for critical behavior from this new standpoint. Similarly, the existence of $Q$ is necessary when one treats $\Lambda$ as a thermodynamic variable [@Mann1]. This points to a meaningful symmetry between the new approach with fixed $\Lambda$ and the old one with fixed $Q$.
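The $P=0$ result quoted above can be spot-checked numerically: the sketch below (with the arbitrary test value $\tilde{\alpha}=0.1$) confirms that $(T_c,\Psi_c)$ is a stationary inflection point of Eq. (\[qqk\]) for $k=1$ and that the corresponding $Q^2_c=-36\tilde{\alpha}^2/125$ is indeed negative, hence unphysical:

```python
import math

# Numerical check of the P = 0, k = 1 critical point of Eq. (qqk).
# alpha = 0.1 is an arbitrary test value.
alpha = 0.1
w3 = 2 * math.pi**2

def Q2(Psi, T):
    """Equation of state, Eq. (qqk), with k = 1."""
    return (3 * w3**2 / (256 * math.pi**2 * Psi**2)
            - 3 * alpha * T * w3**1.5 / (16 * math.sqrt(math.pi) * Psi**1.5)
            - 3 * T * w3**2.5 / (512 * math.pi**1.5 * Psi**2.5))

Tc = 1 / (math.pi * math.sqrt(30 * alpha))
Psic = 5 * math.pi / (48 * alpha)
Q2c = -36 * alpha**2 / 125

h = 1e-4 * Psic
d1 = (Q2(Psic + h, Tc) - Q2(Psic - h, Tc)) / (2 * h)
d2 = (Q2(Psic + h, Tc) - 2 * Q2(Psic, Tc) + Q2(Psic - h, Tc)) / h**2

assert abs(d1) < 1e-6 and abs(d2) < 1e-6    # stationary inflection point
assert abs(Q2(Psic, Tc) - Q2c) < 1e-10      # Q^2_c = -36 alpha^2 / 125
assert Q2c < 0                              # negative, so no physical transition
```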
Critical behavior with $P\neq0$ -------------------------------- Following the approach taken previously, one finds no critical behavior in the cases $(k=0,-1 )$. Thus, we just focus on black holes with spherical topology ($k=1$). In this case Eq.(\[mm\]) takes the form $$\begin{aligned} M=\frac{3}{8} \pi \tilde{\alpha} k^2+\frac{3}{8} \pi k r_{+}^2+\frac{1}{2} \pi ^2 P r_{+}^4+\frac{\pi Q^2}{8 r_{+}^2},\end{aligned}$$ where the relation between $r_{+}$ and the new parameter $\Psi$ is $r_{+}=\frac{\sqrt{\pi }}{2 \sqrt{2 \Psi }}$. Using the Hawking temperature in Eq.(\[Temp\]), the equation of state may be obtained as $$\begin{aligned} \label{QQ} Q^2(\Psi, T)=\frac{\pi ^2 }{128 \Psi ^3}\left(2 \pi ^2 P-48 \sqrt{2\pi} \Psi ^{3/2} \tilde{\alpha} T-3 \sqrt{2} \pi ^{3/2} \Psi^{1/2 }T+6 \Psi \right).\label{eq state d=5}\end{aligned}$$ As the isotherm diagrams in Fig.\[Fig3\] show, for constant pressure and $T=T_c$ there is an inflection point in the $Q^2-\Psi$ diagram, which is the critical point where the second-order phase transition occurs.
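Equation (\[QQ\]) can be cross-checked against the temperature (\[Temp\]): solving Eq. (\[Temp\]) for $Q^2$ at a given horizon radius $r_+$ and mapping $r_+\rightarrow\Psi$ must reproduce the equation of state. A sketch with arbitrary test values (not from the paper):

```python
import math

# Consistency check of Eq. (QQ) for d = 5, k = 1: inverting the Hawking
# temperature for Q^2 must agree with Q^2(Psi, T) once r_+ is expressed
# through Psi = pi/(8 r_+^2), i.e. r_+ = sqrt(pi)/(2 sqrt(2 Psi)).
alpha, P = 0.05, 0.1

def Q2_state(Psi, T):
    """Equation of state, Eq. (QQ)."""
    return (math.pi**2 / (128 * Psi**3)) * (
        2 * math.pi**2 * P
        - 48 * math.sqrt(2 * math.pi) * Psi**1.5 * alpha * T
        - 3 * math.sqrt(2) * math.pi**1.5 * math.sqrt(Psi) * T
        + 6 * Psi)

def Q2_from_T(r, T):
    """Solve the Hawking temperature, Eq. (Temp), for Q^2 (d = 5, k = 1)."""
    return 1.5 * r**2 * (16 * math.pi * P * r**4 / 3 + 2 * r**2
                         - 4 * math.pi * r * T * (r**2 + 2 * alpha))

r, T = 0.9, 0.05
Psi = math.pi / (8 * r**2)
assert abs(Q2_state(Psi, T) - Q2_from_T(r, T)) < 1e-9
```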
The critical values read $$\begin{aligned} T_c=\frac{\left[(3 \Gamma+25) l^2-162 \tilde{\alpha} \right] }{100 \sqrt{3} \pi \sqrt{\tilde{\alpha} } l^2} \text{ }\sqrt{-\Gamma-\frac{54 \tilde{\alpha} }{l^2}+5},\end{aligned}$$ $$\begin{aligned} \Psi_c=-\frac{\pi \left[54 \tilde{\alpha} +(\Gamma-5) l^2\right]}{96 \tilde{\alpha} l^2},\end{aligned}$$ $$\begin{aligned} Q^2_c=-\frac{144\tilde{\alpha} ^2 l^4 \left[126 \tilde{\alpha} +(\Gamma-5) l^2\right]}{5 \left[54 \tilde{\alpha} +(\Gamma-5) l^2\right]^3},\end{aligned}$$ where $$\Gamma=\sqrt{\frac{36 \tilde{\alpha} \left(81 \tilde{\alpha} -25 l^2\right)}{l^4}+25}.$$ We can also find $$\begin{aligned} \rho_c=T_c \Psi_c Q^2_c=\frac{\sqrt{3 \tilde{\alpha}} \left[(3 \Gamma+25) l^2-162 \tilde{\alpha} \right] \left[126 \tilde{\alpha} +(\Gamma-5) l^2\right] }{1000 \left[54 \tilde{\alpha} +(\Gamma-5) l^2\right]^2} \sqrt{-\Gamma-\frac{54 \tilde{\alpha} }{l^2}+5}.\end{aligned}$$ It can be seen that the above equations admit a phase transition with acceptable critical quantities provided the following two constraints are satisfied: $$\label{lag} \left\{ \begin{array}{ll} \frac{36 \tilde{\alpha} \left(81 \tilde{\alpha} -25 l^2\right)}{l^4}+25 \geq 0, & \\ & \\ -\Gamma-\frac{54 \tilde{\alpha} }{l^2}+5\geq 0. & \end{array} \right.$$ As one can see, these conditions depend on the values of $l$ and $\tilde{\alpha}$. However, they are satisfied automatically for small $\tilde{\alpha}$, as one may see in the next section.
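Both statements can be illustrated numerically: for small $\tilde{\alpha}$ (here with the arbitrary choice $l=1$) the two conditions of Eq. (\[lag\]) hold, and the exact critical temperature tends to the RN-AdS value $4\sqrt{3}/(5\pi l)$ of section \[Field\]:

```python
import math

# Check that, for small alpha (with the arbitrary choice l = 1), the exact
# critical temperature written with Gamma satisfies both constraints of
# Eq. (lag) and approaches the alpha -> 0 limit 4*sqrt(3)/(5*pi*l).
l = 1.0

def Tc(a):
    G = math.sqrt(36 * a * (81 * a - 25 * l**2) / l**4 + 25)
    return (((3 * G + 25) * l**2 - 162 * a)
            / (100 * math.sqrt(3) * math.pi * math.sqrt(a) * l**2)
            * math.sqrt(-G - 54 * a / l**2 + 5))

for a in (1e-2, 1e-3, 1e-4):
    rad = 36 * a * (81 * a - 25 * l**2) / l**4 + 25
    assert rad >= 0                                   # first condition of (lag)
    assert -math.sqrt(rad) - 54 * a / l**2 + 5 >= 0   # second condition of (lag)

# small-alpha limit of the critical temperature
assert abs(Tc(1e-6) - 4 * math.sqrt(3) / (5 * math.pi * l)) < 1e-4
```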
In the limit of small $\tilde{\alpha}$, the series expansions of the critical quantities are $$\begin{aligned} T_c=\frac{4 \sqrt{3}}{5 \pi l}-\frac{72 \sqrt{3} \tilde{\alpha} }{25 \pi l^3},\quad \Psi_c=\frac{27 \pi \tilde{\alpha} }{5 l^4}+\frac{3 \pi }{8 l^2} ,\quad Q^2_c=\frac{l^4}{45}-\frac{32 \tilde{\alpha} l^2}{25},\end{aligned}$$ which for small $\tilde{\alpha}$ leads to $$\begin{aligned} \rho_c=T_c \Psi_c Q^2_c=\frac{l}{50 \sqrt{3} }-\frac{39}{125} \left(\frac{ \sqrt{3}}{l}\right) \tilde{\alpha} +O(\tilde{\alpha}^2).\end{aligned}$$ In the absence of GB correction terms ($\tilde{\alpha}=0$), all critical values reduce to the results of section \[Field\]. The critical behavior of a thermodynamic system can be characterized by the Gibbs free energy, which in our case can be written as $$\begin{aligned} G(Q^2,T)=\frac{3 \pi \tilde{\alpha} }{8}+\frac{9 \pi r_{+}^4}{16 l^2}-\frac{1}{16} \pi ^2 r_{+} T \left(54 \tilde{\alpha} +11 r_{+}^2\right)+\frac{15 \pi r_{+}^2}{32}.\end{aligned}$$ The behavior of the Gibbs free energy is depicted in Fig.\[Fig4\] in terms of $Q^2$ for various temperatures. Evidently, for $T>T_c$ the Gibbs free energy develops a swallowtail shape, which signals a first-order phase transition. The critical behavior of the GB black hole ($k=1$) can be characterized by the critical exponents. In order to examine the critical exponents, we introduce the following reduced thermodynamic variables $$\begin{aligned} \Psi_r\equiv \frac{\Psi}{\Psi_c}=1+\psi,\quad Q^2_r\equiv \frac{Q^2}{Q^2_c}=1+\phi,\quad T_r\equiv \frac{T}{T_c}=1+t.\end{aligned}$$ The Taylor expansion of Eq.(\[QQ\]) is $$\begin{aligned} \label{abc} \phi=A t +B\psi t - C\psi^3+o(t \psi^2 ,\psi^4),\end{aligned}$$ where $A=\left(-24-\frac{576\tilde{\alpha}}{l^2}\right)$, $B=\left(60+\frac{1296\tilde{\alpha}}{l^2}\right)$ and $C=\left(\frac{5}{2}+\frac{18\tilde{\alpha}}{l^2}\right)$.
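The coefficients $A$, $B$, $C$ can be used directly in an equal-area construction on the truncated cubic $\phi=At+B\psi t-C\psi^3$: the symmetric ansatz $\psi_s=-\psi_l$ with $\psi_l^2=Bt/C$ satisfies both coexistence conditions, so the coexistence width scales as $t^{1/2}$, giving $\beta=1/2$. A minimal numeric sketch (the values $\tilde{\alpha}=0.01$, $l=1$, and the sign convention $Bt/C>0$ on the coexistence side, are assumptions for illustration):

```python
import math

# Equal-area construction on phi = A t + B psi t - C psi^3 and a log-log
# fit of the coexistence width, recovering the exponent beta = 1/2.
a_gb, l = 0.01, 1.0            # arbitrary test values
B = 60 + 1296 * a_gb / l**2
C = 2.5 + 18 * a_gb / l**2

widths, ts = [], [1e-2, 1e-3, 1e-4]
for t in ts:
    psi_l = math.sqrt(B * t / C)     # symmetric ansatz psi_s = -psi_l
    psi_s = -psi_l
    # equal phi on the two branches (the A t terms cancel)
    assert abs((B*psi_l*t - C*psi_l**3) - (B*psi_s*t - C*psi_s**3)) < 1e-12
    # equal-area integral of psi*(B t - 3 C psi^2) over [psi_l, psi_s]
    area = (B*t/2)*(psi_s**2 - psi_l**2) - (3*C/4)*(psi_s**4 - psi_l**4)
    assert abs(area) < 1e-12
    widths.append(psi_l - psi_s)

# slope of log(width) vs log(t) is the order-parameter exponent beta
beta = (math.log(widths[0]) - math.log(widths[2])) / (math.log(ts[0]) - math.log(ts[2]))
assert abs(beta - 0.5) < 1e-10
```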
Applying Maxwell’s equal area law, and considering the fact that during the phase transition the charge remains constant, we have $$\begin{aligned} \label{abcabc} A t +B\psi_l t - C\psi_l^{3}=A t +B\psi_s t - C\psi_s^3,\end{aligned}$$ $$\begin{aligned} \label{max} 0=\Psi_c \int_{\psi_l}^{ \psi_s} \psi \left(\frac{\partial Q^2}{\partial \psi}\right )d\psi=\Psi_c \int_{\psi_l}^{ \psi_s} \psi (B t-3 C \psi^2)d\psi.\end{aligned}$$ The nontrivial solution of Eqs.(\[abcabc\]) and (\[max\]) reads $$\begin{aligned} \psi_l=- \psi_s \longrightarrow |\psi_l-\psi_s|= 2 \sqrt{\frac{-B}{C}}t^{\frac{1}{2}}.\end{aligned}$$ We conclude that the order-parameter exponent $\beta$, read off from the power of $t$, is $\beta=\frac{1}{2}$. The next critical exponent is $\gamma$, which can be obtained from the relation $$\chi _{T}=\frac{\partial \Psi}{\partial Q^2}\Big|_{T}\propto \frac{ \Psi_{c}}{B Q^2_{c}}\frac{1}{t}\quad \Longrightarrow \quad \gamma=1.$$ The shape of the critical isotherm at $t=0$ is given by Eq.(\[abc\]). We find $$\phi|_{t=0}=-C\psi ^{3}\quad \Longrightarrow \quad \delta =3.$$ Finally, the heat capacity near the critical point at fixed $\Psi$ reads $$c_\Psi=T\frac{\partial S}{\partial T}\big|_\Psi=0.$$ Since the entropy is independent of $T$, the critical exponent is $\alpha=0$. Therefore, we have obtained all critical exponents in the $Q^2-\Psi$ plane for GB black holes in five dimensions with a spherical horizon. We have treated the charge of the black hole as a thermodynamic variable and kept the pressure as a fixed parameter. We have also confirmed that these critical exponents are the same as those of the Van der Waals liquid-gas system. Critical behavior of GB black holes in arbitrary dimensions {#arbitrary d} =========================================================== In this section we extend our investigation of the critical behavior of GB black holes to all higher dimensions.
Our approach for calculating the critical quantities and critical exponents in $d>5$ is exactly the same as in five dimensions. Therefore, for brevity, we do not repeat the calculations and only give the results. Using the Hawking temperature (\[Temp\]), we can write the equation of state in $d$ dimensions as $$\begin{aligned} Q^2(\Psi,T)&=& \frac{1}{2} Y^{2d-8} \Psi^{\frac{2d-8}{3-d}} \left((d-5)(d-2)k^2 \tilde{\alpha}+(d-2) k \Psi^{\frac{1}{3-d}}\Bigg(-8\pi T \tilde{\alpha}+ Y(d-3) \Psi^{\frac{1}{3-d}}\right)\nonumber\\&& +4 Y^3 \pi \Psi^{\frac{3}{3-d}}\left(-(d-2) T +4 Y P\Psi^{\frac{1}{3-d}}\right )\Bigg).\end{aligned}$$ The isotherm diagrams for $d=6,7$ are shown in Fig.\[Fig5\]; they show the same behavior as the Van der Waals system and predict a first-order phase transition in the system. The equation of state is complicated, and it is not easy to obtain the critical point analytically; however, we can still calculate the critical quantities for small $\tilde{\alpha}$. As we have explained in the previous section, we do not expect to see critical behavior in the absence of a cosmological constant ($P=0$), or in the cases $k=0, -1$. Thus, we consider the spherical horizon in the presence of $\Lambda$. Using Eq.
(\[crit eq\]) in the case $k=1$, one can find $$\begin{aligned} T_c&=&\frac{d^3-6 d^2+11 d-6}{\pi \sqrt{(d-1)(d-2)} (2 d-5) l}+\frac{\tilde{\alpha} (d-2)^{3/2} (d-1)^{3/2} \left(-6 d^3+53 d^2-155 d+152\right)}{2 \pi (5-2 d)^2 (d-3)^3 l^3}+O(\tilde{\alpha}^2),\\ \Psi_c &=&\frac{(d-3)^{2-d} (d-2)^{\frac{d-3}{2}} (d-1)^{\frac{d-3}{2}} \omega_{d-2} l^{3-d}}{8 \pi } \nonumber\\&& -\frac{(d-3)^{(-1-d)} (d-2)^{\frac{d-1}{2}}\label{eq d} (d-1)^{\frac{d-1}{2}}}{16 \pi (2d-5)} \left[204+d(-225+(83-10d)d)\right] l^{1-d} \omega_{d-2} \tilde{\alpha}+O(\tilde{\alpha}^2),\\ Q^2_c &=&\frac{(d-3)^{2 d-5} (d-2)^{3-d} (d-1)^{3-d} l^{2 d-6}}{2 (2 d-5)} \nonumber\\&& -\frac{\tilde{\alpha} (d-3)^{2 d-8} (d-2)^{5-d} (d-1)^{4-d} \left(10 d^2-51 d+69\right) l^{2 d-8}}{2 (5-2 d)^2}+O(\tilde{\alpha}^2),\end{aligned}$$ which leads to $$\begin{aligned} \rho_c&=&Q^2_c T_c \Psi_c= \frac{(d-3)^{d-2} (d-2)^{2-\frac{d}{2}} (d-1)^{2-\frac{d}{2}} \omega_{d-2} l^{d-4}}{16 \pi ^2 (2 d-5)^2}- \\ && \nonumber\frac{\tilde{\alpha} \left((d-3)^{d-6} (d-2)^{3-\frac{d}{2}} (d-1)^{3-\frac{d}{2}} \left(10 d^4-83 d^3+241 d^2-268 d+64\right) \omega_{d-2} l^{d-6}\right)}{32 \pi ^2 (2 d-5)^3}+O\left(\tilde{\alpha} ^2\right).\end{aligned}$$ It is worthwhile to note that all the above equations reduce to the results in section \[Field\] when $\tilde{\alpha}\rightarrow 0$. The Gibbs free energy of the GB black hole, $G=M-T S$, can be calculated as $$\begin{aligned} G(Q^2,T)&=&\frac{\omega_{d-2} }{64\pi (d-3)}\Bigg(\frac{(d-2) (5 d-13) r^{d-1}}{ l^2}+r^{d-3} \left[5 d (d-4 \pi r T-5)+56 \pi r T+30 \right] \notag \\ && +\frac{\tilde{\alpha} (d-2) r^{d-5} \left(5 d^2+8 \pi (16-5 d) r T-37 d+68\right)}{ (d-4) }\Bigg) + O\left(\tilde{\alpha} ^2\right). \end{aligned}$$ Fig. \[Fig6\] shows the behaviour of the Gibbs free energy versus $Q^2$, indicating that the phase transition can occur when the temperature is higher than $T_c$. From these figures we find that GB black holes admit a first-order phase transition in higher dimensions.
Following the method of the previous section, we can calculate the critical exponents of GB black holes in higher dimensions. The result is $$\begin{aligned} \alpha=0,\quad \beta=1/2, \quad \gamma=1,\quad \delta=3.\end{aligned}$$ A close look at the behavior of $f(r)$ in Fig.\[Fig7\] shows the existence of zero, one, or two roots of the metric function, depending on the value of $\tilde{\alpha}$. It is worthwhile to note that the event horizon disappears with increasing $\tilde{\alpha}$, so we have a naked singularity when $\tilde{\alpha}$ is larger than a specific value. In other words, we do not see any critical behavior for large values of $\tilde{\alpha}$ because there is no event horizon in this range of $\tilde{\alpha}$. It is worth studying the behavior of the critical values with respect to $\tilde{\alpha}$, which is displayed in Figs. \[Fig9\] and \[Fig10\]. We see from Fig. \[Fig9\] that $T_c$ and $Q^2_c$ decrease when $\tilde{\alpha}$ increases. In addition, large values of $\tilde{\alpha}$ lead to negative values of $T_c$ and $Q^2_c$. This implies that $\tilde{\alpha}$ should have an upper bound, which depends on the dimension of spacetime. On the other hand, as one may see from Fig. \[Fig10\], the values of $\Psi_c$ and $\rho_c$ increase with increasing $\tilde{\alpha}$, with no upper bound. We also plot the Gibbs free energy versus $Q^2$ for different values of $\tilde{\alpha}$, for both $d=5$ and $d=6$, in Fig. \[fig11\], which shows that all cases have upward trends. Summary and conclusion ====================== We have investigated the critical behavior of charged GB black holes in AdS spaces via an alternative phase space. In this approach, one treats the square of the charge of the black hole as a thermodynamic variable and fixes the cosmological constant. It is more reasonable to take the charge of the black hole as an external variable which can vary, instead of the cosmological constant, which basically has a constant value.
For example, one may think that the charge of the black hole can change by absorbing or emitting charged particles. The advantage of this new approach is that we do not need to extend the thermodynamic phase space in order to see the critical behaviour of the system. It was argued that this new approach admits critical behavior for the black holes similar to the Van der Waals liquid-gas system, with the same critical exponents, provided one treats the square of the charge of the black hole ($Q^2$) as the thermodynamic variable [@Dehy]. In this paper, we first generalized the method developed in [@Dehy] to all higher dimensions by investigating the critical behaviour of $d$-dimensional charged AdS black holes, treating $Q^2$ as the thermodynamic variable and keeping $\Lambda$ constant. We found that the critical behaviour of the system in the $Q^2-\Psi$ plane and the critical exponents are similar to those of the Van der Waals fluid system. Then, we applied this new approach to string-inspired GB gravity. We found that the phase transition occurs only for small values of the GB coupling constant ($\tilde{\alpha}$). Besides, the critical quantities are reasonable provided $\tilde{\alpha}$ is small and the horizon topology is spherical. We found that these black holes may have one or two horizons for small $\tilde{\alpha}$. Therefore, there is neither a horizon nor a phase transition for larger values of the GB coupling constant $\tilde{\alpha}$. We calculated the critical quantities such as $T_c$, $\Psi_c$, $Q_c$ and $\rho_c$ as well as the critical exponents, and observed that the critical temperature and critical charge decrease as $\tilde{\alpha}$ increases. Furthermore, we calculated the Gibbs free energy of the system. The swallowtail shapes of the Gibbs diagrams show the existence of a first-order phase transition in the system. No zeroth-order phase transition is seen in the diagrams.
Finally, we calculated the critical exponents of the GB black holes in all higher dimensions and observed that they are independent of the details of the system and are the same as those of the Van der Waals fluid. [99]{} A. Dehyadegari, A. Sheykhi and A. Montakhab, Phys. Lett. B **768**, 235 (2017). J. M. Maldacena, Adv. Theor. Math. Phys. **2**, 231 (1998). S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Phys. Lett. B **428**, 105 (1998). E. Witten, Adv. Theor. Math. Phys. **2**, 253 (1998). B. P. Dolan, Class. Quant. Grav. **28**, 235017 (2011). D. Kastor, S. Ray and J. Traschen, Class. Quant. Grav. **26**, 195011 (2009). B. P. Dolan, Class. Quant. Grav. **28**, 125020 (2011). B. P. Dolan, Phys. Rev. D **84**, 127503 (2011). M. Cvetic, G. W. Gibbons, D. Kubiznak and C. N. Pope, Phys. Rev. D **84**, 024037 (2011). M. Urano, A. Tomimatsu and H. Saida, Class. Quant. Grav. **26**, 105010 (2009). D. Kubiznak and R. B. Mann, JHEP **07**, 033 (2012). S. Gunasekaran, R. B. Mann and D. Kubiznak, JHEP **11**, 110 (2012). N. Altamirano, D. Kubiznak, R. B. Mann and Z. Sherkatghanad, Galaxies **2**, 89 (2014). M. B. Jahani Poshteh, B. Mirza and Z. Sherkatghanad, Phys. Rev. D **88**, 024005 (2013). Z. Sherkatghanad, B. Mirza, Z. Mirzaeyan and S. A. Hosseini Mansoori, Int. J. Mod. Phys. D **26**, 1750017 (2017). M. H. Dehghani, S. Kamrani and A. Sheykhi, Phys. Rev. D **90**, 104020 (2014). S. H. Hendi and M. H. Vahidinia, Phys. Rev. D **88**, 084045 (2013);\ Z. Dayyani, A. Sheykhi, M. H. Dehghani and S. Hajkhalili, Eur. Phys. J. C **78**, 152 (2018). S. Gunasekaran, D. Kubiznak and R. B. Mann, JHEP **11**, 110 (2012). M. H. Dehghani, A. Sheykhi and Z. Dayyani, Phys. Rev. D **93**, 024022 (2016). V. Iyer and R. M. Wald, Phys. Rev. D **50**, 846 (1994). D. Lovelock, J. Math. Phys. **12**, 498 (1971). R. G. Cai, Phys. Rev. D **65**, 084014 (2002). T. K. Dey, S. Mukherji, S. Mukhopadhyay and S. Sarkar, JHEP **0704**, 014 (2007).
S. W. Wei and Y. X. Liu, Phys. Rev. D **87**, 044014 (2013). R. G. Cai, L. M. Cao, L. Li and R. Q. Yang, JHEP **9** (2013). D. Zou, Y. Liu and B. Wang, Phys. Rev. D **90**, 044063 (2014). W. Xu, H. Xu and L. Zhao, Eur. Phys. J. C **74**, 3074 (2014). S. W. Wei and Y. X. Liu, Phys. Rev. D **90**, 044057 (2014). A. Frassino, D. Kubiznak, R. Mann and F. Simovic, JHEP **09**, 080 (2014);\ J. X. Mo and W. B. Liu, Eur. Phys. J. C **74**, 2836 (2014). A. Dehyadegari and A. Sheykhi, Phys. Rev. D **98**, 024011 (2018). Z. Dayyani and A. Sheykhi, Phys. Rev. D **98**, 104026 (2018). D. R. Brill, J. Louko and P. Peldán, Phys. Rev. D **56**, 3600 (1997). R. G. Cai and K. S. Soh, Phys. Rev. D **59**, 044013 (1999). A. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers, Phys. Rev. D **60**, 064018 (1999). D. G. Boulware and S. Deser, Phys. Rev. Lett. **55**, 2656 (1985). [^1]: asheykhi@shirazu.ac.ir
--- author: - 'P. Bonifacio' - 'M. Spite' - 'R. Cayrel' - 'V. Hill' - 'F. Spite' - 'P. François' - 'B. Plez' - 'H.-G Ludwig' - 'E. Caffau' - 'P. Molaro' - 'E. Depagne' - 'J. Andersen' - 'B. Barbuy' - 'T.C. Beers' - 'B. Nordström' - 'F. Primas' date: 'Received 16 July 2008/ Accepted 15 March 2009' title: 'First stars XII. Abundances in extremely metal-poor turnoff stars, and comparison with the giants. [^1] ' --- [We aim to compare the results for giants with new, accurate abundances for all observable elements in 18 EMP turnoff stars. ]{} [VLT/UVES spectra at $R\sim45,000$ and S/N$\sim$ 130 per pixel ($\lambda\lambda$ 330-1000 nm) are analysed with OSMARCS model atmospheres and the TURBOSPECTRUM code to derive abundances for C, Mg, Si, Ca, Sc, Ti, Cr, Mn, Co, Ni, Zn, Sr, and Ba. ]{} [For Ca, Ni, Sr, and Ba, we find excellent consistency with our earlier sample of EMP giants, at all metallicities. However, our abundances of C, Sc, Ti, Cr, Mn and Co are $\sim$0.2 dex larger than in giants of similar metallicity. Mg and Si abundances are $\sim$0.2 dex lower ( the giant \[Mg/Fe\] values are slightly revised), while Zn is again $\sim$0.4 dex higher than in giants of similar \[Fe/H\] (6 stars only). ]{} [For C, the dwarf/giant discrepancy could possibly have an astrophysical cause, but for the other elements it must arise from shortcomings in the analysis. Approximate computations of granulation (3D) effects yield smaller corrections for giants than for dwarfs, but suggest that this is an unlikely explanation, except perhaps for C, Cr, and Mn. NLTE computations for Na and Al provide consistent abundances between dwarfs and giants, unlike the LTE results, and would be highly desirable for the other discrepant elements as well. Meanwhile, we recommend using the giant abundances as reference data for Galactic chemical evolution models. 
]{} Introduction ============ The surface composition of a cool star is a good diagnostic of the chemical composition of the gas from which it formed, if mixing with material processed inside the star itself has not occurred. Cool, long-lived stars have thus been extensively used to study the early chemical evolution of our Galaxy (and, by implication, other galaxies as well). The trends in abundance ratios which have been established over the last 30 years provide important constraints on the early chemical evolution of the Milky Way (see Cayrel [@Cayrel1996], [@Cayrel2006] for classic and recent reviews of the topic). Our own programme, “First Stars”, is a comprehensive spectroscopic study of extremely metal-poor (EMP) stars to obtain precise information on the chemical composition of the early ISM and the yields of the first generation(s) of supernovae, conducted with the VLT and UVES spectrograph. The target stars have been selected from the medium-resolution follow-up (Beers et al. in preparation; Allende Prieto et al. [@allende]) of the HK objective-prism survey (Beers et al. [@beers85; @beers92] and Beers [@beers99]), initiated by George Preston and Steve Shectman, and later substantially extended and followed up by Beers as part of many collaborations, including the present one. Several papers have presented our results on the giant stars, which lend themselves most readily to the study of many elements: Hill et al. ([@HPC02] - First Stars I), Depagne et al. ([@DHS02] - First Stars II), François et al. ([@FDH03] - First Stars III), Cayrel et al. ([@CDS04] - First Stars V), Spite et al. ([@SCP05] - First Stars VI), François et al. ([@FDH06] - First Stars VIII), and Spite et al. ([@SCH06] - First Stars IX). In these papers, we found the abundances in some giants to have been altered with respect to their initial chemical composition, due to mixing with layers affected by nuclear burning. 
All the stars have undergone the first dredge-up, so their abundances of Li, C, and N are under suspicion. However, our detailed analysis (First Stars VI and IX), showed that the surface abundances of the less luminous giants (those below the “bump” in the luminosity function) are not significantly affected by mixing. It is therefore expected that the less-luminous giants and dwarfs should display the same abundances, provided that the surface composition of the latter has not been changed by atmospheric phenomena, such as diffusion. Comparing abundance ratios in dwarfs and giants can therefore, in principle, yield insight into the degree of mixing in giants and diffusion in dwarfs as well as which element ratios are reliable guides to the composition of the early ISM in the Galaxy. So far, only few of our papers have discussed results for EMP dwarfs: Sivarani et al. ([@SBM04] - First Stars IV, [@SBB06] - First Stars X), Bonifacio et al. ([@BMS06] - First Stars VII), and Gonz[á]{}lez Hern[á]{}ndez et al. ([@jonay] - First Stars XI). First Stars VII focused on the Li abundance, but also discussed the model parameters and \[Fe/H\] of the dwarf sample in considerable detail. Here we discuss the abundances from C to Ba in the same stars and compare the results for dwarfs and giants. Observations and reduction ========================== The sample of stars and the observational data are the same as discussed in Paper VII (Bonifacio et al. [@BMS06]). The observations were performed with the ESO VLT and the high-resolution spectrograph UVES (Dekker et al. [@DDK00]) at a resolution of $R$= 45,000 and typical S/N ratios per pixel of $\sim$130 on the coadded spectra (average 5 pixels per resolution element). The spectra were reduced using the UVES context within MIDAS (Ballester et al. [@BMB00]); see paper V for details. The region of the b triplet in our spectra is shown in Fig. \[Mgb\] (see also Fig. 1 of Paper VII, which shows the Li line in the same stars). 
Equivalent widths were measured on the coadded spectra. For a few stars, for which spectra with different resolutions (different slit-width, or image slicer used) were available, we coadded separately the spectra with the same resolution and then averaged the equivalent widths. Determination of atmospheric parameters {#analysis} ======================================= We have carried out a classical 1D LTE analysis using OSMARCS models (see, e.g., Gustafsson et al. [@GBE75], [@GEE03], [@G2008]). Estimates of $T_{\rm eff}$ were derived from the wings of H$\alpha$; log g estimates were obtained by consideration of the ionisation equilibrium of iron. Microturbulent velocities were fixed by requiring no trend of \[Fe/H\] with equivalent width. Details are given in “First Stars VII", together with an extensive discussion of the effective temperature scale. In that paper we established that our H$\alpha$-based temperatures satisfy the iron excitation equilibrium and are also in good agreement with the Alonso et al. ([@alonso]) colour-temperature calibration, which we used for the giant stars (Cayrel et al. [@CDS04]). The adopted parameters are listed in Table \[tabmod\]. The parameters of the subgiant star BS16076-006 require a comment, because the Balmer line broadening in this star increases from H$\alpha$ towards the higher members of the Balmer series. Our adopted $T_{\rm eff}$ (5199 K) is derived from the wings of H$\alpha$, but the wings of H$\delta$ correspond to a much higher effective temperature, of the order of 5900 K. All values of $T_{\rm eff}$ derived from colours are also consistently higher than that derived from the H$\alpha$ profile, confirming this peculiarity. This star was also analysed from medium-resolution ESI - Keck spectra ($R$=7000) by Lai et al. ([@lai2004]), who adopted $T_{\rm eff}$ = 5458 K, based on photometry. Such a $T_{\rm eff}$ is compatible with the profile of H$\gamma$, but too low to reproduce the profile of H$\delta$. The reason for this peculiar behaviour (e.g.
a binary companion or chromospheric activity) needs further investigation, but the three radial velocities derived from our two spectra and that of Lai et al. ([@lai2004]) show no evidence of variation. None of our results depends critically on the abundances of this star, however. Abundance determination ======================= The abundance analysis was performed using the LTE spectral line analysis code “Turbospectrum” (Alvarez and Plez, [@AP98]). The abundances of the different elements have been determined mainly from the equivalent widths of unblended lines. However, synthetic spectra have been used to determine abundances from the molecular bands, or in cases when the lines were severely blended, affected by hyperfine structure, or were strong enough to show significant damping wings (see Sect. \[mag\]). The abundances of C and the $\alpha$ elements (as well as of Sc) are listed in Table \[tababund\], those of the heavier (neutron-capture) elements are listed in Table \[tababund1\]. Abundance uncertainties are discussed in detail in Cayrel et al. ([@CDS04]) and Bonifacio et al. ([@BMS06]). For a given temperature, the ionisation equilibrium provides an estimate of the gravity with an internal precision of about 0.1 dex in log $g$, and the microturbulent velocity can be constrained to within about 0.2 km s$^{-1}$. The largest uncertainty comes from the temperature determination, which is uncertain by $\sim$100 K. The total error estimate is not the quadratic sum of the various sources of uncertainty, because the covariance terms are important. As an illustration of the total expected uncertainty, we have computed the abundances of CS 22177-009 with different models: Model A has the nominal temperature 6260 K, gravity (log $g$ = 4.5), and microturbulent velocity ($v_t$ = 1.3 km s$^{-1}$), while Models B and C differ in log $g$ and $v_t$ by 1$\sigma$.
Model D has a temperature 100 K lower and the same log $g$ and $v_t$, while in Model E we have determined the “best” values of log $g$ and $v_t$ corresponding to the lower temperature. The detailed results of these computations are given in Table \[errors\].

|    | Star         | $T_{\rm eff}$ (K) | log g | $v_t$ (km s$^{-1}$) | \[Fe/H\] | Rem |
|----|--------------|------|------|-----|-------|-----|
| 1  | BS 16023-046 | 6364 | 4.50 | 1.3 | -2.97 |     |
| 2  | BS 16968-061 | 6035 | 3.75 | 1.5 | -3.05 |     |
| 3  | BS 17570-063 | 6242 | 4.75 | 0.5 | -2.92 |     |
| 4  | CS 22177-009 | 6257 | 4.50 | 1.2 | -3.10 |     |
| 5  | CS 22888-031 | 6151 | 5.00 | 0.5 | -3.30 |     |
| 6  | CS 22948-093 | 6356 | 4.25 | 1.2 | -3.30 |     |
| 7  | CS 22953-037 | 6364 | 4.25 | 1.4 | -2.89 |     |
| 8  | CS 22965-054 | 6089 | 3.75 | 1.4 | -3.04 |     |
| 9  | CS 22966-011 | 6204 | 4.75 | 1.1 | -3.07 |     |
| 10 | CS 29499-060 | 6318 | 4.00 | 1.5 | -2.70 |     |
| 11 | CS 29506-007 | 6273 | 4.00 | 1.7 | -2.91 |     |
| 12 | CS 29506-090 | 6303 | 4.25 | 1.4 | -2.83 |     |
| 13 | CS 29518-020 | 6242 | 4.50 | 1.7 | -2.77 |     |
| 14 | CS 29518-043 | 6432 | 4.25 | 1.3 | -3.20 |     |
| 15 | CS 29527-015 | 6242 | 4.00 | 1.6 | -3.55 | bin |
| 16 | CS 30301-024 | 6334 | 4.00 | 1.6 | -2.75 |     |
| 17 | CS 30339-069 | 6242 | 4.00 | 1.3 | -3.08 |     |
| 18 | CS 31061-032 | 6409 | 4.25 | 1.4 | -2.58 |     |
| 19 | BS 16076-006 | 5199 | 3.00 | 1.4 | -3.81 | sg  |

: Adopted model atmosphere parameters. Our UVES spectra show that BS 16076-006 is in fact a subgiant (First Stars VII), and CS 29527-015 a double-lined binary; these stars are omitted in Fig. \[Mgb\]. All the others seem to be single turnoff stars. []{data-label="tabmod"}

C, N, O abundances ================== Carbon ------ The carbon abundance was determined by spectrum synthesis of the $A^{2}\Delta - X^{2}\Pi$ band of CH (the G band). Wavelengths of the CH lines are from Luque and Crosley ([@LC99]); transition energies are from the list of J[ø]{}rgensen et al. ([@JLI96]); isotopic shifts were computed using the best set of available molecular constants. The strongest lines of ${\rm ^{13}CH}$ at 423nm are invisible in all of our stars, so the ${\rm ^{12}C/^{13}C}$ ratio could not be measured. In computing the total C abundance, we have therefore assumed a solar ${\rm ^{12}C/^{13}C}$ ratio. In Fig.
\[carbon\] we present the measured \[C/Fe\] values in our dwarf stars and compare them to the values for our unmixed giants from Paper V. In this figure we have omitted the mixed giants, located above the bump, since we have shown (First Stars VI and IX) that the abundances of C and N in the atmospheres of these stars are strongly affected by mixing and thus are not good diagnostics of their initial chemical compositions. The mean \[C/Fe\] value for the turnoff stars is $\rm \overline{[C/Fe]} = 0.45 \pm 0.10$ (s.d.), but $\rm \overline{[C/Fe]} = 0.19 \pm 0.16$ (s.d.) for the giants. Thus, we find a moderately significant difference between the C abundances in the giants and the turnoff stars (Fig. \[carbon\]). We discuss the possible causes of this discrepancy in Sect. \[disc:3d\]. The mean \[C/Fe\] has been computed excluding the binary turnoff star CS 29527-015, which appears to be quite carbon-rich (Fig. \[carbon\]).

Nitrogen
--------

Generally, the NH (and CN) bands are not visible in the spectra of EMP turnoff stars (the stars are too hot), so N abundances can only be measured in strongly N-enhanced stars (First Stars X). The subgiant BS 16076-006 exhibits a weak NH band, however, and we find \[N/Fe\] = +0.29 for this star, taking into account the correction of $-0.4$ dex derived in Paper VI. Figure \[azote\] shows the measured \[N/Fe\] ratios for our sample of “unmixed” giants (Paper VI). BS 16076-006 agrees with (and thus supports) the high \[N/Fe\] values found in the giants at the lowest metallicities.
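One way to quantify the significance of the dwarf/giant offset in the mean \[C/Fe\] quoted above is a two-sample (Welch) $t$ statistic. The sketch below uses the means and standard deviations from the text; the sample sizes (15 turnoff stars, 22 unmixed giants) and the helper name `welch_t` are illustrative assumptions, not values or methods taken from the paper.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two samples given mean, s.d., and (assumed) size."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # standard error of the difference
    return (m1 - m2) / se

# [C/Fe]: dwarfs 0.45 +/- 0.10, giants 0.19 +/- 0.16 (s.d., from the text);
# the sample sizes are assumed round numbers.
t = welch_t(0.45, 0.10, 15, 0.19, 0.16, 22)
print(f"t = {t:.1f}")
```

A value of $t$ well above $\sim$3 indicates that the 0.26 dex offset is unlikely to be a statistical fluctuation of the two sample means.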
|    | Star         | \[Fe/H\] | \[C/Fe\] | $\sigma$ | \[Mg/Fe\] | $\sigma$ | N | \[Si/Fe\] | N | \[Ca/Fe\] | $\sigma$ | N  | \[Sc/Fe\] | N | \[Ti/Fe\] | $\sigma$ | N  |
|----|--------------|----------|----------|----------|-----------|----------|---|-----------|---|-----------|----------|----|-----------|---|-----------|----------|----|
| 1  | BS 16023-046 | -2.97 | 0.55 | 0.15 | 0.06 | 0.06 | 7 | -0.07 | 1 | 0.29 | 0.09 | 10 | 0.10 | 1 | 0.36 | 0.06 | 16 |
| 2  | BS 16968-061 | -3.05 | 0.45 | 0.15 | 0.29 | 0.06 | 7 | 0.31  | 1 | 0.37 | 0.10 | 12 | 0.44 | 1 | 0.38 | 0.05 | 20 |
| 3  | BS 17570-063 | -2.92 | 0.40 | 0.15 | 0.08 | 0.06 | 7 | 0.04  | 1 | 0.29 | 0.10 | 11 | 0.29 | 1 | 0.45 | 0.06 | 16 |
| 4  | CS 22177-009 | -3.10 | 0.38 | 0.15 | 0.22 | 0.06 | 7 | 0.15  | 1 | 0.27 | 0.08 | 9  | 0.21 | 1 | 0.27 | 0.05 | 15 |
| 5  | CS 22888-031 | -3.30 | 0.38 | 0.15 | 0.23 | 0.10 | 7 | 0.31  | 1 | 0.31 | 0.16 | 8  | 0.28 | 1 | 0.39 | 0.04 | 11 |
| 6  | CS 22948-093 | -3.30 | –    | -    | 0.05 | 0.05 | 6 | -0.13 | 1 | 0.30 | 0.12 | 4  | 0.35 | 1 | 0.49 | 0.11 | 15 |
| 7  | CS 22953-037 | -2.89 | 0.37 | 0.15 | 0.36 | 0.08 | 7 | -0.01 | 1 | 0.24 | 0.10 | 9  | 0.35 | 1 | 0.26 | 0.06 | 17 |
| 8  | CS 22965-054 | -3.04 | 0.62 | 0.15 | 0.25 | 0.07 | 7 | -0.02 | 1 | 0.47 | 0.16 | 13 | 0.16 | 1 | 0.44 | 0.14 | 25 |
| 9  | CS 22966-011 | -3.07 | 0.45 | 0.15 | 0.21 | 0.08 | 7 | 0.27  | 1 | 0.32 | 0.14 | 10 | 0.21 | 1 | 0.38 | 0.07 | 16 |
| 10 | CS 29499-060 | -2.70 | 0.38 | 0.15 | 0.19 | 0.06 | 7 | 0.00  | 1 | 0.28 | 0.06 | 13 | 0.10 | 1 | 0.50 | 0.07 | 27 |
| 11 | CS 29506-007 | -2.91 | 0.49 | 0.15 | 0.28 | 0.05 | 7 | 0.17  | 1 | 0.49 | 0.07 | 13 | 0.36 | 1 | 0.52 | 0.08 | 23 |
| 12 | CS 29506-090 | -2.83 | 0.41 | 0.15 | 0.27 | 0.06 | 7 | 0.17  | 1 | 0.46 | 0.10 | 13 | 0.27 | 1 | 0.47 | 0.07 | 20 |
| 13 | CS 29518-020 | -2.77 | –    | -    | 0.06 | 0.03 | 3 | –     | 1 | 0.40 | 0.22 | 7  | –    | 1 | –    | -    | -  |
| 14 | CS 29518-043 | -3.20 | –    | -    | 0.19 | 0.09 | 7 | 0.01  | 1 | 0.40 | 0.11 | 9  | 0.41 | 1 | 0.49 | 0.03 | 15 |
| 15 | CS 29527-015 | -3.55 | 1.18 | 0.15 | 0.43 | 0.08 | 7 | 0.15  | 1 | 0.36 | 0.23 | 4  | 0.26 | 1 | 0.35 | 0.12 | 10 |
| 16 | CS 30301-024 | -2.75 | 0.23 | 0.15 | 0.28 | 0.07 | 7 | 0.17  | 1 | 0.45 | 0.08 | 14 | 0.20 | 1 | 0.45 | 0.12 | 25 |
| 17 | CS 30339-069 | -3.08 | 0.56 | 0.15 | 0.18 | 0.03 | 7 | -0.12 | 1 | 0.43 | 0.13 | 10 | 0.17 | 1 | 0.38 | 0.09 | 20 |
| 18 | CS 31061-032 | -2.58 | 0.56 | 0.15 | 0.22 | 0.06 | 7 | 0.14  | 1 | 0.40 | 0.14 | 15 | 0.31 | 1 | 0.45 | 0.11 | 25 |
| -  | BS 16076-006 | -3.81 | 0.34 | 0.10 | 0.58 | 0.05 | 7 | 0.31  | 1 | 0.39 | 0.14 | 10 | 0.42 | 1 | 0.34 | 0.07 | 17 |

: Abundances of C and the $\alpha$ elements (plus Sc).[]{data-label="tababund"}

|    | Star         | \[Fe/H\] | \[Cr/Fe\] | $\sigma$ | N | \[Mn/Fe\]\* | $\sigma$ | N | \[Co/Fe\] | $\sigma$ | N | \[Ni/Fe\] | $\sigma$ | N | \[Zn/Fe\] | N | \[Sr/Fe\] | N | \[Ba/Fe\] | N |
|----|--------------|----------|-----------|----------|---|-------------|----------|---|-----------|----------|---|-----------|----------|---|-----------|---|-----------|---|-----------|---|
| 1  | BS 16023-046 | -2.97 | -0.12 | 0.07 | 5 | -0.55 | 0.03 | 3 | 0.28 | 0.03 | 2 | -0.03 | 0.15 | 3 | $<0.54$ | - | -0.18 | 1 | –     | - |
| 2  | BS 16968-061 | -3.05 | -0.24 | 0.06 | 5 | -0.64 | 0.00 | 3 | 0.40 | 0.04 | 4 | 0.04  | 0.07 | 3 | $<0.28$ | - | -0.57 | 1 | –     | - |
| 3  | BS 17570-063 | -2.92 | -0.23 | 0.12 | 5 | -0.76 | 0.01 | 3 | 0.31 | 0.08 | 3 | -0.07 | 0.18 | 3 | $<0.41$ | - | -0.02 | 1 | -0.26 | 1 |
| 4  | CS 22177-009 | -3.10 | -0.22 | 0.04 | 5 | -0.57 | 0.05 | 3 | 0.37 | 0.08 | 3 | 0.02  | 0.01 | 2 | $<0.37$ | - | -0.36 | 1 | –     | - |
| 5  | CS 22888-031 | -3.30 | -0.28 | 0.09 | 4 | -0.74 | 0.00 | 2 | 0.57 | 0.11 | 3 | 0.08  | 0.08 | 2 | –       | - | 0.18  | 1 | –     | - |
| 6  | CS 22948-093 | -3.30 | -0.21 | 0.08 | 3 | -0.69 | 0.00 | 2 | 0.50 | -    | 1 | -0.01 | 0.04 | 2 | $<0.82$ | - | -0.16 | 1 | -0.23 | 1 |
| 7  | CS 22953-037 | -2.89 | -0.32 | 0.05 | 5 | -0.78 | 0.03 | 3 | 0.39 | 0.13 | 3 | 0.04  | 0.11 | 3 | $<0.39$ | - | -0.57 | 1 | –     | - |
| 8  | CS 22965-054 | -3.04 | -0.16 | 0.04 | 5 | -0.51 | 0.02 | 3 | 0.44 | 0.21 | 4 | 0.03  | 0.07 | 3 | 0.67    | 1 | +0.31 | 1 | –     | - |
| 9  | CS 22966-011 | -3.07 | -0.23 | 0.03 | 5 | -0.70 | 0.00 | 3 | 0.48 | 0.12 | 4 | 0.08  | 0.06 | 2 | $<0.50$ | - | 0.03  | 1 | -0.05 | 1 |
| 10 | CS 29499-060 | -2.70 | 0.01  | 0.04 | 6 | -0.28 | 0.02 | 3 | 0.36 | 0.09 | 4 | 0.19  | 0.09 | 3 | 0.73    | 1 | -0.60 | 1 | –     | - |
| 11 | CS 29506-007 | -2.91 | -0.12 | 0.05 | 5 | -0.59 | 0.01 | 3 | 0.39 | 0.03 | 3 | 0.04  | 0.08 | 3 | 0.71    | 1 | 0.16  | 1 | 0.18  | 1 |
| 12 | CS 29506-090 | -2.83 | -0.16 | 0.06 | 5 | -0.62 | 0.02 | 3 | 0.45 | 0.11 | 4 | 0.04  | 0.12 | 3 | 0.66    | 1 | 0.36  | 1 | -0.35 | 1 |
| 13 | CS 29518-020 | -2.77 | -0.18 | 0.05 | 2 | –     | –    | – | –    | –    | – | 0.04  | –    | 1 | $<0.33$ | - | –     | - | –     | - |
| 14 | CS 29518-043 | -3.20 | -0.20 | 0.08 | 4 | -0.64 | 0.00 | 2 | 0.57 | -    | 1 | 0.07  | 0.01 | 2 | $<0.68$ | - | 0.08  | 1 | –     | - |
| 15 | CS 29527-015 | -3.55 | -0.21 | 0.15 | 4 | -0.66 | -    | 1 | 0.70 | -    | 1 | -0.09 | 0.05 | 2 | $<0.98$ | - | 0.34  | 1 | –     | - |
| 16 | CS 30301-024 | -2.75 | -0.16 | 0.06 | 5 | -0.59 | 0.01 | 3 | 0.30 | 0.11 | 4 | 0.02  | 0.04 | 3 | 0.55    | 1 | -0.32 | 1 | -0.28 | 1 |
| 17 | CS 30339-069 | -3.08 | -0.24 | 0.06 | 5 | -0.71 | 0.00 | 3 | 0.33 | 0.05 | 2 | -0.01 | 0.17 | 3 | $<0.47$ | - | -0.10 | 1 | –     | - |
| 18 | CS 31061-032 | -2.58 | -0.10 | 0.16 | 6 | -0.51 | 0.02 | 3 | 0.38 | 0.15 | 4 | 0.03  | 0.05 | 3 | 0.40    | 1 | 0.21  | 1 | -0.40 | 1 |
| -  | BS 16076-006 | -3.81 | -0.41 | 0.16 | 6 | -0.93 | 0.10 | 3 | 0.39 | 0.05 | 4 | -0.05 | 0.04 | 3 | –       | - | $\le$-1.59 | 1 | $\le$-1.0 | 1 |

: Abundances of the iron-peak and neutron-capture elements.[]{data-label="tababund1"}

\* \[Mn/Fe\] has been determined only from the lines of the resonance triplet.

Oxygen
------

We have not been able to measure O abundances for any of our dwarf stars. The \[O I\] line at 630.03 nm, which we used for the giants, is too weak, as is the permitted triplet at 770 nm, given the S/N we achieve in this spectral region. Only for the dwarf binary system CS 22876-032 have we been able to measure O abundances, using the OH lines in the UV (González Hernández et al. [@jonay]; Paper XII), and these are compatible with the O abundances measured in giants. Our spectra of the dwarfs discussed in the present paper do not cover the OH lines in the UV. The success in the case of CS 22876-032 suggests that these lines probably offer the only option for measuring O abundances in EMP dwarfs.

The $\alpha$ elements: Mg, Si, Ca, Ti
=====================================

Fig. \[MgSiCaTi\] presents the observed \[$\alpha$/Fe\] ratios in our EMP dwarf and giant samples. A priori, we expect to find the same mean abundance for these elements in dwarfs and in giants, and this is what we see for Ca. However, the mean \[Mg/Fe\] and \[Si/Fe\] ratios are $\sim$0.2 and 0.3 dex lower in the EMP dwarfs than in the giants, while the mean \[Ti/Fe\] ratio is [*higher*]{} in the dwarfs by about 0.2 dex. What are the possible causes of these differences?

Magnesium {#mag}
---------

In Fig.
\[MgSiCaTi\], the Mg abundance for the giant stars has been derived from a full fit to the profiles of the Mg lines, in contrast to the results given by Cayrel et al. ([@CDS04], Paper V). The equivalent widths of the Mg lines are often quite large ($\rm EW > 120\,m\AA$), and in Paper V we underestimated the equivalent widths of these lines by neglecting the wings. For the most Mg-poor stars in our sample the lines are weak and the difference negligible, but it is quite significant in most of our stars, with a mean systematic difference of about 0.15 dex. In the dwarfs, the abundance has been derived from profile fits for the strongest lines (the lines at $\sim$383 nm, which are also located in the wings of a Balmer line) and from equivalent widths for the weak lines.

Silicon
-------

In the cool giants the Si abundance is derived from a line at 410.3 nm. This line (multiplet 2) is located in the wing of H$\delta$, and the hydrogen line has been included in the computations. There is another line at 390.6 nm (multiplet 3), but in giants this line is severely blended by CH lines. In turnoff stars the line at 410.3 nm is invisible, but the CH lines are weak enough that the line at 390.6 nm can be used. Thus, in the end, only a single Si line (but not the same one) could be used in both dwarfs and giants; a systematic error in the log [*gf*]{} of these lines could explain the observed difference. Both lines are in fact measured in the subgiant star BS 16076-006 and yield consistent Si abundances, but, given the uncertain atmospheric parameters of this star (see Sect. \[analysis\]), a systematic error in log [*gf*]{} cannot be ruled out. Our new \[Si/Fe\] ratios are in good agreement with the value found from the same Si line by Cohen et al. ([@CCM04]), also for EMP turnoff stars.

Titanium
--------

The Ti I lines are very weak in turnoff stars, so the Ti abundance can only be determined from the Ti II lines.
About 15 Ti II lines could be used, and the internal error of the mean is very small (less than 0.1 dex). Fig. \[MgSiCaTi\] clearly shows higher \[Ti/Fe\] ratios in the dwarfs than in the giants ($\rm \Delta [Ti/Fe] = 0.2$ dex). Even if we use exactly the same lines in the giants as in the dwarfs, we observe the same effect; thus, an error in log [*gf*]{} values cannot explain the difference. On the other hand, to reduce the derived \[Ti/Fe\] by 0.2 dex (keeping the same temperature) would require changing log $g$ in the turnoff stars by about 1 dex, which is quite incompatible with the ionisation equilibrium of the iron lines.

The light odd-Z metals: Na, Al, K, and Sc
=========================================

| El.           | $\Delta_{B-A}$ | $\Delta_{C-A}$ | $\Delta_{D-A}$ | $\Delta_{E-A}$ |
|---------------|---------------:|---------------:|---------------:|---------------:|
|               | -0.01 |  0.03 | -0.05 | -0.06 |
| \[Na I/Fe\]   |  0.02 | -0.02 | -0.01 |  0.01 |
| \[Mg I/Fe\]   |  0.03 | -0.01 | -0.02 |  0.00 |
| \[Al I/Fe\]   |  0.01 | -0.03 | -0.03 | -0.01 |
| \[Si I/Fe\]   |  0.03 |  0.01 | -0.03 |  0.02 |
| \[Ca I/Fe\]   |  0.01 | -0.02 |  0.00 |  0.01 |
| \[Sc II/Fe\]  | -0.02 | -0.02 |  0.00 | -0.05 |
| \[Ti I/Fe\]   |  0.01 | -0.03 | -0.03 | -0.03 |
| \[Ti II/Fe\]  | -0.02 | -0.01 |  0.01 | -0.03 |
| \[Cr I/Fe\]   |  0.01 | -0.02 | -0.03 | -0.02 |
| \[Mn I/Fe\]   |  0.01 | -0.02 | -0.04 | -0.03 |
| \[Fe I/Fe\]   |  0.02 |  0.01 | -0.03 |  0.01 |
| \[Fe II/Fe\]  | -0.03 | -0.02 |  0.03 | -0.02 |
| \[Co I/Fe\]   |  0.01 | -0.03 | -0.04 | -0.03 |
| \[Ni I/Fe\]   |  0.01 | -0.01 | -0.04 | -0.03 |
| \[Sr II/Fe\]  | -0.02 |  0.01 | -0.01 | -0.04 |
| \[Ba II/Fe\]  | -0.02 |  0.00 | -0.01 | -0.04 |

: Changes in the abundance ratios when the model atmosphere parameters are varied (see text).[]{data-label="errors"}

Sodium and Aluminium
--------------------

In both dwarf and giant EMP stars, Na and Al abundances can only be derived from the resonance lines, which are very sensitive to NLTE effects (Cayrel et al. [@CDS04]). The Na and Al abundances in our two stellar samples have been derived using the NLTE line formation theory by Andrievsky et al. ([@ASK07]) and Andrievsky et al. ([@ASK08]) for Na and Al, respectively.
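Returning briefly to the error budget of Table \[errors\]: the point that the total error is not the quadratic sum of the individual contributions can be illustrated with the \[Mg I/Fe\] row. A minimal sketch (the dictionary layout and variable names are ours; the shifts are taken from the table):

```python
import math

# [Mg I/Fe] shifts (dex) from Table [errors], one parameter varied at a time
single = {"B-A": 0.03, "C-A": -0.01, "D-A": -0.02}
# E-A: temperature lowered AND log g, v_t re-optimised for the new temperature
combined = 0.00

# naive error estimate: add the single-parameter shifts in quadrature
quadratic_sum = math.sqrt(sum(d * d for d in single.values()))
print(f"quadratic sum = {quadratic_sum:.3f} dex, combined shift = {combined:.2f} dex")
```

Because the parameter shifts are correlated (the covariance terms are negative here), the naive quadratic sum ($\approx$0.04 dex) exceeds the shift actually obtained when the parameters are varied coherently in Model E.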
When NLTE effects are taken into account, the \[Na/Fe\] and \[Al/Fe\] abundance ratios are found to be constant and equal in the dwarfs and the giants over the interval $\rm -3.7<[Fe/H]<-2.5$ (\[Na/Fe\] = $-0.2$ and \[Al/Fe\] = $-0.1$). This can be appreciated visually by looking at figure 7 of Andrievsky et al. ([@ASK07]) and figure 3 of Andrievsky et al. ([@ASK08]).

Potassium and Scandium
----------------------

The K lines are very weak in our EMP turnoff stars, and \[K/Fe\] could not be determined. The Sc abundance in the dwarf stars has been measured from the Sc II line at 424.6 nm. In the giants, 7 Sc lines could be used, and the scatter in the abundances from individual lines is very small (below 0.1 dex). There is a systematic difference of about 0.2 dex between the Sc abundances in the giants and the dwarfs (Fig. \[Sc\]).

Iron-peak elements
==================

Chromium, Cobalt, and Nickel\[CrCoNi\]
--------------------------------------

Fig. \[ironpeak\] shows the \[Cr/Fe\], \[Co/Fe\], and \[Ni/Fe\] ratios for our dwarf and giant samples. There is rather good agreement for Ni, but \[Cr/Fe\] and \[Co/Fe\] are about 0.2 dex higher in the dwarfs than in the giants. Recently, Lai et al. ([@LBJ08]) measured the chromium abundance in a sample of giants and turnoff stars in the same range of metallicity. The same shift appears between their giants and turnoff stars (Fig. \[CrLai\]). Lai et al. also found an offset between the abundances derived from Cr I and Cr II. Cr II can only be measured in giants, and only a single Cr II line ($\rm \lambda = 455.865\,nm$) appears at the edge of our blue spectra, but the same offset as observed by Lai et al. is clearly visible in our data (Fig. \[Cr1-2\]). The discrepancy between Cr I and Cr II, and between giants and turnoff stars, may point to non-LTE effects. The main Cr I lines are resonance lines. Unfortunately, no precise model atom for Cr exists, so it is not possible to explore this hypothesis at present.
If significant NLTE effects were confirmed, the most reliable abundances should be those from the Cr II line, suggesting that $\rm [Cr/Fe]\approx +0.1$ at low metallicity.

Nissen & Schuster ([@NS97]) found a close correlation between the abundances of Na and Ni in the interval $\rm -0.7<[Fe/H]<-1.3$. To explain this correlation, it has been suggested that the production of $\rm ^{58}Ni$ during an SN II event depends on the neutron excess, which itself depends mainly on the amount of $\rm ^{23}Na$ produced during hydrostatic carbon burning. However, this correlation is not observed in our sample (Fig. \[nani\]). In fact, \[Ni/Fe\] and \[Na/Fe\] have the same mean value in turnoff stars as in unmixed giants (\[Ni/Fe\] = 0.0 and \[Na/Fe\] = $-0.2$). \[Na/Fe\] is larger in several of the mixed giants, but this is due to mixing with the H-burning shell in layers that are sufficiently deep to bring products of the Ne-Na cycle to the surface (see Andrievsky et al. [@ASK07]).

Manganese
---------

The Mn abundances have been derived by fitting synthetic spectra to the observations, taking into account the hyperfine structure of the lines. We noted in Paper V that, in the giant stars, Mn abundances determined from the resonance lines were lower than those from the lines of higher excitation potential by about 0.4 dex. At this stage, we prefer the abundances from the high-excitation lines, because the resonance lines are more susceptible to non-LTE effects. However, in the five most metal-poor giants only the resonance triplet is detected, so for these stars the Mn abundance was determined from the triplet and corrected by the adopted 0.4 dex offset. For most of the turnoff stars analysed here, again only the resonance triplet can be detected. In Fig. \[manga\]a the Mn abundances from these lines have been systematically increased by 0.4 dex, while in Fig. \[manga\]b \[Mn/Fe\] is derived from the resonance triplet profiles in all the stars and plotted without any correction.
In both cases we find a systematic abundance difference of about 0.2 dex between the giants and the dwarfs.

Zinc
----

Zinc cannot be unambiguously assigned to the iron-peak category, since it may be formed by $\alpha$-rich freeze-out and neutron capture, as well as by burning in nuclear statistical equilibrium. In our sample, the only usable line is the strongest line of Mult. 2, at 481 nm. The line is very weak in all our stars, and we only consider it reliably detected, and provide a measurement, when the equivalent width is larger than 0.35 pm. Thus, Table \[tababund1\] gives only six measurements and eleven upper limits; for two stars, the spectrum was affected by a defect, and it is not even possible to provide an upper limit. Figure \[zinc\] shows \[Zn/Fe\] versus \[Fe/H\]; upper limits are shown as downward arrows and the giant stars from Cayrel et al. ([@CDS04]) as open circles. The upper limits are consistent with the trend defined by the giant stars, but the six actual measurements appear to define a similar trend, shifted upwards by about 0.4 dex. This could be another example of the dwarf/giant discrepancy found for some other elements. Since the majority of our \[Zn/Fe\] data are upper limits, we use survival statistics to analyse them. The giant stars with \[Fe/H\]$\ge -3.0$ show a constant level of \[Zn/Fe\]$=+0.199\pm0.080$. We selected the dwarf stars in the same metallicity range and used [asurv Rev 1.2]{}[^2] (Lavalley et al. [@baas]) to compute the Kaplan-Meier statistics, as described in Feigelson & Nelson ([@FN]). The mean is $+0.491\pm 0.055$; since the lowest point is an upper limit, it has been changed to a detection to compute the Kaplan-Meier statistics, which implies that this mean value is biased. The comparison of the two mean values for giants and dwarfs suggests that they are only marginally consistent: the 75th percentile of the \[Zn/Fe\] values for dwarfs (+0.223) corresponds to the mean value for giants.
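The Kaplan-Meier treatment of upper limits can be sketched in a few lines of pure Python. Upper limits are left-censored data, so one standard approach (the one implemented in survival packages such as [asurv]{}) is to flip them into a right-censored problem before applying the estimator. The function names and the \[Zn/Fe\] values below are illustrative assumptions, not the measured data; as in the text, the most extreme censored point is forced to a detection so that the mean is defined.

```python
def km_mean_right(times, events):
    """Kaplan-Meier restricted mean for right-censored data (event=True = detected)."""
    pairs = sorted(zip(times, events))
    if not pairs[-1][1]:
        # If the largest value is censored the KM mean is undefined; as in the
        # text, force the most extreme censored point to a detection (biased).
        pairs[-1] = (pairs[-1][0], True)
    at_risk = len(pairs)
    S, mean, prev, i = 1.0, 0.0, 0.0, 0
    while i < len(pairs):
        t = pairs[i][0]
        mean += S * (t - prev)                              # integrate survival curve
        d = sum(1 for tt, e in pairs if tt == t and e)      # detections at t
        c = sum(1 for tt, e in pairs if tt == t and not e)  # censored at t
        S *= 1.0 - d / at_risk
        at_risk -= d + c
        prev = t
        i += d + c
    return mean

def km_mean_leftcensored(values, detected):
    """Mean of left-censored data (upper limits) via the flipping trick."""
    C = max(values) + 1.0              # any constant larger than all values
    flipped = [C - v for v in values]  # upper limits become right-censored
    return C - km_mean_right(flipped, detected)

# Illustrative [Zn/Fe] values (NOT the measured ones); True = detection,
# False = upper limit.
zn = [0.67, 0.73, 0.71, 0.66, 0.55, 0.40, 0.54, 0.28, 0.41, 0.37]
det = [True] * 6 + [False] * 4
km_zn = km_mean_leftcensored(zn, det)
print(f"Kaplan-Meier mean [Zn/Fe] = {km_zn:.3f}")
```

With no censored points the estimator reduces to the ordinary arithmetic mean, which is a useful sanity check on the implementation.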
Changing the upper limits to $2 \sigma$ or $3 \sigma$ would push the mean value for dwarfs even higher, thus making the values of dwarfs and giants even more inconsistent. We used the generalized version of Kendall’s $\tau$ (Brown et al. [@brown]), as described in Isobe et al. ([@isobe]), to check if there is support for a correlation between \[Fe/H\] and \[Zn/Fe\] for the dwarf stars. The sample is composed of 6 detections and 11 upper limits. The probability of correlation is 91.3%, so there is a hint of a correlation, but no conclusive evidence. For Zn it appears unlikely that the giant/dwarf discrepancy is due to NLTE effects: Takeda et al. ([@takeda]) computed NLTE corrections for the line at 481 nm; the corrections are small and [*negative*]{} for metal-poor giants, [*positive*]{} for metal-poor TO stars. Thus, if we applied these corrections to our sample, the discrepancy would increase from 0.2 to $\sim 0.4$ dex. It is surprising that several stars display upper limits which are [*lower*]{} than the \[Zn/Fe\] ratios found in other stars of similar metallicity, suggesting a real cosmic scatter in the Zn abundance. It is interesting to note that while for giant stars Lai et al. ([@LBJ08]) are in good agreement with our determinations, the two dwarf stars for which they have Zn measurements appear to be in line with the measurements of giants. This may give further support to the idea of a cosmic scatter of Zn abundances, or to the existence of a Zn-rich population. However, it should be kept in mind that the available Zn lines are all very weak (detections are about 0.4 pm, upper limits 0.1–0.2 pm), and the data should not be overinterpreted. We have, perhaps somewhat naïvely, placed the upper limit at the measured value for all stars below our chosen threshold. 
Had we decided to put the upper limit at $3\sigma$ above the measured EW, all the upper limits would move up among the measurements or beyond, and there would be no hint of any scatter in the Zn abundance. From the point of view of survival statistics, the fact that the standard deviation from the mean is small, compared to observational errors, does not support the presence of a real dispersion. The question of a scatter in Zn abundance in EMP dwarf stars clearly needs further study, if possible based on different lines. Neutron-capture elements ======================== Very few neutron-capture elements can be measured in turnoff stars, because their lines are generally very weak. We could, however, measure Sr abundances from the blue resonance line of Sr II, and sometimes also Ba abundances from the Ba II line at 455.4 nm. The Ba line is generally weak (about 0.5 pm) and located at the very end of the blue spectrum, where the noise is higher. As Fig. \[heavy\] shows, we find good agreement between dwarfs and giants, although the star-to-star scatter is very large, as has already been observed for the giant stars. In Fig. \[srba\] we show the \[Sr/Ba\] ratio as a function of \[Ba/H\]. As already noticed in Paper VIII (Fig. 15), the scatter in this plane is greatly reduced. The dwarf stars appear to behave exactly in the same way as giants. We have recently studied the Ba abundance in dwarfs and giants taking into account non-LTE effects (Andrievsky et al. [@ASK09]), but after correcting for NLTE the general behaviour of this element remains the same. Comparison with other investigations. ===================================== Several other groups have now published detailed analyses of EMP stars similar to our own, and it is interesting to compare their results to ours. We focus on the results of the 0Z project (Cohen et al. [@CCM04; @CCM08]) and Lai et al. ([@LBJ08]). The details of the comparison are provided in appendices \[0Zcomp\], \[laicomp\] and \[BScomp\]. 
The final conclusion of this comparison is that there is excellent agreement between the three groups, and the small differences can be understood in terms of differences in the adopted atmospheric parameters, model atmospheres or line selection. The MARCS model atmospheres used by us agree with the ATLAS non-overshooting models adopted by Lai et al. ([@LBJ08]), and both yield abundances which are about 0.1 dex lower than the ATLAS overshooting models adopted by the 0Z project. Discussion\[disc:3d\] ===================== For most elements, the overall abundance trends defined by dwarfs and giants show good agreement. For example, \[Ti/Fe\] is constant at low metallicity in both giants and dwarfs, and \[Mn/Fe\] decreases with metallicity in both giants and dwarfs. However, some elements show systematic shifts in \[X/Fe\] between turnoff stars and giants of the same metallicity. Generally, $\rm [X/Fe]_{dwarfs} - [X/Fe]_{giants} \approx +0.2$ dex, except for Mg and Si, which show a negative shift. Also, \[Cr/Fe\] appears to be flat in the dwarfs, but displays a significant slope for the giants. It is difficult to explain these shifts by systematic errors in the models (error in temperature or in gravity) because the effects on the abundance of all the elements are very similar (see Table \[errors\]), so the ratios \[X/Fe\] are little affected. These differences are rather puzzling because, except for C, N, and possibly Na, the chemical composition of the giant stars should be unaltered since the star formed, so one would expect that the abundances in giants should match those in dwarfs at any given metallicity. The discrepancy we find is most likely due to shortcomings in our analysis, but we do not know whether we should trust the derived results for giants or dwarfs (or perhaps neither!). 
In the following we discuss the two main simplifications of standard model atmospheres, the neglect of the effects of granulation (“3D effects” for short) and of deviations from local thermodynamic equilibrium (NLTE), as possible causes of the observed discrepancies.

Granulation (3D) effects
------------------------

It is well known that hydrodynamical simulations (“3D models”) predict much cooler temperatures in the outer layers of metal-poor stars than 1D models (Asplund et al. [@asp99], Collet et al. [@collet], Caffau & Ludwig [@CL07], González Hernández et al. [@jonay], Paper XI). The effect is more pronounced for dwarfs than for giants. The species most affected by this difference are clearly those which predominantly reside in such cool layers, most notably the diatomic molecules such as CH and NH. Since one of the most striking differences between dwarfs and giants is in fact the C abundance, which we derive from CH lines, we decided to investigate the effects of granulation in more detail. To accomplish this, we used the two [$\mathrm{CO}^5\mathrm{BOLD}$]{} (Freytag et al. [@freytag02], Wedemeyer et al. [@wedemeyer04]) 3D models described in Paper XI ($T_{\rm eff}$/log $g$/\[Fe/H\]: 6550/4.50/–3.0 and 5920/4.50/–3.0). Unfortunately, we do not yet have any fully relaxed models for giant stars, so we decided to use a representative snapshot of a 3D simulation of a giant close to relaxation ($T_{\rm eff}$/log $g$/\[Fe/H\]: 4880/2.00/–3.0). Table \[3dcor\] lists the mean 3D corrections, as defined by Caffau & Ludwig ([@CL07]), for the three models described above. The sense of the correction is always 3D$-$1D. Approximating the 3D correction for the G band as the average for just 4 lines is admittedly somewhat crude, but should provide a reliable order-of-magnitude estimate of the effect. For the C abundance, the effect is quite prominent for dwarfs.
The magnitude of the correction is such that, if applied, the discrepancy in \[C/Fe\] between dwarfs and giants would be somewhat reduced (from 0.27 dex to 0.13 dex), but with the opposite sign, the dwarfs now showing a slightly lower C abundance. Given the crudeness of our 3D computations, we cannot claim with certainty that 3D effects will explain the discrepancy. To the extent that our order-of-magnitude estimates are reliable, it is possible that more accurate computations with a larger set of parameters, encompassing the full range of our dwarf and giant stars, would yield \[C/Fe\]$\sim 0.2$ for both dwarfs and giants. For the giant model, our computed correction for C is a factor of two smaller than the results of Collet et al. ([@collet]). Also, for Fe, our corrections are considerably smaller than found by Collet et al., especially for the resonance line. The issue clearly requires further investigation, which we will undertake when we have several fully relaxed 3D models of giants. A detailed discussion is therefore premature. At present it is unclear whether the different results we find are due to some fundamental difference between the 3D codes: [$\mathrm{CO}^5\mathrm{BOLD}$]{} (Freytag et al. [@freytag02], Wedemeyer et al. [@wedemeyer04]) in our case, and the Stein & Nordlund ([@SN98]) code in the case of Collet et al. ([@collet]), or simply to the choice of a MARCS model as the 1D reference by Collet et al. ([@collet]). We performed spectrum synthesis computations, using [Linfor3D]{}[^3] to estimate 3D corrections for a few selected lines, listed in Table \[tab3dlin\]. This is not meant to substitute for a full 3D investigation of the sample, but should provide an indication whether the differences found between giants and dwarfs might vanish if suitable 3D models were used. For Si, Co, and Zn the predicted corrections are the same for giants and dwarfs, so the discrepancy for these elements should not be due to granulation effects. 
For Sc and Ti, however, the differences go in the direction of increasing the discrepancy between dwarfs and giants. For Mn and Cr, the difference in correction between dwarfs and giants is such as to exactly cancel the discrepancies. We caution, however, that the corrections listed in Table \[3dcor\] are the average of those for the resonance and high-excitation lines. The difference in correction between the two lines is smaller for the giant model (0.1 dex for Cr, 0.3 dex for Mn) than for the dwarf models (0.4–0.5 dex for Cr, 0.7 dex for Mn). This difference is still somewhat problematic, however, in the sense that while the 1D analysis achieved a good excitation equilibrium for Cr, an analysis based on the 3D atmospheres does not. This suggests that the temperature scale appropriate for 3D models may in fact be different from those adopted in this paper and by Cayrel et al. ([@CDS04]). As mentioned above, a 1D analysis implies a Mn abundance about 0.4 dex [*lower*]{} for the resonance lines than for the high-excitation lines, and the 3D corrections for Mn [*increase*]{} this difference, up to 1.1 dex.

| Species | $\lambda$ (nm) | $\chi$ (eV) |
|---------|---------------:|------------:|
| CH      | 430.0317 | 0.00 |
| CH      | 430.0587 | 0.36 |
| CH      | 430.1072 | 1.44 |
| CH      | 430.1135 | 0.31 |
|         | 390.5523 | 1.91 |
|         | 410.2916 | 1.91 |
|         | 424.6822 | 0.31 |
|         | 376.1323 | 0.57 |
|         | 391.3468 | 1.12 |
|         | 425.4332 | 0.00 |
|         | 520.8419 | 0.94 |
|         | 403.0753 | 0.00 |
|         | 404.1355 | 2.11 |
|         | 382.4444 | 0.00 |
|         | 400.5242 | 1.56 |
|         | 418.7795 | 2.42 |
|         | 422.7427 | 3.33 |
|         | 384.5461 | 0.92 |
|         | 481.0528 | 4.08 |

: Lines used to test the granulation effects.[]{data-label="tab3dlin"}
| model | 4880/2.00/–3.0 | 5920/4.50/–3.0 | 6550/4.50/–3.0 |
|-------|---------------:|---------------:|---------------:|
|       | $-0.1$ | $-0.5$ | $-0.6$ |
|       | $-0.1$ | $-0.1$ | $-0.2$ |
|       | $-0.2$ | $-0.1$ | $-0.1$ |
|       | $-0.1$ | $0.0$  | $0.0$  |
|       | $-0.3$ | $-0.6$ | $-0.5$ |
|       | $-0.3$ | $-0.5$ | $-0.5$ |
|       | $-0.2$ | $-0.2$ | $-0.3$ |
|       | $-0.3$ | $-0.3$ | $-0.4$ |
|       | $+0.1$ | $+0.1$ | $+0.1$ |
|       | $+0.1$ | $-0.3$ | $-0.3$ |
|       | $+0.1$ | $+0.1$ | $+0.1$ |
|       | $0.0$  | $+0.1$ | $+0.2$ |
|       | $+0.1$ | $+0.2$ | $+0.3$ |
|       | $-0.1$ | $-0.4$ | $-0.2$ |
|       | $-0.1$ | $-0.3$ | $-0.2$ |
|       | $-0.1$ | $-0.1$ | $-0.1$ |
|       | $+0.3$ | $+0.3$ | $+0.4$ |

: Mean 3D corrections for selected elements.\[3dcor\]

It is unlikely that the use of 3D models will bring the abundances in giants and dwarfs into agreement for all elements, although it may be possible for a few (most likely C, Cr, and Mn). However, a full re-analysis based on 3D models, including a redetermination of the atmospheric parameters, is needed before reaching a firm conclusion on this point. For the time being, since the predicted 3D corrections are always smaller for our giant model than for the dwarf models, we consider the 1D abundances for giants to be more reliable than for the dwarfs.

Deviations from local thermodynamic equilibrium.
------------------------------------------------

The analysis in this paper and in Cayrel et al. ([@CDS04]) is based on the assumption of local thermodynamic equilibrium (LTE), both in the computation of the model atmospheres and in the line transfer computations. For Na and Al, results based on NLTE line transfer computations have been presented in Andrievsky et al. ([@ASK07; @ASK08]). For both elements, the LTE computations implied a discrepancy between dwarfs and giants, while the NLTE computations provided consistent abundances between the two sets of stars.
In the case of Na, the NLTE corrections are not very different for dwarf and giant models for lines of a given equivalent width, but the correction depends strongly on the equivalent width. The giant stars, which are cooler, have larger equivalent widths and hence larger NLTE corrections. In this case the LTE abundances of dwarfs are to be considered more reliable than those of giants. This result cannot be generalized, however, so detailed NLTE computations should be carried out for all the elements for which we find a discrepancy between dwarfs and giants. Also, one cannot [*a priori*]{} assume that the departures from LTE are larger for the stronger lines (i.e. for giants), although this is often the case. Accordingly, except for the two elements Na and Al, for which we already have NLTE computations, we cannot at present say whether accounting for NLTE effects could remove the discrepancy between dwarfs and giants. Computations of the NLTE abundance of Mg are under way.

Could the dwarf/giant discrepancy be real?
------------------------------------------

For C, the difference in \[C/Fe\] between dwarfs and giants might represent the effect of the first dredge-up, which could be responsible for a decrease of the C abundance due to a first mixing with the H-burning layer, where C is transformed into N. For the other elements we see no possible nucleosynthetic origin for the dwarf/giant discrepancy. Another possibility is that the abundances in EMP turnoff stars are seriously affected by diffusion (see e.g. Korn et al. [@KGR] and Lind et al. [@lind]). From Table 2 of Lind et al. ([@lind]) one can deduce the following variations in abundance ratios between TO stars and RGB stars in the globular cluster NGC 6397: $\Delta \rm [Mg/Fe] = -0.04 \pm 0.17$, $\Delta \rm [Ca/Fe] = +0.06 \pm 0.13$, $\Delta \rm [Ti/Fe] = +0.16 \pm 0.12$.
So only for \[Ti/Fe\] is a variation marginally detected, which happens to be of the same order of magnitude and sign as the giant/dwarf discrepancy observed by us. Although a role of diffusion cannot be ruled out, the evidence in its favour is, at best, very weak. Confirmation of the results of Korn et al. ([@KGR]) and Lind et al. ([@lind]) by an independent analysis would be useful, especially in view of the fact that previous investigations of the same cluster (Castilho et al. [@bruno], Gratton et al. [@gratton]) gave different results. As we pointed out in Paper VII, the adoption of a higher effective temperature for the turn-off stars of this cluster, as done by Bonifacio et al. ([@B02]), would largely cancel the abundance differences between TO and RGB. Even if the results for NGC 6397 were confirmed, it is not obvious that they would apply to the field stars analysed in the present paper. Unlike the stars in a globular cluster, these stars are not necessarily strictly coeval, and their metallicities range from $\sim$0.7 to $\sim$1.7 dex below that of NGC 6397.

Do the giant and dwarf samples belong to the same population?
-------------------------------------------------------------

It could be argued that the observed giant and turnoff samples might belong to different populations, since the giants would, on average, be more distant than the turnoff stars. To test this, we have compared the radial velocities of the two samples (it would have been preferable to compare the space velocities, but the distances and proper motions of the giants are generally very uncertain). Barycentric radial velocities for the turnoff stars are given in Bonifacio et al. ([@BMS06]). For the giants they are given in Table \[vr\]; they are based on the yellow spectra centered at 573 nm, using the laboratory and measured wavelengths of numerous lines (Nave et al. [@NJL94]). The wavelengths of the telluric lines used to set the zero points were taken from Jacquinet-Husson et al. ([@telluric]).
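The population comparison via radial velocities can be quantified with a two-sample Kolmogorov-Smirnov test. A minimal pure-Python sketch (the velocity samples are synthetic draws mimicking the quoted means and dispersions, and the sample sizes are illustrative, not the actual measurements):

```python
import math
import random

def ks_statistic(a, b):
    """Two-sample KS statistic: maximum distance between the
    empirical cumulative distribution functions of a and b."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        while i < na and a[i] <= x:
            i += 1
        while j < nb and b[j] <= x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

# Synthetic samples mimicking the quoted moments (km/s):
# giants ~ N(-12, 141), turnoff stars ~ N(-32, 159).
random.seed(0)
giants = [random.gauss(-12.0, 141.0) for _ in range(35)]
turnoff = [random.gauss(-32.0, 159.0) for _ in range(18)]

d = ks_statistic(giants, turnoff)
# Approximate 5% critical value for rejecting a common parent population:
d_crit = 1.36 * math.sqrt(1 / len(giants) + 1 / len(turnoff))
```

With similar underlying distributions the statistic $D$ typically stays well below the critical value, consistent with a common parent population for dwarfs and giants.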
Velocity errors should be below 0.3 km s$^{-1}$, more than adequate for the present purpose (see also Hill et al. [@HPC02]). Since all the program stars (except for a few of the giants) have been selected from the HK survey (Beers et al. [@beers85; @beers92] and Beers [@beers99]), which is kinematically unbiased, their radial velocities should be an unbiased estimate of the kinematic properties of the population. Thus, if the stars were indeed drawn from different populations, we would expect their radial-velocity distributions to differ. The mean radial velocities and standard deviations are $-12$ and 141 km s$^{-1}$ for the giants and $-32$ and 159 km s$^{-1}$ for the turnoff stars, respectively. A Kolmogorov-Smirnov test shows only a 10-15% probability that the two samples have not been drawn from the same parent population. Thus, the radial-velocity data support the assumption that the dwarfs and giants belong to the same population.

Conclusions
===========

We have determined abundances of C, Mg, Si, Ca, Sc, Ti, Cr, Mn, Co, Ni, Zn, Sr and Ba for a sample of 18 EMP turnoff stars, which complements the sample of giants discussed by Cayrel et al. ([@CDS04]). For the subgiant BS 16076-006 it was also possible to determine the N abundance. For Ca, Ni, Sr, and Ba we find excellent consistency between the abundances in dwarfs and giants at any given metallicity. For the other elements we find abundances for the dwarfs which are about 0.2 dex larger than for the giants, except for Mg and Si, for which the abundance in dwarfs is about 0.2 dex [*lower*]{} than in the giants, and Zn, for which the abundances in dwarfs are about 0.4 dex [*higher*]{} than in the giants. The only element for which such a discrepancy could have an astrophysical explanation is C. In fact, if the first dredge-up were capable of bringing into the atmosphere material which had undergone CN processing, one would expect to find lower C abundance in giants than in dwarfs.
Such an effect is not predicted by standard models of stellar evolution and would require some extra-mixing mechanism. For all the other elements which display a discrepancy between dwarfs and giants we are unable to find any plausible astrophysical explanation. We conclude that the discrepancies arise from shortcomings in our analysis, probably also for C, but certainly for all the other elements for which discrepancies are found. We have made an approximate assessment of the effects of granulation and conclude that they are unlikely to explain the discrepancies, except perhaps for C, Mn and Cr. In any case, the 3D corrections appear to be smaller for giants than for dwarfs, which suggests that the 1D abundances of giants are preferable as reference data for studies of the chemical evolution of the Galaxy. The other obvious shortcoming in our analysis is the assumption of local thermodynamic equilibrium. Detailed NLTE line transfer computations exist for Na and Al (Andrievsky et al. [@ASK07; @ASK08]), and for these two elements they in fact remove the dwarf/giant discrepancy implied by the LTE analysis. Computations for Mg are in progress, and it seems that the agreement between giants and dwarfs is at least improved. This result cannot be generalized to other elements, and it is not clear whether NLTE computations might remove any of the other discrepancies. Clearly, NLTE computations for other key elements are urgently needed. For readers who wish to use our data for comparison with Galactic evolution models we suggest that, for elements for which a dwarf/giant discrepancy exists, the abundances in giants are to be preferred. We plan to publish an updated table of all the abundances in the First Stars programme in a final paper of the series. For the time being we direct the reader who wants the most updated abundances of the First Stars giants to the following papers: for Li, C, N and O, Spite et al.
([@SCP05], First Stars VI; [@SCH06], First Stars IX); for Na, Andrievsky et al. ([@ASK07]); for Mg, the NLTE abundances in Fig.\[MgSiCaTi\] of the present paper (to be published in full soon); for Al, Andrievsky et al. ([@ASK08]); for K, Ca, Sc, Ti, Mn, Fe, Co, Ni, Zn, Cayrel et al. ([@CDS04]); for Cr, the abundances given in Fig.\[Cr1-2\] should be preferred; for Ba, Andrievsky et al. ([@ASK09]); and for all the other elements heavier than Zn, François et al. ([@FDH06], First Stars VIII). The reasons for recommending the use of abundances in giants are threefold: 1) granulation effects are smaller for giants than for dwarfs; 2) giants have lower effective temperatures and stronger lines, so from the observational point of view their abundances are better determined; and 3) the atmospheres of giant stars are well mixed by convection and should be immune to chemical anomalies driven by diffusion. One should, however, bear in mind that future NLTE analyses of our data could imply substantial revision of the abundances in both giants and dwarfs.

We thank the ESO staff for assistance during all the runs of our Large Programme. R.C., P.F., V.H., B.P., F.S. & M.S. thank the PNPS and the PNG for their support. P.B., H.G.L. and E.C. acknowledge support from EU contract MEXT-CT-2004-014265 (CIFIST). T.C.B. acknowledges partial funding for this work from grants AST 00-98508, AST 00-98549, AST 04-06784, AST 07-07776, as well as from grant PHY 02-16783: Physics Frontiers Center/Joint Institute for Nuclear Astrophysics (JINA), all from the U.S. National Science Foundation. B.N. and J.A. thank the Carlsberg Foundation and the Swedish and Danish Natural Science Research Councils for partial financial support of this work. We acknowledge use of the supercomputing centre CINECA, which has granted us time to compute part of the hydrodynamical models used in this investigation, through the INAF-CINECA agreement 2006-2007.

Allende Prieto, C., Rebolo, R., L[ó]{}pez, R. J. G., Serra-Ricart, M., Beers, T. C., Rossi, S., Bonifacio, P., & Molaro, P. 2000, , 120, 1516

Alonso, A., Arribas, S., & Mart[í]{}nez-Roger, C. 1996, , 313, 873

Alvarez, R., & Plez, B. 1998, A&A, 330, 1109

Andrievsky, S. M., Spite, M., Korotin, S. A., et al. 2007, A&A, 464, 1081

Andrievsky, S. M., Spite, M., Korotin, S. A., et al. 2008, A&A, 481, 481

Andrievsky, S. M., Spite, M., Korotin, S. A., et al. 2009, A&A, in press

Asplund, M., Nordlund, [Å]{}., Trampedach, R., & Stein, R. F. 1999, , 346, L17

Asplund, M., & Garc[í]{}a P[é]{}rez, A. E. 2001, , 372, 601

Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C., & Kiselman, D. 2004, , 417, 751

Ballester, P., Modigliani, A., Boitiquin, O., et al. 2000, ESO Messenger, 101, 31

Beers, T. C. 1999, , 265, 547

Beers, T. C., Preston, G. W., & Shectman, S. A. 1985, , 90, 2089

Beers, T. C., Preston, G. W., & Shectman, S. A. 1992, , 103, 1987

Boesgaard, A. M., King, J. R., Deliyannis, C. P., & Vogt, S. S. 1999, , 117, 492

Bonifacio, P., et al. 2002, , 390, 91

Bonifacio, P., Molaro, P., Sivarani, T., et al. 2007, , 462, 851 ([**“First Stars VII”**]{})

Brown, B. J. Jr., Hollander, M., & Korwar, R. M. 1974, “Nonparametric Tests of Independence for Censored Data, with Applications to Heart Transplant Studies”, in [Reliability and Biometry]{}, eds. F. Proschan & R. J. Serfling (SIAM: Philadelphia), p. 327

Caffau, E., & Ludwig, H.-G. 2007, , 467, L11

Caffau, E., Ludwig, H.-G., Steffen, M., Ayres, T. R., Bonifacio, P., Cayrel, R., Freytag, B., & Plez, B. 2008, , 488, 1031

Castelli, F., & [Kurucz]{}, R. L. 2003, in IAU Symposium, ed. N. [Piskunov]{}, W. W. [Weiss]{}, & D. F. [Gray]{}, 20P

Castelli, F., Gratton, R. G., & Kurucz, R. L. 1997a, , 324, 432

Castelli, F., Gratton, R. G., & Kurucz, R. L. 1997b, , 318, 841

Castilho, B. V., Pasquini, L., Allen, D. M., Barbuy, B., & Molaro, P. 2000, , 361, 92

Cayrel, R. 1996, , 7, 217

Cayrel, R. 2006, Reports on Progress in Physics, 69, 2823

Cayrel, R., Depagne, E., Spite, M., et al. 2004, A&A, 416, 1117 ([**“First Stars V”**]{})

Cohen, J. G., Christlieb, N., McWilliam, A., et al. 2004, ApJ, 612, 1107

Cohen, J. G., Christlieb, N., McWilliam, A., Shectman, S., Thompson, I., Melendez, J., Wisotzki, L., & Reimers, D. 2008, , 672, 320

Collet, R., Asplund, M., & Trampedach, R. 2007, , 469, 687

Dekker, H., D’Odorico, S., Kaufer, A., et al. 2000, in Optical and IR Telescopes Instrumentation and Detectors, eds. I. Masanori & A. F. Morwood, Proc. SPIE, 4008, 534

Delahaye, F., & Pinsonneault, M. H. 2006, , 649, 529

Depagne, E., Hill, V., Spite, M., et al. 2002, A&A, 390, 187 ([**“First Stars II”**]{})

Feigelson, E. D., & Nelson, P. I. 1985, , 293, 192

François, P., Depagne, E., Hill, V., et al. 2003, A&A, 403, 1105 ([**“First Stars III”**]{})

François, P., Depagne, E., Hill, V., et al. 2007, A&A, 476, 935 ([**“First Stars VIII”**]{})

Freytag, B., Steffen, M., & Dorch, B. 2002, Astronomische Nachrichten, 323, 213

Goldman, A., & Gillis, J. R. 1981, Journal of Quantitative Spectroscopy and Radiative Transfer, 25, 111

González Hernández, J., Bonifacio, P., Ludwig, H.-G., et al. 2008, A&A, 480, 233 ([**“First Stars XI”**]{})

Gratton, R. G., et al. 2001, , 369, 87

Gustafsson, B., Bell, R. A., Eriksson, K., & Nordlund, Å. 1975, A&A, 42, 407

Gustafsson, B., Edvardsson, B., Eriksson, K., Graae-J[ø]{}rgensen, U., Mizuno-Wiedner, M., & Plez, B. 2003, in Stellar Atmosphere Modeling, eds. I. Hubeny, D. Mihalas, & K. Werner, ASP Conf. Series, 288, 331

Gustafsson, B., Edvardsson, B., Eriksson, K., Graae-J[ø]{}rgensen, U., Nordlund, Å., & Plez, B. 2008, A&A, in press, arXiv:0805.0554

Hill, V., Plez, B., Cayrel, R., et al. 2002, A&A, 387, 560 ([**“First Stars I”**]{})

Isobe, T., Feigelson, E. D., & Nelson, P. I. 1986, , 306, 490

Israelian, G., Rebolo, R., Garc[í]{}a López, R. J., Bonifacio, P., Molaro, P., Basri, G., & Shchukina, N. 2001, , 551, 833

Jacquinet-Husson, N., Scott, N.-A., Chédin, A., et al. 2005, JQSRT, 95, 429

J[ø]{}rgensen, U. G., Larsson, M., Iwamae, A., & Yu, B. 1996, A&A, 315, 204

Korn, A., Grundahl, F., Richard, O., et al. 2007, ApJ, 671, 402

Kurucz, R. 1993, ATLAS9 Stellar Atmosphere Programs and 2 km/s grid, Kurucz CD-ROM No. 13 (Cambridge, Mass.: Smithsonian Astrophysical Observatory)

Lai, D. K., Bolte, M., Johnson, J. A., & Lucatello, S. 2004, , 128, 2402

Lai, D. K., Bolte, M., Johnson, J., Lucatello, S., Heger, A., & Woosley, S. E. 2008, ApJ, 681, 1524

Lavalley, M. P., Isobe, T., & Feigelson, E. D. 1992, , 24, 839

Lind, K., Korn, A. J., Barklem, P. S., & Grundahl, F. 2008, , 490, 777

Luque, J., & Crosley, D. R. 1999, SRI International report MP 99-099

Meléndez, J., Shchukina, N. G., Vasiljeva, I. E., & Ramirez, I. 2006, ApJ, 642, 1082

Meynet, G., & Maeder, A. 2002, A&A, 390, 561

Molaro, P., Bonifacio, P., & Primas, F. 1995, Memorie della Società Astronomica Italiana, 66, 323

Nave, G., Johansson, S., Learner, R. C. M., Thorne, A. P., & Brault, J. W. 1994, ApJS, 94, 221

Nissen, P. E., & Schuster, W. J. 1997, A&A, 326, 751

Palacios, A., Charbonnel, C., Talon, S., & Siess, L. 2006, astro-ph/0602389

Plez, B., Hill, V., Cayrel, R., et al. 2004, ApJ, 617, L119

Sbordone, L., Bonifacio, P., Castelli, F., & Kurucz, R. L. 2004, Memorie della Società Astronomica Italiana Supplement, 5, 93

Sivarani, T., Bonifacio, P., Molaro, P., et al. 2004, A&A, 413, 1073 ([**“First Stars IV”**]{})

Sivarani, T., Beers, T. C., Bonifacio, P., et al. 2006, A&A, 459, 125 ([**“First Stars X”**]{})

Sneden, C. A. 1973, Ph.D. Thesis

Sneden, C. 1974, , 189, 493

Sneden, C. 2007, http://verdi.as.utexas.edu/moog.html

Spite, M., Cayrel, R., Plez, B., et al. 2005, A&A, 430, 655 ([**“First Stars VI”**]{})

Spite, M., Cayrel, R., Hill, V., et al. 2006, A&A, 455, 291 ([**“First Stars IX”**]{})

Stein, R. F., & Nordlund, Å. 1998, ApJ, 499, 914

Takeda, Y., Hashimoto, O., Taguchi, et al. 2005, , 57, 751

Wedemeyer, S., Freytag, B., Steffen, M., Ludwig, H.-G., & Holweger, H. 2004, , 414, 1121

Comparison with the 0Z project. \[0Zcomp\]
==========================================

The 0Z project (Cohen et al.
[@CCM04; @CCM08]) has produced a data set similar to that of the “First Stars” project; it is therefore of some interest to verify how these data sets compare. Cohen et al. ([@CCM04]) analysed a set of dwarf stars which is directly comparable to those analysed in the present paper. The spectra were acquired with the HIRES spectrograph at the Keck I Telescope, at a resolution only slightly lower than our UVES-VLT data (34000 rather than 45000), and the S/N ratios are comparable. The equivalent widths were measured using an automatic code which fits gaussians; therefore the general philosophy of EW measurement does not differ from ours. In fact Cohen et al. ([@CCM08]) observed two giant stars measured by Cayrel et al. ([@CDS04]), and the equivalent widths compare very well (see figures 13 and 14 of Cohen et al. [@CCM08], and related text). The two projects differ in the method used to fix the atmospheric parameters: we use the wings of H$\alpha$ for dwarf stars, while the 0Z project relies on photometry to derive the effective temperature. For surface gravity we use the iron ionisation equilibrium, while the 0Z project relies on theoretical isochrones. We have also investigated the $gf$ values used by the two projects, and they are very similar; the use of one set or the other would imply differences in the derived abundances no larger than 0.02 dex. Thus, part of the differences will depend on the different adopted atmospheric parameters. There is no dwarf star in common between the two groups; thus it is not straightforward to compare the results of the two projects. For the analysis the two projects use different model atmospheres and different line formation codes. We use MARCS model atmospheres and [turbospectrum]{}, while the 0Z project uses ATLAS models interpolated in the grid of Kurucz (1993), with the overshooting option switched on, and the MOOG code (Sneden [@Sneden73; @Sneden74; @snedenweb]).
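For weak, unsaturated lines the derived abundance shifts one-to-one (with opposite sign) with the adopted $\log gf$, which is why a 0.02 dex difference between the two $gf$ sets translates into at most a 0.02 dex abundance difference. A sketch of this bookkeeping (the $\log gf$ values below are hypothetical):

```python
# For a weak line, EW ∝ gf * N(X), so at a fixed measured EW the
# derived abundance shifts by -Δlog(gf).
def abundance_shift_weak_line(loggf_ours, loggf_theirs):
    """Abundance difference (ours - theirs) implied, in the weak-line
    approximation, by analysing the same EW with two log gf values."""
    return -(loggf_ours - loggf_theirs)

# Hypothetical log gf values: a 0.02 dex difference in log gf maps
# onto a 0.02 dex abundance difference of the opposite sign.
shift = abundance_shift_weak_line(-1.02, -1.04)
```

This linear mapping only holds off the saturated part of the curve of growth; for saturated lines the abundance sensitivity to $gf$ (and to damping) is larger and model-dependent.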
As we shall show below, the different choice of line formation code is relatively unimportant, implying differences in the abundances of a few hundredths of a dex; on the other hand the choice of ATLAS overshooting models implies abundances which are higher by about 0.1 dex for all the models. Such a behaviour was already noticed by Molaro et al. ([@molaro]) for Li, but we show here that it is indeed true for all species.

: Abundances A (with line-to-line scatter $\sigma_{\rm A}$) for HE 0508-1555 for different model atmospheres (MARCS, ATLAS without overshooting, ATLAS with overshooting), compared with the published values of Cohen et al. ([@CCM04]).[]{data-label="HE0508"}

In Table \[HE0508\] we list the abundances for the star HE 0508-1555 derived by using the equivalent widths of Cohen et al. ([@CCM04]) and their atmospheric parameters ($T_{\rm eff}$ = 6365 K, log g = 4.4 and a microturbulent velocity of 1.6 km s$^{-1}$) with three different models: a MARCS model interpolated in our grid, an ATLAS model computed without overshooting and an ATLAS model computed with overshooting. For all the models we assumed \[M/H\] = $-3.0$. Our ATLAS models are somewhat different from those of the Kurucz (1993) grid used by the 0Z project. In the first place we use the “NEW” opacity distribution functions (Castelli & Kurucz [@CK]) computed with 1 km s$^{-1}$ microturbulence. In the second place we use the Linux version of ATLAS (Sbordone et al.
[@SBCK]). In all cases the line formation code used was [turbospectrum]{}. In the last two columns of Table \[HE0508\] we provide the abundances of Cohen et al. ([@CCM04]) for the reader’s convenience. Inspection of Table \[HE0508\] immediately suggests that both the difference in ATLAS versions and the difference in line formation codes are immaterial, since the abundances we find for almost all elements are within 0.04 dex of those of Cohen et al. ([@CCM04]). The two exceptions are Al and Si. For Al there is a good reason for the discrepancy: both lines used are affected by the neighbouring Balmer lines. In our analysis we used spectrum synthesis to derive the abundances. Instead, MOOG can take into account the absorption due to the Balmer lines either by using the [**opacit**]{} switch to introduce a fudge factor on the continuum opacity or by using the [**strong**]{} keyword to read strong lines to be considered. For Si the difference between our result with the ATLAS overshooting model and the published value of Cohen et al. ([@CCM04]) is 0.08 dex. This abundance is based on a single line with an EW of about 10 pm, which is therefore clearly saturated. The precise value of the damping constants used for this line, and the way the different codes use them, may have an impact. Another inference which can be drawn from Table \[HE0508\] is that MARCS models and ATLAS non-overshooting models provide results which are quite similar. This is not the case for the ATLAS overshooting models, which imply abundances higher by about 0.1 dex for all elements. The reason for this behaviour may be understood by looking at the temperature structure of the different models. In Fig. \[over\_dwarfs\] we compare the temperature structures of our MARCS model (solid line), the ATLAS non-overshooting model (dashed-dotted line) and the ATLAS overshooting model (dashed line). The temperature structures of the ATLAS non-overshooting and the MARCS model are quite similar.
In fact, the only difference is in the deepest layers and is driven by the different choice made for the mixing length. In Fig. \[ML\] we show the temperature structure of the deeper layers of our MARCS model (solid line) together with two ATLAS non-overshooting models with different values of [$\alpha_{\mathrm{MLT}}$]{}: 1.25 (dashed-dotted line) and 1.00 (dotted line). The ATLAS model with [$\alpha_{\mathrm{MLT}}$]{} = 1.00 is closer to the MARCS model up to $\log\tau_{500}\sim 0.7$, but then becomes hotter than the MARCS model. In general it is impossible to choose an [$\alpha_{\mathrm{MLT}}$]{} such that a MARCS and an ATLAS model have exactly the same structure in depth, owing to the different formulations of the mixing-length theory in the two codes. Such differences in the very deepest layers have very little influence on a typical abundance analysis. In fact only the lines which form in these very deep layers are affected, i.e. very weak lines of 0.1 pm or smaller, and the wings of H$\beta$ and higher members of the Balmer series. In general we can conclude that MARCS and ATLAS non-overshooting models are very similar, and an abundance analysis based on the two models will yield abundances which are consistent to within a few hundredths of a dex. The situation is dramatically different when we consider the ATLAS overshooting models. Such models present a temperature structure which is very different from both the ATLAS non-overshooting and the MARCS models in the region $-1 \le \log\tau_{500} \le 1$, where the majority of the lines used in abundance analysis are formed. Castelli, Gratton & Kurucz ([@CGK; @CGKE]) investigated extensively the effects of the approximate overshooting present in ATLAS and concluded that the no-overshooting models reproduce a larger set of observables, thus discouraging the use of overshooting models.
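The comparison of model temperature structures in the line-forming region can be sketched as follows (the stratifications below are hypothetical placeholders standing in for MARCS and ATLAS output, not actual model data):

```python
# Sketch: quantify how two model atmospheres differ in the
# line-forming region -1 <= log tau_500 <= 1.
def max_dT_line_forming(logtau, t_model_a, t_model_b, lo=-1.0, hi=1.0):
    """Maximum |T_a - T_b| over the layers with lo <= log tau <= hi."""
    diffs = [abs(ta - tb)
             for lt, ta, tb in zip(logtau, t_model_a, t_model_b)
             if lo <= lt <= hi]
    return max(diffs)

# Hypothetical stratifications on a common log tau_500 grid; the
# overshooting model is made hotter around log tau ~ 0 ("bump").
logtau = [-3.0, -2.0, -1.0, 0.0, 1.0]
t_marcs = [4200.0, 4500.0, 5000.0, 6400.0, 8200.0]
t_atlas_over = [4210.0, 4510.0, 5150.0, 6700.0, 8250.0]

print(max_dT_line_forming(logtau, t_marcs, t_atlas_over))  # prints 300.0
```

Restricting the comparison to the line-forming layers is what matters for abundances: a structure difference confined to $\log\tau_{500} > 1$ would affect only the weakest lines and the Balmer-line wings.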
To these considerations we may add that, having investigated the mean temperature structures of [$\mathrm{CO}^5\mathrm{BOLD}$]{} 3D hydrodynamical models, we never see the typical “bump” in the temperature structure seen in ATLAS overshooting models. The real effect of the overshooting is the overcooling of the outer layers with respect to what is predicted by radiative equilibrium models (Asplund et al. [@asp99], Collet et al. [@collet], Caffau & Ludwig [@CL07], González Hernández et al. [@jonay], Paper XI). This is a further reason to avoid the use of the ATLAS overshooting models. It can be appreciated that the differences due to different models largely cancel out when considering abundance ratios, such as \[Mg/Fe\], rather than abundances relative to hydrogen. For example \[Mg/H\] is $-2.04$ for the ATLAS non-overshooting model but $-1.90$ for the ATLAS overshooting one; however, \[Mg/Fe\] is 0.50 in the first case and 0.51 in the second. A difference in the average \[Mg/Fe\] is found between us and the 0Z project, of the order of 0.2 dex (the 0Z project being higher), whether we consider only dwarf stars, only giants, or the full samples. Such an offset is roughly compatible with a $1\sigma$ error on each side, but perhaps a little disturbing. Only a 0.01 dex difference is due to the different adopted solar abundances. The use of different models and different atmospheric parameters should largely cancel out when considering a ratio such as \[Mg/Fe\]. Largely does not mean totally, however: Table \[BS\] shows a 0.06 dex difference in \[Mg/Fe\] for BS 16467-062, depending on the adopted atmospheric parameters. Table 10 of Cohen et al. (2004) is also illuminating; it shows how the average \[Mg/Fe\] changes depending on which single Mg line is used to compute it. Of the five lines used by Cohen et al. (2004) three tend to give systematically higher abundances, while two give systematically lower abundances.
The final result depends on the set of adopted lines. This issue requires further investigation in the light of studies of deviations from thermodynamic equilibrium for these lines. Our abundance ratios are in agreement with those provided by the 0Z project, within the stated errors. At the end of this exercise we conclude that our measurements and those of the 0Z Team are highly consistent. Differences in the published abundances can be traced back to the different atmospheric parameters adopted, the different treatment of convection in the adopted model atmospheres (approximate overshooting versus no overshooting), and, for some elements, to the particular choice of lines.

Details of the comparison with Lai et al. 2008\[laicomp\]
=========================================================

Lai et al. (2008) also analysed a set of stars which is comparable to that of the First Stars project with respect to metallicity. Their sample is also extracted from the HK survey and comprises both dwarfs and giants. Their method for determining atmospheric parameters is similar to that of the 0Z project: photometric temperatures from the $V-K$ colour and gravities derived from isochrones. They observe the giant star BS 16467-062, also observed by us (Paper V) and in the 0Z project (Cohen et al. 2008) and, not surprisingly, derive atmospheric parameters very close to those of Cohen et al. This allows a very tight comparison of the analyses by the three groups, which we defer to Sec. \[BScomp\]. Lai et al. use the same spectrum synthesis code as we do and also use ATLAS 9 non-overshooting models which, as discussed in Sec. \[0Zcomp\], are very similar to our MARCS models. It is therefore to be expected that the abundance ratios determined by the two groups are quite similar. In Fig. \[LaiMg\] we compare the \[Mg/Fe\] ratios of the First Stars project with those of Lai et al. The overall agreement is satisfactory. In Fig.
\[LaiO\] we compare the \[O/Fe\] ratios of the First Stars project (only giants) with those of Lai et al. The figure seems to indicate a good agreement; however we believe that this agreement is in fact fortuitous, as our oxygen abundances were based on the 630 nm \[OI\] line, while those of Lai et al. have been derived from one OH line of the UV $A^2\Sigma - X^2\Pi$ electronic system around 318.5 nm (although the precise line used is not specified). These OH lines are known to provide very high \[O/Fe\] ratios when analysed with 1D model atmospheres (e.g. Boesgaard et al. [@boe99], Israelian et al. [@isr01]). Asplund & Garc[í]{}a P[é]{}rez ([@agp01]) have explained this behaviour as due to overcooling of the outer layers of the stars, caused by the overshooting of the convective elements and not properly described by 1D model atmospheres. Our own hydrodynamical computations (González Hernández et al. [@jonay], Paper XI) confirm this interpretation. In view of this fact it is, at first sight, surprising to find that Lai et al. determine rather low \[O/Fe\] ratios from the OH lines. Closer inspection of their analysis reveals, however, that this is mainly driven by their adopted $gf$ values for these lines. In Fig. \[CS31085-024\] we show a portion of the spectrum of CS 31085-024, used by Lai et al., which we downloaded from the Keck Observatory Archive [^4], compared with two synthetic spectra computed using an ATLAS 9 model with the atmospheric parameters adopted by Lai et al. and two different OH line lists. In the first case (red line) we adopted the $gf$ values for the OH lines of the (0-0) vibrational band of the $A^2\Sigma - X^2\Pi$ electronic system computed from the lifetimes of Goldman & Gillis ([@GG]), which we used in Paper IX. In the second case (green spectrum) we used the lines computed by R. L. Kurucz. This second list is far richer, since it also includes lines from other vibrational bands, not only the (0-0) band.
However, even from this limited portion of the spectrum it can be appreciated that the Kurucz $gf$ values are larger than those derived from the Goldman & Gillis ([@GG]) lifetimes; use of the Goldman & Gillis $gf$ values would lead to considerably larger OH abundances. For this reason we believe that the oxygen abundances in the stars of the Lai et al. sample should be reinvestigated using a different set of $gf$ values and hydrodynamical model atmospheres. It is likely that the 3D corrections for the giant stars (the majority of the Lai et al. sample with oxygen measurements) are smaller than those for dwarf stars (see Paper XI), since the overcooling is far less extreme in giants than in dwarfs; it is, however, unlikely that the effect is negligible. We disagree with the statement by Lai et al., who discard the use of 3D models for the analysis of the OH lines because “these models seem to overpredict the solar oxygen abundance derived from helioseismology (Delahaye & Pinsonneault 2006)”. In the first place, the oxygen abundance in the Sun is not derived from the UV OH lines; in the second place, it is now clear that the low solar oxygen abundances claimed in the past (Asplund et al. 2004) are not due to the use of 3D hydrodynamical models, but to low measured EWs and extreme assumptions on the role of collisions with H atoms in the NLTE computations (see Caffau et al. 2008, for a discussion and a new measurement of the solar oxygen abundance). In our view the use of 3D hydrodynamical models is necessary for a reliable analysis of OH lines in metal-poor stars. The \[Cr/Fe\] ratios are compared in Fig. \[CrLai\], and we see that the picture which emerges is very consistent between the two analyses, including the dwarf-giant discrepancy discussed in Sec. \[CrCoNi\]. In agreement with us, Lai et al.
note that when Cr II lines are measurable the \[Cr II/Fe\] ratio remains close to zero, suggesting that the decrease in \[Cr/Fe\] with decreasing metallicity, seen when Cr I lines are used, is probably an artifact due to deviations from LTE. Finally, in Fig. \[LaiZn\] we compare the \[Zn/Fe\] ratios with those of Lai et al. (2008). They have measured Zn only in two dwarfs, slightly more metal-rich than ours, and the Zn abundances for these two stars are in line with what is derived from the giants. We note that the $gf$ value adopted by Lai et al. is 0.04 dex lower than that adopted by us.

Comparison for BS 16467-062 \[BScomp\]
======================================

: Abundances for BS 16467-062 for different model atmospheres (a MARCS model with $T_{\rm eff}$ = 5365 K, log g = 2.95; an ATLAS overshooting model with the same parameters; a MARCS model with $T_{\rm eff}$ = 5200 K, log g = 2.50) and the EWs of Cohen et al. (2008).[]{data-label="BS"}

: Abundances A (with line-to-line scatter $\sigma$) for BS 16467-062 for different atmospheric parameters and the EWs of Lai et al. (2008).[]{data-label="BS_2"}

The giant star BS 16467-062 has been observed independently by all three groups: ourselves (Cayrel et al. 2004, Paper V), the 0Z project (Cohen et al. 2008) and Lai et al. (2008). The latter two groups used HIRES@Keck, while we used UVES@VLT. In their appendix B, Cohen et al. ([@CCM08]) make a detailed comparison between the analysis of giant stars analysed by us and their own analysis.
They conclude that the same star analysed by the two groups will show a difference of 0.3 dex in \[Fe/H\]. This is based on their analysis of the giant star BS 16467-062. We wish to explain how this difference arises. We used the EWs of Cohen et al. ([@CCM08]) for this star and their $gf$ values to redetermine the abundances using four models: a MARCS model and two ATLAS models (overshooting and non-overshooting) for $T_{\rm eff}$ = 5365 K and log g = 2.95, which are the parameters of Cohen et al. ([@CCM08]), and the MARCS model with $T_{\rm eff}$ = 5200 K and log g = 2.50, which was used in Cayrel et al. ([@CDS04], Paper V). The results are shown in Table \[BS\]. We omit the results from the ATLAS non-overshooting model, since they are identical to those obtained from the MARCS model. This could be expected by looking at Fig. \[over\_giants\], in which the temperature structures of the two models are compared. The differences in the abundances between the MARCS model with the parameters of Paper V and those from an ATLAS overshooting model with the higher $T_{\rm eff}$ and log g of Cohen et al. ([@CCM08]) may indeed be as large as 0.3 dex. However, it is important to understand that this difference is due to two distinct factors: on the one hand the change in $T_{\rm eff}$ and log g, as each species displays a slightly different sensitivity to these; on the other hand the use of approximate overshooting in the models of Cohen et al. ([@CCM08]). The two effects are comparable. The difference in the abundances obtained from the two different MARCS models allows one to estimate the sensitivity of the various abundances to the model parameters. The difference between the MARCS and the ATLAS overshooting model allows one to see the effect of the approximate overshooting. We confirm that \[Fe/H\] for this star is 0.3 dex higher using the parameters of Cohen et al. ([@CCM08]) and an ATLAS overshooting model, relative to that derived using the parameters of Paper V and a MARCS model (or an ATLAS non-overshooting model).
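Separating the total offset into a parameter effect and an overshooting effect amounts to differencing the derived abundances along one axis at a time. A sketch with illustrative \[Fe/H\] values consistent with the decomposition quantified in the text (the baseline value is the Paper V result; the grid itself is hypothetical):

```python
# Decompose the total [Fe/H] offset between two analyses into a
# parameter effect (Teff/log g) and a model-physics (overshooting)
# effect, by differencing one axis at a time.
feh = {
    ("marcs", "paperV"): -3.77,          # MARCS model, Paper V parameters
    ("marcs", "cohen"):  -3.77 + 0.17,   # same model, Cohen et al. parameters
    ("over",  "cohen"):  -3.77 + 0.30,   # ATLAS overshooting, Cohen parameters
}

param_effect = feh[("marcs", "cohen")] - feh[("marcs", "paperV")]
model_effect = feh[("over", "cohen")] - feh[("marcs", "cohen")]
total = feh[("over", "cohen")] - feh[("marcs", "paperV")]
```

The two partial effects add up to the total offset by construction; the point of the exercise is that neither one alone explains the 0.3 dex.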
However, 0.17dex of this difference arises from the different choices of Teff and log g, and 0.13dex comes from the use of the approximate overshooting. Having understood these differences we may conclude that there is excellent agreement between the two analyses. Note that with our MARCS model and atmospheric parameters, but the EWs and $gf$ values of Cohen et al. ([@CCM08]), \[Fe/H\] for this star is -3.80, which compares very well with -3.77 given in Paper V. Note also that when using MARCS models (or ATLAS non-overshooting models) our atmospheric parameters achieve a slightly better iron ionisation equilibrium (0.03 dex) than the parameters chosen by Cohen et al. ([@CCM08], 0.06 dex). However, since both these differences are much smaller than the line-to-line scatter, it is impossible to choose which set of parameters is better by just looking at the iron ionisation equilibrium. As noted above, most of these differences tend to cancel out when considering abundance ratios. Iron is the element for which the largest number of lines is measured, and in this respect its abundance is more robust. For other elements the difference between the values published in Paper V and an analysis by the 0Z Team may also reflect the different choice of lines. For instance, for BS 16467-062 Cohen et al. ([@CCM08]) measure 4 Mg lines, while in Paper V we measured 8 lines, but used only 7 to derive the mean Mg abundance. The Mg lines in BS 16467-062 are all weak; thus, the re-measurement of the Mg abundance using line profile fitting (see section \[mag\]) confirms the abundances provided in Paper V. We discarded 416.7271nm because the abundance derived from this line deviates strongly from those derived from the other lines. The line is rather weak (0.75pm as measured in our data or 0.68 pm as measured by Cohen et al. [@CCM08]), but even for these very weak lines the measurements are highly consistent.
Thus the mean Mg abundance from our 7 lines is, as given in Paper V, 3.97, with a rather small scatter of 0.09 dex. On the other hand, the mean Mg abundance from the four lines measured by Cohen et al. ([@CCM08]), including 416.7271nm, and using the atmospheric parameters and model of Paper V, is 4.15 with a rather large scatter of 0.33 dex. The mean Mg abundance for these four lines from our measurements is 4.12 with a scatter of 0.38dex. Finally, if we take the measurements of Cohen et al. ([@CCM08]) and discard the 416.7271nm line we obtain 3.99 with a scatter of 0.11, highly consistent with our published value in Paper V. The three groups (First Stars, 0Z project, Lai et al.) have used different atmospheric parameters for this star, and the sensitivity of abundances to these is detailed in all three papers. In order to make a stringent comparison between the results of the three groups it is advisable to derive abundances from each set of EWs and $gf$ values with the same model atmosphere and the same spectrum synthesis code. We did so in Table \[BS\_2\], using the MARCS model of Paper V to rederive all the abundances. We compare the atomic species in common, excluding Al, for which both we and Lai et al. have used spectrum synthesis. Inspection of Table \[BS\_2\] immediately reveals that, with very few exceptions, the abundances of the First Stars project rely on a larger number of lines than those of the other teams. This is particularly striking for iron, for which we use 130 lines compared to 55 of Cohen et al. and 52 of Lai et al.; a similar situation is found for Ti, where we use 11 and 23 lines, while Cohen et al. use 2 and 14, respectively, and Lai et al. 1 and 9. This probably reflects the fact that the First Stars spectra have a larger total wavelength coverage and a more uniform high S/N ratio across the spectra.
This is due in part to the fact that UVES, as a two-arm spectrograph, covers roughly a 30% larger spectral range in a single exposure than HIRES, and in part to the large amount of telescope time invested in the First Stars project.

-----  ---------  ----------  -----  ------  ----------  -----  ------  ----------  -----
Ion    Cohen et al.                  Lai et al.                  Paper V
       A          $\sigma$    $N$    A       $\sigma$    $N$    A       $\sigma$    $N$
        3.99$^a$  0.11        3       3.98   0.10        4       3.97   0.09        7
        4.07                  1       4.01               1       4.20               1
        2.71      0.12        3       2.78   0.07        4       2.94   0.19        12
       –0.54      0.03        3      –0.59   0.02        2      –0.59   0.06        4
        1.52      0.05        2       1.57               1       1.65   0.17        11
        1.42      0.11        14      1.43   0.12        9       1.43   0.18        23
        1.37      0.10        5       1.27   0.07        4       1.49   0.29        5
        0.97      0.22        5       0.82   0.02        2       1.07   0.03        3
        3.70      0.16        55      3.61   0.18        52      3.67   0.13        130
        3.73      0.14        8       3.54   0.16        3       3.79   0.12        4
        1.86      0.26        3       1.66   0.06        3       1.70   0.10        4
                              0       2.60   0.04        2       2.56   0.03        3
-----  ---------  ----------  -----  ------  ----------  -----  ------  ----------  -----

Once the line at 416.7nm has been removed from the set of Cohen et al. the Mg abundance appears to be in remarkably good agreement, in spite of the much larger number of lines used by the First Stars team. That the actual choice of lines does make a difference is obvious if we look at the Ca abundances. There is a difference of 0.23dex between the Ca abundance derived in Paper V and that of Cohen et al. (2008). Of the three lines measured by Cohen et al. (2008) we have only two. The mean Ca abundance for these two lines is 2.81 with a 0.05dex deviation; thus the discrepancy is reduced to 0.1dex, totally consistent with the observational errors. We have measured all four Ca lines used by Lai et al. (2008), and the mean of these four lines is close to the abundance given in Table \[BS\_2\]. However, 443.5nm appears to be discrepant by 0.39 dex with respect to the mean of the other three lines, which is 2.86, only 0.08dex higher than the value of Lai et al. and fully consistent with observational errors.
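The arithmetic behind these line-selection effects is simple but worth making explicit: with only a handful of lines, a single discrepant measurement can shift the mean and inflate the line-to-line scatter dramatically. A minimal sketch — the per-line abundances below are hypothetical, chosen only to mimic a four-line sample with one outlier, not the measured EW-based values:

```python
import statistics

def line_stats(abundances):
    """Mean and line-to-line scatter (sample std. dev.) of per-line abundances."""
    return statistics.mean(abundances), statistics.stdev(abundances)

# Hypothetical log abundances A from four lines; the last one stands in
# for a deviant line like Mg I 416.7271 nm.
lines = [3.95, 4.02, 3.99, 4.65]

mean_all, sigma_all = line_stats(lines)
mean_cut, sigma_cut = line_stats(lines[:-1])
print(f"all lines:   A = {mean_all:.2f}, scatter = {sigma_all:.2f}")
print(f"outlier cut: A = {mean_cut:.2f}, scatter = {sigma_cut:.2f}")
```

Dropping the one outlier both moves the mean by ~0.17 dex and shrinks the scatter by almost an order of magnitude, which is exactly the pattern seen in the Mg comparison above.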
It is then clear that for the species for which a limited number of lines is available, the actual choice of lines can make a difference. Another noticeable difference is for Si. All three groups have determined the Si abundance from a single Si line; however, the other two teams have used the 390.6nm line, while we have used the 410.3nm line, since the former is heavily contaminated by CH lines in the spectra of giant stars. On the other hand, the EWs for the 390.6nm line agree well among the three investigations (9.18pm for us, 9.34pm for Cohen et al. 2008 and 9.06pm for Lai et al. 2008); thus the Si abundance derived from this line agrees well among the three investigations. It is reassuring that for iron, for which all three groups have measured a large number of lines, the results are fully consistent. The conclusion of these comparisons is that the results of the three teams are consistent, once the different choice of atmospheric parameters and models has been factored out. Some caution must be exercised for the species which are represented by few lines, where the actual choice of lines can make a difference, especially if differential NLTE effects are present. [^1]: Based on observations obtained with the ESO Very Large Telescope at Paranal Observatory, Chile (Large Programme “First Stars”, ID 165.N-0276; P.I.: R. Cayrel, and Programme 078.B-0238; P.I.: M. Spite). [^2]: http://astrostatistics.psu.edu/statcodes/asurv [^3]: http://www.aip.de/ mst/Linfor3D/linfor\_3D\_manual.pdf [^4]: http://www2.keck.hawaii.edu/koa/public/koa.php
--- abstract: 'The convergence of the iterative solutions of the transport equations of cosmic muon and tau neutrinos propagating through Earth is studied and analyzed. For achieving a fast convergence of the iterative solutions of the coupled transport equations of $\nu_\tau$, $\bar{\nu}_\tau$ and the associated $\tau^{\pm}$ fluxes, a new semi–analytic input algorithm is presented where the peculiar $\tau$–decay contributions are implemented already in the initial zeroth order input. Furthermore, the common single transport equation for muon neutrinos is generalized by taking into account the contributions of secondary $\nu_\mu$ and $\bar{\nu}_\mu$ fluxes due to the prompt $\tau$–decay $\tau\to \nu_\mu$ initiated by the associated tau flux. Differential and total nadir angle integrated upward–going $\mu^-+\mu^+$ event rates are presented for underground neutrino telescopes and compared with the muon rates initiated by the primary $\nu_\mu$, $\nu_\tau$ and $\tau$ fluxes.' ---

DO-TH 06/05\
July 2006

S. Rakshit and E. Reya\
[*D-44221 Dortmund, Germany*]{}

Introduction
============

Upward–going cosmic neutrinos with energies below $10^8$ GeV play a decisive role for underground neutrino telescopes, since the atmospheric background can be controlled much more effectively than for downward–going cosmic neutrinos. While traversing the Earth, upward–going muon (anti)neutrinos undergo attenuation (absorption) due to weak charged current (CC) and neutral current (NC) interactions as well as regeneration [@ref1; @ref2] due to NC interactions. The latter shift the neutrinos to lower energies, rather than absorbing them, and thus populate the lower energy part of the initial cosmic neutrino flux spectra, adding to the naive non–regenerated $\mu^- +\mu^+$ event rates at the detector.
Such propagation effects of muon (and electron) neutrinos through Earth are described by a single transport (integro–differential) equation which can be rather easily solved iteratively [@ref1; @ref2; @ref3; @ref4; @ref5]. On the other hand, tau (anti)neutrinos are not absorbed in the Earth but only degraded in energy, as long as the interaction length of the produced tau leptons is larger than their decay length (which holds for energies up to about $10^9$ GeV). Because of the (semi)leptonic decays $\tau\to\nu_\tau X$, the Earth will not become opaque to $\nu_\tau$ [@ref6] since the $\tau^-$ produced in CC interactions decays back to $\nu_\tau$. This ‘regeneration chain’ $\nu_\tau\to\tau\to\nu_\tau\to\ldots$ continues until the $\nu_\tau$ and $\bar{\nu}_\tau$, as well as the $\tau^{\pm}$ leptons, reach the detector on the opposite side of the Earth. Thus the propagation of high–energy tau neutrinos through the Earth is very different from that of muon and electron neutrinos, and we now have to deal with coupled transport equations for the $\stackrel{(-)}{\nu}_\tau$ and $\tau^{\pm}$ fluxes [@ref4; @ref7; @ref8; @ref9; @ref10; @ref11; @ref12; @ref13]. Obtaining stable iterative solutions of these coupled integro–differential equations is far more involved than for the single transport equation for muon neutrinos. It is one of our main objectives to discuss the general qualitative and quantitative structure of these solutions and to present an efficient input algorithm which allows for a rather fast convergence of the iterative procedure. This applies to all present model cosmic neutrino fluxes. Moreover the $\tau^- +\tau^+$ flux, generated by the initial cosmic $\nu_\tau+\bar{\nu}_\tau$ flux while traversing the Earth, gives rise to a secondary $\bar{\nu}_\mu+\nu_\mu$ flux [@ref14] via $\tau\to\nu_\mu$ due to prompt $\tau$–decays like $\tau^-\to\nu_\tau \mu^-\bar{\nu}_\mu$.
This adds considerable contributions to the primary cosmic $\nu_\mu+\bar{\nu}_\mu$ flux and may sizeably increase the $\mu^- +\mu^+$ rates at the detector site [@ref9; @ref12], depending on the cosmic flux and nadir angle considered. Such effects require an extension of the simple single transport equation for $\stackrel{(-)}{\nu}_\mu$, and the inclusion of the appropriate prompt decay term slows the convergence of the iterative procedure considerably. The simple single transport equation for $\stackrel{(-)}{\nu}_\mu$ will be discussed for completeness in Sec. 2. Although frequently used, the excellent convergence of its iterative solutions has not been explicitly demonstrated thus far for more realistic cosmic neutrino fluxes, apart from some specific steep toy model neutrino fluxes [@ref2]. In Sec. 3 we turn to the iterative solutions of the far more complicated coupled transport equations for $\stackrel{(-)}{\nu}_\tau$ and their associated $\tau^{\pm}$ fluxes. A new semi–analytic input algorithm is presented which allows for a fast convergence of the iterative solutions. The implications for the upward–going $\mu^- +\mu^+$ event rates for underground neutrino detectors for some relevant cosmic neutrino fluxes will be briefly outlined as well. The solutions of the generalized single transport equation for muon neutrinos, by taking into account the contributions of the secondary $\nu_\mu +\bar{\nu}_\mu$ flux from prompt $\tau^{\pm}$ decays based on the calculated associated $\tau^{\pm}$ fluxes, are discussed in Sec. 4. Their implications for the expected $\mu^- +\mu^+$ event rates, as initiated by various relevant cosmic neutrino model fluxes, are presented as well. Finally, our conclusions are summarized in Sec. 5.
The transport equation of muon neutrinos ======================================== Disregarding possible contributions from other neutrino flavors for the time being, the transport equation for upward–going cosmic muon (anti)neutrinos $\stackrel{(-)}{\nu}_\mu$ passing through Earth can be written as [@ref1; @ref2; @ref3; @ref4; @ref5] $$\frac{\partial F_{\nu_\mu}(E,X)}{\partial X} = -\frac{F_{\nu_\mu}(E,X)}{\lambda_\nu(E)} +\frac{1}{\lambda_\nu(E)} \int_0^1 \frac{dy}{1-y}\, K_\nu^{\rm NC}(E,y)\, F_{\nu_\mu}(E_y,X)$$ where $F_{\nu_\mu}\equiv d\Phi_{\nu_\mu}/dE$ is the differential cosmic neutrino flux and $E_y=E/(1-y)$. The column depth $X=X(\theta)$, being the thickness of matter traversed by the upgoing leptons, depends on the nadir angle of the incident neutrino beam ($\theta=0^{\rm o}$ corresponds to a beam traversing the diameter of the Earth); it is obtained from integrating the density $\rho(r)$ of the Earth along the neutrino beam path $L'$ at a given $\theta$, $X(\theta)=\int_0^L\rho (L')dL'$ with $L= 2 R_{\oplus}\cos\theta$, $R_{\oplus}\simeq 6371$ km, denoting the position of the underground detector, and $X(\theta)$ can be found, for example, in Fig. 15 of [@ref15] in units of g/cm$^2$ = cm we. Furthermore $\lambda_\nu^{-1} = N_A \sigma_{\nu N}^{\rm tot}$, $N_A = 6.022 \times 10^{23} g^{-1}$, is the inverse neutrino interaction length where $\sigma_{\nu N}^{\rm tot} = \sigma_{\nu N}^{\rm CC} +\sigma_{\nu N}^{\rm NC}$ and $$K_\nu^{\rm NC}(E,y) = \frac{1}{\sigma_{\nu N}^{\rm tot}(E)} \,\, \frac{d\sigma_{\nu N}^{\rm NC}(E_y,y)}{dy}\, .$$ The various CC and NC $\stackrel{(-)}{\nu}\!\!N$ cross sections are calculated as in [@ref5; @ref13], with the relevant details to be found in [@ref16], utilizing the QCD inspired dynamical small–$x$ predictions for parton distributions according to the radiative parton model [@ref17]. 
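The column depth $X(\theta)$ defined above is easy to sketch numerically. In the toy computation below a crude two-layer (core/mantle) density profile stands in for the tabulated Earth density of [@ref15]; the layer radii and densities are rough round numbers, not the PREM values:

```python
import math

R_EARTH = 6371.0e5   # Earth radius in cm
R_CORE = 3480.0e5    # core radius in cm (rough)

def density(r):
    """Crude two-layer density profile in g/cm^3: dense core, lighter mantle.
    Illustrative stand-in for the tabulated Earth density profile."""
    return 11.0 if r < R_CORE else 4.5

def column_depth(theta_deg, n=5000):
    """X(theta) = integral of rho along the chord at nadir angle theta,
    in g/cm^2; the chord length is L = 2 R cos(theta)."""
    theta = math.radians(theta_deg)
    L = 2.0 * R_EARTH * math.cos(theta)
    dl = L / n
    X = 0.0
    for i in range(n):
        l = (i + 0.5) * dl
        # distance from the Earth's centre at path position l along the chord
        r = math.sqrt(R_EARTH**2 + l * l - 2.0 * R_EARTH * l * math.cos(theta))
        X += density(r) * dl
    return X

print(f"X(0 deg)  ~ {column_depth(0.0):.2e} g/cm^2")
print(f"X(50 deg) ~ {column_depth(50.0):.2e} g/cm^2")
```

Even this crude profile lands within roughly 10% of the values quoted later in the text for $\theta=0^{\rm o}$ and $\theta=50^{\rm o}$; note that for $\theta=50^{\rm o}$ the chord misses the core entirely, since its closest approach to the centre, $R_\oplus\sin\theta$, exceeds the core radius.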
Notice that conventionally fitted parton distributions at the relevant weak scale $Q^2=M_W^2$ would require additional ad hoc assumptions (see, e.g., [@ref15; @ref18]) for the necessary extrapolations into the yet unmeasured small Bjorken–$x$ region $x<10^{-3}$ ($x\simeq M_W^2/2m_N E$). The first term in (1) describes the attenuation (absorption) of neutrinos when penetrating through the Earth, and the second one their regeneration consisting of the degrading shift in their energy. For definiteness all formulae are given for an incoming neutrino beam, but similar expressions hold of course for antineutrinos. Equation (1) can be efficiently solved by the ansatz [@ref2] $$F_{\nu_\mu}(E,X) = F_{\nu_\mu}^0(E)\exp \left[-\frac{X}{\Lambda_{\nu_\mu}(E,X)}\right]$$ with an effective absorption (interaction) length $$\Lambda_{\nu_\mu}(E,X) = \frac{\lambda_\nu(E)}{1-Z_{\nu_\mu}(E,X)}$$ and where $F_{\nu_\mu}^0(E)\equiv F_{\nu_\mu}(E,X=0)$ denotes the initial cosmic neutrino flux which reaches the Earth’s surface. Depending on the assumed cosmic neutrino flux, the $Z_\nu$–factor can take any non–negative values. Its physics interpretation and the consequences for the shadowing factor $S\equiv\exp\left[ -X/\Lambda_\nu\right]$ in (3) are immediate: $Z_\nu<1$ (the only case considered in [@ref2] relevant for steeper, i.e., soft model fluxes) implies $\Lambda_\nu>\lambda_\nu>0$ thus $S<1$, i.e. the neutrino flux will be further attenuated since absorption plays the dominant role; for $Z_\nu =1$, $\Lambda_\nu=\infty$, i.e. $S=1$ which means that regeneration and absorption compensate each other; finally $Z_\nu>1$ implies $\Lambda_\nu<0$ and $S>1$, and consequently the NC regeneration in (1) can even cause an enhancement of the neutrino spectrum with respect to the initial flux $F_{\nu_\mu}^0(E)$ for certain energies and depths $X$. 
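The three regimes of the shadowing factor $S=\exp[-X/\Lambda_\nu]$ just described can be made concrete with a couple of numbers; the column depth and interaction length below are illustrative placeholders, not computed cross sections:

```python
import math

def shadow(Z, X, lam):
    """Shadowing factor S = exp(-X/Lambda) with Lambda = lam / (1 - Z)."""
    return math.exp(-X * (1.0 - Z) / lam)

X, lam = 1.1e10, 5.0e9   # column depth and interaction length in g/cm^2, illustrative
for Z in (0.3, 1.0, 1.5):
    print(f"Z = {Z}: S = {shadow(Z, X, lam):.3f}")
# Z < 1 -> S < 1 (net attenuation); Z = 1 -> S = 1 (absorption and
# regeneration compensate); Z > 1 -> S > 1 (net enhancement of the flux)
```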
Inserting (3) into (1) yields $$Z_{\nu_\mu}(E,X) =\frac{1}{X}\int_0^X dX' \int_0^1 dy\, K_\nu^{\rm NC}(E,y)\, \eta_\nu\, (E,y)\, e^{-X'D_{\nu_\mu}(E,E_y,X')}$$ with $\eta_\nu(E,y)=F_{\nu_\mu}^0(E_y)/(1-y)F_{\nu_\mu}^0(E)$ and $D_{\nu_\mu} (E,E_y,X') = \Lambda_{\nu_\mu}^{-1}(E_y,X')-\Lambda_{\nu_\mu}^{-1}(E,X')$. Using an iteration algorithm to solve for $Z_{\nu_\mu}(E,X)$, one can formally rewrite the solution of (5) after the n–th iteration as $$Z_{\nu_\mu}^{(n+1)}(E,X) = \frac{1}{X} \int_0^X dX' \int_0^1 dy\, K_\nu^{\rm NC}(E,y)\, \eta_\nu(E,y)\, e^{-X'D_{\nu_\mu}^{(n)}(E,E_y,X')}$$ where $$D_{\nu_\mu}^{(n)}(E,E_y,X') = \frac{ 1-Z_{\nu_\mu}^{(n)}(E_y,X')}{\lambda_\nu(E_y)} - \frac{1-Z_{\nu_\mu}^{(n)}(E,X')}{\lambda_\nu(E)} \,\, .$$ The reason why this iteration is expected to converge very fast is as follows: the kernel $K_\nu^{\rm NC}$ peaks very strongly [@ref2; @ref19] at $y=0$ and $y=1$, with the contribution at $y\simeq 1$ being, however, exponentially suppressed in (6); thus the main contribution to the integral over $y$ in (6) comes from the region around $y\simeq 0$ where $D_{\nu_\mu}(E, E_y, X')\to 0$ as $y\to 0$. Therefore the iteration should be robust with respect to choosing the $n=0$ approximation [@ref2]. The most simple input choice is $Z_{\nu_\mu}^{(0)}(E,X')=0$ in (7). For this case the analytic $X'$–integration in (6) yields $$Z_{\nu_\mu}^{(1)}(E,X)=\int_0^1 dy K_\nu^{\rm NC}(E,y)\, \eta_\nu (E,y)\, \frac{1-e^{-XD_\nu(E,E_y)}}{XD_\nu(E,E_y)}$$ with $$D_\nu(E,E_y) \equiv D_{\nu_\mu}^{(0)}(E,E_y,X') = \frac{1}{\lambda_\nu(E_y)} - \frac{1}{\lambda_\nu(E)}\,\, .$$ With the $n=1$ solution in (8) at hand, it is now straightforward to obtain iterations in higher orders, for example, for $n=2$ by inserting (8) into (7) gives $Z_{\nu_\mu}^{(2)}$ in (6). Representative cosmic neutrino fluxes of some hypothesized sources are displayed in Fig. 1 which we shall partly use for all our subsequent calculations. 
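To make the first-iteration formula (8) concrete, the following sketch evaluates $Z^{(1)}(E,X)$ numerically for a toy setup; the kernel shape, the power-law scaling of the interaction length, and all normalizations are hypothetical stand-ins, not the cross sections used in the text:

```python
import math

# Toy inputs -- every number here is a hypothetical stand-in:
# flux F0 ~ E^-ALPHA, interaction length falling like E^-0.36
# (a rough parametrization of the high-energy growth of sigma_nuN).
ALPHA = 2.0

def lam(E):
    """Toy neutrino interaction length in g/cm^2."""
    return 1.3e12 * (E / 1.0e3) ** (-0.36)

def K_nc(y):
    """Toy NC kernel, peaked towards y = 0, integrating to 0.3."""
    return 0.3 * 3.0 * (1.0 - y) ** 2

def eta(y):
    """eta(E, y) = F0(E_y) / ((1-y) F0(E)) for a pure power law F0 ~ E^-ALPHA."""
    return (1.0 - y) ** (ALPHA - 1.0)

def Z1(E, X, n=4000):
    """First iteration Z^(1)(E, X) of Eq. (8): with Z^(0) = 0 the X'
    integration is analytic, leaving a single numerical y integral."""
    total, dy = 0.0, 1.0 / n
    for i in range(n):
        y = (i + 0.5) * dy
        E_y = E / (1.0 - y)
        D = 1.0 / lam(E_y) - 1.0 / lam(E)       # Eq. (9)
        damp = (1.0 - math.exp(-X * D)) / (X * D) if X * D > 1e-12 else 1.0
        total += K_nc(y) * eta(y) * damp * dy
    return total

print(f"Z^(1)(1e6 GeV, X = 1.1e10 g/cm^2) = {Z1(1.0e6, 1.1e10):.3f}")
```

The damping factor $(1-e^{-XD})/(XD)$ tends to 1 as $y\to 0$, which is exactly why the iteration is so insensitive to the choice of $Z^{(0)}$: the region that dominates the integral is the one where the input dependence drops out.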
Recent diffuse neutrino flux upper limits of AMANDA [@ref20; @ref21] are shown by the bars with arrows – the latter indicate the still allowed region. Although the huge flux from active galactic nuclei of Stecker and Salamon (AGN–SS) [@ref22] has already been excluded, we shall use it merely as a theoretical playground due to its unique spectrum at lower energies where $F_{\nu_\mu}^0(E)\sim$ const. for $E\lesssim 10^5$ GeV. On the other hand the AGN–M95 flux [@ref23] is still compatible (although slightly in conflict) with the AMANDA upper bound, as are the gamma ray burst (GRB–WB) [@ref24] and topological defect (TD–SLBY) [@ref25] fluxes. These latter three fluxes will be used for our ‘realistic’ model calculations. The TD–SLSC [@ref26] and $Z$–burst [@ref27] fluxes are shown just for illustration since they are too small to be tested with upward–going event rates [@ref5]. Note that the initial cosmic (anti)neutrino fluxes $F_{\nu,\bar{\nu}}^0(E)$ in (3) which reach the Earth’s surface are given by $F_{\nu_\mu}^0 = F_{\bar{\nu}_\mu}^0 = F_{\nu_\tau}^0 = F_{\bar{\nu}_\tau}^0 = \frac{1}{4}d\Phi/dE$ with $\Phi$ being the cosmic $\nu_\mu+\bar{\nu}_\mu$ flux at the production site in Fig. 1. For a better comparison of our quantitative results with the ones obtained in the literature, we also employ two generic initial fluxes incident on the surface of the Earth at a nadir angle $\theta =0^{\rm o}$ of the form [@ref4; @ref7] $$\begin{aligned} F_{\nu_\mu+\bar{\nu}_\mu}^0(E) & = & N_1 E^{-1}(1+E/E_0)^{-2},\quad E_0 = 10^8\,\,{\rm GeV}\\ F_{\nu_\mu+\bar{\nu}_\mu}^0(E) & = & N_2 E^{-2}\end{aligned}$$ with adjustable normalization factors $N_i$, for example, $N_1=\frac{1}{2} \times 10^{-13}/$(cm$^2$ sr s) and $N_2=\frac{1}{2}\times 10^{-7}$ GeV/(cm$^2$ sr s). Notice that the generic $E^{-1}$ energy dependence is representative for the TD and $Z$–burst fluxes in Fig. 1 for $E\gtrsim 10^7$ GeV; and also for the GRB–WB flux for $E\lesssim 10^5$ GeV.
Furthermore the latter GRB–WB flux behaves like $E^{-2}$ in (11) for $10^5 < E\lesssim 10^7$ GeV, where such a power spectrum with index $-2$ is typical for shock acceleration (see, e.g., [@ref21]). Our results for $Z_{\nu_\mu}^{(1)}$ and $Z_{\nu_\mu}^{(2)}$ are shown in Figs. 2 and 3 for two typical values of the nadir angle, $\theta =0^{\rm o}$ ($X=1.1\times 10^{10}$ cm we) and $\theta = 50^{\rm o}$ ($X=3.6\times 10^{9}$ cm we). The iteration converges very fast since in general the maximum difference between $Z_{\nu_\mu}^{(1)}$ and $Z_{\nu_\mu}^{(2)}$ is less than about 5%, $|Z_{\nu_\mu}^{(2)}/ Z_{\nu_\mu}^{(1)}-1|<0.05$, and moreover $|Z_{\nu_\mu}^{(3)}/ Z_{\nu_\mu}^{(2)}-1|<0.005$. Thus the first $n=1$ iteration is already sufficiently stable and suffices for [*all*]{} cosmic neutrino fluxes considered at present [@ref19]. Notice that for larger $\theta$ (smaller $X$) the difference between $Z_{\nu_\mu}^{(2)}$ and $Z_{\nu_\mu}^{(1)}$ decreases and therefore the stability increases. The results for $Z_{\bar{\nu}_\mu}$ are similar but $Z_{\bar{\nu}_\mu} >Z_{\nu_\mu}$ for $E\lesssim 10^6$ GeV where $\lambda_{\bar{\nu}}>\lambda_{\nu}$. The resulting $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ fluxes follow from (3) and can be found in [@ref4; @ref5; @ref7].

The transport equations of tau neutrinos and taus
=================================================

Apart from the absorption (attenuation) due to $\sigma_{\nu N}^{\rm tot}$ and regeneration due to $\sigma_{\nu N}^{\rm NC}$ in (1), for upward–going cosmic tau (anti)neutrinos $\stackrel{(-)}{\nu}_\tau$, it is important to take into account the regeneration from the $\tau^{\pm}$ decays as well as the contributions from the CC tau interactions.
The tau neutrino and tau fluxes then satisfy the following coupled transport equations: $$\begin{aligned} \frac{\partial F_{\nu_{\tau}}(E,X)}{\partial X} & = & -\frac{F_{\nu_{\tau}}(E,X)}{\lambda_{\nu}(E)} + \frac{1}{\lambda_{\nu}(E)} \int_0^1\frac{dy}{1-y}\, K_{\nu}^{\rm NC}(E,y)\, F_{\nu_{\tau}}(E_y,X) \nonumber\\ & & + \int_0^1\frac{dy}{1-y}\, K_{\tau}(E,y)F_{\tau}(E_y,X)\\ \nonumber\\ \frac{\partial F_{\tau}(E,X)}{\partial X} & = & - \frac{F_{\tau}(E,X)}{\hat{\lambda}(E)} + \frac{\partial\left[\gamma(E)F_{\tau}(E,X)\right]}{\partial E}\nonumber\\ & & + \frac{1}{\lambda_{\nu}(E)} \int_0^1 \frac{dy}{1-y}\, K_{\nu}^{\rm CC} (E,y)\, F_{\nu_{\tau}}(E_y,X)\end{aligned}$$ where $F_{\nu_\tau}\equiv d\Phi_{\nu_\tau}/dE$ and $F_\tau\equiv d\Phi_\tau/dE$ are the differential energy spectra (fluxes) of tau (anti)neutrinos and tau leptons and the initial fluxes at the surface of the Earth ($X=0$) being given by $F_{\nu_\tau}^0(E) = F_{\bar{\nu}_\tau}^0(E) = \frac{1}{4} d\Phi/dE$ with $\Phi$ being the $\nu_\mu +\bar{\nu}_\mu$ flux at the cosmic production site in Fig. 1. The cross section kernel $K_{\nu}^{\rm NC}$ is defined in (2) and a similar expression holds for $K_\nu^{\rm CC}$. Furthermore $$K_{\tau}(E,y)= \frac{1}{\lambda_{\tau}(E)}\, K_{\tau}^{\rm CC}(E,y) + \frac{1}{\lambda_{\tau}^{\rm dec}(E)}\, K_{\tau}^{\rm dec}(E,y)$$ where $$K_{\tau}^{\rm CC}(E,y) = \frac{1}{\sigma_{\tau N}^{\rm tot}(E)}\,\, \frac{d\sigma_{\tau N}^{\rm CC}(E_y,y)}{dy},\quad\quad K_{\tau}^{\rm dec}(E,y) =\frac{1}{\Gamma_{\tau}^{\rm tot}(E)}\,\, \frac{d\Gamma_{\tau\to \nu_{\tau}X'}(E_y,y)}{dy}\\$$ and $\lambda_\tau^{-1} = N_A\sigma_{\tau N}^{\rm tot} = N_A(\sigma_{\tau N}^{\rm CC} +\sigma_{\tau N}^{\rm NC})$, and $\hat{\lambda}^{-1} = (\lambda_{\tau}^{\rm CC})^{-1} + (\lambda_{\tau}^{\rm dec})^{-1}$ with $(\lambda_\tau^{\rm CC})^{-1} = N_A\sigma_{\tau N}^{\rm CC}$ in (13). 
The decay length of the $\tau^{\pm}$ is $\lambda_\tau^{\rm dec}(E,X,\theta)=(E/m_\tau)c\tau_\tau \rho(X,\theta)$ with $m_\tau = 1.777$ GeV, $c\tau_\tau = 87.11$ $\mu$m and $\rho$ denoting the Earth’s density (see, e.g., [@ref15]). Furthermore, since $1/\Gamma_\tau^{\rm tot}(E) = (E/m_\tau)\tau_\tau$, the $\tau$–decay distribution in (14) becomes $K_\tau^{\rm dec}(E,y) = (1-y)\, dn(z)/dy$ with $z\equiv E_{\nu_\tau}/E_\tau=E/E_y=1-y$ and [@ref7; @ref28] $$\frac{dn(z)}{dy} = \sum_i B_i\left[g_0^i(z)+Pg_1^i(z)\right]$$ with the polarization $P=\pm 1$ of the decaying $\tau^{\pm}$. The $\tau\to \nu_\tau X'$ branching fractions $B_i$ into the decay channel $i$ and the functions $g_{0,1}^i(z)$ are given in Table I of [@ref7]. The decay channels $i$ considered are $\tau\to\nu_\tau \mu\nu_\mu$, $\tau\to\nu_\tau\pi$, $\tau\to\nu_\tau\rho$, $\tau\to\nu_\tau a_1$ and $\tau\to\nu_\tau X$ which have branching fractions of 0.18, 0.11, 0.26, 0.13 and 0.13, respectively. The lepton energy–loss is treated continuously [@ref29; @ref30; @ref31] by the term proportional to $\gamma(E)$ in (13). Alternatively, the average energy–loss can be treated separately (stochastically) [@ref32; @ref33], i.e., not including the term proportional to $\gamma(E)$ in (13) but using instead $-dE/dX=\gamma(E) =\alpha+\beta E$. We shall compare these two approaches for taus and muons toward the end of this Section. The most general solution of Eqs. (12) and (13) has been presented in [@ref10; @ref13], and in the context of atmospheric muons in [@ref31]. For the time being, however, we disregard the $\gamma$-term in (13) since observable non–negligible upward–going event rates are obtained only for energies $E<10^8$ GeV [@ref7; @ref13] where the energy–loss of the taus can be neglected [@ref10; @ref32; @ref33; @ref34; @ref35]. 
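The numbers behind the statement $\lambda_\tau\gg\lambda_\tau^{\rm dec}$ are easy to check from the decay-length formula just given, using $m_\tau = 1.777$ GeV and $c\tau_\tau = 87.11\,\mu$m; the rock density used below is an illustrative round value:

```python
m_tau = 1.777          # tau mass in GeV
c_tau_tau = 87.11e-4   # c * tau_tau in cm (87.11 micrometers)

def tau_decay_length_cm(E):
    """Lab-frame decay length gamma * c * tau_tau, with gamma = E / m_tau."""
    return (E / m_tau) * c_tau_tau

def tau_decay_column_depth(E, rho):
    """lambda_tau_dec = (E/m_tau) c tau_tau rho, in column-depth units g/cm^2."""
    return tau_decay_length_cm(E) * rho

for E in (1.0e6, 1.0e8):
    km = tau_decay_length_cm(E) / 1.0e5
    depth = tau_decay_column_depth(E, 4.0)  # rho = 4 g/cm^3, illustrative
    print(f"E = {E:.0e} GeV: decay length ~ {km:.3f} km ~ {depth:.1e} g/cm^2")
```

Even at $E=10^8$ GeV the decay length is only a few kilometres (of order $10^6$ g/cm$^2$ in column depth), far below typical neutrino interaction lengths at these energies, which is why $\hat{\lambda}^{-1}\simeq(\lambda_\tau^{\rm dec})^{-1}$ is such a good approximation below $10^8$ GeV.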
In the relevant energy region below $10^8$ GeV, the tau–lepton interaction length is much larger than the decay length of the $\tau$ (see, e.g., [@ref33] and below), $\lambda_\tau(E)\gg \lambda_\tau^{\rm dec}(E)$, i.e., $K_\tau \simeq K_\tau^{\rm dec}/\lambda_\tau^{\rm dec}$ in (14) and $\hat{\lambda}_\tau^{-1} \simeq (\lambda_\tau^{\rm dec})^{-1}$ in (13). Solving (12) and (13) with a similar ansatz as for muon neutrinos in (3), we write $$F_{\nu_\tau}(E,X) = F_{\nu_\tau}^0(E)\exp \left[-\frac{X}{\Lambda_{\nu_\tau}(E,X)}\right]$$ with an effective interaction (absorption) length $$\Lambda_{\nu_\tau}(E,X) = \frac{\lambda_\nu(E)}{1-Z(E,X)}$$ where $Z=Z_{\nu_\tau}+Z_\tau$. Inserting (16) into (12) and (13) yields [@ref4; @ref13] $$Z_{\nu_\tau}(E,X) = \frac{1}{X}\int_0^X dX'\int_0^1 dy\, K_\nu^{\rm NC} (E,y)\, \eta_\nu(E,y)\, e^{-X'D_{\nu_\tau}(E,E_y,X')}$$ with $\eta_\nu$ as in (5) since $F_{\nu_\mu}^0=F_{\nu_\tau}^0$ and $D_{\nu_\tau}(E,E_y,X')=\Lambda_{\nu_\tau}^{-1}(E_y,X')- \Lambda_{\nu_\tau}^{-1}(E,X')$, and $$Z_\tau(E,X) = \frac{\lambda_\nu(E)}{X}\int_0^X dX'\int_0^1 dy\, \frac{K_\tau^{\rm dec}(E,y)}{\lambda_\tau^{\rm dec}(E,X')}\, F_\tau(E_y,X')\, \frac{\eta_\nu(E,y)}{F_{\nu_\tau}^0(E_y)}\, e^{X'/\Lambda_{\nu_\tau}(E,X')}$$ where the obvious dependence of $\lambda_\tau^{\rm dec}$ on $\theta'= \theta(X')$ has been suppressed and $$\begin{aligned} F_\tau(E_y,X') & = & \frac{F_{\nu_\tau}^0(E_y)}{\lambda_\nu(E_y)} \int_0^{X'} dX'' \int_0^1 dy'\, K_\nu^{\rm CC} (E_y,y')\, \eta_\nu(E_y,y')\, e^{-X''/\Lambda_{\nu_\tau}(E_{yy'},X'')}\nonumber\\ & & \times \exp \left[-\int_{X''}^{X'} dX'''/\lambda_\tau^{\rm dec} (E_y,X''')\right]\end{aligned}$$ with $E_{yy'}=E_y/(1-y')=E/(1-y)(1-y')$. Notice that the tau–flux $F_\tau$ is generated by the CC interactions of the initial $F_{\nu_\tau}^0$ flux and attenuated in addition due to its decay. 
In order to solve for $Z(E,X)$ iteratively as for the $\stackrel{(-)}{\nu}_\mu$ fluxes in the previous Section, one has to make a proper choice for the initial input. Due to the $D_{\nu_\tau}$ function in the exponential in (18) with $D_{\nu_\tau}\to 0$ in the relevant $y\to 0$ region, the iterative result for $Z_{\nu_\tau}(E,X)$ is very robust with respect to the initial input choice, as discussed after (7). Therefore we use again $Z_{\nu_\tau}^{(0)}(E,X')=0$ on the rhs of (18). In the case of $Z_\tau(E,X)$ in (19) there is, however, no equivalent exponential as in (18) and thus the convergence of the iterative procedure becomes sensitive to the input choice. It turns out that a convenient and efficient input choice can be obtained by implementing the peculiar $E$ and $X$ dependence as implied by the $\tau$–decay contributions in (19) from the very beginning. This can be achieved by choosing a vanishing $Z$–factor on the rhs of $Z_\tau$ in (19), in which case the $X'$–integral can be performed analytically [@ref36] and the input for the total $Z$–factor becomes [@ref13] $$\begin{aligned} Z^{(0)}(E,X) & = & \frac{\lambda_\nu(E)}{\lambda_\tau^{\rm dec}(E,\theta)} \int_0^1 dy \int_0^1 dy'\, K_\tau^{\rm dec}(E,y)\, K_\nu^{\rm CC}(E_y,y')\, \lambda_\nu^{-1}\, (E_y)\, \eta_\nu(E,y)\, \eta_\nu(E_y,y')\nonumber\\ & & \times \frac{1}{XD_{\nu\tau}(E_y,E_{yy'})}\, \Big\{ \frac{1}{D_{\tau\nu}(E,E_y)}\left(1-e^{-XD_{\tau\nu}(E,E_y)}\right) \nonumber\\ & & -\frac{1}{D_\nu(E,E_{yy'})} \left(1-e^{-XD_\nu(E,E_{yy'})}\right)\Big\} \nonumber\\ & \simeq & \lambda_\nu(E)\int_0^1 \frac{dy}{1-y} \int_0^1 dy'\, K_\tau^{\rm dec}(E,y)\, K_\nu^{\rm CC}(E_y,y')\, \lambda_\nu^{-1}(E_y)\, \eta_\nu(E,y)\, \eta_\nu(E_y,y')\nonumber\\ & & \times \frac{1}{XD_\nu(E,E_{yy'})}\, \left(1-e^{-XD_\nu(E,E_{yy'})}\right)\end{aligned}$$ where the last approximation is due to $\lambda_\tau^{\rm dec}\ll \lambda_\nu$ in the relevant energy region $E<10^8$ GeV, i.e., $Z^{(0)}$ becomes practically independent 
of the decay length $\lambda_\tau^{\rm dec}$. Furthermore $D_\nu(E,E_y)$ is given in (9), $D_{\nu\tau}(E,E_y)=1/\lambda_\nu(E_y)-1/\lambda_\tau^{\rm dec}(E,\theta)$ and $D_{\tau\nu}(E,E_y)= -D_{\nu\tau}(E_y,E)$. We have checked that this input guarantees, for all cosmic neutrino fluxes considered at present, a faster convergence of the iterations than choosing [@ref4] the solution for the $\nu_\mu$ flux as an input, $Z^{(0)}=Z_{\nu_\mu}^{(1)}$ with $Z_{\nu_\mu}^{(1)}$ given in (8). Moreover, choosing [@ref10] a vanishing initial input, $Z^{(0)}=0$, as was perfectly sufficient for the $\nu_\mu$ fluxes, results in the worst, i.e., slowest convergence of the iterative procedure. One can now rewrite the solution for $Z(E,X)$ in (18) and (19) after the n–th iteration as [@ref36] $$\begin{aligned} Z^{(n+1)}(E,X) & = & \frac{1}{X}\int_0^X dX'\int_0^1 dy\, K_\nu^{\rm NC} (E,y)\, \eta_\nu(E,y)\, e^{-X'D_{\nu_\tau}^{(n)}(E,E_y,X')} \nonumber\\ & & +\, \frac{\lambda_\nu(E)}{\lambda_\tau^{\rm dec}(E,\theta)}\, \frac{1}{X} \int_0^X dX' \int_0^1 dy \, K_\tau^{\rm dec}(E,y)\, \eta_\nu(E,y) \lambda_\nu^{-1}(E_y) \nonumber\\ & & \times \, e^{-X'/\lambda_\tau^{\rm dec}(E_y,\theta)} \, e^{X'/\Lambda_{\nu_\tau}^{(n)}(E,X')} \int_0^{X'} dX'' \int_0^1 dy'\, K_\nu^{\rm CC}(E_y,y')\, \eta_\nu(E_y,y') \nonumber\\ & & \times \, e^{-X''/\Lambda_{\nu_\tau}^{(n)}(E_{yy'},X'')}\, e^{X''/\lambda_\tau^{\rm dec}(E_y,\theta)}\end{aligned}$$ where $\Lambda_{\nu_\tau}^{(n)}(E,X') = \lambda_\nu(E)/[1-Z^{(n)}(E,X')]$, i.e., $$D_{\nu_\tau}^{(n)}(E,E_y,X') = \frac{1-Z^{(n)}(E_y,X')}{\lambda_\nu(E_y)} \, - \, \frac{1-Z^{(n)}(E,X')}{\lambda_\nu(E)}\,\, .$$ Accordingly, the iterations have to be started with our initial $n=0$ input in (21). 
After having obtained the final convergent result for $Z^{(n+1)}$, the final $\nu_\tau$ flux $F_{\nu_\tau}^{(n+1)}(E,X)$ follows from (16), $$F_{\nu_\tau}^{(n+1)}(E,X) = F_{\nu_\tau}^0(E)\, e^{-X/\Lambda_{\nu_\tau}^{(n+1)}(E,X)} \,\, ,$$ which in turn gives the $\tau$–flux $$F_\tau^{(n+1)}(E,X) = \frac{1}{\lambda_\nu(E)}\, e^{-X/\lambda_\tau^{\rm dec}(E,\theta)} \int_0^X dX' \int_0^1 \frac{dy}{1-y}\, K_\nu^{\rm CC}(E,y)\, F_{\nu_\tau}^{(n+1)}(E_y,X')\, e^{X'/\lambda_\tau^{\rm dec}(E,\theta)}\, \, .$$ Similar expressions hold for antineutrinos as well. The iterative results for the total $Z$–factor in (17) are shown in Figs. 4 and 5 where the initial input $Z^{(0)}$, as given in (21), is displayed by the dotted curves. For the generic initial $E^{-1}$ and $E^{-2}$ fluxes in (10) and (11) we also show in Fig. 4 the results after the third iteration, $Z^{(3)}$, in order to illustrate the rate of convergence as well as its dependence on the nadir angle $\theta=0^{\rm o}$ ($X=1.1\times 10^5$ km we) and $\theta = 50^{\rm o}$ ($X = 3.6\times 10^4$ km we). In general it turns out that already the second ($n=2$) iteration yields sufficiently accurate results, $Z^{(2)}$, provided one uses as input $Z^{(0)}$ in (21) as implied by the $\tau$–decay. This holds also for the rather hard initial $E^{-1}$ flux in Fig. 4 and the AGN–SS flux in Fig. 5 which imply large $Z$–factors, $Z\gg 1$. This is so because the maximum difference between the results of the next $n=3$ iteration $Z^{(3)}$ and $Z^{(2)}$ is less than about 5% for all relevant initial cosmic neutrino fluxes. An accuracy of less than about 5% is certainly sufficient in view of the uncertainties inherent to models of cosmic neutrino fluxes (cf. Fig. 1). Obviously the iterative convergence improves even more for larger values of $\theta$, i.e., smaller depths $X$, as can be deduced from Fig. 4. It should be emphasized that, in contrast to the case of muon neutrinos in Sec. 
2, the [*first*]{} $n=1$ iterative results for $Z^{(1)}$ are [*not*]{} sufficiently accurate, as can be seen from Figs. 4 and 5 by comparing the dashed curves ($Z^{(1)}$) with the solid ones ($Z^{(2)}$): in some cases (harder initial fluxes) $Z^{(2)}$ becomes larger than $Z^{(1)}$ by about 20%. The results for $\bar{\nu}_\tau$, $Z=Z_{\bar{\nu}_\tau}+Z_{\tau^+}$, are again similar but larger than for $\nu_\tau$, $Z=Z_{\nu_\tau}+Z_{\tau^-}$, for $E\lesssim 10^6$ GeV where $\lambda_{\bar{\nu}}>\lambda_\nu$. Inserting the various iterative solutions of Figs. 4 and 5 into (16) we obtain the $\nu_\tau$ fluxes for a given n–th iteration, $F_{\nu_\tau}^{(n)}(E,X)$. The ratios of these fluxes for two consecutive iterations, $F_{\nu_\tau}^{(n+1)}/F_{\nu_\tau}^{(n)}$, are displayed in Figs. 6 and 7. Whereas the first iteration relative to the zeroth input, $F_{\nu_\tau}^{(1)}/F_{\nu_\tau}^{(0)}$, is way off the final result, as shown by the dashed curves, the second iteration already suffices for a sufficiently accurate result, as illustrated by $F_{\nu_\tau}^{(2)}/F_{\nu_\tau}^{(1)}$ (solid curves). This is supported by the fact that an additional third iteration results in $|F_{\nu_\tau}^{(3)}/F_{\nu_\tau}^{(2)}-1|<0.05$ for all relevant initial cosmic neutrino fluxes considered. (This stability does [*not*]{} hold [@ref7] for initial fluxes $F_{\nu_\mu +\bar{\nu}_\mu}^0\sim E^{-1}$ without an appropriate $E^{-2}$ cutoff in (10) at very high energies, or for fluxes which are partly even flatter than $E^{-1}$ up to the highest energies, like the $Z$–burst flux in Fig. 1. This instability is caused by the fact that $\eta_\nu(E,y) =1$ for $F_\nu^0\sim E^{-1}$ in $Z_\tau$ in (19); thus the huge spike of $d\sigma_{\nu N}^{\rm CC}/dy$ at $y\to 1$ in (19) and (20) does not get damped by powers of $(1-y)$ – as opposed to, e.g., $F_\nu^0 \sim E^{-2}$ where $\eta_\nu= 1-y$.
This, however, is of no concern for $Z_{\nu_\mu}$ and $Z_{\nu_\tau}$ in (5) and (18), respectively, since there the integrands are exponentially suppressed as $y\to 1$ via $\exp\left[ -X'D_\nu(E,E_y,X)\right]$. Notice that the $Z$–burst flux is far too small for being tested with upward–going muon events [@ref5; @ref13].) Therefore we consider $F_{\stackrel{(-)}{\nu}_\tau}^{(2)}(E,X)$ as our final result. It is furthermore obvious from Figs. 6 and 7 that the convergence of the iterative procedure strongly improves for increasing values of $\theta$ (decreasing $X$) as illustrated for $\theta =50^{\rm o}$. The resulting total $\nu_\tau+\bar{\nu}_\tau$ fluxes for the various initial total cosmic fluxes $F_{\nu_\tau+\bar{\nu}_\tau}^0(E)$ are shown in Figs. 8 and 9 for three typical nadir angles $\theta=0^{\rm o}$ ($X=1.1\times 10^5$ km we), $\theta =30^{\rm o}$ ($X=6.8\times 10^4$ km we) and $\theta =60^{\rm o}$ ($X=2.6 \times 10^4$ km we). The typical enhancement (‘bump’) of the attenuated and regenerated $\stackrel{(-)}{\nu}_\tau$ flux around $10^4$ – $10^5$ GeV at small values of $\theta$, which is prominent for harder (flatter) initial fluxes like $F_{\stackrel{(-)}{\nu}_\tau}^0\sim E^{-1}$ in Fig. 8, and which is absent for $\stackrel{(-)}{\nu}_\mu$ fluxes, agrees with the original results of [@ref4; @ref7; @ref9], as was also confirmed by a Monte Carlo simulation [@ref34]. Such an enhancement is less pronounced for the GRB–WB and TD–SLBY fluxes in Fig. 9, and is absent for steeper fluxes like for the $E^{-2}$ one in Fig. 8 and for the even steeper AGN–M95 flux in Fig. 9. Regeneration is responsible for an even more pronounced enhancement below $10^4$ GeV for the AGN–SS flux in Fig. 9 since this flux is particularly hard below $10^5$ GeV (cf. Fig. 1). This latter result is shown mainly for theoretical curiosity. 
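The damping argument in the parenthetical remark above can be checked numerically. Assuming the standard power-law form $\eta_\nu(E,y)=F_\nu^0(E/(1-y))/[(1-y)F_\nu^0(E)]$ — an assumption on our part, but one that reproduces both special cases quoted in the text ($\eta_\nu=1$ for an $E^{-1}$ flux, $\eta_\nu=1-y$ for $E^{-2}$) — a toy $1/(1-y)$ spike standing in for $d\sigma_{\nu N}^{\rm CC}/dy$ keeps growing with the $y\to 1$ cutoff for an $E^{-1}$ flux but is fully tamed for $E^{-2}$:

```python
import numpy as np

def trapz(f, y):
    """Simple trapezoidal rule (avoids version-dependent NumPy names)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

def eta(gamma, y):
    """Assumed flux ratio for F0 ~ E^-gamma (see lead-in):
    eta = F0(E/(1-y)) / ((1-y) F0(E)) = (1-y)**(gamma-1)."""
    return (1.0 - y) ** (gamma - 1.0)

eps = 1e-6                              # y -> 1 cutoff
y = np.linspace(0.0, 1.0 - eps, 200001)
spike = 1.0 / (1.0 - y)                 # toy stand-in for the dsigma/dy spike

I_hard = trapz(spike * eta(1.0, y), y)  # E^-1 flux: undamped, ~ ln(1/eps)
I_soft = trapz(spike * eta(2.0, y), y)  # E^-2 flux: damped to an O(1) value

assert I_soft < 1.01 and I_hard > 10.0
```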
From now on we shall disregard the cosmic AGN–SS flux since it is in serious conflict with recent experimental upper bounds [@ref20; @ref21] as can be seen in Fig. 1. The results for the absolute total $\nu_\tau+\bar{\nu}_\tau$ and $\tau^-+\tau^+$ fluxes, arising from the initial cosmic $\nu_\tau+\bar{\nu}_\tau$ fluxes, are presented in Figs. 10 and 11. The $\nu_\tau+\bar{\nu}_\tau$ results correspond of course to the relative ratios shown in Figs. 8 and 9. Besides the generic initial fluxes in (10) and (11), we have in addition used only those initial cosmic fluxes in Fig. 1 which give rise to large enough upward–going $\mu^- +\mu^+$ event rates [@ref5; @ref13] measurable in present and future experiments. Note that the $\tau^- +\tau^+$ fluxes in Figs. 10 and 11 at the detector site, despite being (superficially) suppressed with respect to the $\nu_\tau +\bar{\nu}_\tau$ fluxes, sizeably contribute to the upward–going $\mu^- +\mu^+$ and shower event rates [@ref13]. This is due to the fact that the $\tau$ fluxes do not require additional weak interactions for producing $\mu$–events in contrast to the $\nu_\tau$ fluxes. Because of the prompt $\tau^{\pm}$ decays, they furthermore give rise to a sizeable secondary $\bar{\nu}_\mu+\nu_\mu$ flux contribution to the original cosmic $\nu_\mu+\bar{\nu}_\mu$ flux, which will be discussed in Sec. 4. In [@ref13] a semi–analytic solution of the coupled transport equations (12) and (13) has been presented and used, which was obtained from the first $n=1$ iteration starting with a vanishing input $Z^0(E,X)=0$, instead of using (21). As we have seen, this approach does not provide sufficiently accurate results, despite opposite claims in the literature [@ref10] (the first $n=1$ iteration is sufficient only for very large values of $\theta$ close to $90^{\rm o}$, i.e., very small values of $X/\rho={\cal{O}}$(100 km), relevant for neutrinos skimming the Earth’s crust). 
This $n=1$ iterative solution of [@ref13] underestimates the correct results in some extreme cases (like for the hard initial $E^{-1}$ flux at $\theta=0^{\rm o}$) by as much as 40%. On the other hand, for increasing values of $\theta$ this discrepancy disappears very quickly. Consequently, some of the [*total*]{} nadir–angle–integrated upward–going $\mu^- +\mu^+$ event rates calculated in [@ref13] will be increased by less than about 2%. This is due to the fact that 80% of the $\mu^- +\mu^+$ rates are initiated by the $\nu_\mu+\bar{\nu}_\mu$ flux and only about 20% derives from the $\nu_\tau +\bar{\nu}_\tau$ and the associated $\tau^- +\tau^+$ fluxes. For completeness we present in Table 1 the correct expectations for the total $\mu^-+\mu^+$ event rates for the relevant dominant initial cosmic fluxes in Fig. 1, using Eqs. (12) and (14) of [@ref13] for calculating the rates initiated by the $\nu_\tau +\bar{\nu}_\tau$ and $\tau^- +\tau^+$ fluxes, respectively. Finally, it is also of interest to compare the tau–lepton range as given by our semi–analytic approach of treating the energy loss continuously in (13), with the one obtained by a stochastic treatment of the lepton energy loss (where the $\gamma(E)$ term in (13) is absent, i.e., the energy loss is treated separately, and the relevant survival probability $P(E,X)$ is calculated using Monte Carlo simulations, e.g., [@ref32; @ref35; @ref37; @ref38]). To do this, we can drop the inhomogeneous neutrino term in (13) and the resulting homogeneous transport equation for $F_\tau(E,X)$ can be easily solved [@ref13]: $$F_\tau(E,X) = F_\tau(\bar{E}(X,E),0)\exp\left[ -\int_0^X A(\bar{E}(X',E))\, dX'\right]$$ with $A(E)\equiv 1/\hat{\lambda}(E)-\partial\gamma(E)/\partial E$, $\hat{\lambda}^{-1}\equiv(\lambda_\tau^{\rm CC})^{-1}+ (\lambda_\tau^{\rm dec})^{-1}$, and where $d\bar{E}(X,E)/dX= \gamma(\bar{E})$ with $\bar{E}(0,E)=E$. 
The survival probability $P(E_0,X)$ for a tau–lepton with an initial energy $E_0$ at $X=0$ is then defined by the ratio of the energy–integrated differential fluxes $F_\tau$ at $X$ and $X=0$: assuming, as usual, a monoenergetic initial flux in (26), $F_\tau(\bar{E}(X,E),0)\sim\delta(E-E_0)$, one obtains [@ref10] $$P(E_0,X) = \frac{\gamma(\tilde{E}_0)}{\gamma(E_0)}\, \exp \left[-\int_0^X A(\tilde{E}_0(X',E_0))\, dX'\right]$$ where we have used [@ref13] $d\bar{E}/dE =\gamma(\bar{E})/\gamma(E)$ and $d\tilde{E}_0(X,E_0)/dX=-\gamma(\tilde{E}_0)$ with $\tilde{E}_0(0,E_0)=E_0$. The (tau) lepton range for an incident lepton energy $E$ and a final energy $\tilde{E}(X,E)$ required to be greater than $E^{\rm min}$ at the detector, say, is then defined by $$R(E) = \int_0^{X_{\rm max}} P(E,X)\, dX$$ where we have substituted $E$ for $E_0$ in (27) and the upper limit of integration $X_{\rm max}$ derives from $\tilde{E}(X,E)\geq E^{\rm min}$. (Notice that for energy–independent values of $\alpha$ and $\beta$ in $\gamma(E)=\alpha+\beta E$ one simply gets $X_{\rm max} = \frac{1}{\beta}\ln \frac{\alpha+\beta E} {\alpha +\beta E^{\rm min}}$.) For calculating the $\tau$–lepton range $R_\tau(E)$ we use in $\gamma_\tau$ for the ionization energy loss [@ref39; @ref40] $\alpha_\tau \simeq 2.0 \times 10^{-3}$ GeV (cm we)$^{-1}$ and for the radiative energy loss (through bremsstrahlung, pair production and photonuclear interactions) [@ref10] $\beta_\tau=\beta_\tau(E)\simeq[0.16+0.6(E/10^9$ GeV)$^{0.2}] \times 10^{-6}$ (cm we)$^{-1}$ which parametrizes explicit model calculations [@ref32; @ref35] for standard rock ($\rho=2.65$ g/cm$^3$) reasonably well for $10^3\lesssim E\lesssim 10^9$ GeV. Furthermore we impose [@ref32] $E^{\rm min}=50$ GeV. Our results for the $\tau$–lepton range are shown in Fig. 12 and agree of course with the ones in [@ref10]. The $\tau$–decay term dictates the $\tau$–range up to $E\simeq 10^7$ GeV, beyond which the tau–lepton energy loss becomes relevant.
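The parenthetical closed form for $X_{\rm max}$ is easily verified numerically. The sketch below integrates $d\tilde{E}_0/dX=-\gamma(\tilde{E}_0)$ for constant $\alpha$ and $\beta$, borrowing for illustration the muon-type energy-loss values quoted in the text; since the actual $\beta_\tau(E)$ is energy dependent, this is a consistency check of the formula, not a recomputation of Fig. 12.

```python
import math

# Closed-form X_max for energy-independent alpha, beta:
def x_max(E, E_min, alpha, beta):
    return math.log((alpha + beta * E) / (alpha + beta * E_min)) / beta

# Cross-check by RK4 integration of dE~/dX = -(alpha + beta*E~)
# until E~ drops to E_min (step h in cm we):
def x_max_numeric(E, E_min, alpha, beta, h=100.0):
    g = lambda e: alpha + beta * e
    x, e = 0.0, E
    while e > E_min:
        k1 = g(e); k2 = g(e - 0.5*h*k1); k3 = g(e - 0.5*h*k2); k4 = g(e - h*k3)
        e -= h * (k1 + 2*k2 + 2*k3 + k4) / 6.0
        x += h
    return x

# Muon-type parameters quoted in the text:
alpha, beta = 2.0e-3, 6.0e-6            # GeV (cm we)^-1 and (cm we)^-1
X_an = x_max(1.0e3, 1.0, alpha, beta)   # ~ 2.3e5 cm we ~ 2.3 km we at 1 TeV
X_nu = x_max_numeric(1.0e3, 1.0, alpha, beta)
assert abs(X_nu / X_an - 1.0) < 1e-2
```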
The dashed–dotted curve shows the range as obtained by omitting the contribution due to the CC interaction length $\lambda_\tau^{\rm CC}$ in $\hat{\lambda}$ in (27). This term becomes relevant only for $E\gtrsim 10^{10}$ GeV, where $\lambda_\tau^{\rm CC}$ becomes comparable to $\lambda_\tau^{\rm dec}$ as evident from Fig. 13. For comparison, stochastic Monte Carlo evaluations [@ref32; @ref35; @ref38] of the $\tau$–range are shown in Fig. 12 by the dotted curves, which are of course strongly dependent on the assumed model extrapolations of the radiative cross sections to ultrahigh energies. Our results obviously also depend on such extrapolations through the specific choice of $\beta_\tau(E)$. Nevertheless, one concludes [@ref10] from Fig. 12 that the [*continuous*]{} tau–lepton energy loss approach, as used in (13), yields very [*similar*]{} results to the stochastic Monte Carlo calculations where the energy loss is treated separately. A similar conclusion holds for the muon–range $R_\mu(E)$ which we show for completeness in Fig. 14. Within the continuous muon energy loss approach, $R_\mu$ follows from (28) and (27) where the $\lambda_\tau^{\rm dec}$ term has to be omitted and in $\gamma_\mu(E)=\alpha_\mu+\beta_\mu E$ we take $\alpha_\mu\simeq \alpha_\tau\simeq 2.0\times 10^{-3}$ GeV (cm we)$^{-1}$ and [@ref32; @ref35] $\beta_\mu\simeq 6.0\times 10^{-6}$ (cm we)$^{-1}$ which, moreover, best reproduces [@ref5] the Monte Carlo result of Lipari and Stanev [@ref37] for the average muon–range in standard rock for $E>10^3$ GeV. Furthermore we choose [@ref32] the final muon energy to be larger than $E^{\rm min}=1$ GeV. (It should be noted that here $P_\mu(E,X)\simeq 1$ in (27), i.e., $R_\mu(E)\simeq X_{\rm max}$ in (28).) The muon–range $R_\mu$ calculated within the continuous muon energy loss approach thus yields, as in the case of taus, very similar results to the stochastic Monte Carlo calculations [@ref32; @ref35; @ref38] for $R_\mu^{\rm sto}$, as shown in Fig. 14.
This is contrary to the conclusions reached in [@ref10] that the continuous approach to the muon energy loss overestimates the muon–range as compared to stochastic Monte Carlo simulations. Therefore the continuous approach to the lepton energy loss is applicable to [*both*]{} taus and muons, since in both cases it yields similar results for the lepton ranges as the stochastic Monte Carlo simulations with the energy loss being treated separately.

The transport equation of muon neutrinos including secondary muon neutrinos from tau neutrino interactions
==========================================================================================================

It has been pointed out [@ref14] that the $\nu_\tau -\tau$ regeneration chain $\nu_\tau\to\tau\to\nu_\tau\to\ldots$ creates a secondary $\bar{\nu}_\mu +\nu_\mu$ flux due to the prompt, purely leptonic, tau decays $\tau^-\to\nu_\tau\mu^-\bar{\nu}_\mu$ and $\tau^+\to\bar{\nu}_\tau\mu^+\nu_\mu$. This will enhance the regenerated $\stackrel{(-)}{\nu}_\mu$ fluxes calculated according to (1) and thus also the ‘naively’ calculated [@ref5; @ref15; @ref41] upward–going muon event rates at the detector site. Secondary neutrinos originate from the associated $\tau^{\pm}$ flux $F_\tau(E,X)$ and a prompt $\tau$–decay like $\tau^-\to\bar{\nu}_\mu X'$. Adding those contributions, denoted by $G_{\tau^{\stackrel{(-)}{+}}\to\stackrel{(-)}{\nu}_\mu}(E,X)$, to the simple transport equation (1) used thus far one obtains $$\frac{\partial F_{\nu_\mu}(E,X)}{\partial X} = -\frac{F_{\nu_\mu}(E,X)}{\lambda_\nu(E)}\, +\, \frac{1}{\lambda_\nu(E)} \int_0^1\frac{dy}{1-y}\, K_\nu^{\rm NC}(E,y)\, F_{\nu_\mu}(E_y,X)+G(E,X)$$ with $G=G_{\tau^+\to\nu_\mu}$ where $$G_{\tau^+\to\nu_{\mu}}(E,X) = \frac{1}{\lambda_\tau^{\rm dec}(E,\theta)} \int_0^1\frac{dy}{1-y}\, K_{\tau^+}^{\rm dec}(E,y)\, F_{\tau^+}(E_y,X)$$ and a similar transport equation holds for $F_{\bar{\nu}_\mu}$ with an appropriate expression for $G_{\tau^-\to\bar{\nu}_\mu}$.
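As a structural sanity check of (29), a drastically simplified version — constant interaction length, constant source term, regeneration integral dropped, purely illustrative toy numbers — can be solved both in closed form and numerically, and then recast in the exponential $Z$–factor form used throughout the paper:

```python
import math

lam, G, F0 = 2.0, 0.3, 1.0   # toy constants: interaction length, source, initial flux

# dF/dX = -F/lam + G has the closed-form solution:
def f_analytic(x):
    return F0 * math.exp(-x / lam) + G * lam * (1.0 - math.exp(-x / lam))

# Fine-step forward Euler integration of the same ODE:
def f_numeric(x, h=1e-4):
    f, t = F0, 0.0
    while t < x:
        f += h * (-f / lam + G)
        t += h
    return f

for xv in (0.5, 2.0, 10.0):
    assert abs(f_numeric(xv) / f_analytic(xv) - 1.0) < 1e-3

# The equivalent Z-factor form F = F0*exp(-X(1-Z)/lam) then follows by
# definition, Z(X) = 1 + (lam/X)*ln(F(X)/F0):
x = 2.0
Z = 1.0 + (lam / x) * math.log(f_analytic(x) / F0)
assert abs(F0 * math.exp(-x * (1.0 - Z) / lam) - f_analytic(x)) < 1e-9
```

The last two lines make explicit that the exponential ansatz is always available; the physics is entirely in how $Z(E,X)$ is computed.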
The relevant $\tau$ fluxes $F_{\tau^{\pm}}$ have been calculated in the previous Section (cf. Figs. 10 and 11). As in (14), the decay kernel in (30) is $K_{\tau^+}^{\rm dec}(E,y) = (1-y)\, dn_{\tau^+\to\nu_\mu}(z)/dy$ with $z=1-y$ and the relevant $\tau^+\to\nu_\mu X'$ decay distribution is given by [@ref28] $$\frac{dn_{\tau^+\to\nu_\mu}(z)}{dz} = B_{\nu_\mu} \left[2-6z^2+4z^3+P(-2+12z-18z^2+8z^3)\right]$$ with $P=+1$ and the branching fraction $B_{\nu_\mu}=0.18$. For a decaying $\tau^-\to\bar{\nu}_\mu X'$ one has $P=-1$ in (31). Notice that the $\stackrel{(-)}{\nu}_\mu$ spectrum in (31) is a little softer than the $\stackrel{(-)}{\nu}_\tau$ spectrum from the $\tau^{\pm}\to\stackrel{(-)}{\nu}_\tau X'$ decay [@ref7; @ref28] in (15). It should be noticed that the contribution of secondary neutrinos may alternatively be calculated using directly the $\stackrel{(-)}{\nu}_\tau$ fluxes $F_{\stackrel{(-)}{\nu}_\tau}(E,X)$ which give rise to the reaction chains $\nu_\tau\stackrel{\rm CC}{\longrightarrow}\tau^- \to \bar{\nu}_{\mu}X'$ and $\bar{\nu}_\tau\stackrel{\rm CC}{\longrightarrow}\tau^+ \to \nu_{\mu}X'$. Denoting these contributions by $G_{\nu_{\tau}\to\bar{\nu}_\mu}(E,X)$ and $G_{\bar{\nu}_\tau \to \nu_\mu}(E,X)$, respectively, the inhomogeneous term in the transport equation (29) is given by $G=G_{\bar{\nu}_\tau \to\nu_\mu}$ with $$G_{\bar{\nu}_\tau\to\nu_{\mu}}(E,X) = N_A \int^1_0 \frac{dy}{1-y}\int_0^1\frac{dz}{z}\, \frac{dn_{\tau^+\to\nu_\mu}(z)}{dz}\, \frac{d\sigma_{\bar{\nu}N}^{\rm CC}(\frac{E_y}{z},y)}{dy}\, F_{\bar{\nu}_\tau}\left(\frac{E_y}{z},\, X\right)$$ where $E_y/z=E/(1-y)z$, the decay distribution is given by (31) and the relevant flux $F_{\bar{\nu}_\tau}$ has been calculated in the previous Section (cf. Figs. 10 and 11). Although (32) and (30) yield the same quantitative results for $F_{\nu_\mu}(E,X)$, these two expressions should [*not*]{} be added since it would correspond to double–counting the effect of secondary neutrino production. 
This is due to the fact that the CC contribution $G_{\nu_{\tau}\to \tau}$ has been already included in (13) \[third term on the rhs\] for calculating $F_\tau$. (The situation here is very similar to the calculation of the atmospheric muon flux [@ref31; @ref28] where almost all muons come from meson decays with the meson flux being generated by nucleon interactions with air, i.e., by nucleon $\to$ meson transitions. These latter transitions are taken into account only in the evolution equation of the meson flux, but not anymore for the muon flux evolution.) For definiteness, we use the simpler expression in (30) for our subsequent calculations. As in our previous cases, the most general transport equation (29) for muon neutrinos is easily solved by an ansatz like (16) for tau neutrinos, $$F_{\nu_\mu}(E,X) = F_{\nu_\mu}^0(E)\exp\, \left[ -\frac{X}{\Lambda_{\nu_{\mu}G}(E,X)}\right]$$ with $$\Lambda_{\nu_\mu G}(E,X) = \frac{\lambda_\nu(E)}{1-Z_{\nu_{\mu}G}(E,X)}$$ and $Z_{\nu_{\mu}G}=Z_{\nu_\mu}+Z_G$. 
Inserting (33) into (29) one obtains $$Z_{\nu_\mu}(E,X) =\frac{1}{X}\int_0^X dX' \int_0^1 dy\, K_\nu^{\rm NC}(E,y)\, \eta_\nu(E,y)\, e^{-X'D_{\nu_\mu}(E,E_y,X')}$$ which is similar to (5) but with $D_{\nu_\mu}(E,E_y,X')=\Lambda_{\nu_\mu G}^{-1}(E_y,X') -\Lambda_{\nu_\mu G}^{-1}(E,X')$, and $$Z_G(E,X) = \frac{\lambda_\nu(E)}{F_\nu^0(E)}\, \frac{1}{X}\, \int_0^X dX'\, G(E,X')\, e^{X'/\Lambda_{\nu_\mu G}(E,X')}\, \, .$$ Using again an iteration algorithm to solve for $Z_{\nu_\mu G}(E,X)$, the solution of (35) and (36) after the n–th iteration becomes $$\begin{aligned} Z_{\nu_\mu G}^{(n+1)}(E,X) & = & \frac{1}{X}\int_0^X dX'\int_0^1 dy\, K_\nu^{\rm NC}(E,y)\, \eta_\nu(E,y)\, e^{-X'D_{\nu_\mu}^{(n)}(E,E_y,X')} \nonumber\\ & & +\frac{\lambda_\nu(E)}{F_{\nu_\mu}^0(E)} \, \frac{1}{X}\, \int_0^X dX'\, G(E,X')\, e^{X'/\Lambda_{\nu_\mu G}^{(n)}(E,X')}\end{aligned}$$ where $D_{\nu_\mu}^{(n)}$ is defined as above with $\Lambda_{\nu_\mu G}\to\Lambda_{\nu_\mu G}^{(n)}$ and $\Lambda_{\nu_\mu G}^{(n)}(E,X')=\lambda_\nu(E)/\left[1-Z_{\nu_\mu G}^{(n)}(E,X')\right]$. Due to the dominant and large $\tau$–decay contribution $G(E,X)$ in (29), it turns out that the optimal input choice for providing sufficiently convergent iterative solutions is obtained by implementing, as in the case of tau neutrinos in Sec. 3, the peculiar $E$ and $X$ dependence implied by the $\tau$–decays, i.e., by $G$ in (36), from the very beginning. Therefore we use again (see the discussion after Eq. (20)) $Z_{\nu_\mu}^{(0)}(E,X)=0$ and a vanishing $Z$–factor on the rhs of $Z_G$ in (36), which gives for the total input $Z$–factor $$Z_{\nu_\mu G}^{(0)}(E,X) = \frac{\lambda_\nu(E)}{F_\nu^0(E)}\, \frac{1}{X} \int_0^X dX'\, G(E,X')\, e^{X'/\lambda_\nu(E)}\,\,.$$ Inserting this into the rhs of (37) results in the first iterative solution $Z_{\nu_\mu G}^{(1)}(E,X)$, and so on.
In contrast to $Z_{\nu_\mu}^{(1)}$ in (8), $Z_{\nu_\mu G}^{(1)}$ does not provide us with a sufficiently accurate final result, i.e., the maximum difference between $Z_{\nu_\mu G}^{(1)}$ and $Z_{\nu_\mu G}^{(2)}$ is [*not*]{} always less than about 5%: for some initial cosmic neutrino fluxes and energies it exceeds this level. Therefore we have to carry out one further iteration, as in the case of tau neutrinos in Sec. 3, by inserting $Z_{\nu_\mu G}^{(1)}$ into the rhs of (37) in order to obtain $Z_{\nu_\mu G}^{(2)}(E,X)$, which turns out to be sufficiently close to the final result since $|Z_{\nu_\mu G}^{(3)}/Z_{\nu_\mu G}^{(2)}-1|\lesssim 0.02$. Our iterative results for $Z_{\nu_\mu G}^{(1,2)}$ are shown in Figs. 15 and 16 together with the appropriate input $Z_{\nu_\mu G}^{(0)}$ in (38) shown by the dotted curves. In order to illustrate the faster iterative convergence for increasing $\theta$ (smaller $X$), the results for $\theta=50^{\rm o}$ are presented in Fig. 15 as well. The sufficiently accurate results $Z_{\nu_\mu G}^{(2)}(E,X)$ and the similar expressions for $Z_{\bar{\nu}_\mu G}^{(2)}$, when inserted into (33), yield the final total fluxes $F_{\nu_\mu +\bar{\nu}_\mu}(E,X)$ shown in Figs. 17 and 18. The effect and importance of secondary neutrinos is best seen by comparing our results (solid and dashed curves) with the usual ones [@ref3; @ref5; @ref7; @ref15] obtained just for primary muon neutrinos ($G\equiv 0$ in (29)) shown by the dotted curves, which correspond of course to the results obtained in Sec. 2. Our results in Fig. 17 agree with the ones obtained in [@ref9], within the approximations made there.
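The secondary-neutrino machinery above ultimately rests on the decay distribution (31); a quick numerical check confirms that for either polarization $P=\pm 1$ it is nonnegative and integrates to the branching fraction $B_{\nu_\mu}=0.18$:

```python
import numpy as np

B = 0.18  # branching fraction B_{nu_mu} quoted in the text

def dn_dz(z, P):
    """tau -> nu_mu decay distribution of Eq. (31), polarization P = +-1."""
    return B * (2 - 6*z**2 + 4*z**3 + P * (-2 + 12*z - 18*z**2 + 8*z**3))

z = np.linspace(0.0, 1.0, 100001)

def integral(P):
    f = dn_dz(z, P)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))  # trapezoid rule

int_plus, int_minus = integral(+1), integral(-1)
assert abs(int_plus - B) < 1e-6 and abs(int_minus - B) < 1e-6
assert np.all(dn_dz(z, +1) >= -1e-12) and np.all(dn_dz(z, -1) >= -1e-12)
```

The polynomial in the $P$-dependent bracket integrates to zero, so the normalization is polarization independent, as it must be for a branching fraction.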
The corresponding $\stackrel{(-)}\nu_\mu$ initiated upward–going $\mu^{\stackrel{(+)}{-}}$ event rate per unit solid angle and second is calculated according to $$N_{\mu^-}^{(\nu_\mu)} = N_A \int_{E_{\mu}^{\rm min}} dE_\nu \int_0^{1-E_\mu^{\rm min}/E_\nu} dy \, A(E_\mu)\, R_\mu(E_\mu,E_\mu^{\rm min}) \frac{d\sigma_{\nu_\mu N}^{\rm CC}\,(E_\nu,y)}{dy}\, F_{\nu_\mu}(E_\nu,X)$$ with $E_\mu=(1-y)E_\nu$; the energy-dependent area $A(E_\mu)$ of the considered underground detectors is taken as summarized in [@ref5]. The muon–range is given by $R_\mu(E_\mu,E_\mu^{\rm min}) =\frac{1}{\beta_\mu}\ln \frac {\alpha_\mu+\beta_\mu E_\mu}{\alpha_\mu+\beta_\mu E_\mu^{\rm min}}$. It describes the range of an energetic muon produced with energy $E_\mu$ which, losing energy as it passes through the Earth, arrives at the detector with an energy above $E_\mu^{\rm min}$. The energy–loss parameters are taken as at the end of the previous Section, i.e., $\alpha_\mu = 2\times 10^{-3}$ GeV (cm we)$^{-1}$ and $\beta_\mu = 6\times 10^{-6}$ (cm we)$^{-1}$. The integral over the neutrino energy $E_\nu$ was, for definiteness and better comparison [@ref9], performed up to a maximum neutrino energy of 10$^8$ GeV. The differential $\theta$–dependent $\mu^- +\mu^+$ rates for $E_\mu^{\rm min}=10^4$ GeV and $E_\mu^{\rm min}=10^5$ GeV are shown in Figs. 19 and 20. We also include the contributions initiated by the primary $\nu_\mu +\bar{\nu}_\mu$ flux, for brevity denoted by $\nu_\mu\to\mu$, and by the $\nu_\tau+\bar{\nu}_\tau$ flux via $\nu_\tau\to\tau\to\mu$ and the $\tau^- +\tau^+$ flux via $\tau \to\mu$, as discussed in the previous Section. The secondary neutrino contributions to the muon event rates are, in relative terms, obviously largest at small nadir angles, with an enhancement over the primary $\nu_\mu+\nu_\tau+\tau$ initiated rates of up to 40% for the hard $E^{-1}$, AGN–M95 and TD–SLBY initial fluxes.
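The structure of the rate integral (39) can be sketched with placeholder inputs. Everything below, other than the muon-range formula and its $\alpha_\mu$, $\beta_\mu$ values from the text, is a made-up stand-in (power-law flux, factorized toy cross section, unit detector area), so only the qualitative behaviour — rates falling as the threshold $E_\mu^{\rm min}$ is raised — is meaningful:

```python
import math

N_A = 6.022e23                       # nucleons per gram
alpha_mu, beta_mu = 2.0e-3, 6.0e-6   # muon energy-loss parameters from the text

def R_mu(E_mu, E_min):
    """Muon range (cm we) for constant alpha_mu, beta_mu, as in the text."""
    return math.log((alpha_mu + beta_mu * E_mu) /
                    (alpha_mu + beta_mu * E_min)) / beta_mu

def rate(flux, dsigma_dy, E_min, E_max=1.0e8, nE=200, ny=200, area=1.0):
    """Midpoint-rule sketch of N ~ Int dE Int dy A R dsigma/dy F on a log-E grid."""
    total = 0.0
    lnEmin, lnEmax = math.log(E_min), math.log(E_max)
    dlnE = (lnEmax - lnEmin) / nE
    for i in range(nE):
        E = math.exp(lnEmin + (i + 0.5) * dlnE)   # dE = E dlnE
        dy = (1.0 - E_min / E) / ny
        for j in range(ny):
            y = (j + 0.5) * dy
            total += (N_A * area * R_mu((1.0 - y) * E, E_min)
                      * dsigma_dy(E, y) * flux(E) * E * dlnE * dy)
    return total

flux = lambda E: 1.0e-7 * E**-2.0               # placeholder E^-2 flux
dsig = lambda E, y: 1.0e-35 * (E / 1.0e3)**0.4  # placeholder cross section

n_hi = rate(flux, dsig, E_min=1.0e4)
n_lo = rate(flux, dsig, E_min=1.0e5)
assert n_hi > n_lo > 0.0   # raising the threshold lowers the rate
```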
At small nadir angles, however, the event rates are smallest and statistics are low. For $\theta\gtrsim 60^{\rm o}$ the event rates are roughly a factor of 10 (or more) larger, and the enhancement of the overall $\nu_\mu+\nu_\tau+\tau$ initiated muon rates (dashed–dotted curves in Figs. 19 and 20) cannot be larger than about 15%. These results are more explicitly illustrated in Tables 2 and 3 where we present, besides the total nadir–angle–integrated rates, also the ones integrated over three typical $\theta$–intervals. (Remember that this amounts to integrating (39) over $\int_0^{2\pi} d\varphi \int_{\theta_{\min}}^{\theta_{\rm max}} d\theta\sin\theta = 2\pi\int_{\theta_{\rm min}}^{\theta_{\rm max}} d\theta\sin\theta$, with $\theta_{\rm min}=0^{\rm o}$ and $\theta_{\rm max}=90^{\rm o}$ for the total rates.) Since secondary muon neutrinos contribute significantly to a muon excess only at small and medium nadir angles, $\theta<60^{\rm o}$, the primary event rates (shown in brackets in Tables 2 and 3) can be enhanced by more than 20%, in particular for $E_\mu^{\rm min}= 10^5$ GeV. The statistics, however, are low since the fluxes are already strongly attenuated for $\theta<60^{\rm o}$ (large $X$), cf. Figs. 17 and 18. On the other hand, most of the events are generated at large $\theta$, $\theta>60^{\rm o}$, where the effect of secondary neutrinos is sizeably reduced (cf. Figs. 19 and 20); consequently, the total rates in Tables 2 and 3 are increased by less than 10%. Since the expected angular resolution of present and proposed detectors [@ref21; @ref42] is typically about $1^{\rm o}/(E_\nu/$TeV)$^{0.7}$, differential $\theta$–dependent measurements should be feasible, in order to delineate experimentally the effects of secondary neutrino fluxes. Keeping in mind that the lifetime of the planned experiments is roughly ten years, it appears not unreasonable that the tenfold rates implied by Tables 2 and 3 may be observable in the not too distant future.
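The geometric part of this statement can be made explicit: integrating the solid angle over the three $\theta$–bands used in Tables 2 and 3 shows why most events accumulate at large nadir angles even before attenuation is taken into account — the $60^{\rm o}$–$90^{\rm o}$ band alone covers half of the upward hemisphere.

```python
import math

# Solid-angle weights 2*pi*Int sin(theta) d(theta) over a nadir-angle band:
def band(theta_min_deg, theta_max_deg):
    a, b = math.radians(theta_min_deg), math.radians(theta_max_deg)
    return 2.0 * math.pi * (math.cos(a) - math.cos(b))

w = [band(0, 30), band(30, 60), band(60, 90)]   # the three bands of Tables 2, 3
assert abs(sum(w) - 2.0 * math.pi) < 1e-12      # full upward hemisphere
assert abs(w[2] - math.pi) < 1e-12              # 60-90 deg band = half of it
assert w[2] > w[1] > w[0]
```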
Summary and Conclusions
=======================

For the sake of completeness we have first studied the solutions of the single transport equation for cosmic $\stackrel{(-)}{\nu}_\mu$ neutrinos propagating through the Earth. Although frequently used, the excellent convergence of its iterative solutions has not been explicitly demonstrated thus far for more realistic and hard cosmic neutrino fluxes. Using the symbolic ansatz for the solution $F_\nu(E,X)= F_\nu^{0}(E)\exp \left[-(1-Z)X/\lambda_\nu\right]$, with $\lambda_\nu$ being the neutrino interaction length, the simplest input choice $Z_{\nu_\mu}^{(0)}(E,X)=0$ suffices to produce a sufficiently accurate iterative result $Z_{\nu_\mu}^{(1)}$ already after the first iteration, for all presently used initial cosmic model fluxes $F_\nu^0$. Turning to the iterative solutions of the far more complicated coupled transport equations for $\stackrel{(-)}{\nu}_\tau$ and their associated $\tau^{\pm}$ fluxes, a new semi–analytic input algorithm is presented which allows for a fast convergence of the iterative solutions: already a second $n=2$ iteration suffices for obtaining a sufficiently accurate result $Z^{(2)}$ and thus the final $F_{\stackrel{(-)}{\nu}_\tau}$ and associated $F_{\tau^{\pm}}$ fluxes. In order to achieve this one has to implement the peculiar $E$ and $X$ dependence as implied by the $\tau^{\pm}$ decay contributions already into the initial zeroth order input $Z^{(0)}(E,X)$. Choosing a vanishing input $Z^{(0)}=0$ as in the case of $\stackrel{(-)}{\nu}_\mu$ fluxes, or even the final solution for the $\stackrel{(-)}{\nu}_\mu$ flux as an input, $Z^{(0)}=Z_{\nu_\mu}^{(1)}$, as frequently done, results in a far slower convergence of the iterative procedure. For completeness we also briefly outline the implications for the upward–going $\mu^- +\mu^+$ event rates of underground neutrino detectors, using some relevant cosmic neutrino fluxes.
These events are generated by the so called ‘primary’ $\stackrel{(-)}{\nu}_\mu$, $\stackrel{(-)}{\nu}_\tau$ and $\tau^{\pm}$ fluxes via the weak transitions and decays $\nu_\mu\stackrel{\rm CC}{\to}\mu$, $\nu_\tau\stackrel{\rm CC}{\to}\tau\to\mu$ and $\tau\to\mu$. Furthermore, for calculating the range $R_\tau(E)$ of tau–leptons, their energy loss can either be treated ‘continuously’ by including it directly in the transport equation, or ‘stochastically’ by treating it separately. Both approaches give very similar results for $R_\tau$ up to highest energies of $10^{12}$ GeV relevant at present. A similar agreement is obtained for the muon range $R_{\mu}(E)$. Therefore the continuous approach is applicable to both taus and muons. This is contrary to claims in the literature that the continuous approach overestimates $R_\mu$ as compared to stochastic Monte Carlo simulations. Finally, we generalized the single transport equation for $\stackrel{(-)}{\nu}_\mu$, by taking into account the contributions of secondary $\nu_\mu$ and $\bar{\nu}_\mu$ fluxes. These so called ‘secondary’ muon neutrino fluxes originate from prompt $\tau^{\pm}$ decays where the $\tau$–leptons are generated by the regeneration chain $\nu_\tau\to\tau\to\nu_\tau\to\ldots$ when a cosmic $\nu_\tau$ passes through the Earth. Thus the secondary $\nu_\mu +\bar{\nu}_\mu$ flux arises from the associated $\tau^{\pm}$ flux, as obtained from the coupled transport equations for $\nu_\tau$ and $\tau$, which initiates the $\tau\to \nu_\mu$ transitions ($\tau^-\to\nu_\tau\mu^-\bar{\nu}_\mu$ and $\tau^+\to\bar{\nu}_\tau\mu^+\nu_\mu$). In order to achieve a sufficiently fast convergence of the iterative solutions of the single generalized transport equation of muon neutrinos, one again has to implement the peculiar $E$ and $X$ dependence as implied by the weak $\tau$–decays already into the initial zeroth order input $Z^{(0)}(E,X)$. 
In this case one needs only $n=2$ iterations for obtaining a sufficiently accurate result $Z^{(2)}(E,X)$ for calculating the final secondary $\nu_\mu$ and $\bar{\nu}_\mu$ fluxes. The $\mu^- +\mu^+$ event rates initiated by the secondary neutrinos are obviously largest at small nadir angles ($\theta<60^{\rm o}$), with a relative enhancement of at most 40% over the primary $\nu_\mu+\nu_\tau+\tau$ initiated rates for the hard initial cosmic fluxes like AGN–M95 and TD–SLBY. At larger nadir angles, $\theta\gtrsim 60^{\rm o}$, the muon rates are dominantly initiated by the primary $\nu_\mu+\nu_\tau +\tau$ flux and the secondary $\nu_\mu + \bar{\nu}_\mu$ flux becomes naturally less relevant. Thus the secondary neutrino flux will enhance the total nadir–angle–integrated muon event rates only by less than 10%. Nevertheless, it should be possible to observe the effects of secondary neutrinos with differential $\theta$–dependent measurements, keeping in mind that the angular resolutions of the proposed underground neutrino telescopes will reach sub–arc–minute precisions. We are grateful to W. Rhode for helpful discussions, in particular about lepton ranges, and for providing us with the results of the $\tau$–ranges of the Chirkin–Rhode Monte Carlo calculations extended up to $10^{12}$ GeV. Similarly, we thank M.H. Reno for sending us her Monte Carlo results for the $\tau$–range for energies up to $10^{12}$ GeV. This work has been supported in part by the ‘Bundesministerium für Bildung und Forschung’, Berlin/Bonn. [54]{} A. Nicolaidis and T. Taramopoulos, [*Phys. Lett.*]{} [**B386**]{}, 211 (1996). V.A. Naumov and L. Perrone, [*Astropart. Phys.*]{} [**10**]{}, 239 (1999). J. Kwiecinski, A.D. Martin, and A.M. Stasto, [*Phys. Rev.*]{} [**D59**]{}, 093002 (1999). S. Iyer, M.H. Reno, and I. Sarcevic, [*Phys. Rev.*]{} [**D61**]{}, 053003 (2000). K. Giesel, J.-H. Jureit, and E. Reya, [*Astropart. Phys.*]{} [**20**]{}, 335 (2003). F. Halzen and D. Saltzberg, [*Phys. Rev.
--- abstract: | In this paper we first study derivations of non-nilpotent Lie triple algebras. We determine the structure of the derivation algebra according to whether the algebra admits an idempotent or a pseudo-idempotent. We then study the multiplicative structure of non-nilpotent dimensionally nilpotent Lie triple algebras. We show that when $n=2p+1$ the adapted basis coincides with the canonical basis of the gametic algebra $G(2p+2,2)$ or with the obvious one associated with a pseudo-idempotent, and that if $n=2p$ the algebra is either one of the preceding cases or a conservative Bernstein algebra. **Keywords:** Dimensionally nilpotent Lie triple algebra, pseudo-idempotent, Jordan algebra, ascending basis, adapted basis. **2010 Mathematics Subject Classification**: Primary 17A30; secondary 17D92, 17B40, 17C10 author: - | Abdoulaye Dembega[^1]\ Université Norbert Zongo\ BP 376 Koudougou, Burkina Faso - | Amidou Konkobo[^2] and Moussa Ouattara[^3]\ Université Joseph KI-ZERBO\ 03 BP 7021 Ouagadougou 03, Burkina Faso title: Derivations and dimensionally nilpotent derivations in Lie triple algebras --- Introduction ============ An algebra $A$ of finite dimension $n+1$ is dimensionally nilpotent if there is a derivation $d: A\longrightarrow A$ such that $d^{n+1}=0$ and $d^{n}\neq 0$. This notion has been studied by G.F. Leger and P.L. Manley [@Leger] for Lie algebras, by J.M. Osborn [@Osborn] for Jordan algebras, and by Micali and Ouattara [@Micali] for genetic algebras. Recently, V. Eberlin [@Eberlin] has deepened the work of the authors of [@Leger] in his thesis. Regarding Jordan algebras, Osborn shows that every dimensionally nilpotent Jordan $K$-algebra is either nilpotent or satisfies $A/Rad(A)\simeq K$. We study the case of non-nilpotent dimensionally nilpotent Lie triple algebras. In an adapted basis we characterize the multiplicative structure of these algebras according to the parity of $n$. 
More precisely, we show that when $n=2p+1$ the adapted basis coincides with the canonical basis of the gametic algebra $G(2p+2,2)$ or with the obvious one associated with a pseudo-idempotent. If $n=2p$, the algebra is either one of the preceding cases or a train algebra of rank $3$ which is a Jordan algebra [@Ouat]. Since Jordan algebras are also Lie triple algebras, the final corollary describes non-nilpotent dimensionally nilpotent Jordan algebras. Preliminaries ============= A *Lie triple algebra* is a commutative algebra satisfying $$\begin{aligned} \label{Eq1} 2x(x(xy)) + yx^3 = 3x(yx^2) \end{aligned}$$ while a *Jordan algebra* is a commutative algebra satisfying $$\begin{aligned} x^2(yx)=(x^2y)x.\end{aligned}$$ Every Jordan algebra satisfies identity \eqref{Eq1}. Let $A$ be a Lie triple algebra and $L$ the ideal generated by the associators $(x^2, x, x)$. Then $L^2 = 0$ and $A/L$ is a Jordan algebra. A *pseudo-idempotent* of $A$ is a non-zero element $e$ such that there is $t \ne 0$ in $L$ satisfying $e^2 = e + t$ and $et = \frac{1}{2}t$. \[idemp\] Every Lie triple algebra which is not a nil algebra contains either a non-zero idempotent or a pseudo-idempotent. An ideal $I$ of an algebra $A$ is said to be *characteristic* if $d(I)\subseteq I$ for every derivation $d$ of $A$. An ideal $I$ of an algebra $A$ is said to be *$d$-invariant* if $d(I)\subseteq I$ for a given derivation $d$ of $A$. Characterization of derivations =============================== In this section we study derivations of Lie triple algebras which are not nil algebras. We give a characterization, distinguishing two cases: with an idempotent or with a pseudo-idempotent. Lie triple algebras with idempotent ------------------------------------ Relative to the non-zero idempotent $e$, $A$ admits the following Peirce decomposition $A=A_e(1)\oplus A_e(\frac{1}{2})\oplus A_e(0)$. 
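As a numerical illustration (not part of the paper's argument), the following Python sketch checks identity \eqref{Eq1} on the two-dimensional commutative algebra with basis $\{e,t\}$ defined by $e^2=e+t$, $et=\frac12 t$, $t^2=0$ — an algebra that reappears in the low-dimensional classification below — and verifies that $e$ is a pseudo-idempotent while the Jordan identity fails:

```python
from fractions import Fraction
from itertools import product

# Toy 2-dim commutative algebra over Q with basis (e, t):
#   e*e = e + t,  e*t = t*e = t/2,  t*t = 0.
# Elements are coefficient pairs (a, b) representing a*e + b*t.
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c, a * c + (a * d + b * c) / 2)

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def smul(k, x):
    return (k * x[0], k * x[1])

def lie_triple_defect(x, y):
    # 2x(x(xy)) + y*x^3 - 3x(y*x^2); zero iff identity (Eq1) holds at (x, y)
    x2 = mul(x, x)
    x3 = mul(x, x2)
    lhs = add(smul(2, mul(x, mul(x, mul(x, y)))), mul(y, x3))
    rhs = smul(3, mul(x, mul(y, x2)))
    return (lhs[0] - rhs[0], lhs[1] - rhs[1])

def jordan_defect(x, y):
    # x^2(yx) - (x^2 y)x; zero iff the Jordan identity holds at (x, y)
    x2 = mul(x, x)
    p = mul(x2, mul(y, x))
    q = mul(mul(x2, y), x)
    return (p[0] - q[0], p[1] - q[1])

e = (Fraction(1), Fraction(0))
t = (Fraction(0), Fraction(1))

# e is a pseudo-idempotent: e^2 = e + t and et = t/2
assert mul(e, e) == add(e, t) and mul(e, t) == smul(Fraction(1, 2), t)

# the Lie triple identity holds on a grid of sample elements
coeffs = [Fraction(k) for k in (-2, -1, 0, 1, 2)]
elems = [(a, b) for a, b in product(coeffs, repeat=2)]
assert all(lie_triple_defect(x, y) == (0, 0) for x in elems for y in elems)

print(jordan_defect(e, e))  # nonzero: Lie triple but not Jordan
```

Running the sketch prints a nonzero Jordan defect at $(e,e)$, so on this example the Lie triple class is strictly larger than the Jordan class.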
Relations between the Peirce components and the products of their elements are governed by the following lemma: \[decomp1\] Let $A=A_e(1)\oplus A_e(1/2)\oplus A_e(0)$ be the Peirce decomposition of $A$ relative to a non-zero idempotent. Then\ $A_e(1/2)A_e(1/2)\subseteq A_e(1)+A_e(0)$, $A_e({\lambda})A_e({\lambda})\subseteq A_e({\lambda})$, $A_e({\lambda})A_e(1/2)\subseteq A_e(1/2)$,\ $A_e({\lambda})A_e(1-{\lambda})=0$, $({\lambda}=0,1)$ ; $(x_1y_1)a_{1/2} =x_1(y_1a_{1/2})+y_1(x_1a_{1/2})$,\ $(x_0y_0)a_{1/2} =x_0(y_0a_{1/2})+y_0(x_0a_{1/2})$ ; $[x_1(x_{1/2}a_{1/2})]_1=[(x_1x_{1/2})a_{1/2}+(x_1a_{1/2})x_{1/2}]_1$,\ $[x_0(x_{1/2}a_{1/2})]_0=[(x_0x_{1/2})a_{1/2}+(x_0a_{1/2})x_{1/2}]_0$ ; $[(x_1x_{1/2})y_{1/2}]_0=[(x_1y_{1/2})x_{1/2}]_0$,\ $[(x_0x_{1/2})y_{1/2}]_1=[(x_0y_{1/2})x_{1/2}]_1$ ; $x_0(y_1a_{1/2})=y_1(x_0a_{1/2})$ ; $x_{1/2}(x_{1/2}^2)_1=x_{1/2}(x_{1/2}^2)_0=\frac12 x_{1/2}^3$ ; $(x_{1/2}y_{1/2})_0z_{1/2} + (y_{1/2}z_{1/2})_0x_{1/2} + (z_{1/2}x_{1/2})_0y_{1/2}\\ {\qquad\qquad\qquad}= (x_{1/2}y_{1/2})_1z_{1/2} + (y_{1/2}z_{1/2})_1x_{1/2} + (z_{1/2}x_{1/2})_1y_{1/2}$. Since $A$ is $e$-stable, i.e. $A_e({\lambda})A_e(1/2)\subseteq A_e(1/2)$ and $[(x_{\lambda}x_{1/2})y_{1/2}]_{1-{\lambda}}=[(x_{\lambda}y_{1/2})x_{1/2}]_{1-{\lambda}}$ with ${\lambda}=0,1$, calculations on derivations give results similar to [@BCMO, Corollary 2]; precisely: 
\[Der\] Every derivation $d$ of $A$ is uniquely determined by a quadruple $(d(e),f_d,g_d,h_d)$ with $f_d \in End_K(A_e(1/2))$, $g_d\in Der_K(A_e(0))$ and $h_d\in Der_K(A_e(1))$ satisfying the following conditions: $d(e)\in A_e(1/2)$ ; $d(x_1)=h_d(x_1)+2d(e)x_1$ ; $d(x_{1/2})=f_d(x_{1/2})+2(d(e)x_{1/2})_0-2(d(e)x_{1/2})_1$ ; $d(x_0)=g_d(x_0)-2d(e)x_0$ ; $h_d(x_1y_1)=h_d(x_1)y_1+x_1h_d(y_1)$ ; $g_d(x_0y_0)=g_d(x_0)y_0+x_0g_d(y_0)$ ; $h_d((x_{1/2}y_{1/2})_1)=[f_d(x_{1/2})y_{1/2}+x_{1/2}f_d(y_{1/2})]_1$ ; $g_d((x_{1/2}y_{1/2})_0)=[f_d(x_{1/2})y_{1/2}+x_{1/2}f_d(y_{1/2})]_0$ ; $f_d(x_1x_{1/2})=h_d(x_1)x_{1/2}+x_1f_d(x_{1/2})$ ; $f_d(x_0x_{1/2})=g_d(x_0)x_{1/2}+x_0f_d(x_{1/2}).$ \[J caract1\] Let $A$ be a Lie triple algebra and $A=A_e(1)\oplus A_e(1/2)\oplus A_e(0)$ the Peirce decomposition of $A$ relative to an idempotent $e\ne 0$. The subspaces $J_{\lambda}=\{x_{\lambda}\in A_e({\lambda}) \mid x_{\lambda}A_e(1/2)=0\}$ $({\lambda}=0,1)$ and $J=J_0\oplus J_1$ are characteristic ideals of $A$ and the quotient algebra $A/J$ is a Jordan algebra. Consider $J_\lambda=\ker(S_\lambda)$, where $S_\lambda : A_e({\lambda}) \rightarrow End_K(A_e(1/2)), x_{\lambda}\mapsto S_{\lambda}(x_{\lambda})$ and $S_{\lambda}(x_{\lambda}) : a_{1/2}\mapsto x_{\lambda}a_{1/2}$. We know by [@Osb] that $J_{\lambda}$ is an ideal of $A_e({\lambda})$ (${\lambda}=0,1$) and, since $A_e({\lambda})A_e(1/2)\subseteq A_e(1/2)$, $J=J_1+J_0$ is an ideal of $A$ such that $A/J$ is a Jordan algebra ([@O], Proposition 6.7). Now let $d\in Der_K(A)$, $x_{{\lambda}}\in J_{{\lambda}}(e)$ and $a_{1/2} \in A_e(1/2)$. We have $0=d(x_{{\lambda}}a_{1/2})=x_{{\lambda}}d(a_{1/2})+d(x_{{\lambda}})a_{1/2}$. But $d(a_{1/2})=f_d(a_{1/2})+2(d(e)a_{1/2})_0-2(d(e)a_{1/2})_1,$ therefore we have $x_{\lambda}d(a_{1/2})=0$ because $x_{\lambda}(d(e)a_{1/2})_{\lambda}=[(x_{\lambda}d(e))a_{1/2}+(x_{\lambda}a_{1/2})d(e)]_{\lambda}$. 
Hence $d(x_{{\lambda}})a_{1/2}=0$ for ${\lambda}=0,1.$ On the one hand we have $d(x_1)=h_d(x_1)+2d(e)x_1,$ so $0=d(x_1)a_{1/2}= h_d(x_1)a_{1/2}$ and then $h_d(x_1)\in J_1$; on the other hand we have $d(x_0)=g_d(x_0)-2d(e)x_0$, with $0=d(x_0)a_{1/2}=g_d(x_0)a_{1/2}$ and then $g_d(x_0)\in J_0$. Hence $d(J_{\lambda})\subseteq J_{\lambda}$ and we conclude that $d(J)\subseteq J$. Lie triple algebras with pseudo-idempotent ------------------------------------------ \[decomp2\] Let $L=L_e(1)\oplus L_e(1/2)\oplus L_e(0)$ and $A=A_e(1)\oplus A_e(1/2)\oplus A_e(0)$ be the respective Peirce decompositions of $L$ and $A$ relative to the pseudo-idempotent $e$, satisfying $e^2=e+t$ with $t\in L_{1/2}$ fixed. Then $A_e(0)L_e(1/2) \subseteq L_e(1/2)$, $A_e(1)L_e(1/2) \subseteq L_e(1/2)$, $A_e(1)L_e(1)\subseteq L_e(1)$,\ $A_e(0)L_e(0) \subseteq L_e(0)$, $A_e(0)L_e(1)=A_e(1)L_e(0)=0$,\ $A_e(1/2)L_e(0)=A_e(1/2)L_e(1)=A_e(1/2)L_e(1/2)=0$ ; $A_e(1)A_e(0)\subseteq L_e(1/2)$, $A_e(0)A_e(1/2)\subseteq A_e(1/2)$, $A_e(1)A_e(1/2)\subseteq A_e(1/2)$,\ $A_e(0)A_e(0)\subseteq A_e(0)+L_e(1/2)$, $A_e(1)A_e(1)\subseteq A_e(1)+L_e(1/2)$,\ $A_e(1/2)A_e(1/2)\subseteq A_e(1)+A_e(0)$ ; $(x_0y_0)_{1/2}= 4(x_0t)y_0=4(y_0t)x_0$ ;\ $(x_1y_1)_{1/2}= 4(x_1t)y_1=4(y_1t)x_1$ ;\ $(x_0y_1)_{1/2}= 4(x_0t)y_1= 4(y_1t)x_0$ ; $(x_1y_1)a_{1/2} =x_1(y_1a_{1/2})+y_1(x_1a_{1/2})$ ; $(x_0y_0)a_{1/2} =x_0(y_0a_{1/2})+y_0(x_0a_{1/2})$ ; $x_0(y_1a_{1/2})=y_1(x_0a_{1/2})$ ; $[x_0(x_{1/2}a_{1/2})]_0=[(x_0x_{1/2})a_{1/2}+(x_0a_{1/2})x_{1/2}]_0$ ; $[x_1(x_{1/2}a_{1/2})]_1=[(x_1x_{1/2})a_{1/2}+(x_1a_{1/2})x_{1/2}]_1$ ; $[(x_0x_{1/2})y_{1/2}]_1=[(x_0y_{1/2})x_{1/2}]_1$ ;\ $[(x_1x_{1/2})y_{1/2}]_0=[(x_1y_{1/2})x_{1/2}]_0$ ; $(x_{1/2}y_{1/2})_0z_{1/2}+(y_{1/2}z_{1/2})_0x_{1/2}+(z_{1/2}x_{1/2})_0y_{1/2} \\{\qquad\qquad\qquad}=(x_{1/2}y_{1/2})_1z_{1/2}+(y_{1/2}z_{1/2})_1x_{1/2}+(z_{1/2}x_{1/2})_1y_{1/2}$. 
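The decomposition rules above can be checked mechanically on a small example. The Python sketch below (illustrative only) uses the four-dimensional algebra with basis $\{e,t,u,r\}$, $e^2=e+t$, $u^2=u+r$, $et=\frac12 t$, $ur=\frac12 r$, which also appears as an example further down; it verifies the Peirce components relative to the pseudo-idempotent $e$ and the inclusion $A_e(1)A_e(1)\subseteq A_e(1)+L_e(1/2)$:

```python
from fractions import Fraction

F = Fraction
# Commutative 4-dim algebra in basis (e, t, u, r): e^2 = e + t, et = t/2,
# u^2 = u + r, ur = r/2, all other products zero.  TABLE maps a basis pair
# (i, j) with i <= j to the coordinates of the product.
TABLE = {(0, 0): {0: F(1), 1: F(1)}, (0, 1): {1: F(1, 2)},
         (2, 2): {2: F(1), 3: F(1)}, (2, 3): {3: F(1, 2)}}

def mul(x, y):
    out = [F(0)] * 4
    for i in range(4):
        for j in range(4):
            for k, c in TABLE.get((min(i, j), max(i, j)), {}).items():
                out[k] += c * x[i] * y[j]
    return out

e, t = [F(1), 0, 0, 0], [0, F(1), 0, 0]
u, r = [0, 0, F(1), 0], [0, 0, 0, F(1)]

def scaled(lam, v):
    return [lam * c for c in v]

# Peirce components relative to the pseudo-idempotent e:
one = [F(1), F(2), 0, 0]                       # e + 2t spans A_e(1)
for lam, v in [(F(1), one), (F(1, 2), t), (F(0), u), (F(0), r)]:
    assert mul(e, v) == scaled(lam, v)

# A_e(1)A_e(1) lands in A_e(1) + L_e(1/2), as the lemma predicts:
assert mul(one, one) == [a + b for a, b in zip(one, t)]  # (e+2t)^2 = (e+2t)+t
print("A_e(1)=K(e+2t), A_e(1/2)=Kt, A_e(0)=<u,r>")
```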
\[d(t)\] Let $A$ be a Lie triple algebra and $e$ a pseudo-idempotent of $A$: $e^2=e+t,\; et=\frac12 t,\; t^2=0$ with $t\in L$. For every derivation $d$ of $A$, we have $$d(t)=0\text{ and } d(e)\in A_e(1/2).$$ Let $d\in Der_K(A)$. Since $e^2=e+t$, we have $2ed(e)=d(e)+d(t)$. Setting $d(e)=[d(e)]_1+[d(e)]_{1/2}+[d(e)]_0$, we have $d(t)=[d(e)]_1-[d(e)]_0$. From $2et=t$, we deduce $2ed(t)+2d(e)t=d(t)$, so that $2d(e)t=-[d(e)]_1-[d(e)]_0$. We know that $t\in L_e({1/2})$ and $L_e({1/2})$ is an ideal of $A$. It follows that $[d(e)]_1=[d(e)]_0=0$. \[Der pseudo\] Every derivation $d$ of $A$ is uniquely determined by a quadruple $(d(e),f_d,g_d,h_d)$ with $f_d\in End_K(A_e(1/2))$, $g_d\in End_K(A_e(0))$ and $h_d\in End_K(A_e(1))$ satisfying the following conditions: $d(e)\in A_e(1/2)$ ; $d(x_1)=h_d(x_1)+2d(e)x_1$ ; $d(x_{1/2})=f_d(x_{1/2})+2(d(e)x_{1/2})_0-2(d(e)x_{1/2})_1$ ; $d(x_0)=g_d(x_0)-2d(e)x_0$ ; $h_d((x_1y_1)_1)=[h_d(x_1)y_1+x_1h_d(y_1)]_{1}$ ;\ $f_d((x_1y_1)_{1/2})=[h_d(x_1)y_1+x_1h_d(y_1)]_{1/2}=2h_d((x_1y_1)_1)t$ ; $g_d((x_0y_0)_0)=[g_d(x_0)y_0+x_0g_d(y_0)]_{0}$ ;\ $f_d((x_0y_0)_{1/2})=[g_d(x_0)y_0+x_0g_d(y_0)]_{1/2}=2g_d((x_0y_0)_0)t$ ; $h_d((x_{1/2}y_{1/2})_1)=[f_d(x_{1/2})y_{1/2}+x_{1/2}f_d(y_{1/2})]_1$ ;\ $g_d((x_{1/2}y_{1/2})_0)=[f_d(x_{1/2})y_{1/2}+x_{1/2}f_d(y_{1/2})]_0$ ; $f_d(x_1x_0)=h_d(x_1)x_0+x_1g_d(x_0)$ ; $f_d(x_1x_{1/2})=h_d(x_1)x_{1/2}+x_1f_d(x_{1/2})$ ; $f_d(x_0x_{1/2})=g_d(x_0)x_{1/2}+x_0f_d(x_{1/2}).$ Let $d$ be a derivation of $A$ and $e$ a pseudo-idempotent of $A$. Since $d(e)\in A_e(1/2)$, we have $(i)$. Let $x_1\in A_e(1)$. We have $ex_1=x_1$, and then $d(e)x_1+ed(x_1)=d(x_1)$. Set $d(x_1)=a_1+a_{1/2}+a_0$. Then $d(e)x_1+a_1+\frac 12 a_{1/2}=a_1+a_{1/2}+a_0$, so $a_{1/2}=2d(e)x_1$ and $a_0=0$. Hence $d(x_1)=h_d(x_1)+2d(e)x_1$ with $h_d$ an endomorphism of $A_e(1)$, and $(ii)$ is proved. By similar calculations we have $(iii)$ and $(iv)$. 
With $x_1$, $y_1\in A_e(1)$, we have $(x_1y_1)_{1/2}=4x_1(y_1t)=4y_1(x_1t)=2(x_1y_1)t$, so that $$\begin{aligned} d(x_1y_1) &=& d[(x_1y_1)_1]+d[(x_1y_1)_{1/2}] =d[(x_1y_1)_1]+2d((x_1y_1)t)\\ &=&h_d[(x_1y_1)_1]+2d(e)(x_1y_1)+2d((x_1y_1))t+2(x_1y_1)d(t)\\ &=&h_d[(x_1y_1)_1]+2h_d(x_1y_1)t+2d(e)(x_1y_1),\end{aligned}$$ because $d((x_1y_1))t=h_d(x_1y_1)t$. We also have $$\begin{aligned} d(x_1y_1) &=& d(x_1)y_1+x_1d(y_1)\\ &=& x_1[h_d(y_1)+2d(e)y_1]+[h_d(x_1)+2d(e)x_1]y_1\\ &=& h_d(x_1)y_1+x_1h_d(y_1)+2[d(e)y_1]x_1+2[d(e)x_1]y_1\\ &=& h_d(x_1)y_1+x_1h_d(y_1)+2d(e)(x_1y_1).\end{aligned}$$ It follows that $$h_d((x_1y_1)_1)=[h_d(x_1)y_1+x_1h_d(y_1)]_{1} \text{ and }$$ $$f_d((x_1y_1)_{1/2})=[h_d(x_1)y_1+x_1h_d(y_1)]_{1/2}=2h_d(x_1y_1)t, \textrm{ and we have } (v).$$ We show by similar calculations that $$g_d((x_0y_0)_0)=[g_d(x_0)y_0+x_0g_d(y_0)]_{0}\text{ and }$$ $$f_d((x_0y_0)_{1/2})=[g_d(x_0)y_0+x_0g_d(y_0)]_{1/2}=2g_d(x_0y_0)t, \textrm{ and we have } (vi).$$ Let $x_{1/2}, y_{1/2}\in A_e(1/2)$. We have $$\begin{aligned} d(x_{1/2}y_{1/2}) &=& d((x_{1/2}y_{1/2})_1) +d((x_{1/2}y_{1/2})_0)\\ &=& h_d((x_{1/2}y_{1/2})_1)+2d(e)(x_{1/2}y_{1/2})_1\\ & &+g_d((x_{1/2}y_{1/2})_0)-2d(e)(x_{1/2}y_{1/2})_0. \end{aligned}$$ But $$\begin{aligned} d(x_{1/2}y_{1/2})&=& d(x_{1/2})y_{1/2}+x_{1/2}d(y_{1/2}) \\ &=& (f_d(x_{1/2})+2[d(e)x_{1/2}]_0-2[d(e)x_{1/2}]_1)y_{1/2} \\ & & +x_{1/2}(f_d(y_{1/2})+2[d(e)y_{1/2}]_0-2[d(e)y_{1/2}]_1)\\ &=&f_d(x_{1/2})y_{1/2}+2y_{1/2}[d(e)x_{1/2}]_0-2y_{1/2}[d(e)x_{1/2}]_1\\ & &+x_{1/2}f_d(y_{1/2})+2x_{1/2}[d(e)y_{1/2}]_0-2x_{1/2}[d(e)y_{1/2}]_1\\ &=&f_d(x_{1/2})y_{1/2}+x_{1/2}f_d(y_{1/2})+2d(e)(x_{1/2}y_{1/2})_1\\ & &-2d(e)(x_{1/2}y_{1/2})_0 \end{aligned}$$ because of identity $(x)$ of Lemma \[decomp2\]. 
It follows that $$\begin{aligned} d(x_{1/2}y_{1/2}) &=&f_d(x_{1/2})y_{1/2}+x_{1/2}f_d(y_{1/2})+2d(e)(x_{1/2}y_{1/2})_1-2d(e)(x_{1/2}y_{1/2})_0\\\end{aligned}$$ and we have $$h_d((x_{1/2}y_{1/2})_1)=[f_d(x_{1/2})y_{1/2}+x_{1/2}f_d(y_{1/2})]_1\text{ and }$$ $$g_d((x_{1/2}y_{1/2})_0)=[f_d(x_{1/2})y_{1/2}+x_{1/2}f_d(y_{1/2})]_0, \textrm{ and we have } (vii).$$ We have $$\begin{aligned} d(x_1x_0) &=& f_d((x_1x_0)_{1/2}) \\ &=& d(x_1)x_0+x_1d(x_0) \\ &=& [h_d(x_1)+2d(e)x_1]x_0+x_1[g_d(x_0)-2d(e)x_0]\\ &=& h_d(x_1)x_0+x_1g_d(x_0),\end{aligned}$$ $$f_d((x_1x_0)_{1/2})=h_d(x_1)x_0+x_1g_d(x_0), \textrm{ and we have } (viii).$$ In a similar way $$\begin{aligned} d(x_1x_{1/2}) &=& f_d(x_1x_{1/2})+2[d(e)(x_1x_{1/2})]_0-2[d(e)(x_1x_{1/2})]_1 \\ &=& d(x_1)x_{1/2}+x_1d(x_{1/2}) \\ &=& [h_d(x_1)+2d(e)x_1]x_{1/2}+x_1[f_d(x_{1/2})+2[d(e)x_{1/2}]_0-2[d(e)x_{1/2}]_1]\\ &=& h_d(x_1)x_{1/2}+x_1f_d(x_{1/2})+2[d(e)x_1]x_{1/2}+2x_1[d(e)x_{1/2}]_0-2x_1[d(e)x_{1/2}]_1,\end{aligned}$$ $$f_d(x_1x_{1/2})=h_d(x_1)x_{1/2}+x_1f_d(x_{1/2}), \textrm{ and we have } (ix).$$ So we have $$\begin{aligned} d(x_0x_{1/2}) &=& f_d(x_0x_{1/2}) \\ &=& d(x_0)x_{1/2}+x_0d(x_{1/2}) \\ &=& [g_d(x_0)-2d(e)x_0]x_{1/2}+x_0[f_d(x_{1/2})+2[d(e)x_{1/2}]_0-2[d(e)x_{1/2}]_1]\\ &=& g_d(x_0)x_{1/2}+x_0f_d(x_{1/2})-2[d(e)x_0]x_{1/2}+x_0[2[d(e)x_{1/2}]_0\\ & &-2[d(e)x_{1/2}]_1],\end{aligned}$$ $$f_d(x_0x_{1/2})=g_d(x_0)x_{1/2}+x_0f_d(x_{1/2}), \textrm{ and finally } (x).$$ Conversely, once we have identities $(i)$ to $(x)$, setting $x=x_1+x_{1/2}+x_0$ and $y=y_1+y_{1/2}+y_0$, we show that $d(xy)=d(x)y+xd(y)$. Let $A$ be the four-dimensional Lie triple $K$-algebra whose multiplication table in the basis $\{e,t,u,r\}$ is given by: $e^2=e+t$, $u^2=u+r$, $et=\frac12 t$, $ur=\frac12 r$, all other products being zero. The Peirce decomposition of $A$ relative to the pseudo-idempotent $e$ gives $A_e(1)=K(e+2t)$, $A_e(1/2)=Kt$, $A_e(0)=<u,r>$. Let $d$ be a derivation of $A$. 
Since $e$ and $u$ are pseudo-idempotents, we have $d(t)=d(r)=0$, $d(e)={\alpha}t$, $d(u)={\beta}r$. The derivation algebra is two-dimensional. Now consider the four-dimensional Lie triple $K$-algebra $A$ whose multiplication table in the basis $\{e,t_1,t_2,v\}$ is given by: $e^2=e+t_1$, $et_1=\frac12 t_1$, $et_2=\frac12 t_2$, $ev=v$ and $vt_1=t_2$, all other products being zero. Then we have $A_e(1)=<(e+2t_1), v>$, $A_e(1/2)=<t_1, t_2>$, $A_e(0)=0$. Let $d$ be a derivation of $A$. We have $d(t_1)=0.$ Set $d(e)= {\alpha}_1 t_1 +{\beta}_1 t_2.$ Thus ${\alpha}_1 t_1 +{\beta}_1 t_2=d(e+2t_1)=h_d(e+2t_1)+2d(e)(e+2t_1)=h_d(e+2t_1)+{\alpha}_1 t_1 +{\beta}_1t_2.$ It follows that $h_d(e+2t_1)=0.$ Setting $d(v)={\alpha}_2(e+2t_1)+{\beta}_2 v+{\gamma}_2 t_1+\eta_2 t_2$, the relation $0=d(v^2)=2d(v)v$ gives ${\alpha}_2={\gamma}_2=0.$ It follows that $h_d(v)={\beta}_2 v.$ Furthermore, the relation $f_d((vt_1)_{1/2})=h_d(v)t_1+vf_d(t_1)$ gives $f_d(t_2)={\beta}_2 t_2.$ We have $d(e)={\alpha}_1t_1+{\beta}_1 t_2, h_d(e+2t_1)=0, h_d(v)={\beta}_2 v, f_d(t_1)=0, f_d(t_2)={\beta}_2 t_2.$ The derivation algebra is three-dimensional. Now consider a pseudo-idempotent $e\ne0$. The subspace $J_e(1/2)=\{x_{1/2}\in A_e(1/2) \mid x_{1/2}A_e(1/2)=0\}$ is a characteristic ideal and $A/J_e(1/2)$ is a Lie triple algebra with $\overline{e}$ as idempotent. Let $x_{1/2}\in J_e(1/2)$, $a_{1/2} \in A_e(1/2)$ and $y_{\lambda}\in A_e({\lambda})$ (${\lambda}=0,1$). We have $[(x_{1/2}y_{\lambda})a_{1/2}]_{\lambda}=[(a_{1/2}y_{\lambda})x_{1/2}]_{\lambda}=0$ and $[(x_{1/2}y_{\lambda})a_{1/2}]_{1-{\lambda}}=[(a_{1/2}y_{\lambda})x_{1/2}]_{1-{\lambda}}=0$, and then $(x_{1/2}y_{\lambda})a_{1/2}=0$. Hence $A_e({\lambda}) J_e(1/2)\subseteq J_e(1/2)$, and it follows that $A J_e(1/2)\subseteq J_e(1/2)$: $J_e(1/2)$ is an ideal of $A$. Since $t\in L_{1/2}\subseteq J_e(1/2)$, $\overline{e}$ is an idempotent of the quotient algebra $A/J_e(1/2)$. Now let $d\in Der_K(A)$, $x_{1/2}\in J_e(1/2)$ and $a_{1/2} \in A_e(1/2)$. 
We have $0=d(x_{1/2}a_{1/2})=x_{1/2}d(a_{1/2})+d(x_{1/2})a_{1/2}$. Since $d(x_{1/2})=f_d(x_{1/2})\in A_{1/2}$ and $d(a_{1/2})=f_d(a_{1/2})+2(d(e)a_{1/2})_0-2(d(e)a_{1/2})_1,$ it follows that $x_{1/2} d(a_{1/2})=0$ because $x_{1/2}(d(e)a_{1/2})_1=x_{1/2}(d(e)a_{1/2})_0$. So $d(x_{1/2})a_{1/2}=0$, and $d(x_{1/2})\in J_e(1/2)$. We conclude that $d(J_e(1/2))\subseteq J_e(1/2)$. Dimensionally nilpotent Lie triple algebras =========================================== Let $A$ be a $K$-algebra of finite dimension $n+1$. If there is a nilpotent $K$-derivation $d$ of $A$ such that $d^{n+1}=0$ and $d^{n}\ne 0$, then $d$ is said to be dimensionally nilpotent, and so is the algebra $A$, though $A$ is not necessarily nilpotent. In that case there is a basis $\{e_0, e_1,\ldots, e_n\}$ of $A$ such that $d(e_i)= e_{i+1}$ $(i=0,\ldots,n-1)$ and $d(e_n)= 0$; the basis $\{e_0, e_1,\ldots, e_n\}$ is said to be *adapted* to $d$. [@Micali Exemple 2.5]\[gametique\] Let $K$ be a commutative field of characteristic $\ne2$ and $A = G(n+1,2)$ the gametic diploid algebra with $n+1$ alleles. Its multiplication table in the natural basis $\{a_0,\ldots,a_n\}$ is given by $a_ia_j=\frac12 a_i+\frac12 a_j$. We know that the mapping ${\omega}:A\rightarrow K, a_i\mapsto 1$ is a weight function, and if we set $e_i=a_0-a_i$ $(i\ne0)$ then $\{e_1,\ldots, e_n\}$ is a basis of the ideal $N = \ker{\omega}$ and $e_0=a_0$ is an idempotent of $A$ such that $\{e_0, e_1,\ldots, e_n\}$ is the canonical basis of $A$, so $e_0e_i =\frac12 e_i$ $(i = 1,\ldots,n)$ and, if $d$ is a derivation of $A$, $e_0d(e_i) = \frac12 d(e_i)$ $(i = 1,\ldots,n)$, because $d(e_0) \in N$ and $N$ is a zero algebra. So we just need to define $d : A \rightarrow A$ by $d(e_i) = e_{i+1}$ $(i = 1,\ldots,n-1)$, $d(e_0) = e_1$ and $d(e_n) = 0$. It follows that $d^{n+1} = 0$ and $d^n\ne 0$, showing that the gametic algebra $A =G(n+1,2)$ is dimensionally nilpotent. 
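The construction in this example is easy to machine-check. Here is a small Python sketch (illustrative only, with $n=5$), using the canonical-basis multiplication $e_0^2=e_0$, $e_0e_i=\frac12 e_i$, $e_ie_j=0$ derived above and the shift map $d$:

```python
from fractions import Fraction

n = 5  # dim A = n + 1; canonical basis e_0, ..., e_n of G(n+1, 2)
half = Fraction(1, 2)

def mul(x, y):
    # e_0^2 = e_0, e_0 e_i = e_i/2 (i >= 1), e_i e_j = 0 on N = <e_1,...,e_n>
    return [x[0] * y[0]] + [half * (x[0] * y[i] + x[i] * y[0])
                            for i in range(1, n + 1)]

def d(x):
    # the shift map: d(e_i) = e_{i+1} (i < n), d(e_n) = 0
    return [Fraction(0)] + [x[i] for i in range(n)]

def basis(i):
    return [Fraction(int(j == i)) for j in range(n + 1)]

# d is a derivation: d(xy) = d(x)y + x d(y) on all basis pairs
for i in range(n + 1):
    for j in range(n + 1):
        x, y = basis(i), basis(j)
        lhs = d(mul(x, y))
        rhs = [a + b for a, b in zip(mul(d(x), y), mul(x, d(y)))]
        assert lhs == rhs

# d^n != 0 but d^{n+1} = 0: the algebra is dimensionally nilpotent
v = basis(0)
for _ in range(n):
    v = d(v)
assert v == basis(n)                        # d^n(e_0) = e_n
assert d(v) == [Fraction(0)] * (n + 1)      # d^{n+1}(e_0) = 0
print("G(6,2) is dimensionally nilpotent")
```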
Let $K$ be a commutative field of characteristic $\ne 2$ and $A$ the $(n+1)$-dimensional commutative $K$-algebra whose multiplication table in the basis $\{e_0,e_1,\ldots,e_n\}$ is given by $e_0e_i=\frac12 e_i$ $(i=1,\ldots,n)$, $e_0^2=e_0+e_n$, all other products being zero. If $d$ is a derivation of $A$, $e_0d(e_i)=\frac12 d(e_i)$ $(i=1,\ldots,n)$ because $d(e_0)\in N=<e_1,\ldots,e_n>$ and $N$ is a zero algebra. Here again we just need to define $d:A\rightarrow A$ by $d(e_i)=e_{i+1}$ $(i=1,\ldots,n-1)$, $d(e_0) = e_1$ and $d(e_n) = 0$. We have $d^{n+1} = 0$ and $d^n\ne 0$, which shows that the algebra $A$ is dimensionally nilpotent. Since $Ke_n$ is an ideal, the quotient algebra $A/Ke_n$ is isomorphic to $G(n,2)$. Basic tools ----------- \[osb\] Let $K$ be a perfect field of characteristic $\ne2$ and $3$ and $A$ a finite-dimensional dimensionally nilpotent Jordan $K$-algebra. Then either $A$ is nilpotent or $\dim_K(A/rad(A))= 1$. Let $A$ be a dimensionally nilpotent Lie triple algebra which is not a nil algebra. Because of Theorem \[idemp\] we consider two cases: - $A$ has an idempotent $e$. Since the ideal $J$ is characteristic, the quotient algebra $\overline{A}=A/J$ is a dimensionally nilpotent Jordan algebra. Because of Theorem \[osb\] we have $\dim_K(\overline{A}/rad(\overline{A}))=1$ and, since $rad(\overline{A})\simeq rad(A)/J$, the first isomorphism theorem gives $A/rad(A)\simeq \overline{A}/rad(\overline{A})$ and $\dim_KA/rad(A)=1$. Then we can write $A=Ke\oplus N$, with $N=rad(A)$. - $A$ has a pseudo-idempotent $e$. Since the ideal $J_e(1/2)$ is characteristic, the quotient algebra $\overline{A}=A/J_e(1/2)$ is a dimensionally nilpotent Lie triple algebra with $\overline{e}$ as idempotent. Because of $1)$ we can write $\overline{A}=K\overline{e}\oplus \overline{N}$ with $\overline{N}=rad(\overline{A})$. So we have $A=Ke\oplus N$ with $N=rad(A)$. \[lem1\] Let $x$, $y \in N$ with $x\neq 0$ and ${\alpha}\in K$. If $xy={\alpha}y$ then ${\alpha}=0$ or $y=0$. 
Since $N$ is nilpotent, there is $m\in {\mathbb{N}}^{*}$ such that $L_x^m(y)={\alpha}^m y=0$, $L_x$ being the operator of multiplication by $x$. Then ${\alpha}=0$ or $y=0$. From now on, throughout the paper, $A$ is a dimensionally nilpotent non-nil Lie triple algebra of dimension $n+1$, with $\{e_0, e_1,\ldots, e_n\}$ a basis adapted to the derivation $d$. We can consider $e_0$ either as an idempotent or as a pseudo-idempotent. In the latter case, $e_0^2=e_0+t$, $e_0t=\frac{1}{2}t$ and $t^2=0$ imply $d(t)=0$ (Lemma \[d(t)\]), which means that $t={\alpha}e_n$ with ${\alpha}\in K$. Since $t\in A_e(1/2)$, if ${\alpha}\ne0$ then $e_n\in A_e(1/2)$. \[lem2\] We have: $(i)$ $e_0e_n={\lambda}_n e_n$ $(ii)$ $e_ke_n=0$ with $1\leq k \leq n$. Write $e_0e_n=\sum^n_{i=0} {\lambda}_i e_i$. Applying $d$ successively $k$ times, we get $e_ke_n=\sum^{n-k}_{i=0} {\lambda}_i e_{i+k}$. With $k=n$, it follows that $e_n^2={\lambda}_0 e_n$ and, because of Lemma \[lem1\], ${\lambda}_0=0$. For $k=n-1$, one has $e_{n-1}e_n={\lambda}_1e_n$, which implies ${\lambda}_1=0$. Continuing in this way, we get ${\lambda}_0={\lambda}_1=\cdots={\lambda}_{n-1}=0$, so $e_0e_n={\lambda}_ne_n$. Applying $d$ successively to $e_0e_n$, it follows that $e_ke_n=0$ for $1\leq k \leq n$. \[lem3\] We have: - $e_0e_k={\lambda}_ke_k+\sum^n_{i=k+1} a_{k,i} e_i$ with $1\leq k \leq n-1$; - $e_ie_k=\sum^n_{j=k+1} {\gamma}_{ikj} e_j$ with $1\leq i\leq k\leq n-1$; - ${\lambda}_k \in \{0, \frac{1}{2}, 1\}$ with $1\leq k \leq n$. We argue by induction on $n$. For $n=1$ the multiplication table of the algebra $A$ is given by $e_0^2=e_0$, $e_0e_1=\frac12 e_1$, $e_1^2=0$ and the lemma is satisfied. Assume the lemma is true up to order $n$. Because of Lemma \[lem2\] the subspace $I_{n+1}=Ke_{n+1}$ is a $d$-invariant ideal of $A$. The quotient algebra $A/I_{n+1}$ is dimensionally nilpotent of dimension $n+1$. 
By the induction hypothesis, we have $\overline{e}_0\overline{e}_k={\lambda}_k \overline{e}_k+\sum_{i=k+1}^na_{k,i}\overline{e}_i$ and $\overline{e}_i\overline{e}_k=\sum_{j=k+1}^n{\gamma}_{ikj}\overline{e}_j$, with $1\leq i\leq k\leq n$. On the other hand, $e_0e_k={\lambda}_k e_k+\sum_{i=k+1}^na_{k,i}e_i+a_{k,n+1}e_{n+1}$ and $e_ie_k=\sum_{j=k+1}^n {\gamma}_{ikj}e_j+{\gamma}_{ik,n+1}e_{n+1}$; results $(i)$ and $(ii)$ follow. Now we just need to show $(iii)$. Since $2L_{e_0}^3-3L_{e_0}^2+L_{e_0}=0$, where $L_{e_0}$ is the operator of multiplication by $e_0$, applying it to $e_k$ we have $2{\lambda}_k^3-3{\lambda}_k^2+{\lambda}_k=0$, that is, ${\lambda}_k \in \{0, \frac{1}{2}, 1\}$. Examples in low dimensions {#Sect41} -------------------------- Here we deal with the cases $1\leq n \leq 4$. Let $A$ be a dimensionally nilpotent Lie triple algebra of dimension $n+1$ and $\{e_0, e_1,\ldots, e_n\}$ a basis adapted to $d$. We have $\ker d= Ke_n$. Since $e_0$ is an idempotent or a pseudo-idempotent, $e_1=d(e_0)\in A_e(1/2)$, i.e. $e_0e_1=\frac{1}{2}e_1$. Applying $d$, we get $e_0e_2+e_1^2=\frac{1}{2}e_2$, which means $$\begin{aligned} \label{Eq2} {\lambda}_2+{\gamma}_{112}=\frac{1}{2} \textrm{ and } a_{2,k}+{\gamma}_{11k}=0 \quad (3\leq k\leq n).\end{aligned}$$ We also have $e_1^2=\sum_{k=2}^{n}{\gamma}_{11k}e_k$, whose derivative is $2 e_1e_2=\sum_{k=2}^{n-1}{\gamma}_{11k}e_{k+1}=\sum_{k=3}^{n}{\gamma}_{11,k-1}e_{k}$, which means $2{\gamma}_{12k}={\gamma}_{11,k-1}$ $(3\leq k \leq n)$. Now apply $d$ a second time to $e_0e_1=\frac{1}{2}e_1$. 
We get $e_0e_3+3e_1e_2=\frac{1}{2}e_3$, which means $$\begin{aligned} \label{Eq3} {\lambda}_3+3{\gamma}_{123}=\frac{1}{2} \textrm{ and } a_{3,k}+3{\gamma}_{1,2,k}=0\quad (4\leq k\leq n).\end{aligned}$$ On the other hand, $d(e_0e_2)=e_0e_3+e_1e_2={\lambda}_2e_3+\sum_{k=4}^n a_{2,k-1}e_k$, which implies $$\begin{aligned} \label{Eq4} {\lambda}_3+{\gamma}_{123}={\lambda}_2 \textrm{ and } a_{3,k}+{\gamma}_{12k}=a_{2,k-1}\quad (4\leq k\leq n).\end{aligned}$$ Now $(\ref{Eq4})$ implies ${\gamma}_{123}\in \{-1,-\frac{1}{2},0,\frac{1}{2},1\}$, so it is necessary to take ${\lambda}_3=\frac{1}{2}$ in $(\ref{Eq3})$. Whence ${\lambda}_3={\lambda}_2=\frac{1}{2}$ and ${\gamma}_{123}={\gamma}_{112}=0$ if $n\geq 3$. Applying $d$ to $e_0e_3+3e_1e_2=\frac{1}{2}e_3$, one has $e_0e_4+4e_1e_3+3e_2^2=\frac{1}{2}e_4$, which means $$\begin{aligned} \label{Eq5} {\lambda}_{4}+4{\gamma}_{134}+3{\gamma}_{224}=\frac{1}{2}.\end{aligned}$$ On the other hand, $d(e_0e_3)=e_0e_4+e_1e_3={\lambda}_3e_4+\sum_{k=5}^na_{3,k-1}e_k$, which implies $$\begin{aligned} \label{Eq6} {\lambda}_4+{\gamma}_{134}={\lambda}_3 \textrm{ and } a_{4,k}+{\gamma}_{13k}=a_{3,k-1}\quad (5\leq k\leq n).\end{aligned}$$ We also have $e_1e_2=\sum_{k=4}^n{\gamma}_{12k}e_k$ because ${\gamma}_{123}=0$, and $d(e_1e_2)=e_1e_3+e_2^2=\sum_{k=5}^n{\gamma}_{12,k-1}e_k$, which implies ${\gamma}_{223}=0$ and ${\gamma}_{134}+{\gamma}_{224}=0$. **Case $\dim_KA=2$, i.e. $n=1$.** We obviously have $e_0e_1=\frac{1}{2}e_1$, $e_1^2=0$, and $e_0^2=e_0$ or $e_0^2=e_0+e_1$, all other products being zero.

|       | $e_0$ | $e_1$            |
|-------|-------|------------------|
| $e_0$ | $e_0$ | $\frac{1}{2}e_1$ |
| $e_1$ |       | $0$              |

|       | $e_0$     | $e_1$            |
|-------|-----------|------------------|
| $e_0$ | $e_0+e_1$ | $\frac{1}{2}e_1$ |
| $e_1$ |           | $0$              |

**Case $\dim_KA=3$, i.e. $n=2$.** Because of $(\ref{Eq2})$ we have ${\lambda}_2+{\gamma}_{112}=\frac{1}{2}$. Let us discuss the possible values of ${\lambda}_2$. 
$\ast$ ${\lambda}_2=0$ $\Rightarrow$ ${\gamma}_{112}=\frac{1}{2}$, so $e_0^2=e_0$, $e_0e_1=\frac{1}{2}e_1$, $e_1^2=\frac{1}{2}e_2$, all other products being zero.

|       | $e_0$ | $e_1$            | $e_2$ |
|-------|-------|------------------|-------|
| $e_0$ | $e_0$ | $\frac{1}{2}e_1$ | $0$   |
| $e_1$ |       | $\frac{1}{2}e_2$ | $0$   |
| $e_2$ |       |                  | $0$   |

$\ast$ ${\lambda}_2=\frac{1}{2}$ $\Rightarrow$ ${\gamma}_{112}=0$, so $e_0^2=e_0$ or $e_0^2=e_0+e_2$, $e_0e_1=\frac{1}{2}e_1$, $e_0e_2=\frac{1}{2}e_2$, all other products being zero.

|       | $e_0$ | $e_1$            | $e_2$            |
|-------|-------|------------------|------------------|
| $e_0$ | $e_0$ | $\frac{1}{2}e_1$ | $\frac{1}{2}e_2$ |
| $e_1$ |       | $0$              | $0$              |
| $e_2$ |       |                  | $0$              |

|       | $e_0$     | $e_1$            | $e_2$            |
|-------|-----------|------------------|------------------|
| $e_0$ | $e_0+e_2$ | $\frac{1}{2}e_1$ | $\frac{1}{2}e_2$ |
| $e_1$ |           | $0$              | $0$              |
| $e_2$ |           |                  | $0$              |

$\ast$ ${\lambda}_2=1$ $\Rightarrow$ ${\gamma}_{112}=-\frac{1}{2}$, so $e_0^2=e_0$, $e_0e_1=\frac{1}{2}e_1$, $e_0e_2=e_2$, $e_1^2=-\frac{1}{2}e_2$, all other products being zero.

|       | $e_0$ | $e_1$             | $e_2$ |
|-------|-------|-------------------|-------|
| $e_0$ | $e_0$ | $\frac{1}{2}e_1$  | $e_2$ |
| $e_1$ |       | $-\frac{1}{2}e_2$ | $0$   |
| $e_2$ |       |                   | $0$   |

**Case $\dim_KA=4$, i.e. $n=3$.** Because of the preliminary calculations, ${\lambda}_3={\lambda}_2={\lambda}_1=\frac{1}{2}$, ${\gamma}_{112}={\gamma}_{123}=0$ and $a_{2,3}+{\gamma}_{113}=0$. So $e_0e_3=\frac{1}{2}e_3$ $\Rightarrow$ $e_3\in A_{\frac{1}{2}}$ and $e_1^2={\gamma}_{113}e_3$. Since $A_{\frac{1}{2}}^2\subseteq A_{0}+A_{1}$ we have ${\gamma}_{113}=0$, implying $a_{2,3}=0$ and finally $e_0e_2=\frac{1}{2}e_2$. So we have the following multiplication table: $e_0e_1=\frac{1}{2}e_1$, $e_0e_2=\frac{1}{2}e_2$, $e_0e_3=\frac{1}{2}e_3$, $e_0^2=e_0$ or $e_0^2=e_0+e_3$, all other products being zero. 
|       | $e_0$ | $e_1$            | $e_2$            | $e_3$            |
|-------|-------|------------------|------------------|------------------|
| $e_0$ | $e_0$ | $\frac{1}{2}e_1$ | $\frac{1}{2}e_2$ | $\frac{1}{2}e_3$ |
| $e_1$ |       | $0$              | $0$              | $0$              |
| $e_2$ |       |                  | $0$              | $0$              |
| $e_3$ |       |                  |                  | $0$              |

|       | $e_0$     | $e_1$            | $e_2$            | $e_3$            |
|-------|-----------|------------------|------------------|------------------|
| $e_0$ | $e_0+e_3$ | $\frac{1}{2}e_1$ | $\frac{1}{2}e_2$ | $\frac{1}{2}e_3$ |
| $e_1$ |           | $0$              | $0$              | $0$              |
| $e_2$ |           |                  | $0$              | $0$              |
| $e_3$ |           |                  |                  | $0$              |

**Case $\dim_KA=5$, i.e. $n=4$.** $\ast$ ${\lambda}_4=0$ $\Rightarrow$ ${\gamma}_{134}=\frac{1}{2}$ because of $(\ref{Eq6})$. Since $e_1e_3={\gamma}_{134}e_4=\frac{1}{2}e_4$, we have $e_3\in A_{\frac{1}{2}}$ because $A_{\frac{1}{2}}^2\subseteq A_{0}+A_{1}$, $A_{\frac{1}{2}}A_{1}\subseteq A_{\frac{1}{2}}$, $A_{\frac{1}{2}}A_{0}\subseteq A_{\frac{1}{2}}$. So $e_0e_3=\frac{1}{2}e_3$ and $a_{3,4}=0$ $\Rightarrow$ ${\gamma}_{124}=0$ because of $(\ref{Eq4})$, and finally ${\gamma}_{113}=a_{2,3}=0$. In the same way $e_2^2=-\frac{1}{2}e_4$ $\Rightarrow$ $e_2\in A_e(0)$ or $e_2\in A_{\frac{1}{2}}$, because $A_e(1/2)^2\subseteq A_e(0)+A_e(1)$ and $A_e(0)^2\subseteq A_e(0)$. But $e_0e_2=\frac{1}{2}e_2+a_{2,4}e_4$ $\Rightarrow$ $e_2\in A_{\frac{1}{2}}$, so $a_{2,4}=0={\gamma}_{114}$ because of $(\ref{Eq2})$. Whence the following multiplication table:

|       | $e_0$ | $e_1$            | $e_2$             | $e_3$            | $e_4$ |
|-------|-------|------------------|-------------------|------------------|-------|
| $e_0$ | $e_0$ | $\frac{1}{2}e_1$ | $\frac{1}{2}e_2$  | $\frac{1}{2}e_3$ | $0$   |
| $e_1$ |       | $0$              | $0$               | $\frac{1}{2}e_4$ | $0$   |
| $e_2$ |       |                  | $-\frac{1}{2}e_4$ | $0$              | $0$   |
| $e_3$ |       |                  |                   | $0$              | $0$   |
| $e_4$ |       |                  |                   |                  | $0$   |

$\ast$ ${\lambda}_4=\frac{1}{2}$ $\Rightarrow$ ${\gamma}_{134}={\gamma}_{224}=0$ because of $(\ref{Eq5})$ and $(\ref{Eq6})$. If ${\gamma}_{124}\ne0$, then $e_1e_2={\gamma}_{124}e_4\in A_{\frac{1}{2}}$ would force $e_2\in A_e(0)$ or $e_2\in A_e(1)$, which contradicts $e_0e_2=\frac{1}{2}e_2+a_{2,3}e_3$; so ${\gamma}_{124}=0$ and then $a_{3,4}={\gamma}_{113}=0$. 
Whence the following multiplication tables:

          $e_0$   $e_1$              $e_2$              $e_3$              $e_4$
  ------- ------- ------------------ ------------------ ------------------ ------------------
  $e_0$   $e_0$   $\frac{1}{2}e_1$   $\frac{1}{2}e_2$   $\frac{1}{2}e_3$   $\frac{1}{2}e_4$
  $e_1$           $0$                $0$                $0$                $0$
  $e_2$                              $0$                $0$                $0$
  $e_3$                                                 $0$                $0$
  $e_4$                                                                    $0$

          $e_0$       $e_1$              $e_2$              $e_3$              $e_4$
  ------- ----------- ------------------ ------------------ ------------------ ------------------
  $e_0$   $e_0+e_4$   $\frac{1}{2}e_1$   $\frac{1}{2}e_2$   $\frac{1}{2}e_3$   $\frac{1}{2}e_4$
  $e_1$               $0$                $0$                $0$                $0$
  $e_2$                                  $0$                $0$                $0$
  $e_3$                                                     $0$                $0$
  $e_4$                                                                        $0$

$\ast$ ${\lambda}_4=1$ $\Rightarrow$ $e_4\in A_e(1)$ $\Rightarrow$ ${\gamma}_{134}=-\frac{1}{2}$. We have $e_1e_3=-\frac{1}{2}e_4$ $\Rightarrow$ $e_3\in A_{\frac{1}{2}}$ $\Rightarrow$ $e_0e_3=\frac{1}{2}e_3$ $\Rightarrow$ $a_{3,4}=0$. So ${\gamma}_{124}={\gamma}_{113}=a_{2,3}=0$. In the same way, $e_2^2=\frac{1}{2}e_4$ $\Rightarrow$ $e_2\in A_{\frac{1}{2}}$ (because $e_2$ cannot be in $A_e(1)$), and $e_0e_2=\frac{1}{2}e_2$ $\Rightarrow$ $a_{2,4}={\gamma}_{114}=0$. Whence the following table:

          $e_0$   $e_1$              $e_2$              $e_3$               $e_4$
  ------- ------- ------------------ ------------------ ------------------- -------
  $e_0$   $e_0$   $\frac{1}{2}e_1$   $\frac{1}{2}e_2$   $\frac{1}{2}e_3$    $e_4$
  $e_1$           $0$                $0$                $-\frac{1}{2}e_4$   $0$
  $e_2$                              $\frac{1}{2}e_4$   $0$                 $0$
  $e_3$                                                 $0$                 $0$
  $e_4$                                                                     $0$

Main results in general case
----------------------------

\[2TP\] Let $A$ be a dimensionally nilpotent Lie triple non-nilalgebra, and let $\{e_0,e_1, \ldots, e_n\}$ be an adapted basis of $A$. Then:

$1°)$ If $n=2p+1$, the multiplication table of $A$ is one of the following two:

- $(i)$ $e_0^2=e_0$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p+1)$, all other products being zero.

- $(i')$ $e_0^2=e_0+e_{2p+1}$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p+1)$, all other products being zero.

$2°)$ If $n=2p$, the multiplication table of $A$ is one of the following four:

- $(i)$ $e_0^2=e_0$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p)$, all other products being zero.
- $(i')$ $e_0^2=e_0+e_{2p}$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p)$, all other products being zero.

- $(ii)$ $e_0^2=e_0$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p-1)$, $e_0e_{2p}=0$, $e_ie_{2p-i}=\frac{1}{2}(-1)^{i-1}e_{2p}$ $(1\leq i \leq p)$, all other products being zero.

- $(iii)$ $e_0^2=e_0$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p-1)$, $e_0e_{2p}=e_{2p}$, $e_ie_{2p-i}=\frac{1}{2}(-1)^{i}e_{2p}$ $(1\leq i \leq p)$, all other products being zero.

We argue by induction on $n$. Subsection \[Sect41\] shows that the theorem is true when $n\leq 4$. Assume it holds up to some order $n>4$; let us show it remains true for $n+1$. Since $n$ is either even or odd, we consider two cases.

$\mathbf{1)}$ $n=2p$ is even. The multiplication table of $A$ has the following form: $e_0e_k=\frac12 e_k + a_{k,2p+1}e_{2p+1}$ $(1\leq k\leq 2p-1)$, $e_0e_{2p}={\lambda}_{2p}e_{2p}+a_{2p,2p+1}e_{2p+1}$, $e_0e_{2p+1}={\lambda}_{2p+1}e_{2p+1}$ and $e_ie_{2p-i}=\varepsilon_ie_{2p}+{\gamma}_{i,2p-i,2p+1}e_{2p+1}$, with $\varepsilon_i=0$, $\varepsilon_i=\frac{1}{2}(-1)^{i-1}$ or $\varepsilon_i=\frac{1}{2}(-1)^{i}$ $(i=1,\ldots,p)$ according as ${\lambda}_{2p}=\frac12$, ${\lambda}_{2p}=0$ or ${\lambda}_{2p}=1$, respectively. We have $d(e_ie_{2p-i})=e_ie_{2p+1-i}+e_{i+1}e_{2p-i}=\varepsilon_i e_{2p+1}$, which gives the following system $$\begin{cases} e_1e_{2p}+e_{2}e_{2p-1}=\varepsilon_1 e_{2p+1}, & \hbox{} \\ \cdots\cdots, & \\ e_ie_{2p+1-i}+e_{i+1}e_{2p-i}=\varepsilon_i e_{2p+1}, & \hbox{}\\ \cdots\cdots, & \\ e_pe_{p+1}+e_{p+1}e_{p}=2e_pe_{p+1}=\varepsilon_p e_{2p+1}. \end{cases}\tag{S}$$ Solving from the bottom up, we see that $e_pe_{p+1}=\frac{1}{2}\varepsilon_p e_{2p+1}$, $e_{p-1}e_{p+2}=(\varepsilon_{p-1}-\frac{1}{2}\varepsilon_p)e_{2p+1}=\frac{3}{2}\varepsilon_{p-1}e_{2p+1}$, and more generally $e_{p-i}e_{p+1+i}=\frac{2i+1}{2}\varepsilon_{p-i}e_{2p+1}$; in particular, $e_1e_{2p}=\frac{2p-1}{2}\varepsilon_1 e_{2p+1}$.
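The back substitution just performed on the system $(S)$ is elementary arithmetic and easy to check by machine. The following Python sketch is purely illustrative (it is not part of the proof; the two sign conventions correspond to the cases ${\lambda}_{2p}=0$ and ${\lambda}_{2p}=1$, and the range of $p$ is arbitrary):

```python
from fractions import Fraction as F

def solve_S(p, eps):
    """Solve c_i + c_{i+1} = eps[i] (1 <= i <= p-1) and 2*c_p = eps[p],
    where c_i is the coefficient of e_{2p+1} in e_i e_{2p+1-i}."""
    c = {p: eps[p] / 2}
    for i in range(p - 1, 0, -1):   # back substitution, bottom line upwards
        c[i] = eps[i] - c[i + 1]
    return c

for sign in (+1, -1):               # lambda_{2p} = 0 resp. lambda_{2p} = 1
    for p in range(1, 15):
        eps = {i: sign * F((-1) ** (i - 1), 2) for i in range(1, p + 1)}
        c = solve_S(p, eps)
        # closed form: e_{p-i} e_{p+1+i} = ((2i+1)/2) eps_{p-i} e_{2p+1}
        for i in range(p):
            assert c[p - i] == F(2 * i + 1, 2) * eps[p - i]
        # in particular e_1 e_{2p} = ((2p-1)/2) eps_1 e_{2p+1}
        assert c[1] == F(2 * p - 1, 2) * eps[1]
```

Exact rational arithmetic via `fractions` avoids any floating-point noise in the check.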
But $d(e_0e_{2p})=e_0e_{2p+1}+e_1e_{2p}={\lambda}_{2p}e_{2p+1}$, which means $e_1e_{2p}=({\lambda}_{2p}-{\lambda}_{2p+1})e_{2p+1}$, and hence ${\lambda}_{2p}-{\lambda}_{2p+1}=\frac{2p-1}{2}\varepsilon_1$.

- If ${\lambda}_{2p}=0$, then $\varepsilon_1=\frac{1}{2}$ and ${\lambda}_{2p+1}=-\frac{2p-1}{4}\not\in \{0,\frac{1}{2},1\}$, which is impossible.

- If ${\lambda}_{2p}=1$, then $\varepsilon_1=-\frac{1}{2}$ and ${\lambda}_{2p+1}=1+\frac{2p-1}{4}=\frac{2p+3}{4}\not\in \{0,\frac{1}{2},1\}$, which is impossible.

- If ${\lambda}_{2p}=\frac{1}{2}$, then $\varepsilon_1=0$ and ${\lambda}_{2p+1}=\frac12$. Since all the ${\lambda}_k$ $(k\ne 2p+1)$ are equal to $\frac12$, applying $2L_{e_0}^3-3L_{e_0}^2+L_{e_0}=0$ to $e_k$ yields $2a_{k,2p+1}{\lambda}_{2p+1}({\lambda}_{2p+1}-1)=0$. Since ${\lambda}_{2p+1}=\frac12$, we get $a_{k,2p+1}=0$, which means $e_0e_k=\frac12 e_k$ for all $k$. Hence $e_ie_j=0$ for $1\leq i\leq j\leq n$.

$\mathbf{2)}$ $n=2p-1$ is odd. The multiplication table of $A$ has the following form: $e_0e_k=\frac12 e_k + a_{k,2p}e_{2p}$ ($k=1,\ldots,2p-1$), $e_0e_{2p}={\lambda}_{2p}e_{2p}$ and $e_ie_{2p-1-i}={\gamma}_{i,2p-1-i,2p}e_{2p}$ ($i=1,\ldots,p-1$). Applying $d$ to this last relation, we have $e_ie_{2p-i}+e_{i+1}e_{2p-1-i}=0$, and the following system $$\begin{cases} e_1e_{2p-1}+e_{2}e_{2p-2}=0, & \hbox{} \\ e_2e_{2p-2}+e_{3}e_{2p-3}=0, & \hbox{} \\ \cdots\cdots, & \\ e_{p-1}e_{p+1}+e_{p}^2=0. \end{cases}\tag{$S'_p$}$$ So $e_1e_{2p-1}=-e_2e_{2p-2}=e_3e_{2p-3}=\cdots =(-1)^{i-1}e_ie_{2p-i}=\cdots=(-1)^{p-1}e^2_p$, which means $e_ie_{2p-i}=(-1)^{i-1}e_1e_{2p-1}$. However, since $d(e_0e_{2p-1})=e_0e_{2p}+e_1e_{2p-1}=\frac{1}{2}e_{2p}$, we have $e_1e_{2p-1}=(\frac{1}{2}-{\lambda}_{2p})e_{2p}$. Because ${\lambda}_{2p}\in\{0,\frac{1}{2},1\}$, we consider three cases:

- If ${\lambda}_{2p}=0$, then $e_1e_{2p-1}=\frac{1}{2}e_{2p}$, $e_ie_{2p-i}=\frac{1}{2}(-1)^{i-1}e_{2p}$ $(i=1,\ldots, p)$ and $e_0^2=e_0$.
- If ${\lambda}_{2p}=1$, then $e_1e_{2p-1}=-\frac{1}{2}e_{2p}$, $e_ie_{2p-i}=\frac{1}{2}(-1)^{i}e_{2p}$ $(i=1,\ldots, p)$ and $e_0^2=e_0$.

- If ${\lambda}_{2p}=\frac{1}{2}$, then $e_1e_{2p-1}=e_ie_{2p-i}=0$ $(i=1,\ldots,p)$, and $2a_{i,2p}{\lambda}_{2p}({\lambda}_{2p}-1)=0$ shows that $a_{i,2p}=0$. Hence $e_0e_i=\frac{1}{2}e_i$ $(i=1,\ldots,2p-1)$. Furthermore, either $e_0^2=e_0$ or $e_0^2=e_0+e_{2p}$.

For the cases ${\lambda}_{2p}=0$ and ${\lambda}_{2p}=1$, we just need to show that $e_ie_j=0$ for $i+j<2p$. The following lemma completes the proof of the theorem, and Note \[NOTE\] shows that all the algebras defined in this theorem are Lie triple.

With the above notation, $e_0e_i=\frac12 e_i$ for $i=1,\ldots,n-1$.

One has $e_ie_{2k-i}={\gamma}_{i,2k-i,n}e_{n}$ for $i=1,\ldots,k-1$. Applying $d$ to this, we get $e_ie_{2k-i+1}+e_{i+1}e_{2k-i}=0$. Varying $i$, we obtain the following system $$\begin{cases} e_1e_{2k}+e_{2}e_{2k-1}=0, & \hbox{} \\ e_2e_{2k-1}+e_{3}e_{2k-2}=0, & \hbox{} \\ \cdots\cdots, & \\ e_{k-1}e_{k+2}+e_{k}e_{k+1}=0, & \hbox{}\\ e_ke_{k+1}+e_{k+1}e_{k}=2e_ke_{k+1}=0. \end{cases}\tag{$S_k$}$$ Going up the lines of this system, we see that $e_ie_{2k+1-i}=0$ for $i=1,\ldots,k$; in particular $e_1e_{2k}=0$, so $e_0e_{2k+1}+e_1e_{2k}=\frac12 e_{2k+1}$ and $e_0e_{2k+1}=\frac12 e_{2k+1}$, $k=0,\ldots,p-1$. Now we proceed by induction on $n$. Assume the claim holds up to order $n$. We distinguish two cases:

- If $n+1=2p+1$ is odd, we have $e_0e_{n+1}=\frac12 e_{n+1}$, which imposes $e_0e_n=\frac12 e_n$.

- If $n+1=2p$ is even, then since $n=2p-1$ is odd, we have $e_0e_{n}=\frac12 e_n$.

Since every commutative Jordan algebra is a Lie triple algebra, we have the following result:

\[Jordan\] Let $A$ be a commutative Jordan non-nilalgebra which is dimensionally nilpotent, and let $\{e_0,e_1, \ldots, e_n\}$ be an adapted basis of $A$. Then:

$1°)$ If $n=2p+1$, the multiplication table of $A$ is: $e_0^2=e_0$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p+1)$, all other products being zero.
$2°)$ If $n=2p$, the multiplication table of $A$ is one of the following three:

- $(i)$ $e_0^2=e_0$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p)$, all other products being zero.

- $(ii)$ $e_0^2=e_0$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p-1)$, $e_0e_{2p}=0$, $e_ie_{2p-i}=\frac{1}{2}(-1)^{i-1}e_{2p}$ $(1\leq i \leq p)$, all other products being zero.

- $(iii)$ $e_0^2=e_0$, $e_0e_i=\frac{1}{2}e_i$ $(1\leq i \leq 2p-1)$, $e_0e_{2p}=e_{2p}$, $e_ie_{2p-i}=\frac{1}{2}(-1)^{i}e_{2p}$ $(1\leq i \leq p)$, all other products being zero.

The Corollary follows from Theorem \[2TP\], knowing that Jordan algebras do not admit pseudo-idempotents.

\[NOTE\] 1) The multiplication tables in Theorem \[2TP\] $1)~(i)$ and in Corollary \[Jordan\] $1)$, when $n=2p+1$ is odd, are those of *gametic algebras* $G(2p+2,2)$ (Example \[gametique\]). The same holds for those defined in Theorem \[2TP\] $2)~(i)$ and in Corollary \[Jordan\] $2)~(i)$ when $n=2p$ is even: these are the gametic algebras $G(2p+1,2)$. They are characterized as elementary train algebras with equation $x^2-{\omega}(x)x=0$, in which ${\omega}:A\rightarrow K$, $e_0\mapsto 1$, $e_i\mapsto 0$, is a homomorphism of algebras.

2\) The multiplication tables in Theorem \[2TP\] $2)~(ii)$ and in Corollary \[Jordan\] $2)~(ii)$, when $n=2p$ is even, are those of *normal Bernstein algebras* of type $(2p,1)$. Normal Bernstein algebras are defined by the equation $x^2y={\omega}(x)xy$. These are Bernstein–Jordan algebras, characterized by the train equation $x^3-{\omega}(x)x^2=0$ [@Ouat; @Buse].

3\) The multiplication tables in Theorem \[2TP\] $2)~(iii)$ and in Corollary \[Jordan\] $2)~(iii)$, when $n=2p$ is even, are those of the other class of *train algebras of rank $3$ which are Jordan algebras* of type $(2p,1)$. They are defined by the equation $x^3-2{\omega}(x)x^2+{\omega}(x)^2x=0$ [@Ouat Theorem 2.1].

4\) The multiplication tables in Theorem \[2TP\] $1)~(i')$ and $2)~(i')$ are those of *train algebras* satisfying $x^3-\frac{3}{2}{\omega}(x)x^2+\frac{1}{2}{\omega}(x)^2x=0$.
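As a sanity check, the train equations stated in 2) and 4) can be verified numerically on random elements of the corresponding tables. The following Python sketch is purely illustrative (the choice $p=3$ and the random rational coefficients are arbitrary):

```python
from fractions import Fraction as F
import random

p, N = 3, 2 * 3  # basis e_0, ..., e_{2p}; N = 2p

def mul_iprime(u, v):
    # table (i'): e0^2 = e0 + e_{2p}, e0 e_i = (1/2) e_i, all other products zero
    w = [F(0)] * (N + 1)
    w[0] += u[0] * v[0]
    w[N] += u[0] * v[0]
    for i in range(1, N + 1):
        w[i] += F(1, 2) * (u[0] * v[i] + u[i] * v[0])
    return w

def mul_ii(u, v):
    # table 2)(ii): e0^2 = e0, e0 e_i = (1/2) e_i (i < 2p), e0 e_{2p} = 0,
    # e_i e_{2p-i} = (1/2)(-1)^{i-1} e_{2p}  (1 <= i <= p)
    w = [F(0)] * (N + 1)
    w[0] += u[0] * v[0]
    for i in range(1, N):
        w[i] += F(1, 2) * (u[0] * v[i] + u[i] * v[0])
    for i in range(1, N):
        w[N] += F((-1) ** (i - 1), 2) * u[i] * v[N - i]
    return w

def smul(a, u):
    return [a * t for t in u]

random.seed(1)
for _ in range(50):
    x = [F(random.randint(-4, 4)) for _ in range(N + 1)]
    a = x[0]                                # omega(x)
    # 4): x^3 - (3/2) omega(x) x^2 + (1/2) omega(x)^2 x = 0 for table (i')
    x2 = mul_iprime(x, x)
    x3 = mul_iprime(x2, x)
    assert all(x3[k] - F(3, 2) * a * x2[k] + F(1, 2) * a * a * x[k] == 0
               for k in range(N + 1))
    # 2): x^3 - omega(x) x^2 = 0 for the normal Bernstein table 2)(ii)
    y2 = mul_ii(x, x)
    y3 = mul_ii(y2, x)
    assert y3 == smul(a, y2)
```

Here `mul_iprime` encodes the table of type $(i')$ and `mul_ii` the normal Bernstein table $2)~(ii)$; exact rational arithmetic via `fractions` makes the identities hold on the nose.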
These are Lie triple algebras because of [@BKO1 Proposition 5.2, (iii)].

[10]{}

J. Bayara, A. Conseibo, A. Micali and M. Ouattara, *Derivations in power-associative algebras*, Discrete Contin. Dyn. Syst. Ser. S **4**(6), 1359–1370 (2011). <https://doi.org/10.3934/dcdss.2011.4.1359>

J. Bayara, A. Konkobo and M. Ouattara, *Algèbres de Lie triple sans idempotent*, Afr. Mat. **25**(4), 1063–1075 (2014). <doi:10.1007/s13370-013-0172-4>

J. Bayara, A. Konkobo and M. Ouattara, *Equations des algèbres Lie triple qui sont des algèbres train*, Indag. Math. **28**(2), 390–405 (2017). <doi:10.1016/j.indag.2016.10.004>

V. Eberlin, *Algèbres de Lie dimensionnellement nilpotentes*, Comm. Algebra **28**(1), 183–191 (2000). <doi:10.1080/00927870008826835>

I.R. Hentzel and L.A. Peresi, *Almost Jordan rings*, Proc. Amer. Math. Soc. **104**(2), 343–348 (1988). <doi:10.2307/2046977>

P. Holgate, *The interpretation of derivations in genetic algebras*, Linear Algebra Appl. **85**, 75–79 (1987). <doi:10.1016/0024-3795(87)90209-6>

N. Jacobson, *Lie algebras*, Dover Publications, New York, 1979.

G.F. Leger and P.L. Manley, *Dimensionally nilpotent Lie algebras*, J. Algebra **117**, 162–164 (1988). <doi:10.1016/0021-8693(88)90247-5>

A. Micali and M. Ouattara, *Algèbres génétiques dimensionnellement nilpotentes*, Linear Algebra Appl. **266**, 271–290 (1997). <doi:10.1016/S0024-3795(97)86524-X>

J.M. Osborn, *Commutative algebras satisfying an identity of degree four*, Proc. Amer. Math. Soc. **16** (1965), 1114–1120. <doi:10.2307/2035628>

J.M. Osborn, *Varieties of algebras*, Adv. Math. **8** (1972), 163–369. <doi:10.1016/0001-8708(72)90003-5>

J.M. Osborn, *Dimensionally nilpotent Jordan algebras*, Proc. Amer. Math. Soc. **116**, 949–953 (1992). <doi:10.2307/2159472>

M. Ouattara, *Sur les T-algèbres de Jordan*, Linear Algebra Appl. **144** (1991), 11–21. <doi:10.1016/0024-3795(91)90056-3>

R.D. Schafer, *An introduction to nonassociative algebras*, Academic Press, New York, 1966.

A.
Wörz-Busekros, *Algebras in Genetics*, Lecture Notes in Biomathematics **36**, Springer-Verlag, Berlin–New York, 1980.

[^1]: `doulaydem@yahoo.fr`

[^2]: `konkoboa@yahoo.fr`

[^3]: `ouatt_ken@yahoo.fr`
--- abstract: 'In this paper, we prove that for a fibration $f:X\to Z$ from a smooth projective 3-fold to a smooth projective curve, over an algebraically closed field $k$ with $\mathrm{char}\, k = p > 5$, if the geometric generic fiber $X_{\ol\eta}$ is smooth, then subadditivity of Kodaira dimensions holds, i.e. $$\kappa(X)\ge\kappa(X_{\ol\eta})+\kappa(Z).$$' address: - | Sho Ejiri\ Graduate School of Mathematical Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan. - | Lei Zhang\ School of Mathematics and Information Sciences\ Shaanxi Normal University\ Xi’an 710062\ P.R.China author: - Sho Ejiri - Lei Zhang title: 'Iitaka’s $C_{n,m}$ conjecture for 3-folds in positive characteristic' --- Introduction {#section:intro} ============ Throughout this paper, a [*fibration*]{} means a projective morphism $f:X\to Y$ between varieties such that the natural morphism $\O_Y \to f_*\O_X$ is an isomorphism. The Kodaira dimension is one of the most important birational invariants and plays a key role in the birational classification of algebraic varieties. For a fibration, we have the following conjecture on Kodaira dimensions, which was proposed by Iitaka in characteristic zero. \[conj:Iitaka\] Let $f:X\to Z$ be a fibration between smooth projective varieties of dimension $n$ and $m$ respectively, over an algebraically closed field $k$ with $\mathrm{char}\, k = p \ge 0$, whose geometric generic fiber $X_{\ol\eta}$ is integral and smooth. Then $$\kappa(X)\ge\kappa(X_{\ol\eta})+\kappa(Z).$$ In characteristic zero, many results related to this conjecture are known [@Bir09; @Cao15; @CP15; @CH11; @Fuj13; @Fuj14; @Kaw81; @Kaw82; @Kaw85; @Kol87; @KP15; @Lai11; @Vie77; @Vie83]. In particular, this conjecture was reduced to problems in the minimal model program by Kawamata [@Kaw85 Corollary 1.2]. In positive characteristic, Conjecture \[conj:Iitaka\] has been proved in some cases recently. Chen and Zhang showed $C_{n,n-1}$ [@CZ13 Theorem 1.2].
Under the assumption that $p>5$, $C_{3,1}$ was shown when $k=\ol{\mb F_p}$ by Birkar, Chen and Zhang [@BCZ15 Theorem 1.2], when $X_{\ol\eta}$ is of general type [@Eji15 Theorem 1.5] (see also [@Zha16 Appendix 7]), and when the genus of $Z$ is at least two by Zhang [@Zha16 Corollary 1.9]. Furthermore, when $f$ has a singular geometric generic fiber, its dualizing sheaf, denoted by $\omega_{X_{\ol\eta}}$ (the same notation as for the canonical sheaf, since the two coincide when $X_{\ol\eta}$ is smooth), was considered in [@Pat13] and [@Zha16], and in some special situations the analogous inequality $\kappa(X)\ge\kappa(X_{\ol\eta}, \omega_{X_{\ol\eta}})+\kappa(Z)$ was proved. The aim of this paper is to prove the theorem below. \[thm:c31\_intro\] Conjecture $C_{3,m}$ holds when $\mathrm{char}\, k = p>5$. The proof relies on the minimal model program for varieties of dimension at most three in characteristic $p>5$, developed by several mathematicians including Birkar, Cascini, Hacon, Tanaka, Waldron and Xu. The case when $\kappa(X_{\ol\eta}) = 2$ has been proved by the first author [@Eji15 Theorem 1.5]. In this paper we only need to consider the cases $\kappa(X_{\ol\eta}) = 0, 1$. This paper is organized as follows. Section \[section:mmp\] collects some basic results used in our proof: minimal model theory of 3-folds, vector bundles on elliptic curves, and weak positivity of push-forwards of pluri-relative canonical sheaves. Sections \[section:k(F)=0\] and \[section:k(F)=1\] are devoted to the cases $\kappa(X_{\ol\eta}) = 0$ and $1$, respectively. **Notation and Conventions:** In this paper, we fix an algebraically closed field $k$ of characteristic $p>0$. A [*$k$-scheme*]{} is a separated scheme of finite type over $k$. A [*variety*]{} means an integral $k$-scheme, and a [*curve*]{} (resp. [*surface, $n$-fold*]{}) means a variety of dimension one (resp. two, $n$). Let $\varphi:S\to T$ be a morphism of schemes and let $T'$ be a $T$-scheme.
Then we denote by $S_{T'}$ and $\varphi_{T'}:S_{T'}\to T'$ respectively the fiber product $S\times_{T}T'$ and its second projection. For a prime $p\in\Z$, $\mb F_p$ and $\Z_{(p)}$ denote respectively $\Z/p\Z$ and the localization of $\Z$ at $p\Z$. For a Cartier, $\Z_{(p)}$-Cartier or $\Q$-Cartier divisor $D$ on $S$ (resp. an $\O_S$-module $\G$), the pullback of $D$ (resp. $\G$) to $S_{T'}$ is denoted by $D_{T'}$ or $D|_{S_{T'}}$ (resp. $\G_{T'}$ or $\G|_{S_{T'}}$) if it is well-defined. Similarly, for a homomorphism of $\O_S$-modules $\alpha:\F\to\G$, the pullback of $\alpha$ to $S_{T'}$ is denoted by $\a_{T'}:\F_{T'}\to \G_{T'}$. The first author is greatly indebted to his supervisor Professor Shunsuke Takagi for suggesting the problems in this paper, for many fruitful discussions and for much helpful advice. He is deeply grateful to Professors Caucher Birkar and Yifei Chen for valuable comments and suggestions. He would like to thank Professors Yoshinori Gongyo and Akiyoshi Sannai for helpful comments. He also would like to thank Doctors Takeru Fukuoka and Hokuto Konno for useful comments. He is supported by JSPS KAKENHI Grant Number 15J09117 and the Program for Leading Graduate Schools, MEXT, Japan. Part of this work was done while the second author was visiting BICMR; he would like to thank Prof. Chenyang Xu for his hospitality and useful discussions. The second author is supported by grant NSFC (No. 11401358 and No. 11531009). The authors thank the referee for many useful comments and suggestions that improved this paper. Preliminaries {#section:mmp} ============= In this section, we recall some basic results which will be used in the proof. Minimal models of $3$-folds --------------------------- Existence of (log) minimal models of $3$-folds in positive characteristic $p>5$ was first proved for canonical singularities by Hacon and Xu [@HX15], and in general by Birkar [@Bir13] (see [@Wal16] for the lc case).
The result on Mori fiber spaces was proved for terminal singularities by Cascini, Tanaka and Xu [@CTX15], and in general by Birkar and Waldron [@BW14]. We collect some results in the following theorem, which will be used in our proof. \[thm:mmp\] Assume that the base field $k$ has characteristic $p>5$. Let $f:X\to Z$ be a contraction from a normal $3$-fold, and let $\Delta$ be an effective $\Q$-Cartier $\Q$-divisor on $X$. \(1) If either $(X,\Delta)$ is klt and $K_X+\Delta$ is pseudo-effective over $Z$, or $(X,\Delta)$ is lc and $K_X+\Delta$ has a weak Zariski decomposition [^1], then $(X,\Delta)$ has a log minimal model over $Z$. \(2) If $(X, \Delta)$ is a dlt pair and $Z$ is a smooth projective curve with $g(Z) \geq 1$, then every step of the LMMP in [@Bir13 Sec. 3.5-3.7] starting from $(X, \Delta)$ is over $Z$. For (1) please refer to [@Bir13 Theorem 1.2 and Proposition 8.3]. For (2), since $(X, \Delta)$ is dlt, every $K_X + \Delta$-extremal ray is generated by a rational curve by the cone theorem [@BW14 Theorem 1.1], and this curve is contracted by $f$ since $g(Z) \geq 1$. So for an extremal contraction $X \to \bar{X}$, if there is a divisorial contraction or a flip $\sigma: X \dashrightarrow X^+$ as in [@Bir13 Sec. 3.5-3.7], there exist natural morphisms $\bar{f}: \bar{X} \to Z$ and $f^+: X^+ \to Z$ fitting into the following commutative diagram $$\xymatrix{&X \ar[rd]\ar[rdd]_f\ar@{.>}[rr] & &X^+ \ar[ld]\ar[ldd]^{f^+}\\ &&\bar{X} \ar[d]^{\bar{f}} &\\ &&Z &. }$$ Note that $(X^+, \Delta^+ = \sigma_*\Delta)$ is again a dlt pair. The assertion then follows by induction. Covering Theorem ---------------- The result below is \[[@Iit82], Theorem 10.5\] when $X$ and $Y$ are both smooth, and the proof there also applies when the varieties are normal. \[ct\] Let $f\colon X \rightarrow Y$ be a proper surjective morphism between complete normal varieties.
If $D$ is a Cartier divisor on $Y$ and $E$ an effective $f$-exceptional divisor on $X$, then $$\kappa(X, f^*D + E) = \kappa(Y, D).$$ As a corollary we get the following useful result. \[injofpic\] Let $g: W \rightarrow Y$ be a surjective projective morphism between projective varieties. Assume $Y$ is normal, and let $L_1,L_2 \in \Pic^0(Y)$ be two line bundles on $Y$. If $g^*L_1 \sim_{\mathbb{Q}} g^*L_2$, then $L_1 \sim_{\mathbb{Q}} L_2$. Let $L = L_1 \otimes L_2^{-1}$. Denote by $\sigma: W' \to W$ the normalization and $g'=g\circ \sigma: W' \to Y$. Then $g'^*L \sim_{\Q} 0$. Applying Theorem \[ct\] to $g': W' \to Y$ gives that $L \sim_{\Q} 0$, which is equivalent to $L_1 \sim_{\mathbb{Q}} L_2$. Adjunction ---------- \[adj\] Assume that the base field $k$ has characteristic $p>5$. Let $(X, \Delta)$ be a normal, $\mathbb{Q}$-factorial, lc 3-fold (not necessarily projective). Let $C$ be a projective lc center of $(X, \Delta)$ and let $\tilde{C}$ be the normalization of $C$. If $(K_X + \Delta)|_{\tilde{C}}$ is numerically trivial, then $(K_X + \Delta)|_{\tilde{C}}$ is $\mathbb{Q}$-trivial. By [@Bir13 Lemma 6.5], we can take a crepant partial resolution $\mu:X' \rightarrow X$ such that $$K_{X'} + D + \Delta' \sim_{\mathbb{Q}} \mu^*(K_X + \Delta)\cdots (\clubsuit)$$ where $D$ is a reduced irreducible divisor dominant over $C$ and $(X', D + \Delta')$ is dlt. Then, considering the restriction of the relation $\clubsuit$ to $D$, by the adjunction formula [@Ke99 5.3], we have $$K_{D} + \Delta_D \sim_{\mathbb{Q}} \mu^*(K_X + \Delta)|_D.$$ Then $D$ is a normal projective surface, hence admits a natural morphism $D \to \tilde{C}$, and $(D, \Delta_D)$ is log canonical by [@Bir13 Lemma 5.2]. Applying [@Tan14 Theorem 1.2], we have that $K_{D} + \Delta_D$ is semi-ample, thus $\mu^*(K_X + \Delta)|_{D}$ is $\mathbb{Q}$-trivial since $(K_X + \Delta)|_{\tilde{C}}$ is numerically trivial. We can conclude that $(K_X + \Delta)|_{\tilde{C}}$ is $\mathbb{Q}$-trivial by Lemma \[injofpic\].
Vector bundles on elliptic curves --------------------------------- In this subsection, we recall some facts about vector bundles on elliptic curves, which are used in the proof of Theorem \[thm:sa\_intro\]. \[thm:facts on vb on ell curve\] Let $C$ be an elliptic curve, and let $\E_C(r,d)$ be the set of isomorphism classes of indecomposable vector bundles of rank $r$ and of degree $d$. - For each $r>0$, there exists a unique element $\E_{r,0}$ of $\E_C(r,0)$ with $H^0(C,\E_{r,0})\ne0$. Moreover, for every $\E\in\E_C(r,0)$ there exists an $\L\in\Pic^0(C)$ such that $\E\cong\E_{r,0}\otimes\L$. - When the Hasse invariant ${\rm Hasse}(C)$ is nonzero, $F_C^*\E_{r,0}\cong \E_{r,0}$. When ${\rm Hasse}(C)$ is zero, $F_C^*\E_{r,0}\cong \bigoplus_{1\le i\le\min\{r,p\}}\E_{\lfloor(r-i)/p\rfloor+1,0}$, where $\lfloor x\rfloor$ denotes the round down of $x$. \[thm:fact on vb on sm curve\] Let $\E$ be a vector bundle on a smooth projective curve $C$. If ${F_C^e}^*\E\cong\E$ for some $e>0$, then there exists an étale morphism $\pi:C'\to C$ from a smooth projective curve $C'$ such that $\pi^*\E\cong\bigoplus\O_{C'}$. \[prop:decomp\] Let $\E$ be a vector bundle on an elliptic curve $C$. Then there exists a finite morphism $\pi:C'\to C$ from an elliptic curve $C'$ such that $\pi^*\E$ is a direct sum of line bundles. We may assume that for every finite morphism $\varphi:B\to C$ from an elliptic curve $B$, $\varphi^*\E$ is indecomposable. Set $d:=\deg\E$ and $r:=\rank\E$. We show that $r=1$. Let $Q\in C$ be a closed point. Replacing $\E$ by $((r_C)^*\E)(-dQ)$, we may assume that $d=0$. Here $r_C:C\to C$ is the morphism given by multiplication by $r$. Hence Theorems \[thm:facts on vb on ell curve\] and \[thm:fact on vb on sm curve\] imply that when the Hasse invariant of $C$ is nonzero (resp. zero), there exists an étale morphism $\pi:C'\to C$ (resp. an $e>0$) such that $\pi^*\E$ (resp. ${F_C^e}^*\E$) is a direct sum of line bundles. This implies that $r=1$.
Weak positivity --------------- The following positivity result will be used in the proof of the case when the geometric generic fiber has Kodaira dimension one. \[mthp\] Assume that $\mathrm{char}\, k=p>5$. Let $f: X \rightarrow Z$ be a fibration from a smooth projective 3-fold to a smooth projective curve. Suppose that the geometric generic fiber $X_{\ol\eta}$ has at most rational double points as singularities. If $\kappa(X_{\ol\eta},K_{X_{\ol\eta}})=1$, then there exists a real number $c>0$ such that $f_*\omega_{X/Z}^m$ contains a nef subbundle of rank at least $cm$ for sufficiently divisible $m>0$. Before proving Theorem \[mthp\], we recall some results. \[thm:ellfib\_intro\] Let $f:X\to Z$ be a surjective morphism between smooth projective varieties, over an algebraically closed field of positive characteristic, whose geometric generic fiber is a smooth elliptic curve. Then $\kappa(X,K_{X/Z})\ge0$. The following lemma will be frequently used. \[l-linear-pullback\] Let $f: X \rightarrow Z$ be a fibration between normal quasi-projective varieties. Let $L$ be an $f$-nef $\mathbb{Q}$-Cartier divisor on $X$ such that $L_{\eta}\sim_{\mathbb{Q}}0$, where $\eta$ is the generic point of $Z$. Assume $\dim Z\le 3$. Then there exist a diagram $$\xymatrix{ X'\ar[r]^\phi\ar[d]_{f'} & X\ar[d]^f\\ Z'\ar[r]^\psi & Z }$$ with $\phi,\psi$ projective birational, and a $\mathbb{Q}$-Cartier divisor $D$ on $Z'$ such that $\phi^* L\sim_{\mathbb{Q}} f'^*D$. Furthermore, if $f$ is flat and $Z$ is $\Q$-factorial, then we can take $X'=X$ and $Z'=Z$. The next lemma is a consequence of Tanaka’s vanishing theorem for surfaces [@Tan15x]. \[mthplem\] Let $g:Y\to Z$ be a generically smooth surjective morphism from a smooth projective surface to a smooth projective curve. Let $H$ be a nef and $g$-big divisor on $Y$. Then $g_*\O_Y(K_{Y/Z}+lH)$ is a nef vector bundle for every $l\gg 0$. Let $A$ be an ample divisor on $Z$ with $\deg A \geq \deg K_Z + 2$.
Then $A - K_Z - z$ is ample for every closed point $z \in Z$, where $z$ is seen as a divisor on $Z$. Note that $\nu(H) \geq 1$ and $H+g^*(A-K_Z - z)$ is nef and big. Denote by $Y_z$ the fiber of $g$ over $z$. By [@Tan15x Theorem 2.6] we see that $$H^1(Y,K_Y+H+g^*(A-K_Z) +(l-1)H - Y_z) = H^1(Y,K_Y+H+g^*(A-K_Z -z) +(l-1)H)=0$$ for $l\gg0$. Thus for a closed point $z\in Z$, by the long exact sequence arising from taking cohomology of the exact sequence below $$0 \to \O_Y(K_{Y/Z}+g^*A +lH - Y_z) \to \O_Y(K_{Y/Z}+g^*A+lH) \to \O_Y(K_{Y/Z}+g^*A+lH)|_{Y_z} \to 0$$ we conclude that the restriction $$H^0(Y,K_{Y/Z}+g^*A+lH)\to H^0(Y_z,(K_{Y/Z}+g^*A+lH)|_{Y_z})$$ is surjective. This implies that $(g_*\O_Y(K_{Y/Z}+lH))(A)$ is generically globally generated. On the other hand, if $z$ is general then $Y_z$ is smooth; applying [@Pat14 Corollary 2.23], since $H|_{Y_z}$ is ample, we see that for $l\gg0$ the morphism $$\begin{aligned} H^0(Y_{z},\phi^{(e)}_{Y_{z}}\otimes\O_{Y_{z}}(K_{Y_{z}}+lH_{z})): H^0(Y_{z},K_{Y_{z}}+lp^eH_{z})\to H^0(Y_{z},K_{Y_{z}}+lH_{z})\end{aligned}$$ is surjective. This implies that the homomorphism ([@Eji15 Section 2]) $$\begin{aligned} {g_{Z^e}}_*&(\phi^{(e)}_{Y/Z}\otimes\O_{Y_{Z^e}}((K_{Y/Z}+lH)_{Z^e}))\otimes \O_{Z^e}(A):\\ &g_*\O_{Y}(K_{Y/Z}+lp^eH + g^*(A+z)) \cong g_*\O_{Y}(K_{Y/Z}+lp^eH)\otimes \O_{Z^e}(A) \\ &\to {g_{Z^e}}_*\O_{Y_{Z^e}}((K_{Y/Z}+lH)_{Z^e})\otimes \O_{Z^e}(A+z) \cong {F_Z^{e}}^*g_*\O_Y(K_{Y/Z}+lH)\otimes \O_{Z^e}(A)\end{aligned}$$ is generically surjective. Thus for every $e>0$, ${F_Z^e}^*(g_*\O_Y(K_{Y/Z}+lH))\otimes \O_{Z^e}(A)$ is generically globally generated, and hence is nef. We conclude that $g_*\O_Y(K_{Y/Z}+lH)$ is nef by applying [@Eji15 Proposition 4.7]. Let $W$ be a minimal model of $X$ over $Z$. Let $\rho:X_{\ol\eta}\to W_{\ol\eta}$ be the induced morphism. Since $\rho_*\O_{X_{\ol\eta}}\cong\O_{W_{\ol\eta}}$, $W_{\ol\eta}$ is normal.
Furthermore, since $W$ is terminal, we have $K_{X_{\ol\eta}}\ge \rho^*K_{W_{\ol\eta}}$, and hence $W_{\ol\eta}$ has at most canonical singularities. In particular, replacing $X$ with a minimal model, at the cost of smoothness, we may assume that $K_{X/Z}$ is $f$-nef. Then by [@Tan14 Theorem 1.2], $K_{X_{\ol\eta}}$ is semi-ample, and since $p>5$, the geometric generic fiber of the Iitaka fibration $I_{\ol\eta}: X_{\ol\eta} \to C_{\ol\eta}$ is a smooth elliptic curve over $k(\ol\eta)$ by [@Bad01 Theorem 7.18]. For the generic fiber $X_{\eta}$ and a sufficiently divisible positive integer $n$, since $H^0(X_{\ol\eta}, nK_{X_{\ol\eta}}) \cong H^0(X_{\eta}, nK_{X_{\eta}})\otimes_{k(\eta)} k(\ol\eta)$, we see that the Iitaka fibration $I_{\ol\eta}: X_{\ol\eta} \to C_{\ol\eta}$ coincides with the base change of the Iitaka fibration $I_{\eta}: X_{\eta} \to C_{\eta}$ to $k(\ol\eta)$. Thus the geometric generic fiber of $I_{\eta}$ is a smooth elliptic curve. Considering the relative Iitaka fibration of $f: X \to Z$, whose geometric generic fiber is a smooth elliptic curve, we get a birational morphism $u:X'\to X$, a fibration $g:Y\to Z$ with $Y$ smooth, and an elliptic fibration $h:X'\to Y$ fitting into the following commutative diagram: $$\xymatrix{&X'\ar[d]_{h}\ar[r]^{u} &X \ar[d]^{f}\\ &Y\ar[r]^{g} &Z. }$$ Note that the geometric generic fiber $C_{\ol\eta}$ of $g:Y\to Z$ is normal, and hence smooth. By Lemma \[l-linear-pullback\], we may assume that $u^*K_{X/Z}\sim_{\Q}h^*H$ for a nef and $g$-big $\Q$-Cartier divisor $H$ on $Y$. By Theorem \[thm:ellfib\_intro\], we have $\kappa(X',K_{X'/Y})\ge0$, and hence there exists an injective homomorphism $h^*\o_{Y/Z}^m\to\o_{X'/Z}^m$ for sufficiently divisible $m>0$. Let $l\gg0$ be an integer such that $lH$ is Cartier and $u^*lK_{X/Z}\sim h^*lH$.
Then we have natural homomorphisms $$\begin{aligned} (g_*\O_Y(K_{Y/Z}+lH))^{\otimes m}&\to g_*\O_Y(m(K_{Y/Z}+lH))\cong g_*h_*\O_{X'}(mh^*(K_{Y/Z}+lH))\\ &\hookrightarrow f_*u_*\O_{X'}(mK_{X'/Z}+u^*lmK_{X/Z})\cong f_*\O_{X}(m(l+1)K_{X/Z}).\end{aligned}$$ Replacing $l$ if necessary, we may assume that the first homomorphism is generically surjective. By Lemma \[mthplem\], $g_*\O_Y(K_{Y/Z}+lH)$ is nef, and hence so is $g_*\O_Y(m(K_{Y/Z}+lH))$. This completes the proof. The case $\kappa(X_{\ol\eta})=0$ {#section:k(F)=0} ================================ In this section, we prove Theorem \[thm:c31\_intro\] in the case when the Kodaira dimension of the geometric generic fiber is equal to zero. It is proved as a consequence of Theorems \[thm:ps-eff\_intro\] and \[thm:sa\_intro\]. \[thm:ps-eff\_intro\]Let $f:X\to Z$ be a surjective morphism from a normal projective variety $X$ over an algebraically closed field of characteristic $p>0$ to a smooth projective variety $Z$, and let $\Delta$ be an effective $\Q$-divisor on $X$ such that $a\Delta$ is integral for some $a>0$ not divisible by $p$. Assume that $(X_{\ol\eta},\Delta_{\ol\eta})$ is $F$-pure, where $\ol\eta$ is the geometric generic point of $Z$. If $K_X+\Delta\sim_{\Q}f^*(K_Z+L)$ for some $\Q$-divisor $L$ on $Z$, then $L$ is pseudo-effective. Theorem \[thm:ps-eff\_intro\] follows from [@Eji16 Theorem 4.5] (by setting $D=-(K_Z+L)$), and it is also proved by Patakfalvi [@Pat14 Theorem 1.6] when $Z$ is a curve. \[thm:sa\_intro\]With the same notation and assumptions as in Theorem \[thm:ps-eff\_intro\], if $Z$ is an elliptic curve, then $L$ is semi-ample. By Theorem \[thm:ps-eff\_intro\], we have $\deg L\ge0$. We may assume that $\deg L=0$, and it suffices to show that $L\sim_{\Q}0$. Since $(K_X+\Delta)_{\ol\eta}\sim_{\Q}0$, there is an ample Cartier divisor $A$ on $X$ such that $l(K_X+\Delta)_{\ol\eta}+A_{\ol\eta}$ is ample and free for every $l\in a\Z$. Recall that $0<a\in \Z\setminus p\Z$ and $a\Delta$ is integral. 
By Fujita’s vanishing theorem, there exists some $m_0>0$ such that for every nef Cartier divisor $N$ on $X_{\ol\eta}$, $\O_{X_{\ol\eta}}((m_0-1)A_{\ol\eta}+N)$ is $0$-regular with respect to $l(K_X+\Delta)_{\ol\eta}+A_{\ol\eta}$ for every $l\in a\Z$. Then the natural homomorphism $$\begin{aligned} H^0(X_{\ol\eta},l(K_X+\Delta)_{\ol\eta}+mA_{\ol\eta})&\otimes H^0(X_{\ol\eta},(m'-1)A_{\ol\eta})\otimes H^0(X_{\ol\eta},l'(K_X+\Delta)_{\ol\eta}+A_{\ol\eta})\\ &\to H^0(X_{\ol\eta},(l+l')(K_X+\Delta)_{\ol\eta}+(m+m')A_{\ol\eta})\end{aligned}$$ is surjective for every $l,l'\in a\Z$ and $m,m'\ge m_0$. Thus $$\begin{aligned} H^0(X_{\ol\eta},l(K_X+\Delta)_{\ol\eta}+mA_{\ol\eta})&\otimes H^0(X_{\ol\eta},l'(K_X+\Delta)_{\ol\eta}+m'A_{\ol\eta})\\ &\to H^0(X_{\ol\eta},(l+l')(K_X+\Delta)_{\ol\eta}+(m+m')A_{\ol\eta})\end{aligned}$$ is also surjective, and hence the natural homomorphism $$\G(l,m)\otimes\G(l',m')\to\G(l+l',m+m')$$ is generically surjective, where $\G(l,m):=f_*\O_X(l(K_{X/Z}+\Delta)+mA)$. From now on, we use the same notation as [@Eji15 Sections 2 or 3] or [@Eji16 Section 2]. Replacing $m_0$ if necessary, by [@Pat14 Corollary 2.23] we may assume that $$\begin{aligned} H^0(&X_{\ol\eta},\phi_{(X_{\ol\eta},\Delta_{\ol\eta})}^{(e)}\otimes\O_{X_{\ol\eta}}(N+m_0A_{\ol\eta})):\\ &H^0(X_{\ol\eta},(1-p^e)(K_X+\Delta)_{\ol\eta}+p^e(N+m_0A_{\ol\eta})) \to H^0(X_{\ol\eta},N+m_0A_{\ol\eta})\end{aligned}$$ is surjective for every $e>0$ with $a|(p^e-1)$ and for every nef Cartier divisor $N$ on $X_{\ol\eta}$. Since $l(K_X+\Delta)_{\ol\eta}$ is nef, $$\begin{aligned} {f_{Z^e}}_*&(\phi_{(X/Z,\Delta)}^{(e)}\otimes\O_{X_{Z^e}}(l(K_{X/Z}+\Delta)_{Z^e}+m_0A_{Z^e})):\\ & \G((l-1)p^e+1,m_0p^e) \to {f_{Z^e}}_*\O_{X_{Z^e}}(l(K_{X/Z}+\Delta)_{Z^e}+m_0A_{Z^e})\cong {F_Z^e}^*\G(l,m_0)\end{aligned}$$ is generically surjective. Let $b>0$ be an integer such that $a|b$, that $bL$ is integral and that $b(K_X+\Delta)$ is linearly equivalent to $bf^*L$.
By Proposition \[prop:decomp\], there exists a finite morphism $\pi:Z'\to Z$ from an elliptic curve $Z'$ such that $\pi^*\G(r,m_0)$ is a direct sum of line bundles for each $0\le r<b$ with $a|r$. By Lemma \[injofpic\], we may replace $L$ and $\G(r,m_0)$ respectively by its pullback by $\pi$. Set $$\begin{aligned} \F:=&\bigoplus_{0\le r<b,~a|r}\G(r,m_0),\\ \mu:=&\min\{\deg\M|\textup{$\M\in\Pic(Z)$ and $\M$ is a direct summand of $\F$}\}\textup{, and} \\ T:=&\{\M\in\Pic(Z)|\textup{$\deg\M=\mu$ and $\M$ is a direct summand of $\F$}\}\\ =&\{\M_1,\ldots,\M_\lambda\}.\end{aligned}$$ Then for every $\M_i\in T$, there exists an $0\le s<b$ with $a|s$ such that the composition $$\begin{aligned} \G(s,m_0)^{\otimes p^e-1}&\otimes\G(r_{i,e},m_0)\otimes\O_Z(-q_{i,e}bL)\\ &\to\G((s-1)p^e+1,p^em_0)\to{F_Z^e}^*\G(s,m_0)\twoheadrightarrow\M_i^{p^e}\end{aligned}$$ is generically surjective for every $e>0$ with $a|(p^e-1)$. Here $q_{i,e}$ and $r_{i,e}$ are integers satisfying $1+s-p^e=-q_{i,e}b+r_{i,e}$ and $0\le r_{i,e}<b$. Then there exist a line bundle $\M$ which is a direct summand of $\G(s,m_0)^{p^e-1}\otimes\G(r_{i,e},m_0)$ and a non-zero morphism $\M\to\M_i^{p^e}(q_{i,e}bL)$. By considering the degree of the line bundles, we see that $\M_i^{p^e}(q_{i,e}bL)\cong\M\in T^{p^e}$, where $$\textstyle T^n:=\{\bigotimes_{1\le i\le\lambda}\M_i^{n_i}\in\Pic(Z)|n_i\ge0,~\sum_{1\le i\le\lambda}n_i=n\}.$$ Fix an integer $e >0$ such that $a|p^e-1$. Set $n:=\lambda(p^e-1)+1$. For every $\mathcal N\in T^n$, there exist $n_1,\ldots,n_\lambda\ge0$ such that $\mathcal N\cong \bigotimes_{1\le i\le \lambda}\M_i^{n_i}$ and $n'_j:=n_j-p^e\ge0$ for at least one $j$. Then $$\begin{aligned} \mathcal N(q_{j,e}bL)\cong(\bigotimes_{i\ne j}\M_i^{n_i})\otimes\M_j^{n'_j}\otimes\M_j^{p^e}(q_{j,e}bL).\end{aligned}$$ Since $\M_j^{p^e}(q_{j,e}bL)\in T^{p^e}$, we have $\mathcal N(q_{j,e}bL)\in T^{n}$. 
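The degree count behind the conclusion $\M_i^{p^e}(q_{i,e}bL)\cong\M\in T^{p^e}$ can be spelled out as follows. This is only a sketch; it uses nothing beyond what is stated above, namely that $\deg L=0$, that $\mu$ is minimal among degrees of line-bundle summands of $\F$, and that a line-bundle summand of a tensor product of direct sums of line bundles is a tensor product of summands:

```latex
\deg\M \ \ge\ (p^e-1)\mu + \mu \ =\ p^e\mu,
\qquad
\deg\bigl(\M_i^{p^e}(q_{i,e}bL)\bigr) \ =\ p^e\deg\M_i + q_{i,e}\,b\deg L \ =\ p^e\mu .
```

A non-zero morphism between line bundles on a smooth projective curve cannot decrease degree, so both degrees equal $p^e\mu$ and the morphism $\M\to\M_i^{p^e}(q_{i,e}bL)$ is an isomorphism; equality in the first estimate forces every tensor factor of $\M$ to have degree exactly $\mu$, i.e. to lie in $T$, whence $\M\in T^{p^e}$.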
Hence for every $m\ge q:=\max\{q_{1,e},\ldots,q_{\lambda,e}\}$, $$\textstyle \mathcal N(mbL)\in \{\M(k bL)\in\Pic(Z)|\M\in T^n,~0\le k<q\}.$$ Since $T^n$ is a finite set, there are integers $m>m'>0$ such that $\mathcal N(mbL)\cong\mathcal N(m'bL)$, and hence $(m-m')bL\sim0$. As in the proof of Theorem \[mthp\], we may assume that $X$ is minimal over $Z$ and $K_{X_{\ol\eta}}$ is semi-ample, thus $K_{X_{\ol\eta}}\sim_{\Q}0$. By Lemma \[l-linear-pullback\], $K_X$ is $\Q$-linearly equivalent to the pullback of $K_Z+L$ for some $\Q$-divisor $L$ on $Z$. In particular $\kappa(X,K_X)=\kappa(Z,K_Z+L)$. It is enough to show that $\kappa(Z,K_Z+L)\ge\kappa(Z)$. By Theorem \[thm:ps-eff\_intro\], we see that $L$ is nef. Note that since $X_{\ol\eta}$ has at most rational double points as singularities, it is Gorenstein; since moreover $p>5$, $X_{\ol\eta}$ is $F$-pure by [@Art77 Section 3] and [@Fed83]. When $Z$ is of general type, by Theorem \[thm:ps-eff\_intro\], we see that $K_Z+L$ is big, thus $\kappa(Z,K_Z+L)=\dim Z=\kappa(Z)$. When $Z$ is an elliptic curve, by Theorem \[thm:sa\_intro\], we have $\kappa(Z,K_Z+L)\ge\kappa(Z)$. This completes the proof. The case $\kappa(X_{\ol\eta})=1$ {#section:k(F)=1} ================================ In this section, we consider the case when the Kodaira dimension of the geometric generic fiber is one. Let $f:X\to Z$ be a surjective morphism from a smooth projective 3-fold to a smooth projective curve of genus at least one, and let $\ol\eta$ be the geometric generic point of $Z$. Suppose that $\kappa(X_{\ol\eta})=1$. At the cost of losing smoothness, by Theorem \[thm:mmp\] (2) we may assume that $X$ is a minimal model. Then $X_{\ol\eta}$ has canonical singularities by the proof of Theorem \[mthp\].
If $g(Z) >1$, then since $f_*\omega_{X/Z}^m$ contains a nef sub-bundle of rank $\geq cm$ for some $c>0$ and any sufficiently divisible $m$ (Theorem \[mthp\]), by some standard arguments (proof of [@BCZ15 Proposition 5.1]), we can conclude that $$\kappa(X) \geq 2 = \kappa(Z) + \kappa(X_{\ol\eta}).$$ So from now on, we assume $g(Z) = 1$. Then $\omega_{X} = \omega_{X/Z}$. We break the proof into several steps. Step 1: By considering the relative Iitaka fibration and applying Lemma \[l-linear-pullback\], we get the following commutative diagram $$\centerline{\xymatrix{ &X'\ar[d]_h\ar[r]^{\sigma} &X\ar[d]^f \\ &Y\ar[r]^g &Z}}$$ where $Y$ is a smooth projective surface and $h$ is a fibration whose geometric fiber is a smooth elliptic curve by the proof of Theorem \[mthp\], such that $\sigma^*K_X \sim_{\mathbb{Q}} h^*D$ where $D$ is a nef and $g$-big divisor on $Y$. If $D$ is big, then we are done. From now on, we assume the numerical dimension $\nu(K_X)=\nu(D) = 1$. Then we claim that \[cl\] Suppose that $X$ has a fibration $f': X \rightarrow W$ to a normal projective curve $W$ such that $K_{F'}$ is numerically trivial, where $F'$ denotes the generic fiber of $f'$. Assume moreover that there exist $L \in \Pic^0(Z)$ and an integer $m>0$ such that $h^0(X, mK_X + f^*L) >0$. Then $K_X$ is semi-ample. Take an effective divisor $D_L \sim mK_X + f^*L$. Since $D_L$ is nef, effective and $D_L|_{F'} \sim_{num}0$, we have $$(mK_X + f^*L)|_{F'} \sim D_L|_{F'} \sim 0.$$ By Lemma \[l-linear-pullback\] we can assume $D_L \sim_{\mathbb{Q}} f'^*A$ where $A$ is a divisor on $W$, which is ample since $D_L\ne0$. So we only need to show that $L \sim_{\mathbb{Q}} 0$. Since $X$ has at most terminal singularities, it is smooth in codimension one, so $F'$ is a regular surface over the function field $K(W)$ of $W$. Applying [@Tan15 Theorem 0.2], since $K_{F'}$ is numerically trivial, we have $K_{F'}\sim_{\mathbb{Q}} 0$.
Therefore, we conclude that $$f^*L|_{F'} \sim_{\Q} mK_{F'} + f^*L|_{F'} \sim_{\Q} (mK_X + f^*L)|_{F'} \sim_{\Q} D_L|_{F'} \sim_{\Q} 0.$$ On the other hand, $F'$ is dominant over the curve $Z\otimes_k K(W)$; passing to the algebraic closure of $K(W)$ and applying Lemma \[injofpic\], we see that $L$ is torsion. Step 2: By Theorem \[mthp\], there exists $c>0$ such that for sufficiently divisible $m_1$, $f_*\omega_{X}^{m_1}$ contains a nef sub-bundle $V$ of rank $r_{m_1} \ge cm_1$. If $\deg V >0$, then we are done by some standard arguments ([@BCZ15 Proposition 5.1]). So we assume that $\deg V =0$, thus by Proposition \[prop:decomp\] there exists a flat base change between two elliptic curves $\pi: Z_1 \rightarrow Z$ such that $\pi^*V = \oplus_{i=1}^n \mathcal{L}_i$ where $\mathcal{L}_i \in \mathrm{Pic}^0(Z_1)$. Let $X_1$ be the normalization of $X\times_Z Z_1$. Then we get the following commutative diagram $$\centerline{\xymatrix{ &X_1\ar[d]_{f_1}\ar[r]^{\pi_1} &X\ar[d]^f \\ &Z_1\ar[r]^{\pi} &Z}}$$ where $\pi_1$ and $f_1$ denote the natural projections. We have that $\pi^*f_*\omega_{X}^{m_1} \subset f_{1*}\pi_1^*\omega_{X}^{m_1}$ by [@Ha77 Proposition 9.3], thus $$\pi^*V = \oplus_{i=1}^n \mathcal{L}_i \subset f_{1*}\pi_1^*\omega_{X}^{m_1}.$$ So we conclude that $$h^0(X_1, \pi_1^*m_1K_X - f_1^*\mathcal{L}_i) \geq 1,$$ and if $\mathcal{L}_i = \mathcal{L}_j$ for some $j \neq i$ then the strict inequality holds. Since $\pi^*: \mathrm{Pic}^0(Z) \to \mathrm{Pic}^0(Z_1)$ is surjective, there exists $L_i'$ such that $\mathcal{L}_i \sim \pi^*L_i'$, thus $$\pi_1^*m_1K_X - f_1^*\mathcal{L}_i \sim \pi_1^*(m_1K_X - f^*L_i').$$ Applying Theorem \[ct\], we can find a sufficiently divisible integer $l >0$ such that $$h^0(X, l(m_1K_X - f^*L_i')) \geq 1.$$ Put $m = lm_1$ and $L_i = lL_i'$. Then $h^0(X, mK_X - f^*L_i) \geq 1$. If $h^0(X, mK_X - f^*L_i) > 1$, then $h^0(Y, mD - g^*L_i) >1$ by the construction in Step 1.
Since $mD - g^*L_i$ is nef and $\nu(mD - g^*L_i) = 1$, the movable part of $|mD - g^*L_i|$ has no base point, hence induces a fibration $g': Y \to W'$ from $Y$ to a curve $W'$. The Stein factorization of the composite morphism $g' \circ h: X' \to W'$ induces a fibration $f'': X' \to W$ from $X'$ to a normal curve $W$, which is defined by the base point free linear system $|\sigma^*l(mK_X - f^*L_i)|$ for a sufficiently divisible integer $l>0$. Since $\sigma: X' \to X$ is a birational morphism such that $\sigma_*\O_{X'} = \O_X$, we conclude that $|\sigma^*l(mK_X - f^*L_i)| = \sigma^*|l(mK_X - f^*L_i)|$, thus $|l(mK_X - f^*L_i)|$ has no base point, hence defines such a fibration $f': X \to W$ as in the Claim of Step 1. So $K_X$ is semi-ample, and this completes the proof in this case. From now on, we can assume $h^0(X, mK_X - f^*L_i) = 1$ and $h^0(X_1, \pi_1^*(mK_X - f^*L_i)) = 1$. For every $i$, we have a unique effective divisor $B_i \sim mK_X - f^*L_i$. By construction, we can assume $\pi_1^*B_i \neq \pi_1^*B_j$ if $i \neq j$, thus $L_i \neq L_j$. In the following, we only need to show that at least two of the $L_i$ are torsion.\ Step 3: For every $j$, we have a unique effective divisor $B_j \sim mK_X - f^*L_j$. Let $B'$ be the reduced divisor supported on the union of components of $\sum_j B_j$. Take a smooth log resolution $\mu: \tilde{X} \rightarrow X$ of the pair $(X, B')$. Denote by $\tilde{f}: \tilde{X} \rightarrow Z$ the natural morphism. Let $\tilde{B}$ be the reduced divisor supported on the union of the components of $\sum_j \mu^*B_j$. Consider the dlt pair $(\tilde{X}, \tilde{B})$. Since $X$ has terminal singularities, there exists an effective $\mu$-exceptional divisor $E$ on $\tilde{X}$ such that $$K_{\tilde{X}} \sim_{\Q} \mu^*K_X + E.$$ So $K_{\tilde{X}} + \tilde{B} \sim_{\Q} \mu^*K_X + E + \tilde{B}$ has a weak Zariski decomposition.
By Theorem \[thm:mmp\] (2), $(\tilde{X}, \tilde{B})$ has a minimal model $(\hat{X}, \hat{B})$ which is dlt, and there exists a natural morphism $\hat{f}: \hat{X} \to Z$. By the construction, we have the following:\ (1) Note that $B_j|_{X_{\bar{\eta}}}$ is contained in finitely many fibers of the Iitaka fibration $I_{\bar{\eta}}: X_{\bar{\eta}} \to C_{\bar{\eta}}$, which implies that $\kappa(\tilde{X}_{\bar{\eta}}, (K_{\tilde{X}} + \tilde{B})|_{\tilde{X}_{\bar{\eta}}}) = 1$. Since the restriction $(K_{\hat{X}} + \hat{B})|_{\hat{X}_{\bar{\eta}}}$ is semi-ample by [@Tan14 Theorem 1.2], it induces an elliptic fibration on $\hat{X}_{\bar{\eta}}$ by construction. So applying Lemma \[l-linear-pullback\] again, we get the following commutative diagram $$\centerline{\xymatrix{ &\hat{X}'\ar[d]_{\hat{h}}\ar[r]^{\hat{\sigma}} &\hat{X}\ar[d]^{\hat{f}} \\ &\hat{Y}\ar[r]^{\hat{g}} &Z }}$$ where $\hat{Y}$ is a smooth projective surface and $\hat{h}$ is an elliptic fibration such that $\hat{\sigma}^*(K_{\hat{X}} + \hat{B}) \sim_{\mathbb{Q}} \hat{h}^*\hat{D}$ where $\hat{D}$ is a nef and $\hat{g}$-big divisor on $\hat{Y}$.\ (2) We claim that $\nu(K_{\hat{X}} + \hat{B}) = \nu(\hat{D}) = 1$. Indeed, otherwise $\hat{D}$ would be big. Note that the divisor $\mu^*\sum_j B_j - \tilde{B}$ is effective and $\mu^*\sum_j B_j \sim \mu^*nmK_X - \sum_j\tilde{f}^*L_j$. Then applying Theorem \[ct\] we can get a contradiction as follows $$\begin{split} 2 = &\kappa(\hat{Y}, \hat{D}) =\kappa(\hat{X}', \hat{\sigma}^*(K_{\hat{X}} + \hat{B})) = \kappa(\hat{X}, K_{\hat{X}} + \hat{B}) = \kappa(\tilde{X}, K_{\tilde{X}} + \tilde{B})\\ \leq &\kappa(\tilde{X}, K_{\tilde{X}} + \mu^*nmK_X - \sum_j\tilde{f}^*L_j)\\ = &\kappa(\tilde{X}, \mu^*K_X + E + \mu^*nmK_X - \sum_j\mu^*f^*L_j) \\ = &\kappa(X, (nm+ 1)K_X - \sum_jf^*L_j) = 1.
\end{split}$$ (3) For sufficiently divisible $M$ and every $1 \leq i \leq n$, we get an effective Cartier divisor $$\tilde{\Gamma}_i = M(\mu^*B_i + mE) + Mm\tilde{B} \sim M(mK_{\tilde{X}} - \tilde{f}^*L_i) + Mm\tilde{B}.$$ Denote by $\nu: \tilde{X} \dashrightarrow \hat{X}$ the natural birational map. Let $\hat{\Gamma}_i = \nu_*\tilde{\Gamma}_i$. Then $$\hat{\Gamma}_i \sim M(mK_{\hat{X}} - \hat{f}^*L_i) + Mm\hat{B} \sim Mm(K_{\hat{X}} + \hat{B}) - M\hat{f}^*L_i.$$ Since $E$ is contained in finitely many fibers of $\tilde{f}$, $\nu_*E$ is contracted by $\hat{f}$. So if a component of $\hat{\Gamma}_i$ is dominant over $Z$ then it is contained in $\hat{B}$.\ (4) Take an effective divisor $\hat{D}_i \sim Mm\hat{D} - M\hat{g}^*L_i$ for each $i$. Since $\hat{D}$ is nef and $\hat{D}^2 =0$, the same holds for each $\hat D_i$. Considering connected components of the union of the $\hat D_i$, $1 \leq i \leq n$, we see that there exist nef effective Cartier divisors $\hat D_1',\ldots,\hat D_k'$ satisfying the conditions below: - $\mathrm{Supp}(\hat{D}_j')$ is connected for each $j$, and $\mathrm{Supp}(\hat{D}_j')\cap\mathrm{Supp}(\hat{D}_l')=\emptyset$ for each $j\neq l$; - $(\hat{D}_1')^2=\cdots=(\hat{D}_k')^2=0$; - the greatest common divisor of the coefficients of every $\hat{D}_j'$ is equal to one; - for each $i$, there exist $a_{i1},\ldots,a_{ik} \in \mathbb{Z}^{\geq 0}$ such that $\hat{D}_i = a_{i1}\hat{D}_1' + \cdots + a_{ik}\hat{D}_k'$. Note that at least one of the $\hat{D}_j'$ is dominant over $Z$, and hence intersects every fiber of $\hat{g}$. From this we see that every $\hat{D}_j'$ is dominant over $Z$. Indeed, if some $\hat D_j'$ is contained in one fiber, then the support of $\hat D_j'$ is equal to the whole fiber as shown by [@Beau96 VIII.4], which contradicts the first condition above. Now we have $\hat{\sigma}^*\hat{\Gamma}_i = \hat{h}^*\hat{D}_i$ by the construction.
Hence $\hat h^*\hat D_1',\ldots,\hat h^*\hat D_k'$ are disjoint connected components of $\hat \sigma^*(\sum \hat\Gamma_i)$. Let $\hat G_j:=\hat\sigma_*\hat h^*\hat D_j'$. Then we have that $\mathrm{Supp}(\hat G_j)\cap\mathrm{Supp}(\hat G_l)=\emptyset$ for each $j\neq l$.\ (5) Take two divisors $\hat{D}_1, \hat{D}_2$. Since $\hat{D}_1\ne\hat D_2$, we may assume that $a_{11}>a_{21}\geq0$. We may further assume that $a_{22}>a_{12}\geq0$ because $\hat{D}_1 \sim_{num} \hat{D}_2$. We can write $$\hat\Gamma_1 = a_{11}\hat{G}_1 + a_{12}\hat{G}_2 + \hat{G}'_3~\mathrm{and}~ \hat\Gamma_2 = a_{21}\hat{G}_1 + a_{22}\hat{G}_2 + \hat{G}''_3$$ where neither of $\hat{G}_1$ and $\hat{G}_2$ intersects $\hat{G}'_3 \cup \hat{G}''_3$.\ Step 4: Take two reduced and irreducible components $\hat{C}_1, \hat{C}_2$ of $\hat{G}_1, \hat{G}_2$ respectively, each dominant over $Z$. Then $\hat{C}_1, \hat{C}_2$ are contained in $\hat{B}$ by the construction of $\hat\Gamma_i$ in Step 3 (3). Since $(\hat{X}, \hat{B})$ is dlt and $\hat{B}$ is a reduced integral divisor, $\hat{C}_1, \hat{C}_2$ are log canonical centers of $(\hat{X}, \hat{B})$. By Step 3 (4), since $\hat{D}$ is nef and $\hat{D}\cdot\hat{D}_i = 0$, we have $\hat{D}|_{\hat{D}_i} \sim_{num} 0$. For $j=1,2$, since $\hat{h}(\hat{\sigma}^{-1}\hat{C}_j)$ is a component of some $\hat{D}_i$, by $\hat{\sigma}^*(K_{\hat{X}} + \hat{B}) \sim_{\Q} \hat{h}^*\hat{D}$, we conclude that $$\hat{\sigma}^*(K_{\hat{X}} + \hat{B})|_{\hat{\sigma}^{-1}\hat{C}_j} \sim_{num} 0.$$ Denote by $\hat{C}'_j$ the normalization of $\hat{C}_j$. Then $(K_{\hat{X}} + \hat{B})|_{\hat{C}'_j} \sim_{num} 0$, thus $(K_{\hat{X}} + \hat{B})|_{\hat{C}'_j} \sim_{\mathbb{Q}} 0$ by Lemma \[adj\].
Therefore, $$\begin{split} -a_{21}M\hat{f}^*L_1|_{\hat{C}'_1} &\sim_{\mathbb{Q}} a_{21}(Mm(K_{\hat{X}} + \hat{B}) - M\hat{f}^*L_1)|_{\hat{C}'_1}\\ &\sim_{\mathbb{Q}} a_{21}\hat{\Gamma}_1|_{\hat{C}'_1} \sim_{\mathbb{Q}} a_{11}a_{21}\hat{G}_1|_{\hat{C}'_1}\\ &\sim_{\mathbb{Q}} a_{11}\hat{\Gamma}_2|_{\hat{C}'_1} \sim_{\mathbb{Q}} -a_{11}M\hat{f}^*L_2|_{\hat{C}'_1} \end{split}$$ which, by Lemma \[injofpic\], implies that $$a_{21}ML_1 \sim_{\mathbb{Q}} a_{11}ML_2.$$ In the same way, restricting on $\hat{C}'_2$ gives $$a_{22}ML_1 \sim_{\mathbb{Q}} a_{12}ML_2.$$ Finally, by the conditions $a_{11} > a_{21}$ and $a_{12}<a_{22}$, we conclude that $L_1 \sim_{\mathbb{Q}} L_2 \sim_{\mathbb{Q}} 0$, and this completes the proof. [99]{} S. Abhyankar: [*Resolution of singularities of embedded algebraic surfaces*]{}, Second edition, Springer Monographs in Mathematics, Springer-Verlag, Berlin (1998). M. F. Atiyah: [*Vector bundles over an elliptic curve*]{}, Proc. London Math. Soc. (1957), 414–452. M. Artin: [*Coverings of the rational double points in characteristic $p$*]{}, Complex Analysis and Algebraic Geometry, Iwanami Shoten, Tokyo (1977). L. Bǎdescu: [*Algebraic surfaces*]{}, Universitext, Springer-Verlag, New York (2001). A. Beauville: [*Complex algebraic surfaces*]{}, Second edition, London Mathematical Society Student Texts 34, Cambridge University Press (1996). C. Birkar: [*The Iitaka conjecture $C_{n,m}$ in dimension six*]{}, Compositio Math. (2009), no. 6, 1442–1446. C. Birkar: [*Existence of flips and minimal models for 3-folds in char $p$*]{}, Ann. Sci. École Norm. Sup. (4) 49 (2016), 169–212. C. Birkar, J. Waldron: [*Existence of Mori fibre spaces for 3-folds in char $p$*]{}, http://arxiv.org/abs/1410.4511v1. C. Birkar, Y. Chen, C. Zhang: [*Iitaka’s $C_{n,m}$ conjecture for 3-folds over finite fields*]{}, to appear in Nagoya Math. J., http://arxiv.org/abs/1507.08760 (2016). E. Bombieri, D. Mumford: [*Enriques’ classification of surfaces in char. $p$, II*]{}.
In Complex Analysis and Algebraic Geometry (dedicated to K. Kodaira), Iwanami Shoten Publ., Tokyo, and Cambridge Univ. Press (1977), Part I, 23–42. J. Cao: [*Kodaira dimension of algebraic fiber spaces over surfaces*]{}, http://arxiv.org/abs/1511.07048 (2015). J. Cao, M. Pǎun: [*Kodaira dimension of algebraic fiber spaces over Abelian varieties*]{}, Invent. Math. (2017), no. 1, 345–387. P. Cascini, H. Tanaka, C. Xu: [*On base point freeness in positive characteristic*]{}, Ann. Sci. École Norm. Sup. (2015), 1239–1272. J. A. Chen, C. D. Hacon: [*Kodaira dimension of irregular varieties*]{}, Invent. Math. (2011), no. 3, 481–500. Y. Chen, L. Zhang: [*The subadditivity of the Kodaira dimension for fibrations of relative dimension one in positive characteristics*]{}, Math. Res. Lett. 22 (2015), 675–696. V. Cossart, O. Piltant: [*Resolution of singularities of threefolds in positive characteristic I*]{}, J. Algebra (2008), 1051–1082. V. Cossart, O. Piltant: [*Resolution of singularities of threefolds in positive characteristic II*]{}, J. Algebra (2009), 1836–1976. S. D. Cutkosky: [*Resolution of singularities for 3-folds in positive characteristic*]{}, Amer. J. Math. (2009), no. 1, 59–127. S. Ejiri: [*Weak positivity theorem and Frobenius stable canonical rings of geometric generic fibers*]{}, to appear in J. Algebraic Geom., http://arxiv.org/abs/1508.00484 (2015). S. Ejiri: [*Positivity of anti-canonical divisors and $F$-purity of fibers*]{}, preprint (2016). R. Fedder: [*F-purity and rational singularity*]{}, Trans. Amer. Math. Soc. (1983), no. 2, 461–480. O. Fujino: [*On maximal Albanese dimensional varieties*]{}, Proc. Japan Acad. Ser. A Math. Sci. (2013), no. 8, 92–95. O. Fujino: [*On subadditivity of the logarithmic Kodaira dimension*]{}, to appear in J. Math. Soc. Japan, http://arxiv.org/abs/1406.2759 (2014). C. D. Hacon, C. Xu: [*On the three dimensional minimal model program in positive characteristic*]{}, J. Amer. Math. Soc. (2015), 711–744. R.
Hartshorne: [*Algebraic Geometry*]{}, Graduate Texts in Mathematics, No. 52 (1977). S. Iitaka: [*Algebraic geometry: an introduction to birational geometry of algebraic varieties*]{}, Graduate Texts in Mathematics 76 (1982). Y. Kawamata: [*Characterization of abelian varieties*]{}, Compositio Math. (1981), no. 2, 253–276. Y. Kawamata: [*Kodaira dimension of algebraic fiber spaces over curves*]{}, Invent. Math. (1982), no. 1, 57–71. Y. Kawamata: [*Minimal models and the Kodaira dimension of algebraic fiber spaces*]{}, J. Reine Angew. Math. (1985), 1–46. S. Keel: [*Basepoint freeness for nef and big line bundles in positive characteristic*]{}, Annals of Math., Second Series, Vol. 149, No. 1 (1999), 253–286. J. Kollár: [*Subadditivity of the Kodaira dimension: fibers of general type*]{}, Algebraic geometry, Sendai, 1985, 361–398, Adv. Stud. Pure Math., North-Holland, Amsterdam (1987). S. Kovács, Z. Patakfalvi: [*Projectivity of the moduli space of stable log-varieties and subadditivity of log-Kodaira dimension*]{}, to appear in J. Amer. Math. Soc., http://arxiv.org/abs/1503.02952 (2015). H. Lange, U. Stuhler: [*Vektorbündel auf Kurven und Darstellungen der Fundamentalgruppe*]{}, Math. Z. (1977), 73–83. C. Lai: [*Varieties fibered by good minimal models*]{}, Math. Ann. (2011), no. 3, 533–547. T. Oda: [*Vector bundles on an elliptic curve*]{}, Nagoya Math. J. (1971), 41–72. Z. Patakfalvi: [*Semi-positivity in positive characteristics*]{}, Ann. Sci. École Norm. Sup. (2014), no. 5, 991–1025. Z. Patakfalvi: [*On subadditivity of Kodaira dimension in positive characteristic over a general type base*]{}, to appear in J. Algebraic Geom., http://arxiv.org/abs/1308.5371 (2013). H. Tanaka: [*Minimal models and abundance for positive characteristic log surfaces*]{}, Nagoya Math. J. (2014), 1–70. H. Tanaka: [*The X-method for klt surfaces in positive characteristic*]{}, J. Algebraic Geom. (2015), 605–628. H.
Tanaka: [*Abundance theorem for surfaces over an imperfect field*]{}, http://arxiv.org/abs/1502.01383 (2015). E. Viehweg: [*Canonical divisors and the additivity of the Kodaira dimension for morphisms of relative dimension one*]{}, Compositio Math. (1977), no. 2, 197–223. E. Viehweg: [*Weak positivity and the additivity of the Kodaira dimension for certain fibre spaces*]{}, Algebraic varieties and analytic varieties (Tokyo, 1981), 329–353, Adv. Stud. Pure Math., North-Holland, Amsterdam (1983). J. Waldron: [*Finite generation of the log canonical ring for 3-folds in char $p$*]{}, to appear in Math. Res. Lett., http://arxiv.org/abs/1503.03831. J. Waldron: [*The LMMP for log canonical 3-folds in char $p$*]{}, http://arxiv.org/abs/1603.02967 (2016). L. Zhang: [*Subadditivity of Kodaira dimensions for fibrations of three-folds in positive characteristics*]{}, http://arxiv.org/abs/1601.06907 (2016). [^1]: i.e., there exists a birational projective morphism $\mu: W \to X$ such that $\mu^*(K_X + \Delta) = P + M$, where $P$ is nef over $Z$ and $M$ is effective
--- abstract: 'We present recent results from the UKQCD collaboration’s dynamical QCD simulations. These data have a fixed lattice spacing but varying dynamical quark mass. We concentrate on searching for an unquenching signal in the mesonic mass spectrum, where we do not find a significant effect at the quark masses considered.' address: 'Department of Physics, University of Wales Swansea, U.K.' author: - | Chris Allton\ for the [*UKQCD Collaboration*]{} title: Searching for dynamical fermion effects in UKQCD simulations --- Introduction ============ Computing resources available for simulations of lattice QCD are now powerful enough to investigate unquenching effects. UKQCD has embarked on a programme of studying these effects in the light hadron spectrum, static quark potential, glueball spectrum and topological sectors. The purpose of this paper is to investigate unquenching effects in the first of these areas. It is well known that the lattice cut-off is a function of [*both*]{} the gauge coupling, $\beta$, and the dynamical quark mass, $m^{sea}$ (see e.g. [@csw176]). For this reason, the philosophy we have chosen is to simulate at points along the “matched” trajectory in the $(\beta,m^{sea})$ plane, i.e. that defined by [*fixed*]{} lattice spacing, $a$. This disentangles lattice spacing artefacts from unquenching effects. Any variation of a physical quantity along this trajectory can sensibly be attributed to unquenching effects rather than lattice systematics. This work has been published in full in [@csw202]. $\beta$ ${\mbox{$\kappa^{\rm sea}$}}$ $c_{SW}$ \#conf.
--------- ------------------------------- ---------- --------- --------------------------------------------------- ----------------------------------------------- ---------------------------------------------- 5.20 0.1355 2.0171 208 0.0972(8)[${\scriptstyle {}^{+{7}}_{-{0}}}$]{} 0.110[${\scriptstyle {}^{+{ 4}}_{-{ 3}}}$]{} 0.578[${\scriptstyle {}^{+{13}}_{-{19}}}$]{} 5.20 0.1350 2.0171 150 0.1031(09)[${\scriptstyle {}^{+{20}}_{-{1}}}$]{} 0.115[${\scriptstyle {}^{+{ 3}}_{-{ 3}}}$]{} 0.700[${\scriptstyle {}^{+{12}}_{-{10}}}$]{} 5.26 0.1345 1.9497 101 0.1041(12)[${\scriptstyle {}^{+{11}}_{-{10}}}$]{} 0.118[${\scriptstyle {}^{+{ 2}}_{-{ 2}}}$]{} 0.783[${\scriptstyle {}^{+{ 5}}_{-{ 5}}}$]{} 5.29 0.1340 1.9192 101 0.1018(10)[${\scriptstyle {}^{+{20}}_{-{7}}}$]{} 0.116[${\scriptstyle {}^{+{ 3}}_{-{ 4}}}$]{} 0.835[${\scriptstyle {}^{+{ 7}}_{-{ 7}}}$]{} 5.93 0 1.82 623 0.1040(03)[${\scriptstyle {}^{+{4}}_{-{0}}}$]{} 0.1186[${\scriptstyle {}^{+{17}}_{-{15}}}$]{} 1 Simulation Details ================== A non-perturbatively improved clover action was used with lattice parameters displayed in Table \[tb:params\] with a volume of $16^3 \times 32$. The last four rows contain the parameters for the matched ensembles, while the top simulation explores a lighter quark mass. Further details of the simulation appear in [@csw202; @derek]. Mesonic Sector ============== We begin outlining our results by indicating our quark masses via $M_{PS}^{unitary}/M_V^{unitary}$, where $M_{V(PS)}$ is the pseudoscalar (vector) meson mass and the superscript “unitary” refers to the parameters $m^{sea} \equiv m^{val}$ in Table \[tb:params\]. As can be seen our quark masses are only modestly light (c.f. $M_{PS}^{unitary}/M_V^{unitary} \approx 0.18$ in nature). The results for the hyperfine splitting are shown in Fig. 1 where the experimental points are plotted assuming $r_0 = (0.49 \pm 0) fm$. 
Note that there is a tendency for the lattice data to flatten towards the experimental points as the dynamical quark mass decreases within the matched ensembles. However, the “unmatched” simulation (${\mbox{$\kappa^{\rm sea}$}}=0.1355$) shows an increased negative slope, presumably due to finite-volume effects. \[fig:hyperfine\] The $J$ parameter, defined as $$J = M_V \frac{dM_V}{dM_{PS}^2} \bigg|_{K,K^\ast}, \label{eq:J}$$ was calculated using three approaches: (i) “Pseudo-Quenched”, i.e. where ${\mbox{$\kappa^{\rm sea}$}}$ is held fixed and the derivative in eq.(\[eq:J\]) is w.r.t. variations in ${\mbox{$\kappa^{\rm val}$}}$; (ii) “Unitary Trajectory”, i.e. where ${\mbox{$\kappa^{\rm sea}$}}\equiv {\mbox{$\kappa^{\rm val}$}}$ and the derivative in eq.(\[eq:J\]) is w.r.t. variations in both ${\mbox{$\kappa^{\rm val}$}}$ and ${\mbox{$\kappa^{\rm sea}$}}$ combined; and (iii) “Chiral Extrapolation of Pseudo-Quenched”, i.e. taking the $J$ values from Approach (i) and performing the chiral extrapolation $m^{sea} \rightarrow 0$. The results of all three methods are shown in Fig. 2, together with the experimental point. There is evidence that the lattice value for $J$ approaches the experimental point as the sea quark mass decreases (see Approaches (i) and (iii)). Note also that the “unmatched” simulation (${\mbox{$\kappa^{\rm sea}$}}=0.1355$) has a [*smaller*]{} $J$ value. This fact is related to the comment in the previous paragraph. \[fig:J\] We use two methods to extract the lattice spacing. The first is via the Sommer scale $r_0$, and the second is directly from the meson spectrum as outlined in [@leonardo]. The values obtained from both methods are displayed in Table \[tb:params\]. Note that $a_J$ is $\approx$ 10% larger than $a_{r_0}$. This could be due to the experimental estimate of $r_0 = 0.49$ fm being $\approx$ 10% too small.
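As a concrete illustration of how eq.(\[eq:J\]) is evaluated in practice, the derivative can be estimated as a finite-difference slope between two $(M_{PS}^2, M_V)$ points bracketing the $K,K^\ast$ point. The sketch below is our own illustration, not the analysis code; the masses used in the example are the physical $\pi$, $\rho$, $K$ and $K^\ast$ masses in GeV, which give a value close to the experimental $J \approx 0.48$.

```python
def j_parameter(mps2_a, mv_a, mps2_b, mv_b, mv_at):
    """Estimate J = M_V * dM_V/dM_PS^2 with a finite-difference slope
    between two (M_PS^2, M_V) points, evaluated at the vector mass mv_at."""
    slope = (mv_b - mv_a) / (mps2_b - mps2_a)
    return mv_at * slope
```

For example, `j_parameter(0.140**2, 0.770, 0.494**2, 0.896, 0.896)` evaluates the slope between the physical $(\pi,\rho)$ and $(K,K^\ast)$ points at $M_{K^\ast}$, giving roughly 0.5.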
Unquenching Effects in Meson Spectrum ===================================== In order to investigate unquenching effects in the meson spectrum, we define the quantity $$\delta_{i,j}(\beta,{\mbox{$\kappa^{\rm sea}$}}) = 1 - \frac{a_i(\beta,{\mbox{$\kappa^{\rm sea}$}})} {a_j(\beta,{\mbox{$\kappa^{\rm sea}$}})},$$ where $a_i$ is the scale determined from the physical quantity $M_i$. When $\delta_{i,j} = 0$, the lattice prediction of $M_i$ with the scale taken from $M_j$ agrees with experiment. Thus $\delta$ is a good parameter with which to study unquenching effects. We expect that $\delta_{i,j}(\beta,m^{sea}) = {\cal O}(a^2)$ since we are using a non-perturbatively improved clover action. In Fig. 3 we plot $\delta_{i,j}$ for the matched datasets where $j=r_0$, and $i$ runs over the string tension, $\sigma$, and the two mass pairs $(\rho,\pi)$ and $(K,K^\ast)$, following [@leonardo]. The $x$-axis in this plot is $(aM_{PS}^{unitary})^{-2} \sim 1/m^{sea}$, so the quenched data point lies on the $y$-axis. \[fig:delta\] Disappointingly, we see that, in the case of the scale determinations from the meson spectrum, there is no significant tendency of $\delta$ towards zero as $m^{sea} \rightarrow 0$, i.e. the quantity $\delta$ does not give us an indication of unquenching effects. However, in the case of $\sigma$, there is a statistically significant variation of $\delta \rightarrow 0$ as $m^{sea} \rightarrow 0$. This implies that we do see unquenching effects in the static quark potential. Chiral Extrapolations ===================== In [@csw202] we used three approaches to perform the chiral extrapolations of hadron masses: “Pseudo-Quenched”; “Unitary Trajectory”; and a “Combined Chiral Fit”. In this publication we will refer only to the last approach.
We take $$\begin{aligned} \hat{M}({\mbox{$\kappa^{\rm sea}$}};{\mbox{$\kappa^{\rm val}$}}) &= A({\mbox{$\kappa^{\rm sea}$}}) + B({\mbox{$\kappa^{\rm sea}$}}) \hat{M}_{PS}({\mbox{$\kappa^{\rm sea}$}};{\mbox{$\kappa^{\rm val}$}})^2 \\ &= A_0 + A_1 \hat{M}_{PS}({\mbox{$\kappa^{\rm sea}$}};{\mbox{$\kappa^{\rm sea}$}})^{-2} \\ &\quad + \left[ B_0 + B_1 \hat{M}_{PS}({\mbox{$\kappa^{\rm sea}$}};{\mbox{$\kappa^{\rm sea}$}})^{-2} \right] \hat{M}_{PS}({\mbox{$\kappa^{\rm sea}$}};{\mbox{$\kappa^{\rm val}$}})^2,\end{aligned}$$ using the nomenclature $\hat{M} \equiv aM$; the first argument of $M({\mbox{$\kappa^{\rm sea}$}};{\mbox{$\kappa^{\rm val}$}})$ refers to the sea quark and the second to the valence quark. The results of these extrapolations are shown in Table \[tb:chiral\]. We stress that this functional form for the extrapolation is not motivated by theory, but is used as a numerical analysis technique in order to test for evidence of unquenching effects. As can be seen from Table \[tb:chiral\], the parameters $A_1$ and $B_1$ are compatible with zero (within $2\sigma$) and therefore we conclude that there is no evidence of unquenching effects.
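Since the combined fit function is linear in the parameters $(A_0, A_1, B_0, B_1)$, they can be obtained from four $(\hat{M}_{PS}^{sea}, \hat{M}_{PS}^{val}, \hat{M})$ points by solving a small linear system. The sketch below is our own illustration of this structure, not the UKQCD fitting code (which fits many points with statistical errors); it recovers the parameters exactly from synthetic data.

```python
def chiral_model(params, mps_sea, mps_val):
    """M-hat = [A0 + A1/mps_sea^2] + [B0 + B1/mps_sea^2] * mps_val^2."""
    a0, a1, b0, b1 = params
    x = mps_sea ** -2
    return (a0 + a1 * x) + (b0 + b1 * x) * mps_val ** 2

def fit_chiral(data):
    """Solve for (A0, A1, B0, B1) from four (mps_sea, mps_val, mass) points,
    exploiting that the model is linear in the parameters."""
    rows, rhs = [], []
    for mps_sea, mps_val, mass in data:
        x = mps_sea ** -2
        rows.append([1.0, x, mps_val ** 2, x * mps_val ** 2])
        rhs.append(mass)
    return solve4(rows, rhs)

def solve4(a, b):
    """Gaussian elimination with partial pivoting on a small square system."""
    n = len(b)
    m = [row[:] + [b_i] for row, b_i in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        sol[r] = (m[r][n] - sum(m[r][c] * sol[c] for c in range(r + 1, n))) / m[r][r]
    return sol
```

A non-zero $A_1$ or $B_1$ in such a fit would signal a dependence of the hadron mass on the sea quark mass beyond that carried by the valence pseudoscalar mass, i.e. an unquenching effect.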
  hadron    $A_0$                 $A_1$                  $B_0$               $B_1$
  --------- --------------------- ---------------------- ------------------- --------------------
  Vector    $0.492^{+10}_{-9}$    $-0.004^{+2}_{-3}$     $0.61^{+4}_{-4}$    $0.015^{+9}_{-7}$
  Nucleon   $0.663^{+13}_{-15}$   $0.006^{+3}_{-4}$      $1.23^{+6}_{-6}$    $-0.001^{+1}_{-1}$
  Delta     $0.84^{+2}_{-2}$      $-0.002^{+5}_{-5}$     $0.91^{+8}_{-9}$    $0.02^{+2}_{-2}$

  : Fit parameters from the Chiral Extrapolations[]{data-label="tb:chiral"}

Conclusions =========== This paper attempts to uncover unquenching effects in dynamical lattice QCD simulations at a fixed (matched) lattice spacing (and volume) and various dynamical quark masses. This approach allows a more controlled study of unquenching effects without the possible entanglement of lattice and unquenching systematics. However, we see no significant sign of unquenching effects in the meson spectrum. This is presumably because our dynamical quarks are relatively massive, so that the meson spectrum is dominated by the static quark potential. This potential is, by definition, matched amongst our ensembles at the hadronic length scale $r_0$, and so any variation of the meson spectrum within our matched ensembles must surely be a “higher-order” unquenching effect which is beyond our present statistics. We have, however, shown that unquenching effects exist in the static quark potential, and other work (using the same ensembles) has shown interesting unquenching effects in the glueball and topological sectors [@csw202]. [99]{} UKQCD, C. R. Allton [*et al.*]{}, Phys. Rev. [**D60**]{} (1999) 034507, hep-lat/9808016.
UKQCD, C. R. Allton [*et al.*]{}, hep-lat/0107021. D. Hepburn, [*these proceedings*]{}. C. R. Allton, V. Giménez, L. Giusti, and F. Rapuano, Nucl. Phys. [**B489**]{} (1997) 427.\
--- abstract: 'We present a re-analysis of the optical spectroscopic data on SS433 from the last quarter-century and demonstrate that these data alone contain systematic and identifiable deviations from the traditional kinematic model for the jets: variations in speed, which agree with our analysis of recent radio data; in precession-cone angle and in phase. We present a simple technique for separating out the jet speed from the angular properties of the jet axis, assuming only that the jets are symmetric. With this technique, the archival optical data reveal that the variations in jet speed and in precession-cone angle are anti-correlated in the sense that when faster jet bolides are ejected the cone opening angle is smaller. We also find speed oscillations as a function of [*orbital*]{} phase.' author: - 'Katherine M. Blundell and Michael G. Bowler' title: 'Jet velocity in SS433: its anti-correlation with precession-cone angle and dependence on orbital phase' --- Introduction {#sec:intro} ============ In a recent paper [@Blu04] we presented the deepest yet radio image of SS433, which revealed an historical record over two complete precession periods of the geometry of the jets. Detailed analysis of this image revealed systematic deviations from the standard kinematic model [@Mar84; @Eik01]. Variations in jet speed, lasting for as long as tens of days, were needed to match the detailed structure of each jet. Remarkably, these variations in speed were equal, matching the two jets simultaneously. The Doppler residuals to the kinematic model show little variation with the precessional phase of the jets and this observation rules out variations in jet speed [*alone*]{} as the source of the residuals [e.g. @Kat82a; @Eik01]. Very little phase variation is obtained if the pointing angle jitters [@Kat82a] but there is no evidence excluding symmetric speed variations of the magnitude reported in [@Blu04] superposed on pointing jitter, Fig\[fig:varywithphase\]. 
Thus our findings from the radio image led us to re-analyse the archival optical data. G. Collins II and S. Eikenberry (with kind permission of B. Margon) made available their compiled datasets, published in [@Col00] and used in [@Eik01]. We use the Collins’ compilation (available at http://www-astro.physics.ox.ac.uk/$\sim$kmb/ss433/) because of its higher quoted precision, but very similar results are obtained from Margon’s. Speed and angular variations from the optical data {#sec:variations} ================================================== If the precessing jet axis of SS433 traces out a cone of semi-angle $\theta$ about a line which is oriented at an angle $i$ to our line-of-sight with jet velocity $\beta$ in units of $c$ ($\gamma = (1 - \beta^2)^{-1/2}$), the redshifts measured from the west jet ($z_{+}$) and the east jet ($z_{-}$) are given, if the jets are symmetric, by: $$\label{eq:redshift} z_{\pm} = -1 + \gamma[1 \pm \beta\sin{\theta}\sin{i}\cos{\phi} \pm \beta\cos{\theta}\cos{i}],$$ where $\phi$ is the phase of the precession cycle (see http://www-astro.physics.ox.ac.uk/$\sim$kmb/ss433/). Addition of $z_{+}$ and $z_{-}$ in Eqn\[eq:redshift\] gives an expression relating the observed redshifts to the jet speed independently of any angular variation. Re-arrangement gives $$\label{eq:sum} \beta = \left[ 1 - \left[1 + \frac{z_{+} + z_{-}}{2}\right]^{-2} \right] ^{1/2}.$$ The quantity $z_{+} + z_{-}$ fluctuates very substantially (see Fig\[fig:resids\_v\_time\]). Subtraction of the expressions for $z_{+}$ and $z_{-}$ gives the angular properties $a$ of the orientation of the jet axis, with the speed divided out using Eqn\[eq:sum\]: $$\label{eq:ang} a = \frac{z_{+} - z_{-}}{2\beta\gamma} = \sin{\theta}\sin{i}\cos{\phi} + \cos{\theta}\cos{i},$$ Fluctuations in $\beta$, $\theta$ and $\phi$ are predominantly symmetric, as described by Eqns 1–3, when fluctuations in $z_{+} + z_{-}$ represent symmetric fluctuations in speed. 
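Eqns 1–3 can be applied pair by pair; the following minimal Python sketch (illustrative only — the function names are ours, not from the paper) recovers the jet speed and the angular variable from a simultaneous redshift pair, assuming symmetric jets:

```python
import math

def beta_from_pair(z_plus, z_minus):
    """Jet speed from a simultaneous redshift pair (Eqn 2). For
    symmetric jets z+ + z- = -2 + 2*gamma, so the sum fixes the
    Lorentz factor independently of any angular variation."""
    gamma = 1.0 + 0.5 * (z_plus + z_minus)
    return math.sqrt(1.0 - gamma ** -2)

def angular_from_pair(z_plus, z_minus):
    """Angular variable a of Eqn 3, with the speed divided out:
    a = sin(theta)sin(i)cos(phi) + cos(theta)cos(i)."""
    beta = beta_from_pair(z_plus, z_minus)
    gamma = 1.0 + 0.5 * (z_plus + z_minus)
    return (z_plus - z_minus) / (2.0 * beta * gamma)
```

Because the sum $z_{+} + z_{-}$ depends only on $\gamma$, any angular jitter cancels exactly in `beta_from_pair`; this is what allows the speed and pointing histories to be separated.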
(The velocity variations in the two radio jets [@Blu04] are highly symmetric; the standard deviation on the difference in the speeds is less than $0.004\,c$ and on the common velocity $0.014\,c$.) The disadvantages of the variables $s = z_{+} + z_{-}$ and $a$ (Eqn\[eq:ang\]) are that their interpretation is simple only for perfect symmetry and that they may only be used for the 395 out of 486 observations which record a simultaneous pair. Their merits are exemplified by Fourier analyses of the time distributions. We used the algorithm of [@Rob87] which accounts for the uneven time-sampling of the data. The angular data $a$ clearly revealed periodicities corresponding to the nodding of the precession axis [@Kat82; @New82; @Col02] and the 162-day precession period, clearly seen in Fig\[fig:ft\]a. There is no periodicity in the speed data $s$ (Fig\[fig:ft\]b) common to the angular data, consistent with perfect symmetry. The speed data also indicate a periodicity at 13.08days (the periodicity at 12.58days matches a beat with Earth’s orbital period 365days); to investigate this, we folded the data over 13.08days in 20 phase bins, and in each bin the mean speed ($\beta$, from Eqn\[eq:sum\]) was derived. Fig\[fig:fold\] shows a clear sinusoidal oscillation with orbital phase. The rms variation in speed which oscillates with orbital phase is smaller by a factor of three than the overall speed dispersion. This oscillation with amplitude $2000\,{\rm km\,s^{-1}}$ may be because the speed with which the bolides are ejected is a function of orbital phase, but the excursions in Fig\[fig:fold\] could also be interpreted as due to orbital motion; in that case SS433’s orbital speed is $\sim 400\, {\rm km\, s}^{-1}$. Anti-correlated deviations in jet speed and $\theta$ {#sec:correlations} ==================================================== We fitted Collins’ dataset with the kinematic model, including nodding. 
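The folding of the speed data over the 13.08-day period into 20 phase bins, described in the preceding section, can be sketched as follows (a schematic re-implementation under our own naming, not the authors' code):

```python
import math
from collections import defaultdict

P_ORB = 13.08   # trial orbital period in days (from the periodogram)
N_BINS = 20

def fold_speeds(times, betas, period=P_ORB, n_bins=N_BINS):
    """Fold unevenly sampled speed measurements on a trial period and
    return the mean speed in each phase bin (None for empty bins)."""
    bins = defaultdict(list)
    for t, b in zip(times, betas):
        phase = (t / period) % 1.0                      # phase in [0, 1)
        bins[min(int(phase * n_bins), n_bins - 1)].append(b)
    return [sum(v) / len(v) if (v := bins[i]) else None
            for i in range(n_bins)]
```

A sinusoidal modulation of the ejection speed with orbital phase then shows up directly as a sinusoid in the binned means, as in Fig\[fig:fold\].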
From our fit, we derived model redshift pairs and hence the variables $s$ and $a$. Subtraction of these model variables from those constructed from the data gave residuals $\Delta s$ in $s$ and $\Delta a$ in $a$. The $\beta$ variation is shown in Fig\[fig:resids\_v\_time\]c. The standard deviation of this histogram is 0.013, in excellent agreement with the result from our radio image (0.014). Examples of the residuals in $s$ and in the angular variable $a$ are plotted in Fig\[fig:resids\_v\_time\]; the variations in speed and angular residuals are anti-correlated. From Eqns\[eq:redshift\]–\[eq:ang\], maintaining the assumption of symmetry: $$\label{eq:s_2} \Delta s^2 = 4 \beta^2 \gamma^6 \Delta \beta^2 ,$$ $$\begin{aligned} \label{eq:a_2} \Delta a^2 &=& (\cos\theta \sin{i} \cos\phi - \sin\theta \cos{i})^2 \Delta \theta^2 + (\sin\theta \sin{i} \sin\phi)^2 \Delta\phi^2 \\ \nonumber &-& 2 \sin\theta \sin{i} \sin\phi (\cos\theta \sin{i} \cos\phi - \sin\theta \cos{i}) \Delta \theta \Delta \phi ,\end{aligned}$$ $$\begin{aligned} \label{eq:as} \Delta a\,\Delta s &=& 2 \beta \gamma^3 [(\cos\theta \sin{i} \cos\phi - \sin\theta \cos{i}) \Delta\beta\,\Delta\theta \\ \nonumber &-& \sin\theta \sin{i} \sin\phi \Delta\beta \Delta\phi],\end{aligned}$$ where $\Delta \beta$, $\Delta \theta$ and $\Delta \phi$ represent the variations in $\beta$, $\theta$ and $\phi$ respectively. Averaging over many cycles for any given value of the phase $\phi$ yields the averages $\langle \Delta a^2 \rangle$, $\langle \Delta s^2 \rangle$, $\langle \Delta a \Delta s \rangle$ as functions of $\phi$, in terms of the parameters $\langle \Delta \beta^2 \rangle$, $\langle \Delta \theta^2 \rangle$, $\langle \Delta\beta\,\Delta\theta \rangle$ and so on. The fit to $\langle \Delta\beta\,\Delta\theta \rangle$ is shown in Fig\[fig:a\_s\]. The parameters are given in Table\[tab:fits\]; the global $\chi^2 / NDF$ is 34.8/24, where $NDF$ is the number of degrees of freedom. 
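As a quick consistency check on Eqn 4, the derivative $ds/d\beta = 2\beta\gamma^3$ that underlies $\Delta s^2 = 4\beta^2\gamma^6\Delta\beta^2$ can be verified numerically (a sketch; the value of $\beta$ is illustrative of SS433's jet speed):

```python
import math

def s_of_beta(beta):
    """s = z+ + z- = -2 + 2*gamma: the redshift sum depends on the
    jet speed only, not on the pointing of the jet axis."""
    return -2.0 + 2.0 / math.sqrt(1.0 - beta ** 2)

def ds_dbeta(beta):
    """Analytic derivative 2*beta*gamma^3, whose square gives
    Delta s^2 = 4 beta^2 gamma^6 Delta beta^2 (Eqn 4)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 2.0 * beta * gamma ** 3

# central finite difference at a typical jet speed
beta0, h = 0.26, 1e-6
numeric = (s_of_beta(beta0 + h) - s_of_beta(beta0 - h)) / (2.0 * h)
```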
Fig\[fig:a\_s\] shows that the quantity $\langle \Delta a\,\Delta s \rangle$ has an almost pure cosinusoidal variation with phase, as given by the first term on the right hand side of Eqn\[eq:as\]. This shape is the unique signature of a correlation between variations in $\beta$ and $\theta$. $\langle \Delta \beta^2 \rangle$ shows no correlation with $\phi$ and $\langle \Delta a^2 \rangle$ very little. The latter requires fluctuations in both $\theta$ and in $\phi$. We remark that $\langle \Delta (z_{+} - z_{-})\,\Delta s \rangle $ does not show a strong correlation with $\phi$; nor should it, using the parameters from Table 1. Removing the varying speed from $z_{+} - z_{-}$ was crucial in revealing this correlation in $\langle \Delta a\,\Delta s \rangle$. The redshift residual plot {#sec:eikfigfive} ========================== Consider the plane of redshift residuals, as in figure 5 of [@Eik01], with symmetric excursions from the kinematic model in $\beta$, $\theta$ and $\phi$. Comparison of their figure 5 (similar to our Fig\[fig:eikfigfive\]f) with our Figs\[fig:eikfigfive\]a and \[fig:eikfigfive\]b requires the presence of both angular variations (to spread the points along the line $y = -x$) and velocity variations (to spread the points perpendicular to this line — note that even their quoted redshift measurement error of 0.003, likely an over-estimate, will not account for this breadth). Inclusion of all these variations, correlated as in Table\[tab:fits\], gives Fig\[fig:eikfigfive\]d which resembles that from the data (Fig\[fig:eikfigfive\]f). The simulations in Fig\[fig:eikfigfive\]d take no account of the (stochastic) duration of the excursions; the duration of variations in Fig\[fig:eikfigfive\]e were drawn from a gaussian with half-width 2days. 
Thus Figs\[fig:varywithphase\] and \[fig:eikfigfive\] establish the consistency of the optical data with perfect symmetry and with speed fluctuations whose magnitude is in excellent agreement with those found in the radio image. In addition, the slope of Fig\[fig:eikfigfive\]e is $-0.765 \pm 0.034$, and that for Fig\[fig:eikfigfive\]f from the Collins data set is $-0.786$ and not $-1$; this long standing curiosity is explained by the physics from the §\[sec:correlations\] fit. Other assumptions {#sec:other} ================= If the jet speed were constant, the residuals to $z_{+}$ and $z_{-}$ would, for the case of strictly antiparallel jets, be correlated as in Fig\[fig:eikfigfive\]a; to spread the distribution of points perpendicular to this line it is necessary to allow some independence in the pointing of the two jets. The variation with precessional phase of the quantities $\langle \Delta s^2\rangle$, $\langle \Delta a^2\rangle$ and $\langle \Delta a \Delta s\rangle$ is then almost as well described ($\chi^2 / NDF = 36.3/24$) as by our fit of §\[sec:correlations\]. Such a model would not be able to explain the radio jet morphology [@Blu04] and this fit required angular fluctuations breaking symmetry to have rms values $\sim 1/2$ of those preserving symmetry. Angular jitter alone cannot account for the slope of $\Delta z_{-}$ versus $\Delta z_{+}$ (Fig\[fig:eikfigfive\]) differing from $-1$, unless the jitter in the East jet is systematically smaller than in the West jet. Velocity variation breaks symmetry in the Doppler shifts and accounts naturally for this observation. A better fit than either ($\chi^2/NDF$ = 25.5/21) was achieved by allowing some symmetry breaking angular fluctuations in addition to symmetric velocity fluctuations: in this case the rms symmetry breaking fluctuations were $\sim 1/4$ of those preserving symmetry. 
In all cases the rms fluctuations in $\theta$ were $\sim 1/3$ of the rms fluctuations in $\phi$, as would be expected for pointing angle fluctuations described by [@Kat82a] as “isotropic”. Concluding remarks ================== Archival optical spectroscopic data on SS433 reveal variations in jet speed, in cone opening angle, and in the phase of the precession. These appear in the plane of redshift residuals (Fig\[fig:eikfigfive\]) and through the new (symmetry dependent) technique of combining simultaneously observed redshift pairs for the speed-only ($s$) and angular-only ($a$) characteristics (Fig\[fig:resids\_v\_time\]). The velocity variations $\sim 0.014\,c$ are strongly anticorrelated with cone angle $\theta$, in the sense that when faster bolides are ejected the cone angle is smaller. We also found smaller amplitude sinusoidal oscillations in speed as a function of orbital phase. If this is due to ejection speed, perhaps the orbit of the binary is eccentric. If these 13.08-day oscillations are orbital Doppler shifts the orbital velocity is $\sim 400\,{\rm km\, s}^{-1}$ (twice that inferred by [@Cra81; @Fab90]). If this were the case, the mass of the companion to SS433 would be $> 86\, {\rm M}_\odot$ and if the mass fraction were 0.1, then the mass of the companion would be  $100\, {\rm M}_\odot$. Such masses would be hardly consistent with an A-type companion [@Gei02; @Cha04]. K.M.B. thanks the Royal Society for a University Research Fellowship. It is a pleasure to thank Avinash Deshpande, James Binney & Philipp Podsiadlowski for helpful discussions. Blundell, K. M., & Bowler, M. G. 2004, , 616, L159 Charles, P.A. et al, 2004, RevMexAA, 20, 50 Crampton, D. & Hutchings, J.B. 1981, , 251, 604 Collins, G.W. & Scher, R.W., 2002, , 336, 1011 Collins, G.W. & Scher, R.W., 2000, in “the Kth Reunion”, ed A.G.D. Philip, L. Davis Press, Schenectady, NY, pg105 Eikenberry, S.S. et al, 2001, , 561, 1027 Fabrika, S. N. & Bychkova, L. V. 
1990, , 240, L5 Gies, D.R., Huang, W. & McSwain, M.V., 2002, , 578, L67 Katz, J. I. & Piran, T. 1982, , 23, 11 Katz, J. I., Anderson, S. F., Grandi, S. A., & Margon, B. 1982, , 260, 780 Margon, B. 1984, , 22, 507 Newsom, G.H. & Collins, G.W., 1982, , 262, 714 Roberts, D.H., Lehar, J. & Dreher, J.W., 1987, , 93, 968

  ---------------------------------------------- ------------------------------------ ------------------------- -------------
  $\langle \Delta \beta^2 \rangle$               $(1.67 \pm 0.18) \times 10^{-4}$     rms speed variation       $0.0129\,c$
  $\langle \Delta \theta^2 \rangle$              $(2.24 \pm 0.35) \times 10^{-3}$     rms $\theta$ variation    2.71 deg
  $\langle \Delta \phi^2 \rangle$                $(1.31 \pm 0.26) \times 10^{-2}$     rms $\phi$ variation      2.96 days
  $\langle \Delta \beta\,\Delta\theta \rangle$   $(-3.81 \pm 0.52) \times 10^{-4}$
  $\langle \Delta \beta\,\Delta\phi \rangle$     $(1.70 \pm 0.80) \times 10^{-4}$
  $\langle \Delta \theta\,\Delta\phi \rangle$    indistinguishable from zero
  ---------------------------------------------- ------------------------------------ ------------------------- -------------
--- author: - 'V. Hansteen' - 'A. Ortiz' - 'V. Archontis' - 'M. Carlsson' - 'T. M. D. Pereira' - 'J. P. Bj[ø]{}rgen' bibliography: - 'solarrefs.bib' date: 'Received ; accepted' title: 'Ellerman bombs and UV bursts: transient events in chromospheric current sheets' --- [Ellerman bombs (EBs), observed in the photospheric wings of the [H$\alpha$]{} line, and UV bursts, observed in the transition region  line, are both brightenings related to flux emergence regions and specifically to magnetic flux of opposite polarity that meet in the photosphere. These two reconnection-related phenomena, nominally formed far apart, occasionally occur in the same location and at the same time, thus challenging our understanding of reconnection and heating of the lower solar atmosphere.]{} [We consider the formation of an active region, including long fibrils and hot and dense coronal plasma. The emergence of an untwisted magnetic flux sheet, injected $2.5$ Mm below the photosphere, is studied as it pierces the photosphere and interacts with the preexisting ambient field. Specifically, we aim to study whether EBs and UV bursts are generated as a result of such flux emergence and examine their physical relationship.]{} [The Bifrost radiative magnetohydrodynamics code was used to model flux emerging into a model atmosphere that contained a fairly strong ambient field, constraining the emerging field to a limited volume wherein multiple reconnection events occur as the field breaks through the photosphere and expands into the outer atmosphere. Synthetic spectra of the different reconnection events were computed using the $1.5$D RH code and the fully 3D MULTI3D code.]{} [The formation of UV bursts and EBs at intensities and with line profiles that are highly reminiscent of observed spectra is understood to be a result of the reconnection of emerging flux with itself in a long-lasting current sheet that extends over several scale heights through the chromosphere.
Synthetic spectra in the [H$\alpha$]{} and  139.376 nm lines both show characteristics that are typical of the observations. These synthetic diagnostics suggest that there are no compelling reasons to assume that UV bursts occur in the photosphere. Instead, EBs and UV bursts are occasionally formed at opposite ends of a long current sheet that resides in an extended bubble of cool gas.]{} Introduction ============ Emerging flux regions are host to a large variety of transient phenomena that are driven by the interaction of the emerging field with the photospheric, chromospheric, and coronal plasma, the preexisting ambient field, and of the field with itself. Such interactions are a necessary part of the rise of the field into the corona and the forming magnetic field of the active region; the rising field carries with it considerable mass, which must fall back to and through the photosphere in order to allow the field to attain coronal heights. Reconnection plays an important role in this process, cutting field lines so that dense material can fall while at the same time alleviating the weight on the field lines and allowing them to form the longer loops that make up the active region corona and chromosphere of the active region [see, e.g., @2004ApJ...614.1099P]. These dynamical phenomena include Ellerman bombs [EB hereafter, @1917ApJ....46..298E] and UV bursts [@2014Sci...346C.315P], which both occur in the vicinity of merging or cancelling photospheric fields of opposite polarities. Ellerman bombs are first and foremost recognized by strong emission in the wings of [H$\alpha$]{}, with little or no visible signal in the line core. This indicates that EBs are formed at photospheric or upper photospheric heights, at most some few hundred kilometers above the photosphere. As observed in spectroheliograms in the wings of [H$\alpha$]{}, they appear as flame-like structures that jut out of the photosphere when seen toward the limb [@2011ApJ...736...71W]. 
The primary signature of UV bursts is found in small ($\sim1\arcsec$) extreme brightenings in the  lines that have been observed with the Interface Region Imaging Spectrograph (IRIS) [@2014SoPh..289.2733D] to be as much as two to three orders of magnitude brighter than the average emission. This brightening is accompanied by large line broadening, of order $200$ km/s or more, and importantly, superimposed absorption features from lines associated with much cooler temperatures such as from  and  lines. UV bursts and EBs are also found at sunspot light bridges [e.g., @2015ApJ...811..137T] and in moving magnetic features [e.g., @2015ApJ...809...82G]. The observed relationship between EBs and UV bursts has recently been the focus of debate in several papers; see @2015ApJ...812...11V, @2015ApJ...810...38K, @2016ApJ...824...96T, and @2018SSRv..214..120Y, and references cited therein, and most recently, @Ortiz_etal2018. The question is whether EBs and UV bursts are connected, and if so, how they are related. The height at which the plasma temperature becomes high enough to emit radiation in the  lines also needs to be determined. @2014Sci...346C.315P have speculated on the possibility of a relationship between UV bursts and EBs. These authors were unable to conclude whether UV bursts and EBs were the same phenomena or not. [@2015ApJ...808..116J] suggested that the  emission was more naturally explained as stemming from the chromosphere and a result of Alfv[é]{}nic turbulence. @2015ApJ...812...11V noted that all [H$\alpha$]{} fitting exercises show that EBs represent temperature enhancements of the low standard model chromosphere by at most a few thousand Kelvin or less. In addition, they claim that  intensities with the observed properties cannot be obtained from any of these models.
On the other hand, based on  emission,  absorption in the wings of , enhanced emission in the  wings but not core, deep absorption in  lines, and compact brightenings in the AIA 170 nm passband, @2016ApJ...824...96T concluded that the UV bursts that occur in connection with EBs are formed in the photosphere, while noting that other UV bursts may be formed in the chromosphere. @2017RAA....17...31F point out that the observed [H$\alpha$]{} emission cannot be reproduced using non-local thermal equilibrium (non-LTE) semi-empirical modeling if the temperature is above $10\,000$ K. They suggested that the coincidence of EBs and UV bursts could be due to a projection effect. Furthermore, @Ortiz_etal2018 found that EBs and UV bursts can be cospatial and cotemporal, but can also occur independently of each other in both time and location. Cospatial and cotemporal EBs and UV bursts account for between 10% and 20% of all observed [H$\alpha$]{} brightenings in a given spectral image. When these two reconnection events do occur together, they may do so nearly simultaneously or with a delay of a few minutes; the UV burst shows a more rapid rise and fall in intensity and is more short-lived than the cospatial EB. The question then is whether and how models can naturally reproduce the required conditions to generate EBs and UV bursts in the same location and at the same time. Modeling reproduces the diagnostic signatures of EBs and UV bursts as a result of large angle reconnection, essentially of fields oriented in diametrically opposite directions.
EBs result when the reconnection occurs in the upper photosphere with temperatures some $1\,000$–$3\,000$ K higher than ambient [e.g., @2013JPhCS.440a2007R], while UV bursts are reproduced when the reconnection is located higher in the chromosphere and entails much higher temperatures than those found in EB models, at least high enough to ionize silicon three times: 20 kK if in dense photospheric material, 80 kK at lower densities, as pointed out by . It may be possible to generate temperatures as high as $25$ kK through reconnection in the upper photosphere or lower chromosphere, as shown by @2016ApJ...832..195N [@2018ApJ...852...95N]. @2018ApJ...862L..24P and @2019ApJ...872...32S pointed out in a theoretical work that flux cancellation can cause simultaneous reconnection at different heights in the solar atmosphere. @2017ApJ...839...22H found that both EBs and UV bursts were generated in a model where a flux sheet was injected some 2.5 Mm below the photosphere and allowed to emerge through the photosphere into an essentially magnetic field free corona. Upon reaching the photosphere, no longer buoyant, the field stalled, but eventually emerged in locations of strong field, filling the chromosphere with expanding cool bubbles. These bubbles pushed the original weak-field outer atmosphere aside, filling a large area in height with cool gas. As the bubbles grew horizontally, they came into contact with each other, typically with oppositely oriented fields, and formed current sheets. These locations were the sites of EBs and UV bursts. ![image](ebuv_bz_xy_iz=611.pdf){width="\hsize"} ![image](ebuv_tg_xz_iy=345.pdf){width="\hsize"} However, in that model no colocated or cotemporal EBs and UV bursts were produced. Rather, it was found that the high velocities and temperatures needed to produce UV bursts occurred several chromospheric scale heights above the photosphere, while EBs were confined to locations just above the photosphere. 
In this paper a model very similar to that described in @2017ApJ...839...22H is considered, but instead of a weak 0.1 Gauss slanted field, a stronger preexisting outer atmospheric field (average field strength 2 Gauss at 10 Mm) that is oriented at an angle to the injected emerging field is employed. Thus, in addition to improving the numerical resolution, the impetus of this work is the expectation that the preexisting ambient field helps contain and confine the emerging field. This should give rise to large angle reconnection and thus powerful coronal heating as the fields eventually interact, forming a typical realistic active region chromosphere covered by a hot corona. We do indeed find the formation of long chromospheric fibrils in this model, and significant coronal heating. The average temperature at 10 Mm initially is 2 MK, but rises in the span of 30 minutes to almost 10 MK before falling again over the next hour to the 4–5 MK that are typical of active regions [@2012ApJ...759..141W]. In addition, we find that both EBs and UV bursts are produced. Occasionally and critically, these phenomena are found to be both colocated and cotemporal, as observed, for instance, by @2015ApJ...812...11V, @2016ApJ...824...96T, and @Ortiz_etal2018. Our simulations seem to indicate that the concurrence of EBs and UV bursts is due to a long vertical or nearly vertical current sheet with intensive reconnection that stretches from the upper photosphere, where EB signatures are generated, to several thousand kilometers upward. UV bursts are the result of reconnection across several chromospheric scale heights well above the photosphere. Line and continuum absorption is provided by cool gas that is carried into a vastly expanded chromosphere as the emerging field breaks through the photosphere and rises to fill the nascent active region corona. ![Structure of chromosphere and corona above the flux cancellation site.
Joule heating (top panel), up- and downflows in and around the current sheet (center panel), and magnetic field lines surrounding and above the current sheet. The green field lines in the lower panel outline the original ambient field that has been pushed upward and compressed by the emerging field (purple field lines) that is oriented largely in the $y$-direction (front to back in the figure). Below this field, we find a newly emerged field, drawn in red and blue to mark positive and negative $B_z$. The reconnection site driving the EB and UV burst is evident in the center of each panel. Reconnection heats and accelerates the plasma. The center panel shows isosurfaces of $100$ km/s (cyan) upward flows and $-60$ km/s downward (purple) flows. Beige, brown, and green in the upper panel show the intensity of the Joule heating. []{data-label="fig:ebuv_3D"}](ebuvb_bzqjoule_411.jpg "fig:"){width="7cm"} ![Same figure, center panel (caption above).](ebuvb_bzuz-100+60_411.jpg "fig:"){width="7cm"} ![Same figure, lower panel (caption above).](ebuv_bz_fieldlines_v2.jpg "fig:"){width="7cm"} ![image](emergence_side_ix=345_it=412.pdf){width="\hsize"} Methods ======= Initial model ------------- The initial model used for this study is a higher-resolution version of the publicly available Bifrost model that covers the same spatial extent and has roughly the same magnetic topology. This new model was run for roughly one hour solar time and is designed to have an effective temperature of $T_{\rm eff}\approx 5780$ K, convective energy transport below the photosphere, and radiative losses maintained by the injection of entropy at the bottom boundary. The outer atmosphere is maintained by acoustic shocks generated from motions in the convection zone and by the interaction of convective motions and the magnetic field.
The average field strength in the photosphere is 50 Gauss originally, and at the onset of the run, the vertical component of the photospheric field is concentrated in two amorphous regions, positive polarities between $x=[6,10]$ Mm and $y=[5,15]$ Mm, and negative polarities some 10 Mm distant between $x=[15,19]$ Mm and $y=[12,17]$ Mm, as shown in the left panel of Figure \[fig:ebuv\_bz\_tg\_evol\]. Both polarities have maximum strengths of about $1\,500$ Gauss in the photosphere, and chromospheric and coronal field lines stretch between them. Hints of these field lines are visible in the temperature structure of the lower corona shown in the middle left panel of Figure \[fig:ebuv\_bz\_tg\_evol\]. Coronal field lines, for example, as outlined in emission in the 19.51 nm line formed at $1.5$ MK, show that the field lies at a $15^{\circ}$ angle to the $x$-axis. The photosphere, as defined by the height where $\tau_{500 {\rm nm}}=1$, is located at $z=0$ km. Above it, though corrugated, the initial chromosphere extends to 2 Mm, at which height we find the rapid temperature rise to coronal temperatures. Before the newly injected magnetic flux breaks through the photosphere, we find average coronal temperatures of 2 MK at 10 Mm. Chromospheric and coronal temperatures and dynamics are maintained by acoustic waves and by footpoint braiding and subsequent nanoflares resulting from convective motions, as described in and @2015ApJ...811..106H. The model extends from $2.5$ Mm below the photosphere to $14.5$ Mm above the photosphere in the vertical direction, and $24$ Mm in both horizontal directions. This computational domain is covered by $768\times768\times768$ grid points, giving a horizontal grid size of $31.25$ km. The grid size varies in the vertical direction and is roughly $12$ km from $-1.0$ Mm below the photosphere to $4.5$ Mm through the chromosphere. 
It slowly increases where the scale heights are larger, to $80$ km at the upper boundary in the corona and to $20$ km at the bottom convection zone boundary. The calculations were carried out using the Bifrost code . This code solves the magnetohydrodynamics (MHD) equations for a plasma in which the ionization state, for this specific model, is assumed to be in LTE. Energy source and sink terms include radiative losses; solving the equations of radiative transfer in four frequency bins and scattering for the optically thick photosphere and lower chromosphere; optically thin radiative losses based on recipes derived by for the upper chromosphere, transition region, and corona; and thermal conduction along the magnetic field. The recipe for optically thin radiative losses includes back-heating by the Ly$\alpha$ line and the Lyman-continuum, as described in . Spectral synthesis ------------------ In order to compare our simulation with observed EBs and UV bursts, we synthesized spectral data in several often observed lines that are formed in the photosphere, chromosphere, transition region, and corona. These lines include [H$\alpha$]{}, the [H&K]{} and  854.2 nm triplet line, lines of  including the 139.376 nm line, and the coronal  19.51 nm line. The [H$\alpha$]{} scattering line requires a full 3D evaluation of the radiation field to obtain chromospheric features in the emergent intensities [@2012ApJ...749..136L]. We therefore calculated the hydrogen lines using the MULTI3D code [@2009ASPC..415...87L]. To model H$\alpha,$ we used a three-level plus continuum model atom of H <span style="font-variant:small-caps;">i</span>, where Ly$\alpha$ is treated with a Doppler absorption profile in complete redistribution [@2018_bjorgen]. Synthetic  line profiles were calculated using the RH 1.5D code . RH 1.5D solves the non-LTE radiative transfer problem by treating each column of a 3D model as a 1D atmosphere (1.5D approximation). 
Based on the RH code [@2001ApJ...557..389U], RH 1.5D allows for partial redistribution effects, multiple species treated in non-LTE, and the addition of blending lines. To synthesize , we used a five-level plus continuum model atom in which the [H&K]{} lines were treated in partial redistribution, while the infrared triplet lines were treated in complete redistribution. Although the  139.376 nm line, formed at transition region temperatures (80 kK), is usually optically thin, the optical depth of the line can sometimes become higher than unity. For the view from directly above, we therefore computed the  lines by solving the non-LTE equations of radiative transfer using the RH 1.5D code. We employed a 9-level atom that includes 5 levels of , among them the ground state and the upper levels of the 139.376 and 140.277 nm lines, as well as the ground state of the next ionization stage. In addition, we included two more active atoms: a model atom with 15 levels of and the ground state of and a 4-level model of -. This allowed us to derive the correct continuum intensity level for the lines (the background opacity there is dominated by photoionization of ) and possible absorption in the  139.332 nm line, which is located at 93 km/s in the blue wing of  139.376 nm. The  line is formed at $14$ kK, which is much lower than the expected formation temperature of . We did not include the  lines formed in the vicinity of 139.376 nm in this study. Moreover, the  lines, both as viewed from above and from the side, were also calculated using an optically thin approximation with the same method as described below for , and with data from version 8 of the CHIANTI atomic database . Although differences are visible between the optically thin and optically thick results, the total intensities are qualitatively very similar. The  19.51 nm line, which forms around 1.5 MK, can be considered optically thin, and we calculated this line using collisional and ionization state data from version 8 of the CHIANTI database.
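The optically thin synthesis used for the coronal line can be illustrated with a short sketch of the column-wise emissivity integration. This is not the actual CHIANTI/Bifrost pipeline: the Gaussian profile, the placeholder contribution function `g_of_T`, the ion mass, and all numerical values are illustrative assumptions.

```python
import numpy as np

def thin_line_intensity(nu, nu0, T, uz, ne, g_of_T, tau, dz,
                        A_elem=3.2e-5, m_ion=9.3e-26):
    """Toy optically thin synthesis along one vertical column.

    Sums phi_nu * A * ne^2 * g(T) * exp(-tau) * dz over the column,
    mirroring the structure of the contribution dI_nu in the text.
    T, uz, ne, tau, dz are 1D arrays over height; nu and nu0 are scalars (Hz).
    """
    kB = 1.380649e-23   # Boltzmann constant [J/K]
    c = 2.998e8         # speed of light [m/s]
    # Doppler-shifted line center and thermal width of a Gaussian profile
    nu_c = nu0 * (1.0 + uz / c)
    dnu_D = nu0 * np.sqrt(2.0 * kB * T / m_ion) / c
    phi = np.exp(-((nu - nu_c) / dnu_D) ** 2) / (np.sqrt(np.pi) * dnu_D)
    return float(np.sum(phi * A_elem * ne**2 * g_of_T(T) * np.exp(-tau) * dz))
```

Summed over frequency and over all columns, this structure yields a synthetic image; the $e^{-\tau}$ factor is where absorption by overlying cool gas enters.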
The contribution to the intensity at frequency $\nu$ is given by $$dI_\nu=\phi_\nu(u_z,T)A_{\rm Fe}n^2_{\rm e}g(T)e^{-\tau}dz ,$$ where $\phi_\nu$ is the (Gaussian) emission profile, $u_z$ is the vertical velocity, $T$ is the temperature, $A_{\rm Fe}$ is the abundance of iron, $n_{\rm e}$ is the electron density, $g(T)$ gives the excitation and ionization state of , and $dz$ is a distance element along the (vertical) line of sight. Because several regions of potential  emission are separated in places by several megameters of cool gas, we included absorption by neutral hydrogen, neutral helium, and singly-ionized helium in the $e^{-\tau}$ absorption factor, following the recipes of @2005ApJ...622..714A, as done earlier and described in greater detail in @2009ApJ...702.1016D. The total intensity in the line was found by integrating along the line of sight and in frequency across the line profile. We assumed ionization equilibrium when we computed the  and  lines. We included the effects of three-body recombination, which would allow  to be formed at temperatures down to 10–20 kK at photospheric densities, but we did not see such temperatures near the dense plasma that forms the photosphere in this simulation.

Results
=======

Injection of the magnetic flux sheet
------------------------------------

A magnetic sheet, spanning $x=[4,18]$ Mm and the full range along the $y$-axis, was injected at the bottom boundary 2.5 Mm below the photosphere with a field strength of $B_y=2\,000$ Gauss, oriented in the $y$-direction. Through the action of buoyancy and convective upflows, the field rose through the upper convection zone and reached the photosphere after roughly one hour (at $t\approx3\,500$ s simulation time). As it rose, the sheet was perturbed by convective motions; it was pulled down in locations of descending plumes of cold plasma and rose more slowly in upflow regions. It therefore reached the surface at slightly different times in discrete locations.
On arriving at photospheric heights, the field was no longer buoyant and the upward motions stalled, but eventually, in locations where the field was strong enough, the field broke through the surface. The field in these locations rose rapidly and expanded into the chromosphere, carrying cool magnetized photospheric material upward. It was anchored to the photosphere in downdrafts where the field became nearly vertical [see @2014ApJ...781..126O for a detailed description of this phase]. The state of the vertical field component in the photosphere is shown for two separate times in the middle and right panels of Figure \[fig:ebuv\_bz\_tg\_evol\]. Patches of opposite polarity are seen to pierce the photosphere in a fairly complex granular pattern that approximately fills the same horizontal extent as the horizontal size of the injected flux sheet. The field orientation is also close to what it was when it was injected $2.5$ Mm below. It forms an angle of approximately $75^{\circ}$ with the preexisting ambient field, which ensures large-angle reconnection as these fields, old and new, interact. The field that rose into the chromosphere was strong enough to push the preexisting ambient field upward and compress it. The newly emerging flux pulled cool photospheric and chromospheric material with it to great heights. This rise first halted some 10 Mm above the photosphere, where further expansion was slowed by the preexisting field as the emerging and ambient fields attained about equal strengths of $30-50$ Gauss at that height. This is in contrast with the previous simulations of @2014ApJ...788L...2A and @2017ApJ...839...22H, where the preexisting ambient field was very weak and was completely swept away by the newly emerging flux, which fully removed the original corona.
In the current simulation the preexisting ambient field was strong enough to hinder the expansion of the emerging bubbles and the ejection of the initial corona, forcing the bubbles to interact more strongly as both horizontal and vertical expansion are suppressed. This results in stronger and longer-lasting current sheets that achieve lengths or heights of several thousand kilometers. The evolution of the emerging and expanding magnetic field is evident in the lower panels of Figure \[fig:ebuv\_bz\_tg\_evol\], which show that the entire horizontal extent of the flux sheet rises and eventually impacts the corona, pushing it aside and filling the upper atmosphere with field and cool gas, but only up to a height of 10 Mm. The rising field expands from the photosphere in the form of cold magnetized bubbles, as described by [@2014ApJ...781..126O]. As these bubbles expand, they eventually come into contact with each other, especially in regions where photospheric motions have brought their footpoints close together. The interaction between the preexisting ambient field and the newly emerging field, which occurs near the top of the emerging bubble, essentially through high-angle reconnection, causes the coronal plasma temperatures at heights at and above 10 Mm to increase significantly. These temperatures, at times about 10 MK as the flux rises and about 4 MK on longer timescales, are much higher than those generally achieved by low-angle reconnection that is driven solely by photospheric braiding [see, e.g.,   @2007ApJ...666..516G; @2015ApJ...811..106H]. However, we here concentrate on the high-angle reconnection, close to $180^{\circ}$, which occurs at lower heights, in the first few thousand kilometers above the photosphere, where oppositely directed magnetic fields that largely stem from the newly emerged field interact with each other, heating and accelerating the cold plasma that is carried up by the emerging magnetic field.
Formation of the current sheet and reconnection
-----------------------------------------------

The oppositely directed magnetic field is brought together as the field expands into the upper chromosphere and by photospheric motions in many locations in the region that is covered by the emerging flux sheet. In particular, we concentrate on the fields in the vicinity of the center of the computational domain, located at $x=11$ Mm and $y=11$ Mm, which is clearly visible in the center of the upper right panel of Figure \[fig:ebuv\_bz\_tg\_evol\]. At this location, two footpoints of opposite polarity, each the nexus of loop systems that are oriented roughly along the $y$-axis and span three to four granules, are brought into contact: A current sheet forms starting at $t=6\,000$ s. This current sheet strengthens with time, initially slanted but becoming nearly vertical in the next $1\,500$ s, stretching from the photosphere to almost 4 Mm above. After formation, the current sheet maintains its shape and remains vertical and in the same location for at least the next $1\,000$ s, that is, to $t=9\,000$ s (or slightly longer). The topology of the magnetic field at the height of the UV-burst emission is shown in Figure \[fig:ebuv\_3D\], which also shows how oppositely directed fields are brought together and become the source of strong Joule heating and reconnection jets, which accelerate the plasma to high velocities. Above the reconnection region, we find emerged and previously reconnected field lines that have entered the outer solar atmosphere. They are oriented along the $y$-axis and are aligned with the injected field coming from below. The current sheet leaves an imprint on the temperature of the rising cool gas and is visible as a heated, ‘leaf-shaped’ region in the lower central panel of Figure \[fig:ebuv\_bz\_tg\_evol\], which shows an $xz$-oriented vertical cut of the atmosphere at $y=10.78$ Mm and $t=8\,220$ s.
The form of the current sheet heating is also visible in the top panel of Figure \[fig:ebuv\_3D\]. The general shape and variation in plasma parameters (temperature, magnetic field strength, vertical velocity, and $j^2/B^2$) in and around the current sheet are further illustrated in Figure \[fig:emergence\_side\], which shows a $yz$-oriented cut at $x=10.78$ Mm through the center of the sheet. A movie of its evolution, showing the same panels as in Figure \[fig:emergence\_side\], is included in the supplementary material to this article. The high-temperature phase of the current sheet, with temperatures well above $10$ kK, which is responsible for the UV burst discussed below, lasts for only a limited time (200 s). In contrast, the current sheet itself, with its associated reconnection events, lasts much longer, and the temperature along the current sheet is at all times higher than ambient. A number of bidirectional jets with velocities of up to $200$ km/s are excited along the current sheet as reconnection commences. These jets heat the plasma, thus forming a thin vertical ‘leaf’ of hot accelerated plasma that is located between the interacting bubbles of newly emerged cool gas. The activity engendered by the reconnection perturbs the plasma along the entire current sheet, all the way from coronal heights ($3.5$ Mm) down to the photosphere. The movie of the current sheet evolution also shows that a number of plasmoids are generated and are accelerated either upward or downward during the lifetime of the current sheet.

Ellerman bomb {#sec:resultEB}
-------------

In the first few scale heights above the photosphere, this activity leads to the formation of an EB. The synthetic [H$\alpha$]{} spectra calculated using the MULTI3D code [@2009ASPC..415...87L] display many features that are the same as those that are observed.
For example, in Figure \[fig:ha\_profile\] the [H$\alpha$]{} emission is shown as seen from the side at an angle of $\mu=0.5$: the wings of the profile form the typical EB moustaches, while the line core shows no brightening or evidence of heated gas. The EB forms a flame-like structure that is clearly visible in spectroheliograms made in the red and blue wings of [H$\alpha$]{} (the red wing is shown in Figure \[fig:ha\_profile\]). This structure extends to 1.2 Mm above the photosphere. The line core, shown in the right panel of Figure \[fig:ha\_profile\], shows extensive fibrils, some of them about the length of the computational box, $20$ Mm, but no evidence of brightening directly above the site of the EB. The two loop systems that meet and are the source of the EB are visible in the center of the right panel of Figure \[fig:ha\_profile\]. They stretch toward greater and lower $y$-values from their meeting point close to $y=11$ Mm. When viewed from directly above, the EB region is bright in [H$\alpha$]{} in the line core as well, but this is due to emission that is formed much higher in the atmosphere, at least 5 Mm above the photosphere: The cool canopy above the current sheet is heated by Ly$\alpha$ and Ly-continuum radiation from the hot plasma of the current sheet below, raising the temperature of the canopy plasma significantly, as is visible in the lower middle panel of Figure \[fig:ebuv\_bz\_tg\_evol\] and in the left panel of Figure \[fig:emergence\_side\]. This relatively high-temperature gas excites some [H$\alpha$]{} emission in the region that is heated above the current sheet. This heating at large heights directly above the reconnection region is probably a result of the method we chose for the coronal back-radiation and raises the temperature more than would occur on the real Sun. The heating does not affect the evolution or diagnostics of the EB or UV burst.
![[H$\alpha$]{} spectroheliograms at $+0.1$ nm and at line center (center and right panels, respectively) at a viewing angle of $\mu=0.5$. The EB is located at $x=17$ Mm. Because of the projection effect of looking from the side, this corresponds to $x=11$ Mm when viewed from directly above. The line profile over the EB, shown in the left panel, is taken at $[x,y]=[17,11]$ Mm; the EB is evident in the line wings. []{data-label="fig:ha_profile"}](ha_profile_ix=556_scan.pdf){width="\hsize"} Swedish 1-meter Solar Telescope observations [@Ortiz_etal2018] show that the wing of the  $854.2$ nm line can be a very good proxy for the existence of EBs, with enhanced emission that is colocated and roughly cotemporal with bright [H$\alpha$]{} wing enhancement. These observations in the blue wings of both the  $854.2$ nm and [H$\alpha$]{} lines also show dark, presumably cool, surges that occasionally emanate from the site of EB emission. We find that the EB in this location shows brightening, starting at $t=7\,500$ s and lasting for at least $1\,200$ s thereafter. This enhanced emission, measured 0.0735 nm into the blue wing of the  854.2 nm triplet line, is shown in Figure \[fig:ca\_triplet\_profile\], which also shows the emission in the line core and the shape of the line profile. As for [H$\alpha$]{}, the cores of the [H&K]{} and the 854.2 nm triplet lines show the loops that emanate from the vicinity of the current sheet as narrow thin fibrils that stretch toward higher and lower $y$-values, approximately in the $y$-direction. The EB is visible as a narrow band of emission in the line wing, and it is also visible at the time of this image ($t=8\,220$ s) in the line core, although this emission is partially covered by overlying longer fibrils. Apparently rising from the EB location, roughly parallel with the positive $y$-axis, a small cool surge is seen to be accelerated along the field lines, or loops, that form the cool portion of the canopy.
Signs of the surge are visible for some minutes (from $t=8\,140$ s to $t=8\,300$ s). The line profile of the EB region shows enhanced emission in both wings and self-reversal in the line core. The profile of the small surge shows absorption that extends to $0.15$ nm (35 km/s) in the blue wing of the  854.2 nm line. ![image](ca_profile_en024031_412.pdf){width="0.95\hsize"} The structure of the atmosphere in the vicinity of the current sheet that produced the EB is shown in Figure \[fig:ebuv\_uztg\_EB\]. In the location where positive and negative polarities meet, that is, $x=[11,12]$ Mm, $y\approx 10.5$ Mm, we see heightened temperatures. The current sheet is the site of high-velocity downflows and hotter-than-average chromospheric temperatures that extend all the way down to the photosphere. The temperature around the current sheet is about $7\,500$ K, at least $1\,500$ K warmer than the ambient average chromosphere. (On the other hand, temperatures in the photosphere at the location of the current sheet are lower than ambient.) We find large downflows of 25 km/s or more, which are rapidly decelerated in the region down to $500$ km. Even closer to the photosphere and below, downflows of about $10$ km/s fill the area in and around the current sheet. These higher temperatures and high downflow velocities are concentrated in a narrow region with a horizontal extent of some 700 km where field lines of opposite polarity meet, that is, in the region of the current sheet. In the flux-emergence region outside the current sheet, the chromospheric gas inside the magnetic bubbles generally shows neither highly supersonic velocities nor a rise to coronal temperatures until the canopy is reached, some 9 Mm above the photosphere.
An exception to this is found in Figure \[fig:ebuv\_uztg\_EB\], which shows that we sometimes also find areas of high downflow velocities, 10 km/s or more, in the chromosphere outside the current sheet, such as the circular region near $[x,y]=[11,9.5]$ Mm, but without a corresponding temperature rise. Temperatures of $<3\,000$ K are seen in regions that contain large unidirectional fields. These may be the sites of convective collapse, with no associated temperature rise and thus no EB. ![Velocity, temperature, and field strength in the $xy$-plane $z=150$ km above the photosphere (rightmost three panels) in the vicinity of the current sheet that generated the EB described in the text. The current sheet is located in the center of the small box shown in the center of the right central panel. The two leftmost panels show with yellow lines the vertical velocity and temperature as functions of height in vertical columns in this small box, averaged over tiles of 150 km ($5\times 5$ grid zones). The green lines show the average of the same variables some distance away from the current sheet, near $[x,y]=[10,9]$ Mm. The blue lines show the average velocity and temperature as a function of height in the entire computational domain. []{data-label="fig:ebuv_uztg_EB"}](ebuv_uztg_EB_it=414.pdf){width="\hsize"} ![image](si_profile_en024031_416.pdf){width="0.95\hsize"} ![Velocity, temperature, and field strength in the $xy$-plane $z=2$ Mm above the photosphere (rightmost three panels) in the vicinity of the current sheet that generated the UV burst described in the text. The current sheet is located in the center of the small box shown in the center of the right central panel. The two leftmost panels show with yellow lines the vertical velocity and temperature as functions of height in vertical columns in this small box, averaged over tiles of 150 km ($5\times 5$ grid zones). 
The green lines show the average of the same variables some distance away from the current sheet, near $[x,y]=[10,9]$ Mm. The blue lines show the average velocity and temperature as a function of height in the entire computational domain. []{data-label="fig:ebuv_uztg_UV"}](ebuv_uztg_UV_it=414.pdf){width="\hsize"}

The UV burst
------------

While the EB is well established already at $t=7\,500$ s, the UV burst is not visible before $t=8\,040$ s (although there is an earlier UV burst in about the same location that is part of the same flux system), at which point the temperature in the current sheet at heights from $700$ km up to $3\,500$ km rises rapidly, increasing from $10\,000$ K to more than 1 MK within 20 s or less. This event occurs in the same horizontal location and as part of the same magnetic flux system that produces the EB described in the previous section. As described below, this reconnection event leads to an increase in the  emission of more than two orders of magnitude above average total intensities. Above the current sheet, the plasma remains cool, $<10\,000$ K, all the way up to 9 Mm above the photosphere, where the temperature rapidly rises to coronal values. The plasma is at times heated to above 1 MK, and in principle, emission might also be expected in the  $19.51$ nm line from the current sheet. However, this turns out not to be the case when the line is calculated as optically thin using data from the CHIANTI package [@2006ApJS..162..261L]. Figure \[fig:si\_profile\] shows that emission in the  line appears to consist of long strands outlining the canopy above the flux-emerging region, showing mainly the long ($20$ Mm) strands or loops that lie above the shorter interacting loop systems described above in section \[sec:resultEB\]. No additional  emission is visible in the vicinity of the current sheet.
This is not because the current sheet does not emit in the  19.51 nm line; rather, the neutral hydrogen and helium gas that overlies the current sheet has high opacity at this wavelength, which is included in the calculation and is found to absorb the emission from this line. Running the calculation without absorption does indeed show bright emission emanating from the site of the current sheet. Recently, @2019ApJ...871...82G have found  emission in the (usually very weak) 134.9 nm line observed with IRIS, cospatial with  UV burst emission. This provides observational evidence for coronal temperatures in this event. As shown in the central panel of Figure \[fig:si\_profile\], the  139.376 nm line has copious emission in the vicinity of the current sheet, up to two or more orders of magnitude brighter than the average intensity of this line in the computational domain. We find that the average line profile has a total intensity of 2 W/m$^2$/sr, an average redshift of 10 km/s, and a width of 25 km/s (which is reasonably close to thermal). The situation is found to be radically different in the area around the current sheet: As seen from above, the total intensity of this line reaches very high values in a thin region with a width of some hundred kilometers, sometimes extending to several hundred kilometers, stretching along the extent of the current sheet ‘leaf’, roughly 500–700 km. As noted and shown in Figure \[fig:ebuv\_bz\_tg\_evol\], an extended region of cool gas lies above the current sheet. Because continuum absorption is weak, this cool gas is relatively transparent near 140 nm, but we find line absorption in the  139.376 nm line when it becomes broad enough to be blended with cool lines such as the  139.332 nm line, which lies 93 km/s blueward of the  139.376 nm line center. We show in Figure \[fig:si\_profile\] the line profile as calculated with RH and including the opacity of .
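The velocity offset of such a blend follows directly from the nonrelativistic Doppler formula, $v = c\,\Delta\lambda/\lambda$. A one-line check, using the rounded three-decimal wavelengths quoted above (the unrounded laboratory wavelengths presumably give the quoted 93 km/s more exactly):

```python
C_KMS = 299792.458  # speed of light [km/s]

def blend_offset_kms(lambda_line_nm, lambda_blend_nm):
    """Doppler velocity of a blend relative to a line center (blueward positive)."""
    return C_KMS * (lambda_line_nm - lambda_blend_nm) / lambda_line_nm

v = blend_offset_kms(139.376, 139.332)  # ~95 km/s with these rounded values
```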
The  line has attained high intensity and an extremely non-Gaussian shape with a width of more than 200 km/s. Absorption in the  line is visible (the  140.277 nm line profile has an almost identical shape, but shows no intensity dip at 93 km/s blueward of the line center). This shows that enough cool material lies above the current sheet that produces the UV burst both to hinder emission in the  19.51 nm line and to cause absorption in a cool line such as  139.332 nm. The latter line is at times shifted 5–10 km/s blueward, which is an indication that the bubble of cool gas carried by the emerging magnetic flux is still partially ascending at this time. We now consider the temperature and velocity structure of the upper portion of the current sheet in more detail. We find very high velocities throughout the current sheet at heights from 700 km to $3.5$ Mm. The upper part of the current sheet has upflow velocities of up to $200$ km/s in single-pixel columns, while the lower portion of the current sheet shows downflows of more than $50$ km/s. This is clear from Figure \[fig:ebuv\_uztg\_UV\], which shows $5\times 5$ grid-point averages of the vertical velocities and temperatures in and in the vicinity of the current sheet. Reconnection in the current sheet drives bidirectional flows, and the highest temperatures are found in locations where these flows are decelerated in shocks, near $z=1$ Mm for the downflows and $z=3$ Mm for the upflows. This plasma, heated to well above $100\,000$ K and moving in both directions at high velocity, gives rise to the extreme emission in transition region lines, as shown above for the  lines around 140 nm.
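That shock-decelerated jets can heat gas well above $100\,000$ K is consistent with a back-of-the-envelope strong-shock (Rankine-Hugoniot) estimate for a monatomic gas, $T_2 \simeq (3/16)\,\mu m_{\rm H} v^2/k_{\rm B}$. This is an order-of-magnitude sketch, not a quantity taken from the simulation, and the mean molecular weight $\mu=0.6$ is an assumed value.

```python
KB = 1.380649e-23   # Boltzmann constant [J/K]
MH = 1.6726e-27     # hydrogen mass [kg]

def strong_shock_temperature(v_kms, mu=0.6):
    """Post-shock temperature [K] behind a strong hydrodynamic shock,
    T2 = (3/16) * mu * m_H * v^2 / k_B, for shock speed v."""
    v = v_kms * 1e3  # km/s -> m/s
    return 3.0 / 16.0 * mu * MH * v**2 / KB

# A 200 km/s jet thermalized in a shock reaches roughly 5e5 K,
# well above the temperatures needed for transition-region emission.
```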
Figure \[fig:emergence\_side\], showing the total intensity of the  $139.376$ nm line as seen from the side (calculated as optically thin using data from the CHIANTI atomic data package), shows that the intensity reaches high values, up to two orders of magnitude above ambient, along the entire upper part of the current sheet, from some 700 km to 3 Mm above the photosphere.

Discussion and conclusions
==========================

The simulations presented in this paper show that the conditions required to generate many of the diagnostics associated with both EBs and UV bursts can occur naturally as a result of the emergence of a fairly strong but untwisted magnetic flux sheet into an atmosphere that contains a preexisting magnetic field of similar strength. The successfully synthesized diagnostics include [H$\alpha$]{}  wing enhancements in the form of ‘moustaches’ and flame-like structures in [H$\alpha$]{}  spectroheliograms. We also found brightenings in the wings of the  triplet lines (as well as occasionally the line core), and the acceleration of cool surges was seen in the  triplet wings. We did not include phenomena that may be important to the physics or diagnostics of the chromosphere, such as ambipolar diffusion and nonequilibrium ionization. In addition, the magnetic diffusivity of the numerical model is, by necessity, several orders of magnitude higher than is the case on the real Sun. However, the very good fit between the synthetic diagnostics and the observations, and the negligible change in these diagnostics when the resolution was increased by nearly a factor of 2 compared to the @2017ApJ...839...22H study, make us confident that this model is qualitatively correct. We also found that, colocated and cotemporal with the EB discussed in this paper, the  lines near 140 nm were greatly enhanced, by two to three orders of magnitude. They also showed extremely non-Gaussian line profiles and broadening, indicative of velocities of more than $200$ km/s.
The  19.51 nm emission, on the other hand, shows no reaction to the presence of hot dense gas in the first few megameters above the photosphere. The general picture that we obtain here is that when EBs and UV bursts occur simultaneously as part of the same structure, EBs are formed in the first few hundred to one thousand kilometers of the upper photosphere and lower chromosphere, while the UV burst is formed up to several chromospheric scale heights higher in the chromosphere, over an extended region. In the specific example considered in this paper, the EB is formed from the photosphere up to $1\,200$ km above, while the UV burst is formed at heights between 700 km and 3 Mm above the photosphere. The vertically elongated and flame-like shape of the EB is very similar to what is observed [@2011ApJ...736...71W], while the observations of UV bursts are not as clear: [@2018SSRv..214..120Y] write that [*“…a burst may appear with extended structure (jet, fibril, loop) connected to it, but these are typically less bright. The burst itself may also appear spatially extended into one direction ‘flame’, but remains $<2$ arcsec”*]{}. It remains to be determined whether this is compatible with the emission that this simulation produced. The observations indicate that between 10% and 20% of EBs show signatures of UV bursts . We found some examples in the simulation described here, but not enough to carry out a statistical analysis. Furthermore, such an analysis should probably take into consideration a more realistic topology of the emerging field, for example, from an emerging active region. This is beyond the scope of this study. The source of this emission is the current sheet that is formed by the collision of two oppositely directed bundles of magnetic flux that are pushed together by photospheric motions. The current sheet stretches from the photosphere some 3 Mm into the chromosphere while spanning 1-2 Mm horizontally.
It is the site of vigorous reconnection, accelerating bidirectional jets of plasma to velocities of about the Alfvén speed: tens of km/s near the photosphere, up to several hundred km/s at heights of 1 Mm or more. While the [H$\alpha$]{} and  wing emission is formed mainly in the first few hundred kilometers above the photosphere, we also find absorption by fast-moving gas at much greater heights, as seen in the blue wing of  in Figure \[fig:ca\_triplet\_profile\], for example, where the cool surge consists of gas 2 Mm above the photosphere. This is presumably gas accelerated by the Lorentz force, as first described by @1996PASJ...48..353Y, or with additional acceleration by pressure gradient forces as the upward-moving gas is pressed against a ‘wall’ of strong field, as discussed by @2016ApJ...822...18N. The main result of reconnection is the heating of plasma in the vicinity of the current sheet, either directly through Joule heating or through the thermalization of the kinetic energy of the gas accelerated in reconnection jets as they are slowed down in shocks. In the first few hundred kilometers above the photosphere, the gas is heated $2\,000-3\,000$ K above ambient temperatures, but at greater heights, where radiative losses are less efficient, the plasma can at times attain temperatures of 1 MK or more for several hundred seconds. The current sheet is located in a large bubble of emerging magnetic field, carrying with it cool gas from the photosphere. This cool gas has high opacity in typically photospheric or cool lines, but also in the continua of hydrogen and helium shortward of 91.1 nm, which is relevant for observables of the EUV Imaging Spectrometer (EIS) on Hinode and the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory, such as  30.4 nm,  19.51 nm, and  17.1 nm.
Thus we find emission from the current sheet suppressed in the  19.51 nm line, and we find line absorption by the  139.332 nm line in the wing of  139.376 nm when the latter becomes broad enough. During the course of the simulation, several regions of opposite magnetic field polarity were brought together as a result of photospheric motions. These led to reconnection in or just above the photosphere, in the middle to upper chromosphere, or in some cases in both when the topology of the interacting magnetic bubbles allowed the formation of a current sheet with a length of several megameters. We therefore expect either EBs, UV bursts, or both to be generated readily as a result of flux emergence. When both phenomena occur simultaneously or nearly so, as described in this paper, it may be possible to observe a shift in the location of the EB relative to the UV burst, depending on the orientation of the current sheet and the viewing angle; such shifts would presumably be easier to confirm when they are observed close to the solar limb. We therefore see no compelling reason to posit that UV bursts occur in the photosphere. Instead we propose that the observations point to a scenario where the chromosphere has become vastly bloated with slowly rising (10 km/s) cool, fairly dense gas, up to 10 Mm or more, and that this cool gas supplies the necessary opacity to explain absorption in lines such as  and , and in the continua of hydrogen and of neutral and singly-ionized helium. These provide narrow absorption bands in the   and  line wings, and continuum absorption suppressing the evidence for intense heating in the AIA bands such as 30.4, 17.1, and 19.3 nm in the lower regions of the expanded active region chromosphere. This research was supported by the Research Council of Norway through grant 170935/V30 and through its Centres of Excellence scheme, project number 262622.
Computing time was provided through grants from the Norwegian Programme for Supercomputing, as well as on the Pleiades cluster through computing project s1061 from the High End Computing (HEC) division of NASA. The 3D radiative transfer computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the High Performance Computing Center North (HPC2N) at Umeå University and the PDC Centre for High Performance Computing (PDC-HPC) at the Royal Institute of Technology in Stockholm. Some images were produced by VAPOR ([www.vapor.ucar.edu]{}), a product of the Computational Information Systems Laboratory at the National Center for Atmospheric Research.
--- abstract: 'A modified definition of quantum mechanical uncertainty $\Delta$ for spin systems, which is invariant under the action of $SU(2)$, is suggested. Its range is shown to be $\hbar^2j\leq\Delta\leq\hbar^2j(j+1)$ within any irreducible representation $j$ of $SU(2)$ and its mean value in Hilbert space computed using the Fubini-Study metric is determined to be ${\rm mean}(\Delta )=\hbar^2j(j+1/2)$. The most used sets of coherent states in spin systems coincide with the set of minimum $\Delta$ uncertainty states.' address: | Fysikum, Stockholms Universitet, Box 6730, 113 85 Stockholm, Sverige\ [and]{} DCTD, Universidade dos Açores, 9500 Ponta Delgada, Portugal author: - 'Nuno Barros e Sá[^1]' title: Uncertainty for spin systems --- Coherent states are an important tool in the study of wave phenomena finding many relevant applications in Quantum physics [@gl1; @ksu]. The familiar Glauber states [@schr; @gl2] can be equivalently defined as the elements of the orbit of the Heisenberg-Weyl group which contains the ground state, as the eigenstates of the annihilation operator or as the minimum uncertainty wave-packets. Following these different definitions there are different approaches to the generalization of the concept of coherent states, the one based on the first definition [@kl1] being the most popular. The generalization procedure has been extended to include spin systems [@kl2; @rad] and others [@kl3; @bgi; @per; @bbd]. A full account of applications of coherent states in different areas of Physics can be found in [@ksk]. In the group theoretical approach to coherent states Hilbert space is decomposed into the union of disjoint sets of coherent states, the group orbits. 
For spin systems the orbit space (the set of orbits) is composed almost entirely of $3$-dimensional orbits with the exception of a finite number of $2$-dimensional orbits which consist of the eigenvectors of $\vec r\cdot\vec J$, with $\vec J$ the generators of the Lie algebra of $SU(2)$ and $\vec r$ any numerical vector [@bac; @sa]. In each irreducible representation of $SU(2)$ there is one particular orbit which admits an analytic representation in the complex plane, $$|z>=\frac{1}{(1+|z|^2)^j}e^{zJ_-}|j>\ .$$ It turns out that this orbit is singled out by the structure of orbit space: it is the $2$-dimensional orbit composed of the eigenvectors of $\vec r\cdot\vec J$ with the highest absolute value of the eigenvalue [@sa]. However, the states belonging to this orbit are not all minimum uncertainty states; they do not even have constant uncertainty. Uncertainty is an important property of a physical state, and it would be desirable to keep it playing a major role in the definition of coherent states. Minimum uncertainty states have been studied in [@nie] and, in the context of spin systems, states saturating the equality in the Heisenberg relation have been studied in [@acs] and called intelligent states. States saturating the equality in the Robertson relation have also been studied [@tri; @bba]. One unsatisfactory feature of intelligent states and of the commonly used definition of uncertainty for spin systems is that they are not invariant under the action of $SU(2)$. As a consequence, sets of coherent states based on these definitions cannot be represented as orbits of $SU(2)$. This is in contrast with the situation in particle mechanics where the Heisenberg inequality and the uncertainty function used are invariant under the action of the Heisenberg-Weyl group. 
Here we propose a new definition of uncertainty for spin systems, $$\Delta=\Delta {J_x}^2+\Delta {J_y}^2+\Delta {J_z}^2\ ,$$ which is a positive increasing function of the variances and which is invariant under the action of $SU(2)$. It obeys the following invariant inequalities: $$\hbar^2j\leq\Delta\leq\hbar^2j(j+1)\ ,$$ which play the role of uncertainty relations. As an immediate application we show that the particular set of coherent states which admits an analytic representation in the complex plane coincides with the set of minimum uncertainty states for this inequality. We use the Fubini-Study metric to compute the mean value of the uncertainty $\Delta$ in Hilbert space with the result: $${\rm mean}(\Delta )=\hbar^2j\left( j+\frac{1}{2}\right)\ ,$$ for any irreducible representation $j$. This shows that in higher dimensional representation spaces of $SU(2)$ most of the states have high values of uncertainty. In particular one has $$\lim_{j\to\infty}\frac{{\rm mean}(\Delta )}{\max(\Delta )}=1\ .$$ The paper is organized as follows: In section \[se1\] we review some mathematical definitions concerning group orbits and invariants, the Glauber coherent states and their generalization, and the construction of spin coherent states. In section \[se2\] we discuss the issue of Heisenberg-like inequalities and uncertainty relations. We propose the new definition of uncertainty $\Delta$ for spin systems and we state and prove the statements about $\Delta$ made above. We include an appendix on how to average quantities in $CP^N$ using the Fubini-Study metric. Introduction {#se1} ============ Group orbits and invariants {#se11} --------------------------- Let $U(g)$ be a representation of the Lie group $G$ on the Hilbert space $\cal H$. 
The $G$-orbit through $|\phi >\in{\cal H}$ is the subset of $\cal H$ given by $${\cal C}_\phi =\left\{ |\psi >\in {\cal H}: |\psi >= U(g)|\phi >\ ,\ g\in G\right\}\ .\label{gor}$$ It follows that $$\dim{\cal C}_\phi\leq\dim G\ \ {\rm and}\ \ \dim{\cal C}_\phi\leq\dim{\cal H}\ .$$ The relation “$|\phi '>$ lies on the same orbit as $|\phi >$” is clearly an equivalence relation: reflexive, symmetric and transitive. As a consequence, $\cal H$ can be partitioned into disjoint orbits $${\cal H}=\bigcup_\phi {\cal C}_\phi$$ where the label $\phi$ runs over orbits (equivalence classes) and not over vectors. The quotient space ${\cal H}/G$ is called the orbit space. A function $f(|\psi >)$ in Hilbert space $\cal H$ is said to be $G$-invariant if $$f(U(g)|\psi >)=f(|\psi >)\ ,\ \forall g\in G\ ,\ \forall |\psi >\in {\cal H}\ .$$ It follows that $G$-invariant functions are also functions on orbit space ${\cal H}/G$. For more information on these issues see for instance [@mic; @asa]. Glauber states {#se12} -------------- The familiar Glauber states $|q,p>$ in particle mechanics can be seen as the $G$-orbit of the Heisenberg-Weyl group through the vacuum state $|0>$, $$|q,p>=U(q,p)|0>\ ,\label{rqr3}$$ where $U(q,p)$ is the Weyl operator $$U(q,p)=e^{i(pQ-qP)/\hbar}\ .$$ They are eigenstates of the annihilation operator and they admit the useful analytic representation in the complex plane $$|q,p>=e^{(za^+-z^*a)}|0>=e^{-|z|^2/2}\sum_n\frac{z^n}{\sqrt{n!}}|n>\ , \label{coe}$$ with $z=(q+ip)/\sqrt{2\hbar}$. 
It can be shown that the Glauber states are minimum uncertainty states since $$\Delta Q^2=\Delta P^2=\hbar /2\ ,$$ and the equality sign is satisfied in the Heisenberg uncertainty relation (sometimes the square root of this relation is used; here we prefer this form) $$\Delta Q^2\Delta P^2\geq\hbar^2/4\ .\label{hei}$$ The remaining $G$-orbits of the Heisenberg-Weyl group can be seen as generalized coherent states [@kl1; @ksk] but they are not eigenstates of any particularly simple operator, they do not admit an analytic representation in the complex plane, and they are not minimum uncertainty states. Nevertheless they have constant values of uncertainty since both factors $\Delta Q^2$ and $\Delta P^2$ are $G$-invariant functions [@sa]. Spin coherent states {#se13} -------------------- The group $SU(2)$ admits representations classified according to integer and semi-integer values $j$ with the Casimir operator $J^2=j(j+1)\hbar^2$. Let $\cal H$ be a Hilbert space carrying one such representation. Sets of generalized coherent states can be generated as the orbits of $SU(2)$ in $\cal H$, $$\begin{aligned} {\cal C}_\phi &=&\left\{ |\vec r>\in {\cal H}: |\vec r>=U(\vec r)|\phi >\ ,\ \vec r\in (4\pi )^3\right\}\label{iuu}\\ U(\vec r)&=&e^{i\vec r\cdot\vec J/\hbar}\ ,\label{rqr2}\end{aligned}$$ where we used the so-called canonical group coordinates for generality. Using the group parameterization $$U(z,\theta)=Ne^{zJ_-/\hbar}e^{-z^*J_+/\hbar}e^{-i\theta J_z/\hbar}\ ,$$ where $J_\pm$ are the ladder operators $J_\pm =J_x\pm iJ_y$, and choosing the fiducial state $|\phi>$ to be an eigenstate of $J_z$, $|m>$ with $m=-j,..,j$, one has [@rad] $$|z;m>=U(z)|m>=Ne^{zJ_-/\hbar}e^{-z^*J_+/\hbar}|m>\ ,\label{an}$$ where the phase factor resulting from $e^{-i\theta J_z/\hbar}$ has been ignored and $N$ stands for a normalization factor. 
Further choosing $|j>$ as the fiducial state one has $e^{-z^*J_+/\hbar}|j>=|j>$ and $$|z>=\frac{1}{(1+|z|^2)^j}e^{zJ_-}|j>\ ,\label{anal}$$ after determination of the normalization factor. This analytic representation is not available in general for the sets (\[rqr2\]) generated from arbitrary fiducial vectors. The spin-system analogue of the Heisenberg inequality for canonically conjugate operators (\[hei\]) is $$\Delta J_x{^2}\Delta J_y{^2}\geq \frac{\hbar^2}{4}\overline{J_z}^2\ . \label{var4}$$ Notice the important difference with (\[hei\]) that now the right hand side of the inequality is not a constant. Following [@acs] we shall call the left hand side of (\[var4\]) the uncertainty $\Delta J_x{^2}\Delta J_y{^2}$. Then it is clear that the set of states for which the equality in (\[var4\]) is saturated and the set of states of minimum uncertainty are not the same. Moreover, neither of them coincides with any set of coherent states (\[iuu\]). On the other hand, in particle mechanics the Glauber states saturate the equality in the Heisenberg inequality and are at the same time states of minimum uncertainty. In [@acs] the spin states satisfying the equality sign in (\[var4\]) have been called intelligent states. They are given by $$\begin{aligned} |\tau,N>&=&\frac{A_N}{(1+|\tau|^2)^j}\sum_{l=0}^N\left( \begin{array}{c} N\\ l\end{array}\right) (2j-l)!\times\nonumber\\ &&\times\left(- \frac{2}{\hbar}\tau J_+\right) ^le^{\tau J_+/\hbar}|-j>\end{aligned}$$ where $N$ is a discrete label satisfying $0\leq N\leq 2j$ and $\tau$ is a continuous label which can be either real or purely imaginary. $A_N$ is a normalization factor. Finally, we comment that the space of physical states for the irreducible representation $j$ of $SU(2)$ is $CP^N$ with $N=2j$ (see the appendix): $$j\ \rightarrow\ \dim{\cal H}=2j+1\ \rightarrow\ {\rm projective\ space:}\ CP^{2j}\ .$$ Its real dimension is $4j$. 
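These statements are easy to probe numerically. The following sketch (an illustration, assuming `numpy` and `scipy` are available; the function names `spin_matrices` and `delta` are ours, not from the text) builds the spin matrices for $j=3/2$ with $\hbar=1$, constructs a coherent state $|z>$ of the form (\[anal\]), and evaluates the additive uncertainty $\Delta=\Delta{J_x}^2+\Delta{J_y}^2+\Delta{J_z}^2$ introduced above: the coherent state attains the minimum value $\hbar^2 j$, while a generic state lies between $\hbar^2 j$ and $\hbar^2 j(j+1)$.

```python
import numpy as np
from scipy.linalg import expm

def spin_matrices(jj):
    """J_z, J_+, J_- for spin jj (hbar = 1), in the basis |m>, m = jj, ..., -jj."""
    m = np.arange(jj, -jj - 1, -1)
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros_like(Jz)
    for k in range(1, len(m)):  # <m+1|J_+|m> = sqrt(jj(jj+1) - m(m+1))
        Jp[k - 1, k] = np.sqrt(jj * (jj + 1) - m[k] * (m[k] + 1))
    return Jz, Jp, Jp.conj().T

def delta(psi, jj):
    """Additive uncertainty Delta = sum_i (<J_i^2> - <J_i>^2) = jj(jj+1) - <J>.<J>."""
    Jz, Jp, Jm = spin_matrices(jj)
    Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / 2j
    avg = lambda A: float(np.real(psi.conj() @ A @ psi))
    return jj * (jj + 1) - sum(avg(A) ** 2 for A in (Jx, Jy, Jz))

jj = 3 / 2
Jz, Jp, Jm = spin_matrices(jj)
top = np.eye(int(2 * jj + 1), dtype=complex)[0]   # highest-weight state |j>
psi_z = expm(0.8 * np.exp(0.3j) * Jm) @ top       # |z> before normalization
psi_z /= np.linalg.norm(psi_z)
rng = np.random.default_rng(1)
psi_r = rng.normal(size=int(2 * jj + 1)) + 1j * rng.normal(size=int(2 * jj + 1))
psi_r /= np.linalg.norm(psi_r)                    # a generic (random) state
print(delta(psi_z, jj))   # coherent state: minimum value hbar^2 j = 1.5
print(delta(psi_r, jj))   # generic state: lies in [j, j(j+1)] = [1.5, 3.75]
```

The coherent state saturates the lower bound because $|\overline{\vec J}|$ reaches its maximum $\hbar j$ on the orbit of $|j>$.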
Uncertainty {#se2} =========== Uncertainty relations {#se21} --------------------- We recall the inequality valid for hermitian operators $A$ and $B$ [@sha] $$\Delta A^2\Delta B^2\geq\frac{1}{4}\left( \sigma_{AB}{^2}- \overline{[A,B]}^2\right)\ ,\label{rob} \label{var2}$$ where $\Delta A$ and $\Delta B$ are the standard deviations of the operators $A$ and $B$, $$\Delta A^2=\overline{A^2}-\overline{A}^2= <\psi |A^2|\psi >-<\psi |A|\psi >^2\ ,$$ and where $$\sigma_{AB}=\overline{\{ A,B\} }-2\overline{A}\ \overline{B}$$ is the covariance of $A$ and $B$. Since for hermitian operators $\sigma_{AB}$ is real and $\overline{[A,B]}$ is purely imaginary, both terms on the right hand side of (\[var2\]) are non-negative and one can state that $$\Delta A^2\Delta B^2\geq -\frac{1}{4}\overline{[A,B]}^2\ . \label{var3}$$ This is called the Heisenberg relation while (\[rob\]) is often called the Robertson relation. For canonically conjugate operators $Q$ and $P$ one has $[Q,P]=i\hbar$ and the Heisenberg uncertainty relation (\[hei\]) follows immediately from (\[var3\]). For spin systems (\[var4\]) follows from $[J_x,J_y]=i\hbar J_z$. Notice that the equality can hold only if $\sigma_{AB}=0$. The left hand side of the Heisenberg inequality (\[var3\]) is sometimes called the uncertainty. It is invariant under the action of the Heisenberg-Weyl group, and the right hand side of (\[var3\]) is a constant. It is therefore natural to assign a particular physical significance to $\Delta Q^2\Delta P^2$ and to the states satisfying the equality sign in this inequality. But the left hand side of the analogous spin inequality (\[var4\]) is not invariant under the action of $SU(2)$, nor is its right hand side a constant. 
Therefore there seems to be no reason why $\Delta {J_x}^2\Delta {J_y}^2$ should play a role for spin systems similar to the one played by $\Delta Q^2\Delta P^2$ in particle mechanics, nor why states saturating the equality in (\[var3\]) or in (\[rob\]) should be particularly distinguished. Such states (intelligent states) have been studied in [@acs] and in [@tri; @bba] respectively, and they may certainly be important for the study of spin systems with Hamiltonians that break the $SU(2)$ symmetry, such as systems under the action of a particular magnetic field pointing in the $z$-direction. But as far as the study of $CP^N$ as the representation space for spin systems, prior to the definition of the Hamiltonian, is concerned, one should look for a $G$-invariant definition of uncertainty. We look for an uncertainty function which is positive and which increases with increasing values of the variances of the elements of the Lie algebra. The following additive rather than multiplicative combination of variances does the job $$\Delta =\Delta {J_x}^2+\Delta {J_y}^2+\Delta {J_z}^2\ .$$ The following results hold: - [*The uncertainty $\Delta$ is $G$-invariant and therefore it is constant within sets of coherent states generated as orbits of $SU(2)$ in $CP^N$.*]{} - [*The uncertainty $\Delta$ is bounded from below and from above $$\hbar^2j\le\Delta\le\hbar^2j(j+1)\ .$$ All values within this range are present in Hilbert space except for the representation $j=1/2$ where all states have the same uncertainty $\Delta=\hbar^2j$.*]{} - [*The set $$\{|\psi>\in{\cal H}:\Delta(|\psi>)=\hbar^2j\}$$ of minimum uncertainty vectors in the irreducible representation $j$ of $SU(2)$ coincides with the set of coherent states $$|z>=(1+|z|^2)^{-j}e^{zJ_-}|j>$$ generated as an orbit of $SU(2)$ in $\cal H$ and admitting an analytic representation in the complex plane.*]{} - [*The mean value of the uncertainty, evaluated with the volume element naturally associated to the Fubini-Study metric, on the whole of 
Hilbert space is given by $${\rm mean}(\Delta)=\hbar^2j(j+1/2)$$ for any irreducible representation $j$ of $SU(2)$.*]{} Notice that the last statement is consistent with the second one for the $j=1/2$ representation. Proof {#se22} ----- . We have $$U^+(\vec r)J_iU(\vec r)=\Lambda_i{^j}(\vec r)J_j\ ,$$ where $\Lambda_i{^j}$ are the matrices of the adjoint representation of $SU(2)$, satisfying $$\Lambda_i{^j}(\vec r)\Lambda_i{^k}(\vec r)=\delta^{jk}\ \ ,\ \forall\vec r\ .$$ The mean values of $J_i$ transform, within an orbit, according to the adjoint representation too, $$\overline{J_i}=<\vec r|J_i|\vec r>=<\phi |U^+(\vec r)J_iU(\vec r)| \phi >=\Lambda_i{^j}(\vec r)<\phi |J_j|\phi >\ .$$ Then $\overline{J_i}\ \overline{J_i}$ is a $G$-invariant function $$\overline{J_i}\ \overline{J_i}=\Lambda_i{^j}(\vec r)<\phi |J_j|\phi > \Lambda_i{^k}(\vec r)<\phi |J_k|\phi >= <\phi |J_i|\phi ><\phi |J_i|\phi >\ .$$ This is one example of a wider set of invariants defined in [@sa]. The Casimir operator $J_iJ_i$ is invariant within the whole representation and consequently $\overline{J_iJ_i}$ is $G$-invariant. Then $$\Delta =\sum_{i=x,y,z}{\Delta J_i}^2= \sum_{i=x,y,z}\overline{J_iJ_i}- \overline{J_i}\ \overline{J_i}\label{uncr}$$ is the difference between two $G$-invariant functions and is therefore $G$-invariant too. . It is always possible to choose a representative $|\psi>=\sum_{m=-j}^jc_m|m>$ within each orbit such that $<\psi|\vec J|\psi>=\overline{J_z}\vec e_z$. Then $\overline{J_i}\ \overline{J_i}=\overline{J_z}^2$. But $$\overline{J_z}=\sum_{m=-j}^jm\hbar |c_m|^2\ \Rightarrow\ |\overline{J_z}|\le\hbar j\ .\label{joz}$$ Therefore $\overline{J_i}\ \overline{J_i}\le\hbar^2j^2$, and this inequality is valid all over Hilbert space since it concerns a $G$-invariant function. On the other hand, it is obvious that $\overline{J_i}\ \overline{J_i}\ge 0$. 
Since $J_iJ_i=\hbar^2j(j+1)$ it follows that $$0\le\overline{J_i}\ \overline{J_i}\le\hbar^2j^2\ \Leftrightarrow\ \hbar^2j\le\Delta\le\hbar^2j(j+1)\ .$$ Now we consider the one-parameter set of vectors $$|\alpha>=\cos\alpha|j>+\sin\alpha|-j>\ {\rm with}\ \alpha\in [0,\pi/2]\ .$$ We have $$\overline{J_x}=\hbar\sqrt{j/2}\sin(2\alpha)\delta_{j,1/2}\ ,\ \overline{J_y}=0\ ,\ \overline{J_z}=\hbar j\cos(2\alpha)\ \Rightarrow\ \overline{J_i}\ \overline{J_i}=\left\{ \begin{array}{l} \hbar^2j^2\ {\rm for}\ j=1/2\\ \hbar^2j^2\cos^2(2\alpha)\ {\rm for}\ j\ne 1/2\end{array}\right.\ .$$ There is only one orbit in the $j=1/2$ representation [@sa]; since $\overline{J_i}\ \overline{J_i}$ is $G$-invariant it can only assume the value $\hbar^2j^2$ there, so that $\Delta=\hbar^2j$. On the other hand, for $j\ne 1/2$ it is clear that $\overline{J_i}\ \overline{J_i}$ maps $\alpha$ onto $[0,\hbar^2j^2]$, and the statements about the range of $\overline{J_i}\ \overline{J_i}$ in Hilbert space are proven. . We notice from (\[joz\]) that the maximum value of $\overline{J_i}\ \overline{J_i}$ is attained only at the vectors $|j>$ and $|-j>$ which we know to belong to the same orbit [@sa]. This single orbit coincides with the set (\[anal\]) of coherent states $|z>$ since for $z=0$ we have $|z>=|j>$. . 
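The mean-value statement, proven below via the Fubini-Study volume element, can also be checked by Monte-Carlo integration. A sketch (our illustration, assuming that the uniform Fubini-Study measure on $CP^N$ is sampled by normalized complex Gaussian vectors, and that `numpy` is available; the function name `mean_delta_mc` is ours):

```python
import numpy as np

def mean_delta_mc(jj, samples=200_000, seed=0):
    """Monte-Carlo estimate of mean(Delta) over CP^N (hbar = 1); assumes uniform
    Fubini-Study sampling = normalized complex Gaussian vectors."""
    rng = np.random.default_rng(seed)
    m = np.arange(jj, -jj - 1, -1)
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros_like(Jz)
    for k in range(1, len(m)):  # <m+1|J_+|m> = sqrt(jj(jj+1) - m(m+1))
        Jp[k - 1, k] = np.sqrt(jj * (jj + 1) - m[k] * (m[k] + 1))
    Jm = Jp.conj().T
    Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / 2j
    # Random states, one per row, normalized.
    psi = rng.normal(size=(samples, len(m))) + 1j * rng.normal(size=(samples, len(m)))
    psi /= np.linalg.norm(psi, axis=1, keepdims=True)
    # |<J>|^2 per sample, then Delta = j(j+1) - |<J>|^2, averaged.
    jbar_sq = sum(np.real(np.einsum('si,ij,sj->s', psi.conj(), A, psi)) ** 2
                  for A in (Jx, Jy, Jz))
    return float(np.mean(jj * (jj + 1) - jbar_sq))

print(mean_delta_mc(1.0))   # expect ~ hbar^2 j(j+1/2) = 1.5 for j = 1
```

For $j=1/2$ every pure state has $\Delta=\hbar^2 j$ exactly, so the estimator reproduces $\hbar^2 j(j+1/2)=1/2$ with no statistical error, consistent with the second result.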
We use the coordinates (\[dsoo\]) defined in the appendix to label physical states $$|\psi>=\sum_{m=-j}^jc_m|m>=\sum_{n=0}^NZ_n(\theta_i,\beta_j)|n-N/2> =|\{\theta_i\} ,\{\beta_j\} >\ .$$ Using the standard representation of the generator $J_z$ of the $SU(2)$ Lie algebra [@sak] its mean value on a state $|\{\theta_i\} ,\{\beta_j\} >$ is $$\overline{J_z}=\sum_{m=-j}^j|c_m|^2\hbar m=\hbar \sum_{n=0}^{N}x_n^2 \left( n-\frac{N}{2}\right) \\$$ The mean value of $\overline{J_z}^2$ in the whole of Hilbert space is thus (see (\[av\]) in the appendix) $${\rm mean}(\overline{J_z}^2)=\frac{\hbar^2}{V_N}\int_{CP^N}dv \overline{J_z}^2=\frac{\hbar^2}{V_N}\sum_{m,n=0}^N\left[ \left( m- \frac{N}{2}\right)\left( n-\frac{N}{2}\right)\int_{CP^N}dv (x_mx_n)^2\right]$$ Now we compute $$\int_{CP^N}dv(x_mx_n)^2=\frac{\pi^N}{(N+2)!}(1+\delta_{mn})$$ and $$\sum_{m,n=0}^N\left( m-\frac{N}{2}\right) \left( n-\frac{N}{2}\right)(1+\delta_{mn})= \sum_{n=0}^N\left( n-\frac{N}{2}\right) ^2= \frac{N(N+1)(N+2)}{12}$$ to arrive at $${\rm mean}(\overline{J_z}^2)=\frac{\hbar^2}{V_N}\frac{\pi^N}{(N+2)!} \frac{N(N+1)(N+2)}{12}=\frac{\hbar^2N}{12}$$ By symmetry one has $${\rm mean}(\overline{J_x}^2)={\rm mean}(\overline{J_y}^2)={\rm mean}(\overline{J_z}^2)$$ and consequently $${\rm mean}(\overline{J_i}\ \overline{J_i})=3\ {\rm mean}(\overline{J_z}^2)=\frac{\hbar^2N}{4}=\frac{\hbar^2j}{2}\ .$$ The mean value of uncertainty (\[uncr\]) in Hilbert space is therefore $${\rm mean}(\Delta)={\rm mean}(\overline{J_iJ_i})- {\rm mean}(\overline{J_i}\ \overline{J_i})=\hbar^2j(j+1)- \frac{\hbar^2j}{2}=\hbar^2j\left( j+\frac{1}{2}\right)\ .$$ The Fubini-Study metric and the volume element in $CP^N$ {#aaa} ======================================================== Two vectors in Hilbert space $\cal H$ differing by a multiplicative non-zero complex constant $\alpha$ represent the same physical state, $$|z'>\sim |z>\ \ {\rm if}\ \ |z'>=\alpha |z>\label{proj}$$ Therefore the space of physical states is the space of rays in 
Hilbert space or projective space, that is the space of equivalence classes defined by (\[proj\]) and excluding the vector $|\psi>=0$. The projective spaces constructed from finite-dimensional Hilbert spaces are called $CP^N$ and are well studied spaces [@ben; @kno]. The superscript $N$ stands for their complex dimension, which is one unit lower than the complex dimension of the Hilbert space from which they are constructed. If $|n>$ is a basis for $(N+1)$-dimensional Hilbert space, any vector $|\psi>$ can be written as $$|\psi>=\sum_{n=0}^NZ_n|n>\ .$$ The complex numbers $Z_n$ are homogeneous coordinates in $\cal H$ and they can also be used as coordinates in $CP^N$ provided one makes the identifications $$Z'_n\sim Z_n\ {\rm if}\ \exists\alpha:\forall n, Z'_n=\alpha Z_n\ .$$ We start by reminding the reader that the unit $N$-sphere can be defined as the hyper-surface in $(N+1)$-dimensional Euclidean space with coordinates $x_i\ ,\ i=0,..,N$ that satisfies $$\sum_{i=0}^Nx_i^2=1\ .$$ Intrinsic coordinates $\theta_i$ can be defined by $$x_i=\cos\theta_i\prod_{j=i+1}^N\sin\theta_j\ .\label{esfcor}$$ Their range is $(0,\pi)$ except for $\theta_1$, whose range is $(0,2\pi)$. $\theta_0=0$ is fixed and is not a coordinate. The metric induced on the $N$-sphere by its embedding in $(N+1)$-dimensional Euclidean space in these coordinates is diagonal with components $$g_{ii}=\left( \prod_{j=i+1}^{N}\sin\theta_j\right) ^2\ ,\label{lee}$$ and the volume element is $$dv=\prod_{i=1}^{N}\sin^{i-1}\theta_i\ d\theta_i\ .\label{dvx}$$ Real projective space $RP^N$ follows the same construction with the range of $\theta_1$ being $(0,\pi)$ too, plus the identifications $$(0,\theta_2,..,\theta_N)\equiv (\pi,\pi-\theta_2,..,\pi-\theta_N)\ .$$ For quantum mechanical purposes the metric of interest in $CP^N$ is the Fubini-Study metric [@kno]. 
Its line element in the homogeneous coordinates $Z_i$ is $$ds^2=\frac{1}{X^2}\sum_{i=0}^{N}dZ_id\bar Z_i-\frac{1}{X^4} \sum_{i=0}^{N}dZ_i\bar Z_i\sum_{j=0}^{N}Z_jd\bar Z_j\ ,$$ where we have defined $$X^2=\sum_{i=0}^{N}Z_i\bar Z_i\ .$$ Splitting the complex homogeneous coordinates into their absolute values and phases $$Z_i=X_ie^{i\alpha_i}\ ,$$ the Fubini-Study metric splits into two blocks relative to the $X_i$ and to the $\alpha_i$, $$ds^2=ds_X^2+ds_\alpha^2\ ,$$ with $$\begin{aligned} ds_X^2&=&\frac{1}{X^2}\left( \sum_{i=0}^NdX_i^2-dX^2\right) \label{le1}\\ ds_\alpha^2&=&\frac{1}{X^2}\sum_{i=0}^NX_i^2d\alpha_i^2- \frac{1}{X^4} \left( \sum_{i=0}^NX_i^2d\alpha_i\right) ^2\label{le2}\end{aligned}$$ The intrinsic coordinates on the sphere (\[esfcor\]) and the phases relative to $\alpha_0$ $$\beta_i=\alpha_i-\alpha_0\ ,\quad i=1,..,N\label{ancor}$$ can be used as intrinsic coordinates on $CP^N$. However, we should remark that the ranges of all the coordinates $\theta_i$ are $(0,\pi /2)$ since the $X_i$ are absolute values and cannot therefore be negative. Moreover these coordinates are clearly singular whenever $\theta_i\in\{ 0,\pi/2\}$. 
The relation of these coordinates to the homogeneous ones is $$Z_i=Xe^{i\alpha_0}x_i(\theta_j)e^{i\beta_i}\ .\label{dsoo}$$ Plugging this expression into the previous formulas for the line elements (\[le1\])-(\[le2\]) one gets $$\begin{aligned} ds_X^2&=&\sum_{i=0}^Ndx_i^2=\sum_{i=1}^Ng_{ii}d\theta_i^2\\ ds_\alpha^2&=&\sum_{i=1}^Nx_i^2d\beta_i^2- \left( \sum_{i=1}^Nx_i^2d\beta_i\right) ^2 =\sum_{i,j=1}^Nh_{ij}d\beta_id\beta_j\ .\end{aligned}$$ The first is the line element on the unit sphere (\[lee\]), and in the phase line element $ds_\alpha^2$ we have defined the metric $$h_{ij}=x_i^2(\delta_{ij}-x_j^2)$$ with inverse $$h^{ij}=\frac{1}{x_0^2}+\frac{\delta_{ij}}{x_i^2}\ .$$ The volume element for the phase coordinates is $$\begin{aligned} dv_\alpha&=&\sqrt{\det (h_{ij})}\prod_{k=1}^Nd\beta_k= \sqrt{\det (\delta_{ij}-x_j^2)}\prod_{k=1}^Nx_kd\beta_k= \sqrt{1-\sum_{i=1}^Nx_i^2}\prod_{k=1}^Nx_kd\beta_k= \nonumber\\ &=&\prod_{i=0}^Nx_i\prod_{j=1}^Nd\beta_j =\prod_{i=1}^N\cos\theta_i\sin^i\theta_i\ d\beta_i\ ,\end{aligned}$$ where we used (\[esfcor\]) for $x_i$ in the last equality. Using (\[dvx\]) for $dv_X$ the combined volume element is $$dv=dv_Xdv_\alpha=\prod_{i=1}^N\cos\theta_i\sin^{2i-1}\theta_i\ d\theta_id\beta_i\ .\label{vefs}$$ The total volume of $CP^N$ becomes easy to compute $$V_N=\prod_{i=1}^N\int_0^{\pi/2}d\theta_i\cos\theta_i\sin^{2i-1} \theta_i\int_0^{2\pi}d\beta_i =\prod_{i=1}^N\frac{1}{2i}2\pi=\frac{\pi^N}{N!}\ .\label{vcp}$$ Now we are able to compute mean values of functions in Hilbert space as their integral in $CP^N$ weighted with the Fubini-Study volume element (\[vefs\]) and divided by the volume $V_N$ of $CP^N$ (\[vcp\]). 
Since the functions we are interested in are of the type $<\psi|A|\psi>=\overline{A}$ we shall write explicitly ${\rm mean}(\overline{A})$ to emphasize that the mean value is not taken on quantum states but rather on the whole of $CP^N$, $${\rm mean}(\overline{A})=\frac{1}{V_N}\int_{CP^N}dv\overline{A}\ .\label{av}$$ I thank Ingemar Bengtsson for discussions. R.Glauber, [*Quantum optics and electronics*]{}, eds. C.DeWitt, A.Blandin and C.Cohen-Tannoudji Gordon and Breach (New York 1964). J.Klauder and E.Sudarshan, [*Fundamentals of quantum optics*]{}, Benjamin (New York 1968). E.Schrödinger, p.41, [*Collected papers on Wave mechanics*]{}, Blackie and Son (London 1928). R.Glauber, Phys.Rev.Lett. [**10**]{} (1963) 84. J.Klauder, J.Math.Phys. [**4**]{} (1963) 1055. J.Klauder, Ann.Phys.(N.Y.) [**11**]{} (1960) 123. J.Radcliffe, J.Phys.A:Gen.Phys. [**4**]{} (1971) 313. J.Klauder, J.Math.Phys. [**4**]{} (1963) 1058. A.Barut and L.Girardello, Commun.Math.Phys. [**21**]{} (1972) 41. A.Perelomov, Commun.Math.Phys. [**26**]{} (1972) 222. D.Bhaumik, K.Bhaumik and B.Dutta-Roy, J.Phys.A:Math.Gen. [**9**]{} (1976) 1507. J.Klauder and B-S.Skagerstam, [*Coherent states - Applications in Physics and Mathematical physics*]{}, World Scientific (Singapore 1985). H.Bacry, J.Math.Phys., [**15**]{} (1974) 1686. N.Barros e Sá, preprint quant-ph/0009022. M.Nieto, p.174, vol II of [*Group theoretical methods in Physics*]{}, Proceedings of the International seminar at Zvenigorod 1982, ed. M.Markov, Nauka (Moscow 1983). C.Aragone, E.Chalbaud and S.Salamó, J.Math.Phys. [**17**]{} (1976) 1963. D.Trifonov, J.Math.Phys., [**35**]{} (1994) 2297. C.Brif and Y.Ben-Aryeh, J.Phys.A:Math.Gen., [**27**]{} (1994) 8185. L.Michel, Rev.Mod.Phys., [**52**]{} (1980) 617. M.Abud and G.Sartori, Ann.Phys.(N.Y.), [**150**]{} (1983) 307. R.Shankar, [*Principles of Quantum mechanics*]{}, 2nd ed., Plenum Press (New York 1994). J.Sakurai, [*Modern quantum mechanics*]{}, Addison-Wesley (1994). 
I.Bengtsson, [*Geometry of quantum mechanics*]{}, Lecture notes (1998). S.Kobayashi and K.Nomizu, [*Foundations of differential geometry*]{}, Wiley (New York 1969). [^1]: Email address: nunosa@vanosf.physto.se. Supported by grant PRODEP-Acção 5.2.
**A Pythagoras proof of Szemerédi’s regularity lemma** We give a short proof of Szemerédi’s regularity lemma, based on elementary geometry. The ‘regularity lemma’ of Endre Szemerédi \[1\] roughly asserts that, for each $\varepsilon>0$, there exists a number $k$ such that the vertex set $V$ of any graph $G=(V,E)$ can be partitioned into at most $k$ [*almost*]{} equal-sized classes so that between [*almost*]{} any two classes, the edges are distributed [*almost*]{} homogeneously. Here [*almost*]{} depends on $\varepsilon$. The important issue is that $k$ (though generally extremely huge) only depends on $\varepsilon$, and not on the size of the graph. The lemma has several applications in graph and number theory, discrete geometry, and theoretical computer science. We give a short proof based on elementary Euclidean geometry. The general line of the proof is like that of the standard proof (in fact, Szemerédi’s original proof), but most of the technicalities are swallowed by Pythagoras’ theorem. We prove two lemmas, one on ‘$\varepsilon$-balanced’ partitions, the other on ‘$\varepsilon$-regular’ partitions. Let $V$ be a finite set. A [*partition*]{} of $V$ is a collection of disjoint nonempty sets (called [*classes*]{}) with union $V$. Partition $Q$ of $V$ is a [*refinement*]{} of partition $P$ if each class of $Q$ is contained in some class of $P$. For $\varepsilon>0$, partition $P$ of $V$ is called [*$\varepsilon$-balanced*]{} if $P$ contains a subcollection $ C$ such that all sets in $C$ have the same size and such that $|V\setminus\bigcup C|\leq\varepsilon|V|$. [[**Lemma .**]{}\[20no12h\][ *Each partition $P$ of $V$ has an $\varepsilon$-balanced refinement $Q$ with $|Q|\leq (1+\varepsilon^{-1})|P|$.* ]{}]{} [[**Proof.**]{} ]{}Define $t:=\varepsilon|V|/|P|$. Split each class of $P$ into classes, each of size $\lceil t\rceil$, except for at most one of size less than $t$. This gives $Q$. Then $|Q|\leq |P|+|V|/t=(1+\varepsilon^{-1})|P|$. 
Also, the union of the classes of $Q$ of size less than $t$ has size at most $|P|t=\varepsilon|V|$. So $Q$ is $\varepsilon$-balanced. Let $G=(V,E)$ be a graph. For nonempty $I,J\subseteq V$, the [*density*]{} $d(I,J)$ of $(I,J)$ is the number of adjacent pairs of vertices in $I\times J$, divided by $|I\times J|$. Call the pair $(I,J)$ [*$\varepsilon$-regular*]{} if for all $X\subseteq I, Y\subseteq J$: \[27no12b\] [ ]{} if $|X|>\varepsilon|I|$ and $|Y|>\varepsilon|J|$ then $|d(X,Y)-d(I,J)|\leq\varepsilon$. A partition $P$ of $V$ is called [*$\varepsilon$-regular*]{} if \[20no12f\] [ ]{} $\displaystyle \sum_{I,J\in P\atop (I,J)\text{ \rm $\varepsilon$-irregular}} \hspace*{-4mm} |I||J|\leq\varepsilon |V|^2. $ For Lemma \[28no12a\] we need the following. Consider the matrix space ${{\mathbb{R}}}^{V\times V}$, with the Frobenius norm $\|M\|={{\text{Tr}}}(M{^{\sf T}}M)^{1/2}$ for $M\in{{\mathbb{R}}}^{V\times V}$. For nonempty $I,J\subseteq V$, let $L_{I,J}$ be the $1$-dimensional subspace of ${{\mathbb{R}}}^{V\times V}$ consisting of all matrices that are constant on $I\times J$ and 0 outside $I\times J$. For any $M\in{{\mathbb{R}}}^{V\times V}$, let $M_{I,J}$ be the orthogonal projection of $M$ onto $L_{I,J}$. So the entries of $M_{I,J}$ on $I\times J$ are all equal to the average value of $M$ on $I\times J$. If $P$ is a partition of $V$, let $L_P$ be the sum of the spaces $L_{I,J}$ with $I,J\in P$, and let $M_P$ be the orthogonal projection of $M$ onto $L_P$. So $M_P=\sum_{I,J\in P}M_{I,J}$. Note that if $Q$ is a refinement of $P$, then $L_P\subseteq L_Q$, hence $\|M_P\|\leq\|M_Q\|$. [[**Lemma .**]{}\[28no12a\][ *Let $\varepsilon>0$ and $G=(V,E)$ be a graph, with adjacency matrix $A$. Then each $\varepsilon$-irregular partition $P$ has a refinement $Q$ with $|Q|\leq |P|4^{|P|}$ and $\|A_Q\|^2>\|A_P\|^2+\varepsilon^5|V|^2$.* ]{}]{} [[**Proof.**]{} ]{}Let $(I_1,J_1),\ldots,(I_n,J_n)$ be the $\varepsilon$-irregular pairs in $P^2$. 
For each $i=1,\ldots,n$, we can choose (by definition [[(\[27no12b\])]{}]{}) subsets $X_i\subseteq I_i$ and $Y_i\subseteq J_i$ with $|X_i|>\varepsilon|I_i|$, $|Y_i|>\varepsilon|J_i|$ and $|d(X_i,Y_i)-d(I_i,J_i)|>\varepsilon$. For any fixed $K\in P$, there exists a partition $Q_K$ of $K$ such that each $X_i$ with $I_i=K$ and each $Y_i$ with $J_i=K$ is a union of classes of $Q_K$ and such that $|Q_K|\leq 2^{2|P|}=4^{|P|}$. [^1] Let $Q:=\bigcup_{K\in P}Q_K$. Then $Q$ is a refinement of $P$ such that each $X_i$ and each $Y_i$ is a union of classes of $Q$. Moreover, $|Q|\leq |P|4^{|P|}$. Now note that for each $i$, since $(A_Q)_{X_i,Y_i}=A_{X_i,Y_i}$ (as $L_{X_i,Y_i}\subseteq L_Q$) and since $A_{X_i,Y_i}$ and $A_P$ are constant on $X_i\times Y_i$, with values $d(X_i,Y_i)$ and $d(I_i,J_i)$, respectively: \[14de12a\] [ ]{} $\displaystyle \hspace*{-4mm} \|(A_Q\hspace*{-0.6mm}-\hspace*{-0.7mm}A_P)_{X_i,Y_i}\|^2 = \|A_{X_i,Y_i}\hspace*{-0.6mm}-\hspace*{-0.6mm}(A_P)_{X_i,Y_i}\|^2 = |X_i||Y_i|(d(X_i,Y_i)\hspace*{-0.6mm}-\hspace*{-0.6mm}d(I_i,J_i))^2 > \varepsilon^4|I_i||J_i|. $ Then negating [[(\[20no12f\])]{}]{} gives with Pythagoras, as $A_P$ is orthogonal to $A_Q-A_P$ (as $L_P\subseteq L_Q$), and as the spaces $L_{X_i,Y_i}$ are pairwise orthogonal, \[25no12b\] [ ]{} ${\displaystyle}\hspace*{-4mm} \|A_Q\|^2\hspace*{-0.7mm}-\hspace*{-0.7mm}\|A_P\|^2 = \|A_Q\hspace*{-0.6mm}-\hspace*{-0.7mm}A_P\|^2 \geq \sum_{i=1}^n \|(A_Q\hspace*{-0.6mm}-\hspace*{-0.7mm}A_P)_{X_i,Y_i}\|^2 \geq \sum_{i=1}^n \varepsilon^4|I_i||J_i| > \varepsilon^5|V|^2. {\hspace*{\fill} \hbox{\hskip 1pt \vrule width 4pt height 8pt depth 1.5pt \hskip 1pt}}$ Define $f_{\varepsilon}(x):=(1+\varepsilon^{-1})x4^x$ for $\varepsilon,x>0$. For $n\in{{\mathbb{N}}}$, $f_{\varepsilon}^n$ denotes the $n$-th iterate of $f_{\varepsilon}$. 
[**Szemerédi’s regularity lemma.**]{}[ For each $\varepsilon>0$ and graph $G=(V,E)$, each partition $P$ of $V$ has an $\varepsilon$-balanced $\varepsilon$-regular refinement of size $\leq f_{\varepsilon}^{\lfloor\varepsilon^{-5}\rfloor}((1+\varepsilon^{-1})|P|)$. ]{} [[**Proof.**]{} ]{}Let $A$ be the adjacency matrix of $G$. Starting with $P$, iteratively apply Lemmas \[20no12h\] and \[28no12a\] alternately. At each application of Lemma \[20no12h\], $\|A_P\|^2$ does not decrease, and at each application of Lemma \[28no12a\], $\|A_P\|^2$ increases by more than $\varepsilon^5|V|^2$. Now, for any partition $Q$ of $V$, $\|A_Q\|^2\leq \|A\|^2\leq |V|^2$. Hence, after at most $\lfloor\varepsilon^{-5}\rfloor$ iterations we must have an $\varepsilon$-balanced $\varepsilon$-regular partition as required. We note that if $P$ is an $\varepsilon$-balanced $\varepsilon$-regular partition of $V$, and $C\subseteq P$ is such that all sets in $C$ have the same size and such that $|V\setminus\bigcup C|\leq\varepsilon|V|$, then the number $s$ of $\varepsilon$-irregular pairs in $C^2$ is at most $\varepsilon(1-\varepsilon)^{-2}|C|^2$. For let $t$ be the common size of the sets in $C$. Then, by [[(\[20no12f\])]{}]{}, $st^2\leq\varepsilon|V|^2\leq \varepsilon(1-\varepsilon)^{-2}|\bigcup C|^2 = \varepsilon(1-\varepsilon)^{-2}(t|C|)^2 = \varepsilon(1-\varepsilon)^{-2}|C|^2t^2 $. [**Reference**]{} E. Szemerédi, Regular partitions of graphs, in: [*Problèmes combinatoires et théorie des graphes*]{} (Proceedings Colloque International C.N.R.S., Paris-Orsay, 1976) \[Colloques Internationaux du C.N.R.S. N$^o$ 260\], Éditions du C.N.R.S., Paris, 1978, pp. 399–401. [^1]: For any collection $C$ of subsets of a finite set $S$, there is a partition $R$ of $S$ such that any set in $C$ is a union of classes of $R$ and such that $|R|\leq 2^{|C|}$: take $R:= \{\bigcap_{X\in D}X\cap\bigcap_{Y\in C\setminus D}S\setminus Y\mid D\subseteq C\}\setminus\{\emptyset\}$.
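The density and regularity definitions above can be verified directly on tiny examples. A minimal brute-force sketch (not from the note; exponential in the set sizes, for illustration only):

```python
from itertools import combinations, product

def density(adj, I, J):
    """d(I, J): number of adjacent pairs in I x J divided by |I x J|."""
    return sum(1 for u, v in product(I, J) if v in adj[u]) / (len(I) * len(J))

def is_eps_regular(adj, I, J, eps):
    """Brute-force check of condition (27no12b): every X within I with
    |X| > eps|I| and Y within J with |Y| > eps|J| must satisfy
    |d(X, Y) - d(I, J)| <= eps."""
    dIJ = density(adj, I, J)
    for x in range(int(eps * len(I)) + 1, len(I) + 1):
        for X in combinations(I, x):
            for y in range(int(eps * len(J)) + 1, len(J) + 1):
                for Y in combinations(J, y):
                    if abs(density(adj, X, Y) - dIJ) > eps:
                        return False
    return True
```

A complete bipartite pair has constant density 1 and is trivially $\varepsilon$-regular; concentrating all edges on a small corner of $I\times J$ makes the pair irregular, which is exactly the situation the witnesses $X_i,Y_i$ in Lemma \[28no12a\] exploit.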
--- abstract: 'The incorporation site of Er dopants inserted at high and low concentration (respectively 5 and 0.5 mol %) in nanoparticles of CaF$_2$ is studied by X-ray Absorption Spectroscopy (XAS) at the Er L$_{III}$ edge. The experimental data are compared with the results of structural modeling based on Density Functional Theory (DFT). DFT-based molecular dynamics is also used to simulate complete theoretical EXAFS spectra of the model structures. The result is that Er substitutes for Ca in the structure and in the low concentration case the dopant ions are isolated. At high concentration the rare earth ions cluster together binding Ca vacancies.' address: - 'CNR-IOM-OGG, c/o ESRF, F-38043 Grenoble France' - 'University of Bologna, Phys. Dept., I-40127, Bologna Italy' - 'Nice Sophia Antipolis University, CNRS, LPMC UMR 7336, 06108 Nice Cedex 2, France' - 'Chimie ParisTech, CNRS, Institut de Recherche de Chimie Paris, PSL Research University, Paris, France' - '$^*$ also at Sorbonne University, UPMC Univ. Paris 06, Paris, France' author: - 'F. d’Acapito' - 'S. Pelli-Cresi' - 'W. Blanc, M. Benabdesselam, F. Mady' - 'P. Gredin$^*$, M. Mortier' bibliography: - 'ercaf2\_r03s.bib' title: 'The incorporation site of Er in nanosized CaF$_2$' --- Introduction ============ Rare Earth (RE) nanoparticles (NPs) have recently attracted considerable interest due to their optical properties [@Mai2006]. In particular, NPs of sodium-based fluorides of the class Na(RE)F$_4$ are reported as efficient systems for upconversion, particularly appreciated for bioimaging [@Bouzigues2011; @Nyk2008; @Kumar2007]. Among fluorides, CaF$_2$ constitutes a particularly interesting material for its stability and non-hygroscopicity. Dy- and Tm-doped CaF$_2$ showed very interesting dosimetric properties; the ionizing radiation dosimeters TLD200 and TLD300, respectively, were manufactured and marketed as thermoluminescent dosimeters (TLDs).
CaF$_2$:Tm is particularly interesting for mixed (n, $\gamma$) field dosimetry because of the different sensitivity of its thermoluminescent peaks to neutrons and to gamma rays [@Bos1991]. Er-doped CaF$_2$ is reported as an efficient material for lasers at 2.79 $\mu$m [@Labbe-02], whereas Yb-doped CaF$_2$ is reported in applications for tunable lasers [@Petit2004; @Aballea-15; @Akchurin-13] and Er-doped CaF$_2$ NPs are reported as promising materials for applications in the third window of low absorption of silica-based optical fibers [@Bensalah2006].\ CaF$_2$ crystallizes in the fluorite structure [@Smakula1955], consisting of a cubic arrangement of F$^-$ ions with Ca$^{2+}$ occupying the centers of alternate cubes. Er$^{3+}$ substitutes for Ca$^{2+}$ in the structure and, to balance the excess of positive charge, it is associated with a charge-balancing defect. The structural properties of trivalent dopants in CaF$_2$ single crystals have been investigated in the past using Electron Paramagnetic Resonance [@McLaughlan1966], Ionic Thermo Current [@Stott1971; @Kitts1974], and Audio-Frequency Dielectric Relaxation [@Fontanella1976; @Andeen1979]. In the case of low-doped ($\leq 0.1$ mol %) samples it has been established that the principal complex consists of (adopting the Kröger-Vink notation) an Er$_{Ca}^{\bullet}$+F$_I^{'}$ pair, where the F is placed in the nearest free cube center with respect to the Ca (the nearest-neighbor, NN, configuration). Minor contributions were proposed to be due to bonds to oxygen ions and to complexes with the F ion in next-nearest-neighbor (NNN) positions [@Edgar-75]. For higher concentration values (a few %) theoretical calculations [@Bendall1984; @Corish1982] predict complexes involving aggregates of REs substituting for Ca, associated with F interstitials.
Direct structural investigations of the RE site based on X-ray Absorption Spectroscopy (XAS) [@Catlow1984] made it possible to confirm the theoretical calculations, in particular by observing, in the case of Er, 9 F ions at 2.35 Å and 8 Ca ions at 3.93 Å.\ In the present study XAS has been used to determine the incorporation site of Er in CaF$_2$ nanoparticles, i.e. in a system not produced by the near-equilibrium growth procedures used for single crystals. Experimental ============ Sample preparation ------------------ All syntheses were made using commercially available nitrates: $Ca(NO_{3})_{2},4H_{2}O$ 99.98% and $Er(NO_{3})_{3},5H_{2}O$ 99.99 % from Alfa Aesar. 48 wt% hydrofluoric acid (Normapur Prolabo) was used as the fluorinating agent. Particles of erbium-doped calcium fluoride were obtained using a co-precipitation method. A solution containing the cationic precursors in stoichiometric proportion was made by dissolving the nitrate salts in deionized water. This solution was then added dropwise to the hydrofluoric acid solution, which was stirred magnetically, leading to the formation of $Er^{3+}$-doped $CaF_{2}$ particles according to the following theoretical reaction:\ $(1-x)Ca(NO_{3})_{2} + xEr(NO_{3})_{3} + (2+x)HF \longrightarrow Ca_{1-x}Er_{x}F_{2+x}$\ The hydrofluoric solution contained a large excess of $F^{-}$ anions (10 times the stoichiometric quantity needed) in order to keep the concentration of fluorine anions approximately constant during the reaction.\ The obtained mixture was centrifuged at 13,000 rpm for 20 minutes. The recovered particles were then washed and centrifuged with deionized water 4 times before drying at 80$^{\circ}$ C. Two batches of powder were prepared, with Er contents of 0.5 mol % and 5 mol %. The obtained powder was annealed at 350$^{\circ}$ C under argon atmosphere for 3 h. X-ray Diffraction ----------------- The samples were analyzed using powder X-ray diffraction (XRD).
Measurements were performed on a Bruker D8 Endeavour using Co radiation ($\lambda_{K\alpha1} = 1.78892$ Å and $\lambda_{K\alpha2} = 1.79278$ Å). The 2$\theta$ angular resolution was 0.029$^{\circ}$. The diffraction patterns were scanned over the 2$\theta$ range 10 - 110$^{\circ}$ (Fig.\[fig:xrd\]). ![\[fig:xrd\] XRD pattern refinement of $CaF_{2}:5\%Er$. Circles correspond to the observed profile and the full curve to the calculated one. The short vertical lines below the profile curves mark the positions of all Bragg reflections. The lower curves show the difference between observed and calculated profiles.](fig-1.eps){width="80.00000%"} The XRD patterns of the particles present peaks which can be indexed in the cubic $CaF_{2}$ phase of the fluorite-type structure (space group Fm3m). The patterns present broad peaks characteristic of small crystallite size, with a mean value $\Lambda$ calculated using the refinement program Fullprof [@Rodriguez-01]. After correction for the instrumental broadening the width of the peaks is similar in the two cases (5% and 0.5% doped), showing that this parameter is dominated by the reduced crystallite size. This fact prevents the detection of possible fine lattice distortions caused by the dopants and associated defects, especially in the highly doped sample. The measured lattice parameters were a$_{0.5\%Er}$=5.471(2) Å and a$_{5\%Er}$=5.493(2) Å, corresponding to expansions of 0.1% and 0.5% respectively with respect to the literature value (a=5.46342(2) Å [@smakula-55]). The size of the crystallites derived from the peak width values was found to be $\Lambda \approx$ 30 nm, and this value has been confirmed by Transmission Electron Microscopy, which showed an average particle size of 25 nm.
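The relation between peak broadening and crystallite size used above can be sketched with the Scherrer equation. The sketch below is illustrative only: the FWHM value in the test is a hypothetical input, not a measured one, and a shape factor K=0.9 is assumed (the actual size determination in the paper was done with the Fullprof refinement).

```python
import math

def scherrer_size(wavelength_A, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size Lambda (in Angstrom) from the Scherrer equation
    Lambda = K * lambda / (beta * cos(theta)), with beta the peak FWHM in
    radians (instrumental broadening assumed already subtracted)."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_A / (beta * math.cos(theta))
```

With the Co K$\alpha_1$ wavelength of 1.78892 Å, a hypothetical FWHM of about 0.33$^{\circ}$ at 2$\theta$ = 33$^{\circ}$ yields a size of roughly 290 Å, i.e. on the order of the $\approx$ 30 nm reported.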
X-ray Absorption spectroscopy measurements ------------------------------------------ X-ray Absorption Spectroscopy (XAS) [@RevModPhys.53.769] measurements at the Er L$_{III}$ edge were carried out at the LISA (formerly GILDA) beamline at the European Synchrotron Radiation Facility in Grenoble [@dacapito-14]. The monochromator was equipped with a pair of Si(311) crystals and was run in dynamical focusing mode. Pd-coated mirrors (E$_{cutoff}$=19 keV) were used for vertical collimation, focusing and harmonic suppression. XAS data were collected at liquid-nitrogen temperature, in fluorescence detection mode using a hyper-pure 12-element Ge detector array for the most dilute samples and in transmission mode for the others. Samples were obtained by finely grinding the powder, mixing it with a cellulose binder and pressing it into pellets. The pellets were sufficiently transparent to allow the collection of the absorption spectrum of an $Er_2O_3$ reference placed behind them, allowing precise control of the energy calibration during the data collection. For each sample 2 to 4 spectra were collected and averaged in order to improve the signal-to-noise ratio. Ab-Initio Structural Modeling and Molecular Dynamics ==================================================== Theoretical methods were used to simulate the structure around the Er ions, to estimate the formation energy of different defects and to simulate full XAS spectra via Molecular Dynamics (MD). The structures of different point defects with Er in CaF$_2$ were calculated with Density Functional Theory (DFT) as implemented in the VASP code [@kresse-96] on rhombohedral supercells based on the $CaF_2$ structure. The choice of the dimension of the supercell is a tradeoff between the need to obtain a sufficiently large cell to mimic an isolated defect ($2Er_{Ca}^{\bullet}$+$V_{Ca}^{"}$ has a dimension of about 6.5 Å) and the need to keep the computational time at a reasonable value.
A $2\times2\times2$ cell would have been too small (side 7.7 Å, barely separating the defects) whereas the $5\times5\times5$ would have contained an intractable number of atoms (375). For the DFT calculations (involving a high number of points in the mesh of the reciprocal $k$ space) $3\times3\times3$ rhombohedral supercells containing about 81 atoms (side about 11.6 Å) were used. For the DFT-MD calculations, using just the $\Gamma$ point in $k$ space but needing large clusters to calculate EXAFS signals at large distances, $4\times4\times4$ cells of size 15.5 Å and a total of 192 atoms were used. Calculations were done with projector augmented wave (PAW) pseudopotentials, and the exchange-correlation functional used was the generalized gradient approximation (GGA) [@96-prl-perdew]. Plane waves were considered with a cut-off energy of 650 eV. The Brillouin zone was sampled using a $4\times4\times4$ k-point mesh (using the Monkhorst-Pack scheme [@76-prb-monkhorst]). At each ionic step, the electronic structure was optimized until the total energy converged within $10^{-6}$ eV, whereas the ionic positions were optimized until Hellmann-Feynman forces were below $10^{-4}$ eV/Å. The validity of the procedure was tested by simulating some well-known compounds and comparing the obtained lattice parameters with those available in the literature. For the simulated CaF$_2$ structure a lattice parameter a$_{CaF2}^{DFT}$=5.4982 Å was found, against a literature [@smakula-55] value of a$_{CaF2}^{Bib}$=5.4634 Å, meaning that the calculated value was only 0.6% larger than the experimental one. The same test carried out on the orthorhombic phase of ErF$_3$ [@kraemer-96] yielded changes of 0%, -3% and +7% in the a, b and c axis lengths with respect to the literature values.
The point defects considered here were a “substitutional” $Er_{Ca}^\bullet$, a charge compensated “double substitutional + Ca vacancy” $2Er_{Ca}^{\bullet}$+$V_{Ca}^{"}$ and the “substitutional plus F interstitial” $Er_{Ca}^\bullet$+$F_{I}^{'}$, following the standard Kröger-Vink notation. In the latter cases the charge compensating vacancy/interstitial was either associated with the Er ions (placed as near as possible: F as nearest anionic neighbor, Ca as first cationic neighbor) or dissociated (placed as far as possible in the supercell). The details of the relaxed structures are shown in the last lines of Tab. \[tab:exares\]. The Er concentration levels were around 4-8 cationic %; concerning the lattice parameters, these simulations confirm the lattice expansion found experimentally although at a lower level (0.06%-0.2% depending on the particular complex).\ In order to assess the likelihood of finding these complexes in the crystal, the associated formation energies were calculated. The elemental chemical potentials $\mu_{Ca}$, $\mu_{Er}$, $\mu_{F}$ were derived from the formation energies of the compounds ErF$_3$ and CaF$_2$ considered in equilibrium with F$_2$ gas, following a procedure already adopted in previous works [@DAcapito2013]. The formation energies of the complexes are calculated as the difference between the sum of the chemical potentials and the total energy provided by VASP. A collection of results is presented in Tab. \[tab:formEn\].

  Complex                                     Formation energy (eV)
  ------------------------------------------ -----------------------
  Er$_{Ca}^{\bullet}$                         7.04
  2Er$_{Ca}^{\bullet}$+V$_{Ca}^"$ assoc.      0.88
  2Er$_{Ca}^{\bullet}$+V$_{Ca}^"$ dissoc.     1.21
  Er$_{Ca}^{\bullet}$+F$_{I}^{'}$ assoc.      1.04
  Er$_{Ca}^{\bullet}$+F$_{I}^{'}$ dissoc.     1.09

In the case of charge-compensated sites the structure around Er presents several bond-distance values that make a direct comparison with the results of the EXAFS analysis difficult.
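The bookkeeping behind defect formation energies of this kind can be sketched as follows. This is an illustrative sketch only: the function and every number in the example are placeholders, not the paper's VASP values or chemical potentials.

```python
def formation_energy(E_defect, E_host, added, removed, mu):
    """E_f = E(defect cell) - E(host cell) - sum(n_i * mu_i, added species)
                                           + sum(n_j * mu_j, removed species).
    Atoms added to the supercell are taken from their elemental reservoir;
    atoms removed from it are returned to theirs."""
    E = E_defect - E_host
    for species, n in added.items():
        E -= n * mu[species]
    for species, n in removed.items():
        E += n * mu[species]
    return E
```

For instance, a single Er-on-Ca substitution adds one Er and removes one Ca, so its formation energy combines the two supercell total energies with $\mu_{Er}$ and $\mu_{Ca}$; the charged complexes in the table would additionally carry an electron-reservoir term, omitted here for simplicity.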
The theoretical EXAFS signal generated by Feff cannot be used directly, as it does not include the Debye-Waller factors (DWf) that heavily influence the overall shape of the signal. In order to obtain a realistic comparison with experimental data a DWf must be taken into account, and for this reason a simulation of the EXAFS spectra via Molecular Dynamics (MD) was carried out. Following the procedure presented in [@palmer-96], MD trajectories for each defect were calculated using MD-DFT as implemented in VASP. The starting point was the relaxed structure obtained by ’static’ DFT; the MD was then carried out in the canonical ensemble (NVT) with the temperature stabilized at 300 K via a Nosé thermostat. Ionic steps of 1 fs were used, and the total simulation time was about 300 fs. The last 150 frames were used to calculate the XAS signals with the Feff9.6.4 code [@Rehr-10], using theoretical signals calculated via a self-consistent procedure, and the resulting spectra were averaged. The residual between the average $\Theta_N$ of $N$ spectra and the average $\Theta_{N-1}$ of $N-1$ spectra (defined as $\xi = \sqrt{\sum_k (\Theta_N^2 - \Theta_{N-1}^2)}$) was a few $10^{-3}$ units. The obtained spectra were compared to those obtained from a smaller cell ($3\times3\times3$, 81 atoms) and longer evolution times (1 ps), finding an identical damping of the main signals, meaning that the present averaging time was sufficient to correctly reproduce the spectrum. The comparison of the experimental data with the simulated spectra can be extremely effective in the determination of a dopant site in a matrix, as shown in previous works [@cartechini-11]. Results ======= The XANES spectra are shown in Fig.\[fig:allxan\]. The edges of the samples and that of ErF$_3$ coincide, so it can be concluded that the state of the Er ion is 3+. The EXAFS spectra of the different samples are shown in Fig.\[fig:allexa\] whereas the related Fourier Transforms (FT) are shown in Fig.\[fig:allfou\].
Also in this case it can be noticed that the low-concentration Er-doped samples appear different from the highly concentrated ones. In particular, high frequency components are visible in the related EXAFS spectra (sharp oscillations at $k \approx 5.5$, $\approx 7.5$, $\approx 9.2$, $\approx 10.1$ Å$^{-1}$) that are strongly attenuated in the spectra of the high concentration samples. Correspondingly, in the FT a second shell peak is clearly visible at $R \approx 3.5$ Å in the spectra of the diluted samples that is strongly damped in the high concentration ones. The XAS data analysis was carried out by calculating theoretical XAS signals on a cluster (radius 6.75 Å, 98 atoms) derived from the structural model found by DFT for the Er$_{Ca}^\bullet$ point defect. Calculations were carried out using the Feff9.6.4 code [@Rehr-10] with the Hedin-Lundqvist approximation for the exchange potential. Scattering potentials were calculated via the self-consistent procedure implemented in Feff using a cluster of 5.6 Å (50 atoms). Data extraction and modeling were carried out with the ATHENA/ARTEMIS codes [@Ravel:ph5155]. Fitting was carried out in R space in the interval R=\[1-4.4\] Å. The structure was reproduced with a two-shell model, Er-F and Er-Ca, with associated shell radii R and Debye-Waller factors $\sigma^2$. Coordination numbers were fixed to the expected crystallographic values; the S$_0^2$ global amplitude parameter was fixed to 0.95 after a fit of an $Er_2O_3$ model compound. This structural model was found to reproduce all the main features of the spectrum in k and R space (taking into account the noise in the data) with the exception of the sharp peak at $k \approx 5.5$ Å$^{-1}$. Carrying out simulations of the XAS spectrum of the $Er_{Ca}$ complex from the static structures obtained by DFT, it was possible to show that this peak originates from the constructive interference of several signals at distances up to 9 Å.
As this value exceeds the maximum cluster dimension for the subsequent MD calculations, the fit of the structure at such distances was not attempted. The R factor value is about 0.02, thus indicating the goodness of the fit [@ifeffit-faq]. The results of the quantitative EXAFS data analysis are shown in Tab.\[tab:exares\].

  Sample                                First shell (Er-F)                 Second shell (Er-Ca)
                                        N   R (Å)     $\sigma^2$(Å$^2$)    N    R (Å)     $\sigma^2$(Å$^2$)
  ------------------------------------ --- --------- -------------------- ---- --------- --------------------
  CaF$_2$                               8   2.381     -                    12   3.888     -
  0.5 %                                 8   2.27(2)   0.005(1)             12   3.91(2)   0.003(2)
  5.0 %                                 8   2.26(2)   0.007(2)             12   3.93(4)   0.02(1)
  Er$_{Ca}^{\bullet}$                   8   2.282     -                    12   3.92      -
  2Er$_{Ca}^{\bullet}$+V$_{Ca}^{"}$     2   2.19      -                    4    3.87      -
                                        6   2.31      -                    7    3.93      -
  Er$_{Ca}^{\bullet}$+F$_{I}^{'}$       4   2.28      -                    4    3.72      -
                                        5   2.38      -                    4    3.94      -
                                                                          4    4.02      -

Discussion ========== The XAS data provide complete information on the chemistry and local structure around Er ions in nanostructured CaF$_2$. XANES reveals that Er is predominantly in the 3+ state, as the absorption edge is in the same position as for ErF$_3$. The high and low concentration spectra present a clear difference in the high-energy side of the white line, and this aspect will be discussed later. The EXAFS quantitative analysis reveals that Er shrinks the structure at the local scale, as the Er-F bond is considerably shorter than the corresponding Ca-F bond. The second shell, on the other hand, remains at the same distance, meaning that the distortion is limited to the first shell. No formation of oxides is evident, in contrast to what was observed in previous studies on fluoride nanoparticles [@Fortes2014]. Comparing the results of the EXAFS analysis with the DFT static simulations, it is evident that the $Er_{Ca}^\bullet$ and 2Er$_{Ca}^{\bullet}$+V$_{Ca}^{"}$ defects exhibit structural parameters ($<R_{Er-F}>$=2.28 Å, $<R_{Er-Ca}>$=3.91 Å) in good agreement with the experiment.
On the contrary, the Er$_{Ca}^{\bullet}$+F$_{I}^{'}$ complex is not compatible with the XAS data, as it presents an average value for the first shell Er-F distance, $<R_{Er-F}>$=2.33 Å, that is appreciably longer than the experimental value. It is worth noticing that this value is extremely close to what was observed by XAS and calculated in Ref.[@Catlow1984] ($R_{Er-F}$=2.35 Å), demonstrating the consistency of the present ab-initio calculations with those presented there, and that in our case the incorporation site of Er is different from what was observed in bulk crystals. This fact is moreover confirmed by the comparison of the present XAS spectra with those of Ref.[@Catlow1984], where the peak at 5.5 Å$^{-1}$ is a faint shoulder. Isolated substitutionals, or substitutionals coupled with Ca vacancies, are then the best candidates for the incorporation site of Er at low concentration.\ These data are supported by the analysis of the formation energies of the various complexes. While the charged $Er_{Ca}^\bullet$ complex possesses a considerable formation energy ($\approx$ 7 eV), the presence of a V$_{Ca}^"$ considerably lowers this value, to 1.21 eV. This means that even in the case of *isolated* Er$_{Ca}^{\bullet}$ defects they must be associated (though not closely bound) with charge compensating Ca vacancies. The close association of these two centers leads to a further lowering of the energy (0.88 eV), which turns out to be even slightly lower than that of the Er$_{Ca}^{\bullet}$+F$_{I}^{'}$ complex (1.04 eV in the associated configuration). This confirms that complexes involving Ca vacancies can be obtained in the CaF$_2$ crystal and that their formation energy is about the same as that of the complexes involving an F interstitial.\ A clearer vision of the structure of the complexes can be obtained from the XAS spectra simulated by Molecular Dynamics.
Indeed, binding a charge compensating defect leads to a considerable disordering of the local environment (see Tab.\[tab:exares\]) with the appearance of subshells that in principle should lead to a reduction of the related EXAFS signal. This is exactly what is observed for the second coordination shell in the samples with high Er content.\ The comparison with the MD-DFT simulated spectra (Fig.\[fig:allexa\_mdComp\] and Fig.\[fig:allfou\_mdComp\]) provides a deeper insight into this issue. Indeed, the simulated EXAFS spectra for Er$_{Ca}^{\bullet}$ and 2Er$_{Ca}$+V$_{Ca}$ present some features (shoulders of the main oscillation at $k \approx 5.5$ Å$^{-1}$ and $k \approx 7.5$ Å$^{-1}$) that are absent in the simulated spectrum of Er$_{Ca}$+F$_{I}$. We can then favor the first two as the possible incorporation sites for Er, in agreement with the conclusion drawn from the analysis of the static DFT data. The cited shoulders are more evident in the spectrum of Er$_{Ca}^{\bullet}$ and appear dampened in the spectrum of 2Er$_{Ca}$+V$_{Ca}$, qualitatively reproducing the situation observed upon increase of the Er concentration in the crystal. It can then be stated that the increase of Er content leads to a clustering of dopants with charge compensating defects, forming neutral complexes with appreciable disorder, namely in the second coordination shell. The possible presence of RE ions even in the second shell cannot be excluded (clustering of dopants has already been predicted and observed, namely in [@satta-prb-05; @seo-jpcm-13]) but could not be assessed due to the faintness of this signal in the high concentration sample.\ The aggregation of REs is particularly interesting for applications in mid-infrared lasers (2.7 $\mu$m) involving the transition $ ^4I_{11/2} \rightarrow ^4I_{13/2} $.
Indeed, the reduction of the level lifetime with concentration is reported [@Labbe-02; @Ma-16] to be greater for the $^4I_{13/2}$ level than for the $^4I_{11/2}$, meaning that RE clustering facilitates the population inversion between these two levels. Indeed, 4% Er-doped CaF$_2$ is reported [@Ma-16] to possess optimal laser performance. A further confirmation of the RE clustering can be derived from the simulation of the XANES spectra for the Er$_{Ca}^{\bullet}$ and the 2Er$_{Ca}^{\bullet}$+V$_{Ca}^{"}$ complexes (the calculation was carried out on the static structures) as shown in Fig. \[fig:allxan\_dft\]. The increase of intensity on the high-energy side of the white line observed in the experiment (Fig.\[fig:allxan\]) is reproduced here, confirming the idea that at high concentration the REs tend to cluster and that, from the considerations on the first shell distance and formation energy, they bind a charge compensating Ca vacancy. It cannot be excluded that the size of the aggregates could be even larger, with formation of multimer RE structures as reported in Ref.[@Bendall1984]. However, considering the large size of the supercell that would be needed for the full simulation of the EXAFS spectra and the fact that EXAFS cannot see correlations beyond the Er-Ca shell, simulations on multimers were not carried out. The dimer presented here can be taken as a prototype of the other multimers.\ This result presents a new scenario compared with the previous observations reported in the literature, where the role of F interstitials was shown to be predominant. In any case, it must be taken into account that the nanometric size of the particles and the peculiar growth technique could play a role in the formation of defects in the crystalline matrix.
Conclusion ========== In this study the incorporation site of Er in CaF$_2$ nanocrystals has been studied with a combination of experimental (X-ray Absorption Spectroscopy) and theoretical (density functional theory, DFT, and molecular dynamics, MD-DFT) methods for high (5%) and low (0.5%) dopant concentrations. Er is present in the matrix as a 3+ ion and its local structure depends on the concentration. In all cases Er substitutes for Ca in the matrix, with the difference that at low concentration the Er ions are isolated (i.e. far from other dopants and/or charge compensating defects) whereas at high concentration they cluster together, binding a charge compensating Ca vacancy. References {#references .unnumbered} ==========
--- abstract: 'We show that the following two problems are fixed-parameter tractable with parameter $k$: testing whether a connected $n$-vertex graph with $m$ edges has a square root with at most $n-1+k$ edges and testing whether such a graph has a square root with at least $m-k$ edges. Our first result implies that squares of graphs obtained from trees by adding at most $k$ edges can be recognized in polynomial time for every fixed $k\geq 0$; previously this result was known only for $k=0$. Our second result is equivalent to stating that deciding whether a graph can be modified into a square root of itself by at most $k$ edge deletions is fixed-parameter tractable with parameter $k$.' author: - 'Manfred Cochefert[^1]' - 'Jean-François Couturier' - 'Petr A. Golovach[^2]' - Dieter Kratsch - 'Dani[ë]{}l Paulusma[^3]' title: 'Parameterized Algorithms for Finding Square Roots[^4] ' --- Introduction {#sec:intro} ============ Squares and square roots are classical concepts in graph theory that are defined as follows. The [*square*]{} $G^2$ of a graph $G=(V_G,E_G)$ is the graph with vertex set $V_G$ such that any two distinct vertices $u,v\in V_G$ are adjacent in $G^2$ if and only if $u$ and $v$ are of distance at most 2 in $G$. A graph $H$ is a [*square root*]{} of $G$ if $G=H^2$. There exist graphs with no square root, graphs with a unique square root as well as graphs with many square roots. Mukhopadhyay [@Mukhopadhyay67] showed in 1967 that a connected graph $G$ with $n$ vertices $v_1,\ldots,v_n$ has a square root if and only if there exists a set of $n$ complete subgraphs $K^1,\ldots,K^n$ of $G$ with $\bigcup_iV_{K^i}=V_G$ such that $K^i$ contains $v_i$ for all $1\leq i\leq n$, and $K^i$ contains $v_j$ if and only if $K^j$ contains $v_i$ for all $1\leq i<j\leq n$. This characterization did not yield a polynomial time algorithm for recognizing squares. 
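The definition of the square $G^2$ translates directly into code; a minimal illustrative sketch (not part of the paper) over adjacency sets:

```python
def square(adj):
    """Square of a graph given as {vertex: set of neighbors}: two distinct
    vertices are adjacent in G^2 iff they are at distance at most 2 in G."""
    adj2 = {v: set() for v in adj}
    for v in adj:
        for u in adj[v]:            # neighbors at distance 1
            adj2[v].add(u)
            for w in adj[u]:        # vertices at distance exactly 2
                if w != v:
                    adj2[v].add(w)
    return adj2
```

For example, the square of the path $0-1-2-3$ gains the edges $\{0,2\}$ and $\{1,3\}$; verifying that a candidate graph $H$ is a square root of $G$ amounts to checking square(H) == G, whereas the hard part, as discussed below, is finding such an $H$.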
In fact, in 1994, Motwani and Sudan [@MotwaniS94] showed that the [Square Root]{} problem, which is that of testing whether a graph has a square root, is [[NP]{}]{}-complete. This fundamental result triggered a lot of research on the computational complexity of recognizing squares of graphs and computing square roots in the presence of additional structural assumptions. In particular, the following two recognition questions have attracted attention; here ${\cal G}$ denotes some fixed graph class. - How hard is it to recognize squares of graphs of $\cal G$? - How hard is it to recognize graphs of $\cal G$ that have a square root? Ross and Harary [@RossH60] characterized squares of trees and proved that if a connected graph has a tree square root, then this root is unique up to isomorphism. Lin and Skiena [@LinS95] gave linear time algorithms for recognizing squares of trees and planar graphs with a square root. The results for trees [@LinS95; @RossH60] were generalized to block graphs by Le and Tuy [@LeT10]. Lau [@Lau06] gave a polynomial time algorithm for recognizing squares of bipartite graphs. Lau and Corneil [@LauC04] gave a polynomial time algorithm for recognizing squares of proper interval graphs and showed that the problems of recognizing squares of chordal graphs, squares of split graphs, and chordal graphs with a square root are all three [[NP]{}]{}-complete. Le and Tuy [@LeT11] gave a quadratic time algorithm for recognizing squares of strongly chordal split graphs. Milanic and Schaudt [@MilanicS13] gave linear time algorithms for recognizing trivially perfect graphs and threshold graphs with a square root. Adamaszek and Adamaszek [@AdamaszekA11] proved that if a graph has a square root of girth at least 6, then this square root is unique up to isomorphism. Farzad, Lau, Le and Tuy [@FarzadLLT12] showed that recognizing graphs with a square root of girth at least $g$ is polynomial-time solvable if $g\geq 6$ and [[NP]{}]{}-complete if $g=4$.
The missing case $g=5$ was shown to be [[NP]{}]{}-complete by Farzad and Karimi [@FarzadK12]. Our Results {#s-our} ----------- The classical [Square Root]{} problem is a decision problem. We introduce two optimization variants of it in order to be able to take a [*parameterized*]{} road to square roots. A problem with input size $n$ and a parameter $k$ is said to be *fixed parameter tractable* (or [[FPT]{}]{}) if it can be solved in time $f(k)\cdot n^{O(1)}$ for some function $f$ that only depends on $k$. We consider two natural choices for the parameter $k$ for our optimization variants of the [Square Root]{} problem and in this way obtain the first FPT algorithms for square root problems. First, in Section \[s-min\], we parameterize the [Minimum Square Root]{} problem, which is that of testing whether a graph has a square root with at most $s$ edges for some given integer $s$. Because any square root of a connected $n$-vertex graph $G$ is a connected spanning subgraph of $G$, every square root of $G$ has at least $n-1$ edges. Consequently, any instance $(G,s)$ of [Minimum Square Root]{} with $s\leq n-2$ is a no-instance if $G$ is connected, which means that we may assume that $s\geq n-1$. Hence, $k=s-(n-1)$ is the natural choice of parameter. Our main result is that [Minimum Square Root]{} is [[FPT]{}]{} with parameter $k$ [^5]. We prove this result by showing that an instance of [Minimum Square Root]{} can be reduced to an instance of a more general problem, in which we impose additional requirements on some of the edges, namely to be included or excluded from the square root. We prove that the new instance has size quadratic in $k$. In other words, we show that [Minimum Square Root]{} has a generalized kernel of quadratic size (see Section \[sec:defs\] for the definition of this notion). This result is further motivated by the observation that [Minimum Square Root]{} generalizes the problem of recognizing squares of trees (take $s=n-1$). 
A weaker statement of our [[FPT]{}]{} result is that of saying that the problem of recognizing squares of graphs of the class $${\cal G}_k=\{G\; |\; G\; \mbox{is a graph obtainable from a tree by adding at most $k$ edges}\}$$ is polynomial-time solvable for all fixed $k\geq 0$. As such, our result can also be seen as an extension of the aforementioned result of recognizing squares of trees [@LinS95]. Second, in Section \[s-max\], we parameterize the [Maximum Square Root]{} problem, which is that of testing whether a given graph $G$ with $m$ edges has a square root with at least $s$ edges for some given integer $s$. We show that this problem is [[FPT]{}]{} with parameter $k=m-s$. This choice of parameter is also natural, as $G$ has a square root with at least $s$ edges if and only if $G$ can be modified into a square root (of itself) by at most $k$ edge deletions. Hence, our second [[FPT]{}]{} result can be added to the growing body of parameterized results for graph editing problems, which form a well studied problem area within algorithmic graph theory. In Section \[s-max\] we also present an exact exponential time algorithm for [Maximum Square Root]{}, which could be seen as an improvement of the algorithm implied by the characterization of Mukhopadhyay [@Mukhopadhyay67]. In Section \[s-con\] we mention a number of relevant open problems. Preliminaries {#sec:defs} ------------- We only consider finite undirected graphs without loops and multiple edges. We refer to the textbook by Diestel [@Diestel10] for any undefined graph terminology and to the textbooks of Downey and Fellows [@DowneyF99], Flum and Grohe [@flum-grohe-book], and Niedermeier [@niedermeier-book] for detailed introductions to parameterized complexity theory. Let $G$ be a graph. We denote the vertex set and edge set of $G$ by $V_G$ and $E_G$, respectively. The subgraph of $G$ induced by a subset $U\subseteq V_G$ is denoted by $G[U]$. 
The graph $G-U$ is the graph obtained from $G$ by removing all vertices in $U$. If $U=\{u\}$, we also write $G-u$. The *distance* ${{\rm dist}}_G(u,v)$ between a pair of vertices $u$ and $v$ of $G$ is the number of edges of a shortest path between them. The *open neighborhood* of a vertex $u\in V_G$ is defined as $N_G(u) = \{v\; |\; uv\in E_G\}$, and its *closed neighborhood* is defined as $N_G[u] = N_G(u) \cup \{u\}$. Two vertices $u,v$ are said to be *true twins* if $N_G[u]=N_G[v]$, and $u,v$ are *false twins* if $N_G(u)=N_G(v)$. A vertex $u$ is *simplicial* if $N_G(u)$ is a clique. The [*degree*]{} of a vertex $u\in V_G$ is denoted $d_G(u)=|N_G(u)|$. The maximum degree of $G$ is denoted $\Delta(G)=\max\{d_G(v)|v\in V_G\}$. A vertex of degree 1 is said to be a *pendant* vertex. Let $G$ be a connected graph. Let $S\subset V_G$, and let $X$ and $Y$ be two disjoint nonempty vertex subsets of $G-S$. Then $S$ is a [*separator*]{} of $G$ if $G-S$ is disconnected, $S$ is an *(X,Y)-separator* if $G-S$ has no path that connects a vertex of $X$ to a vertex of $Y$, and $S$ is a *minimal* $(X,Y)$-separator if $S$ is an $(X,Y)$-separator of $G$ and no proper subset of $S$ is an $(X,Y)$-separator. Moreover, $G$ is [*$2$-connected*]{} if and only if $|V_G|\ge 3$ and $G$ has no separators of size one. The [*union*]{} of two graphs $G_1$ and $G_2$ is the graph $(V_{G_1}\cup V_{G_2},E_{G_1}\cup E_{G_2})$. The graph $K_n$ denotes the complete graph on $n$ vertices. The graph $K_{1,r}$ denotes the star on $r+1$ vertices. A well-known technique to show that a parameterized problem $\Pi$ is fixed-parameter tractable is to find a *reduction to a problem kernel*. 
This technique replaces an instance $(I, k)$ of $\Pi$ with a reduced instance $(I', k')$ of $\Pi$ called a *(problem) kernel* such that the following three conditions hold: - $k'\leq k$ and $|I'|\leq g(k)$ for some computable function $g$; - the reduction from $(I, k)$ to $(I', k')$ is computable in polynomial time; - $(I,k)$ is a [yes]{}-instance of $\Pi$ if and only if $(I',k')$ is a [yes]{}-instance of $\Pi$. If we slightly modify this definition by letting the instance $(I',k')$ belong to a different problem than $\Pi$, then $(I',k')$ is called a [*generalized*]{} kernel for $\Pi$ in the literature. This concept has been introduced and named [*bikernel*]{} by Alon, Gutin, Kim, Szeider and Yeo [@AlonGKSY11]; a related notion is compression. An upper bound $g(k)$ on $|I'|$ is called the *kernel size*, and a kernel is called *linear* if its size is linear in $k$ and *quadratic* if its size is quadratic in $k$. It is well known that a parameterized problem is fixed-parameter tractable if and only if it has a kernel (see for example [@niedermeier-book]). The Minimum Square Root Problem {#s-min} =============================== As discussed in Section \[s-our\], we consider connected graphs only and parameterize [Minimum Square Root]{} by $k=s-(n-1)$. From now on we denote this problem as [Tree $+\;k$ Edges Square Root]{}\ [*Input:*]{} a connected graph $G$ and an integer $k\geq 0$\ [*Parameter:*]{} $k$\ [*Question:*]{} has $G$ a square root with at most $n-1+k$ edges? We show the following result. \[thm:tree-few-edges\] The [Tree $+\;k$ Edges Square Root]{} problem can be solved in time $2^{O(k^4)} + O(n^4m)$ on graphs with $n$ vertices and $m$ edges. The remainder of this section is organized as follows. In Section \[s-structural\] we show a number of structural results needed to prove Theorem \[thm:tree-few-edges\]. 
In Section \[s-reduction\] we consider the more general problem [Tree $+\;k$ Edges Square Root with Labels]{}\ [*Input:*]{} a connected graph $G$, an integer $k\geq 0$ and two disjoint subsets\ $R,B\subseteq E_G$\ [*Parameter:*]{} $k$\ [*Question:*]{} has $G$ a square root $H$ with at most $n-1+k$ edges, such that $R\subseteq E_H$ and $B\cap E_H=\emptyset$? Note that the sets $R$ and $B$ in this problem are given sets of [*required*]{} edges (that have to be in the square root) and [*blocked*]{} edges (that are not allowed to be in the square root), respectively. Also note that [Tree $+\;k$ Edges Square Root with Labels]{} generalizes [Tree $+\;k$ Edges Square Root]{}; choose $R=B=\emptyset$. We reduce [Tree $+\;k$ Edges Square Root]{} to [Tree $+\;k$ Edges Square Root with Labels]{} where the size of the graph in the obtained instance is $O(k^2)$. In other words, we construct a quadratic generalized kernel for [Tree $+\;k$ Edges Square Root]{}. This means that to solve an instance of [Tree $+\;k$ Edges Square Root]{}, we can solve the obtained instance of [Tree $+\;k$ Edges Square Root with Labels]{} by a brute force algorithm. In Section \[s-solving\] we analyze the corresponding running time and complete the proof of Theorem \[thm:tree-few-edges\]. Structural Results {#s-structural} ------------------ We start with the following observation that we will frequently use. \[obs:leaves\] Let $H$ be a square root of a connected graph $G$. - If $u$ is a pendant vertex of $H$, then $u$ is a simplicial vertex of $G$. - If $u,v$ are pendant vertices of $H$ adjacent to the same vertex, then $u,v$ are true twins in $G$. - If $u,v$ are pendant vertices of $H$ adjacent to different vertices, then $u$ and $v$ are not adjacent in $G$ unless $H=K_2$. We now state five useful lemmas, the first two of which, Lemmas \[lem:leaves-one\] and \[lem:leaves-two\], can be found implicitly in the paper of Ross and Harary [@RossH60]. 
Ross and Harary [@RossH60] consider tree square roots, whereas we are concerned with finding general square roots. As such we give explicit statements of Lemmas \[lem:leaves-one\] and \[lem:leaves-two\]. We also give a proof of Lemma \[lem:leaves-two\] (the proof of Lemma \[lem:leaves-one\] is straightforward). \[lem:leaves-one\] Let $H$ be a square root of a graph $G$. Let $\{u_1,\ldots,u_r\}\subseteq V_H$ for some $r\geq 3$ induce a star in $H$ with central vertex $u_1$. Let $u_3,\ldots,u_r$ be pendant and $\{u_2\}$ be a $(\{u_1,u_3,\ldots,u_r\}, V_H\setminus\{u_1,\ldots,u_r\})$-separator of $H$. Then $\{u_1,\ldots,u_r\}$ is a clique of $G$, and $\{u_1,u_2\}$ is a minimal $(\{u_3,\ldots,u_r\}, V_G\setminus\{u_1,\ldots,u_r\})$-separator of $G$. \[lem:leaves-two\] Let $G$ be a connected graph with a square root $H$. Let $\{u_1,\ldots,u_r\}$, $r\geq 3$ be a clique in $G$, such that $\{u_1,u_2\}$ is a minimal $(\{u_3,\ldots,u_r\}, V_G\setminus\{u_1,\ldots,u_r\})$-separator of $G$. Let $\{x_1,\ldots,x_p\}=N_G(u_1)\setminus\{u_1,\ldots,u_r\}$ for some $p\geq 1$ and $\{y_1,\ldots,y_q\}=N_G(u_2)\setminus\{u_1,\ldots,u_r\}$ for some $q\geq 1$, as shown in Figure \[fig:trimming\]. Then the following three statements hold: - $u_1u_2\in E_H$ and either $u_3u_1,\ldots,u_ru_1\in E_H$, $u_3u_2,\ldots,u_ru_2\notin E_H$, $u_1x_1,\ldots, $ $u_1x_p\notin E_H$, and $\{u_2\}$ is a minimal $(\{u_1,u_3,\ldots,u_r\}, V_H\setminus\{u_1,\ldots,u_r\})$-separator in $H$, or $u_3u_1,\ldots,u_ru_1\notin E_H$, $u_3u_2,\ldots,u_ru_2\in E_H$, $u_2y_1,\ldots,u_2y_q\notin E_H$ and $\{u_1\}$ is a minimal $(\{u_2,\ldots,u_r\}, V_H\setminus\{u_1,\ldots,u_r\})$-separator in $H$ (see Figure \[fig:trimming-i-ii-iii\] i)). - If $u_1,u_2$ are true twins in $G$, then either $u_1x_1,\ldots,u_1x_p\in E_H$ or $u_2x_1,\ldots,u_2x_p\in E_H$. 
Moreover, in this case, $G$ is the union of two complete graphs with vertex sets $\{u_1,\ldots,u_r\}$ and $\{u_1,u_2,x_1,\ldots,x_p\}$, respectively, and $G$ has two (isomorphic) square roots with edge sets $\{u_1u_2,\ldots,u_1u_r\}$ $\cup\{u_2x_1,\ldots,u_2x_p\}$ and $\{u_2u_1,u_2u_3,\ldots,u_2u_r\}$ $\cup\{u_1x_1,\ldots,u_1x_p\}$, respectively (see Figure \[fig:trimming-i-ii-iii\] ii)). - If $N_G[u_2]\setminus N_G[u_1]\neq\emptyset$, then $u_2u_1,\ldots,u_ru_1\in E_H$, $u_3u_2,\ldots,u_ru_2\notin E_H$, $u_1x_1,\ldots,u_1x_p\notin E_H$. Moreover, the graph $H'$ obtained from $H$ by deleting all $u_iu_j$ with $3\leq i<j\leq r$ is a square root of $G$ (in which $\{u_1,\ldots,u_r\}$ induces a star with central vertex $u_1$ and with leaves $u_2,u_3,\ldots,u_r$ that are pendant vertices except for $u_2$) (see Figure \[fig:trimming-i-ii-iii\] iii)). We first prove i). As $\{u_1,u_2\}$ is a $(\{u_3,\ldots,u_r\}, V_G\setminus\{u_1,\ldots,u_r\})$-separator of $G$, at least one vertex $u_i$ with $3\leq i\leq r$ is adjacent to one of $u_1,u_2$ in $H$, say to $u_1$. Then $u_1x_1,\ldots,u_1x_p\notin E_H$; otherwise, that is, if $u_1$ is adjacent to some $x_j$ in $H$, then $u_ix_j\in E_G$ contradicting the fact that $\{u_1,u_2\}$ is a $(\{u_3,\ldots,u_r\}, V_G\setminus\{u_1,\ldots,u_r\})$-separator of $G$. Because $u_1x_1,\ldots,u_1x_p\notin E_H$, at least one vertex $y_h$ must be adjacent to $u_2$ in $H$ (as otherwise $H$ is not connected and hence cannot be the square root of $G$, which is a connected graph). Because $\{u_1,u_2\}$ is a $(\{u_3,\ldots,u_r\}, V_G\setminus\{u_1,\ldots,u_r\})$-separator of $G$, this means that $u_3u_2,\ldots,u_ru_2\notin E_H$. Consequently, $u_1u_2\in E_H$ and $\{u_2\}$ is a minimal $(\{u_1,u_3,\ldots,u_r\}, V_H\setminus\{u_1,\ldots,u_r\})$-separator in $H$. Suppose that there is a vertex $u_i$, $3\leq i\leq r$, such that $u_iu_1\notin E_H$. 
Since $u_3,\ldots,u_r$ are not adjacent to $u_2$, it follows that any $(u_2,u_i)$-path in $H$ has length at least 3, which is not possible as $u_2u_i\in E_G$. We conclude that $u_3u_1,\ldots,u_ru_1\in E_H$. Hence we have shown i). We now prove ii). Note that $\{x_1,\ldots,x_p\}=\{y_1,\ldots, y_q\}$ with $p=q$. Due to i) either $u_1$ or $u_2$ is not adjacent to any $x_i$. In the first case $u_2$ must be adjacent to all $x_i$ in $H$, as otherwise there is no required path of length at most $2$ in $H$ between some $x_i$ and $u_1$. Similarly, in the second case, $u_1$ must be adjacent to all $x_i$ in $H$. Hence, $\{u_1,u_2,x_1,\ldots,x_p\}$ is a clique in $G$. If $H$ has an edge $x_iz$ with $z\notin \{u_1,\ldots,u_r,x_1,\ldots,x_p\}$, then $zu_2\in E_G$, which is not possible. This means that $G$ is the union of two complete graphs with vertex sets $\{u_1,\ldots,u_r\}$ and $\{u_1,u_2,x_1,\ldots,x_p\}$, respectively. It is readily seen that $G$ has two (isomorphic) square roots with edge sets $\{u_1u_2,\ldots,u_1u_r\}$ $\cup\{u_2x_1,\ldots,u_2x_p\}$ and $\{u_2u_1,u_2u_3,\ldots,u_2u_r\}$ $\cup\{u_1x_1,\ldots,u_1x_p\}$, respectively. Hence we have shown ii). It remains to prove iii). Let $y_i\in N_G[u_2]\setminus N_G[u_1]$. Due to i) we have that $u_1u_2\in E_H$, and that either $u_3u_1,\ldots,u_ru_1\in E_H$, $u_3u_2,\ldots,u_ru_2\notin E_H$, $u_1x_1,\ldots, $ $u_1x_p\notin E_H$, or $u_3u_1,\ldots,u_ru_1\notin E_H$, $u_3u_2,\ldots,u_ru_2\in E_H$, $u_2y_1,\ldots,u_2y_q\notin E_H$. If the latter case holds, then any $(u_2,y_i)$-path in $H$ has length at least 3, which is not possible as $u_2y_i\in E_G$. Hence the former case must hold. Let $H'$ be the graph obtained from $H$ by deleting all $u_iu_j$ with $3\leq i<j\leq r$. It is readily seen that $H'^2=H^2=G$. Hence we have shown iii). 
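Case ii) of the lemma can be checked on a small concrete instance. The sketch below is ours and purely illustrative (graphs as dicts of adjacency sets, helper names `square` and `from_edges` invented here): it builds the union of two complete graphs sharing the true twins $u_1,u_2$ and verifies that both double stars square to it.

```python
def square(adj):
    # u, v are adjacent in the square iff their distance in the root is at most 2.
    return {v: (adj[v] | {w for u in adj[v] for w in adj[u]}) - {v} for v in adj}

def from_edges(vertices, edges):
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

vs = ['u1', 'u2', 'u3', 'u4', 'x1', 'x2']

# G: the union of the two complete graphs on {u1,u2,u3,u4} and {u1,u2,x1,x2},
# in which u1 and u2 are true twins (r = 4, p = q = 2).
cliques = [['u1', 'u2', 'u3', 'u4'], ['u1', 'u2', 'x1', 'x2']]
G = from_edges(vs, [(c[i], c[j]) for c in cliques
                    for i in range(len(c)) for j in range(i + 1, len(c))])

# The two (isomorphic) square roots of statement ii): double stars with the
# first clique attached at u1 and the second at u2, or vice versa.
H1 = from_edges(vs, [('u1', 'u2'), ('u1', 'u3'), ('u1', 'u4'),
                     ('u2', 'x1'), ('u2', 'x2')])
H2 = from_edges(vs, [('u1', 'u2'), ('u2', 'u3'), ('u2', 'u4'),
                     ('u1', 'x1'), ('u1', 'x2')])
```

Both `H1` and `H2` square to `G`, matching the two edge sets listed in statement ii).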
Let $G$ be a graph that contains (besides possibly some other vertices) $p+q+r$ distinct vertices $u_1,\ldots,u_r$, $x_1,\ldots,x_p$, $y_1,\ldots,y_q$ for some $r\geq 3$, $p\geq 1$ and $q\geq 1$, such that the following conditions hold: - $\{u_1,\ldots,u_r\}$ is a clique in $G$; - $\{u_1,u_2,u_3\}$ is a minimal $(\{u_4,\ldots,u_r\},V_G\setminus\{u_1,\ldots,u_r\})$-separator in $G$ if $r\geq 4$; - $\{u_1,u_3,\ldots,u_r\}\cup \{x_1,\ldots,x_p\}\cup \{y_1,\ldots,y_q\}=N_G(u_2)$; - $\{u_2,u_4,u_5,\ldots,u_r\}=N_G(u_1)\cap N_G(u_3)$; - $\{x_1,\ldots,x_p\}\subseteq N_G(u_1)$ and $\{y_1,\ldots,y_q\}\subseteq N_G(u_3)$; - $x_iy_j\notin E_G$ for $i=1,\ldots, p$ and $j=1,\ldots,q$. We call $G$ an [*$F$-graph*]{} and $\{u_1,u_2,u_3\}$ an [*$F$-triple*]{} with [*outer vertices*]{} $u_1$ and $u_3$, see Figure \[f-fgraph\] for an example. Here, $F$ refers to the graph in Figure \[fig:path\]. These notions are further explained by Lemmas \[lem:path-one\] and \[lem:path-two\]. \[lem:path-one\] Let $H$ be a square root of a graph $G$. Let $H$ contain the graph $F$ of Figure \[fig:path\] as a subgraph, such that $u_4,\ldots,u_r$ are pendant vertices of $H$ (if $r\geq 4$), $d_H(u_2)=r-1$, $u_1u_2u_3$ is an induced path in $H$ that is not contained in any cycle of length at most $6$, $\{x_1,\ldots,x_p\}=N_H(u_1)\setminus\{u_2\}$ and $\{y_1,\ldots,y_q\}=N_H(u_3)\setminus\{u_2\}$. Then $G$ is an $F$-graph. Conditions (i)-(iii) and (v) are readily seen to hold. Conditions (iv) and (vi) follow from the condition that the path $u_1u_2u_3$ is not contained in any cycle of length at most 6 in $H$. \[lem:path-two\] Let $G$ be a connected $F$-graph. If $H$ is a square root of $G$, then the graph $F$ of Figure \[fig:path\] is a subgraph of $H$ such that $d_H(u_2)=r-1$, $\{x_1,\ldots,x_p\}=N_H(u_1)\setminus\{u_2\}$ and $\{y_1,\ldots,y_q\}=N_H(u_3)\setminus\{u_2\}$. 
Moreover, the graph obtained from $H$ by deleting all edges $u_iu_j$ with $4\leq i<j\leq r$ is a square root of $G$ that contains $u_4,\ldots,u_r$ as pendant vertices (if $r\geq 4$). Let $H$ be a square root of $G$. We consider the following three cases. [**Case 1.**]{} $u_1u_2,u_2u_3\in E_H$. Because $x_1u_3,\ldots,x_pu_3\notin E_G$, this means that $x_1u_2,\ldots,x_pu_2\notin E_H$. Symmetrically, $y_1u_2,\ldots,y_qu_2\notin E_H$. Since each $x_iu_2\in E_G$ but $x_iu_2\notin E_H$, $H$ has an $(x_i,u_2)$-path of length 2. Because $d_G(u_2)=p+q+r-1$, the middle vertex of this path is in $\{u_1,u_3,\ldots,u_r\}$. Because $x_i$ is not adjacent to $u_3,\ldots,u_r$ in $H$ (as it is not so in $G$), this path goes through $u_1$. In other words, $x_1u_1,\ldots,x_pu_1\in E_H$ and, by symmetry, $y_1u_3,\ldots,y_qu_3\in E_H$. If a vertex $z\notin \{u_2,x_1,\ldots,x_p\}$ is adjacent to $u_1$ in $H$, then $z$ is adjacent to both $u_2$ and $x_1$ in $G$. Because $d_G(u_2)=p+q+r-1$, we find that $z\in \{u_3,\ldots,u_r\}$ or $z\in \{y_1,\ldots,y_q\}$. However, none of $\{u_3,\ldots,u_r\}$ is adjacent to $x_1$, whereas none of $\{y_1,\ldots,y_q\}$ is adjacent to $x_1$ either. We conclude that $\{x_1,\ldots,x_p\}=N_H(u_1)\setminus\{u_2\}$ and by using the same arguments that $\{y_1,\ldots,y_q\}=N_H(u_3)\setminus\{u_2\}$. Now we show that $u_4u_2,\ldots,u_ru_2\in E_H$. To prove it, assume that some $u_i$, $4\leq i\leq r$, is not adjacent to $u_2$ in $H$. Then $u_1$ and $u_i$ are at distance at least 3 in $H$ contradicting $u_1u_i\in E_G$. We already deduced that $x_1u_2,\ldots,x_pu_2\notin E_H$ and that $y_1u_2,\ldots,y_qu_2\notin E_H$. By assumption, $u_2$ is adjacent to both $u_1$ and $u_3$. As $d_G(u_2)=p+q+r-1$, we then find that $d_H(u_2)=r-1$. To conclude the proof for this case, it remains to observe that if some $u_i,u_j$ are adjacent in $H$ for $i,j\in\{4,\ldots,r\}$, then the graph $H'$ obtained from $H$ by the removal of these edges is a square root of $G$. 
[**Case 2.**]{} $u_1u_2,u_2u_3\notin E_H$. Since $u_1u_2\notin E_H$, $u_1u_2\in E_G$ and $d_G(u_2)=p+q+r-1$, there exists a vertex $z\in \{x_1,\ldots,x_p\}\cup\{u_4,\ldots,u_r\}$ such that $u_1z,zu_2\in E_H$. Because $z$ is not adjacent to $y_1,\ldots,y_q$ in $G$, we find that $y_1u_2,\ldots,y_qu_2\notin E_H$. By the same arguments, we obtain $x_1u_2,\ldots,x_pu_2\notin E_H$. Hence, $z\in\{u_4,\ldots,u_r\}$. By symmetry, some vertex from $\{u_4,\ldots,u_r\}$ is adjacent to $u_3$ in $H$. Consequently, each vertex of $\{u_1,u_2,u_3\}$ is adjacent to some vertex in $\{u_4,\ldots,u_r\}$ in $H$. As $\{u_1,u_2,u_3\}$ separates $\{u_4,\ldots,u_r\}$ from $V_G\setminus \{u_1,\ldots,u_r\}$, this means that $H$ has no edges that join $u_1,u_2,u_3$ with the vertices of $V_G\setminus \{u_1,\ldots,u_r\}$; a contradiction. Hence, this case is not possible. By symmetry, it remains to consider the following case. [**Case 3.**]{} $u_1u_2\in E_H$ and $u_2u_3\notin E_H$. Because $u_1u_2\in E_H$ and $y_1u_1,\ldots,y_qu_1\notin E_G$, we find that $y_1u_2,\ldots,y_qu_2\notin E_H$. Because $y_1u_2\in E_G$, this means that $H$ contains a $(y_1,u_2)$-path of length 2. Because $u_2u_3\notin E_H$ and $d_G(u_2)=p+q+r-1$, such a path should go through one of the vertices of $\{u_1,u_4,\ldots,u_r\}\cup\{x_1,\ldots,x_p\}$. However, none of these vertices is adjacent to $y_1$ in $G$, and consequently not in $H$ either; a contradiction. Therefore, this case is not possible either. \[lem:twins\] Let $u,v$ be true twins in a connected graph $G$ with at least three vertices. Let $G'$ be the graph obtained from $G$ by deleting $v$. The following two statements hold: - If $H'$ is a square root of $G'$, then the graph $H$ obtained from $H'$ by adding $v$ with $N_{H}(v)=N_{H'}(u)$ (that is, by adding a false twin of $u$) is a square root of $G$. - If $H$ is a square root of $G$ such that $u,v$ are false twins in $H$, then the graph $H'$ obtained by deleting $v$ is a square root of $G'$. We first prove i). 
Let $H'$ be a square root of $G'$, and let $H$ be the graph obtained from $H'$ by adding a false twin $v$ of $u$. As $G$ is a connected graph with at least three vertices, $u$ is adjacent in $H'$ to some vertex $z$. Then $u$ and $v$ are adjacent to $z$ in $H$ and thus ${{\rm dist}}_H(u,v)\le 2$. Hence, $uv$ is an edge of $H^2$. Then it is straightforward to see that $G=H^2$. Statement ii) follows from the fact that identifying false twins does not change the distance between any two vertices. Construction of the Generalized Kernel {#s-reduction} -------------------------------------- As discussed, in this section, we reduce [Tree $+\;k$ Edges Square Root]{} to [Tree $+\;k$ Edges Square Root with Labels]{} in such a way that the size of the graph in the obtained instance is $O(k^2)$. First, we informally sketch the main steps of the reduction. Let $G$ be a connected graph with $n$ vertices, and let $k$ be a positive integer. Suppose that $H$ is a square root of $G$ with at most $n+k-1$ edges. If $H$ has a vertex $u$ of degree at least 2 that has exactly one non-pendant neighbor $v$, then we recognize the corresponding structure in $G$ and delete those vertices of $G$ that are pendant vertices of $H$ adjacent to $u$ as shown in Figure \[fig:trim-informal\], that is, similar to the algorithm of Lin and Skiena [@LinS95], we “trim” pendant edges in potential roots. Since the root we are looking for is not a tree, our trimming is more sophisticated and based on Lemmas \[lem:leaves-one\] and \[lem:leaves-two\]. We will show that in this way we obtain a graph $G'$ with $n'$ vertices that has the following property: every pendant vertex of any square root $H'$ of $G'$ with at most $n'-1+k$ edges is adjacent to a vertex that has at least two non-pendant neighbors in $H'$. Suppose that $H'$ has a sufficiently long induced path $P$, such that every internal vertex of $P$ has exactly two non-pendant neighbors in $H'$. 
Let $u$ be an internal vertex of $P$, and let $x,y\in V_P$ be the two non-pendant neighbors of $u$. Using Lemmas \[lem:path-one\] and \[lem:path-two\], we recognize the corresponding structure in $G'$ and modify $G'$ as shown in Figure \[fig:path-informal\], that is, we delete $u$ in $H'$ and join $x$ and $y$ by an edge. By performing this operation recursively, we obtain a graph $G''$ with $n''$ vertices. Suppose that $H''$ is a square root of $G''$ with at most $n''+k-1$ edges. Let $H^*$ be the graph obtained from $H''$ by deleting all pendant vertices of $H''$. Then $H^*$ has no vertices of degree 1, and the length of every path $P$ with internal vertices of degree 2 in $H^*$ is bounded by a constant. This means that the size of $H^*$ is $O(k)$. The vertices of $V_{G''}\setminus V_{H^*}$ are pendant vertices of $H''$. Consider the set $Z$ of pendant vertices of $H''$ adjacent to a vertex $u\in V_{H^*}$. Then the vertices of $Z$ are simplicial vertices of $G''$. Moreover, they are true twins. We use Observation \[obs:leaves\] and Lemma \[lem:twins\] to show that we may reduce the number of true twins in $G''$ if $G''$ has too many. This results in a graph $G'''$ with $n'''$ vertices such that $n'''$ is $O(k^2)$. During the reduction from $G$ to $G''$ we label some edges, that is, we include some edges in sets $R$ or $B$ and, therefore, obtain an instance $(G''',k,R,B)$ of [Tree $+\;k$ Edges Square Root with Labels]{}. Before we give a formal description of our reduction, we introduce the following terminology. A square root $H$ of a graph $G$ that has at most $|V_G|-1+k$ edges for some $k\geq 0$ is called a [*solution*]{} of the instance $(G,k)$ of [Tree $+\;k$ Edges Square Root]{}. If $R\subseteq E_H$ and $B\cap E_H=\emptyset$ for two disjoint subsets $R$ and $B$ of $E_G$, then $H$ is also called a [*solution*]{} of the instance $(G,k,R,B)$ of [Tree $+\;k$ Edges Square Root with Labels]{}. We are now ready to give the exact details of our reduction. 
Let $G$ be a connected graph with $n$ vertices and $m$ edges, and let $k$ be a positive integer. First we check whether $G$ has a square root that is a tree by using the linear-time algorithm of Lin and Skiena [@LinS95]. If we find such a square root, then we stop and return a yes-answer. From now on we assume that every square root of $G$ (if there exists one) has at least one cycle. Because connected graphs that have square roots are 2-connected, we also check whether $G$ is 2-connected. If not, then we stop and return a no-answer. Otherwise we continue as follows. We introduce two sets of edges $R$ and $B$. Initially, we set $R=B=\emptyset$. Next, we “trim” pendant edges in potential roots, that is, we exhaustively apply the following rule that consists of six steps that must be performed in increasing order. [**Trimming Rule**]{} 1. Find a pair $S=\{u_1,u_2\}$ of two adjacent vertices such that one connected component of $G-S$ consists of the vertices $u_3,\ldots,u_r$ for some $r\geq 3$, which together with $u_1,u_2$ form a clique in $G$. 2. If $N_G[u_1]=N_G[u_2]$, then stop and return a no-answer. 3. If $N_G[u_1]\setminus N_G[u_2]\neq\emptyset$ and $N_G[u_2]\setminus N_G[u_1]\neq\emptyset$, then stop and return a no-answer. 4. If $N_G[u_1]\setminus N_G[u_2]\neq\emptyset$, then rename $u_1$ by $u_2$ and $u_2$ by $u_1$ (this step is for notational convenience only and has no further meaning). 5. Define sets $R'=\{u_1u_2,\ldots,u_1u_r\}$ and $B'=\{u_iu_j\; |\; 2\leq i<j\leq r\}\cup\{u_1x\; |\; x\in N_G(u_1)\setminus\{u_2,\ldots,u_r\}\}$. 6. If $R\cap B'\neq\emptyset$ or $R'\cap B\neq\emptyset$, then stop and return a no-answer. Otherwise, set $R=R\cup R'$, $B=B\cup B'$, delete $u_3,\ldots,u_r$ from $G$ and also delete all edges incident to $u_3,\ldots,u_r$ from $R$ and $B$. 
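The search in step 1 of the trimming rule can be sketched as a naive brute-force procedure. The sketch below is ours and purely illustrative (graphs as dicts of adjacency sets, all names invented here); it is not the efficient implementation behind the running time of the theorem.

```python
from itertools import combinations

def find_trim_pair(adj):
    """Naive search for step 1 of the trimming rule: adjacent u1, u2 such that
    some connected component of G - {u1, u2}, together with u1 and u2,
    forms a clique of G. Returns (u1, u2, component) or None."""
    def components(removed):
        # Connected components of the graph with `removed` vertices deleted.
        seen, comps = set(removed), []
        for s in adj:
            if s in seen:
                continue
            comp, stack = set(), [s]
            seen.add(s)
            while stack:
                v = stack.pop()
                comp.add(v)
                for w in adj[v] - seen:
                    seen.add(w)
                    stack.append(w)
            comps.append(comp)
        return comps

    for u1 in adj:
        for u2 in adj[u1]:
            for comp in components({u1, u2}):
                cand = comp | {u1, u2}
                if all(b in adj[a] for a, b in combinations(cand, 2)):
                    return u1, u2, comp
    return None

# Example: the square of the tree with edges u1u2, u1u3, u1u4, u2y.
# Here S = {u1, u2} qualifies, with {u3, u4} (or {y}) as the clique component.
G = {'u1': {'u2', 'u3', 'u4', 'y'}, 'u2': {'u1', 'u3', 'u4', 'y'},
     'u3': {'u1', 'u2', 'u4'}, 'u4': {'u1', 'u2', 'u3'}, 'y': {'u1', 'u2'}}
```

In this example the pendant leaves $u_3,u_4$ of the root show up in $G$ exactly as the clique component that the rule trims away.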
Exhaustively applying the trimming rule yields a sequence of instances $(G_0,k,R_0,B_0),\ldots, (G_\ell,k,R_\ell,B_\ell)$ of [Tree $+\;k$ Edges Square Root with Labels]{} for some integer $\ell\geq 0$, where $(G_0,k,R_0,B_0)=(G,k,\emptyset,\emptyset)$ and where $(G_\ell,k,R_\ell,B_\ell)$ is an instance for which we have either returned a no-answer (in steps 2, 3 or 6) or for which there does not exist a set $S$ as specified in step 1. For $0\leq i\leq \ell-1$ we denote the sets $R'$ and $B'$ constructed in the $(i+1)$th call of the trimming rule by $R_i'$ and $B'_i$, respectively. We need the following lemma. \[lem:trimming\] The instance $(G_\ell,k,R_\ell,B_\ell)$ has no solution that is a tree, and $G_\ell$ is $2$-connected. Moreover, $(G_\ell,k,R_\ell,B_\ell)$ has a solution if and only if $(G_0,k,R_0,B_0)$ has a solution. If the trimming rule returned a no-answer for $(G_\ell,k,R_\ell,B_\ell)$, then $(G_0,k,R_0,B_0)$ has no solution. For $0\leq i\leq \ell$, we use induction to show that the graph $G_i$ is $2$-connected and that $(G_i,k,R_i,B_i)$ has no solution that is a tree. Moreover, for all $1\leq i\leq \ell$, we show that $(G_i,k,R_i,B_i)$ has a solution if and only if $(G_{i-1},k,R_{i-1},B_{i-1})$ has a solution. Finally, we prove that if the trimming rule returned a no-answer for $(G_\ell,k,R_\ell,B_\ell)$, then $(G_0,k,R_0,B_0)$ has no solution. If $i=0$, then $G_i$ is 2-connected and $(G_0,k,R_0,B_0)$ has no solution that is a tree by our initial assumption (as we had preprocessed $G$ with respect to these two properties). Now suppose that $1\leq i\leq \ell$. By our induction hypothesis, we may assume that $G_{i-1}$ is 2-connected and that $(G_{i-1},k,R_{i-1},B_{i-1})$ has no solution that is a tree. 
Because the trimming rule applied on $(G_{i-1},k,R_{i-1},B_{i-1})$ yielded a new instance $(G_i,k,R_i,B_i)$, the graph $G_{i-1}$ has a pair $S=\{u_1,u_2\}$ of adjacent vertices such that one connected component of $G_{i-1}-S$ consists of vertices $u_3,\ldots,u_r$ that together with $u_1,u_2$ form a clique in $G_{i-1}$. Step 6 implies that $G_i=G_{i-1}-\{u_3,\ldots,u_r\}$. Because we did not return a no-answer for $(G_{i-1},k,R_{i-1},B_{i-1})$, we find that $N_{G_{i-1}}[u_1] \subset N_{G_{i-1}}[u_2]$. Hence, $G_{i-1}$ is not a complete graph. Because $G_{i-1}$ is 2-connected, this means that $G_i$ is 2-connected. We now show that any solution for $(G_{i-1},k,R_{i-1},B_{i-1})$ corresponds to a solution for $(G_i,k,R_i,B_i)$, and vice versa. First suppose that $H_{i-1}$ is an arbitrary solution for $(G_{i-1},k,R_{i-1},B_{i-1})$. Let $N_{G_{i-1}}(u_1)\setminus \{u_2,\ldots,u_r\}=\{x_1,\ldots,x_p\}$. Because $N_{G_{i-1}}[u_2]\setminus N_{G_{i-1}}[u_1]\neq\emptyset$, we find that $G_{i-1}-\{u_1,u_2\}$ contains at least two connected components. As $G_{i-1}$ is 2-connected, this means that $\{u_1,u_2\}$ is a minimal $(\{u_3,\ldots,u_r\},V_{G_{i-1}}\setminus \{u_1,\ldots,u_r\})$-separator of $G_{i-1}$. Hence we may apply Lemma \[lem:leaves-two\] iii), which tells us that $u_2u_1,\ldots,u_ru_1\in E_{H_{i-1}}$, $u_3u_2,\ldots,u_ru_2\notin E_{H_{i-1}}$, and $u_1x_1,\ldots,u_1x_p\notin E_{H_{i-1}}$. As $R_i\subseteq R_{i-1}\cup \{u_1u_2\}$ and $B_i\subseteq B_{i-1}\cup \{u_1x_1,\ldots,u_1x_p\}$, this means that the graph $H_i$ obtained from $H_{i-1}$ by deleting $u_3,\ldots,u_r$ is a solution for $(G_i,k,R_i,B_i)$; in particular note that $|E_{H_i}|\leq |E_{H_{i-1}}|-(r-2)\leq |V_{G_{i-1}}|-1+k-(r-2)=|V_{G_i}|-1+k$, as required. Now suppose that $H_i$ is an arbitrary solution for $(G_i,k,R_i,B_i)$. Then adding the edges $u_1u_3,\ldots,u_1u_r$ to $H_i$ yields a graph $H$ that is a square root of $G_{i-1}$. 
The edges $u_1u_3,\ldots,u_1u_r$ are not in $B_{i-1}$, as they are in the set $R'_{i-1}$ constructed in step 5 and $R'_{i-1}\cap B_{i-1}=\emptyset$ (otherwise the trimming rule would have stopped when processing $(G_{i-1},k,R_{i-1},B_{i-1})$ in step 6). Now suppose that $R_{i-1}$ contains an edge not in $H$. By definition of $R_i$, this edge must be between some $u_s$ and $u_t$ with $3\leq s<t\leq r$. Then $u_su_t$ belongs to $R_{i-1}$, because it was placed in the set $R_h$ for some $h\leq i-1$. In step 5 of the corresponding call of the trimming rule, also one of the edges $u_su_1$ or $u_tu_1$ was placed in $B_h$. Hence either $u_su_1$ or $u_tu_1$ belongs to $B_{i-1}$. This yields a contradiction as both $u_su_1$ and $u_tu_1$ belong to $R'_{i-1}$ and $R'_{i-1}\cap B_{i-1}=\emptyset$ (otherwise the trimming rule would have stopped when processing $(G_{i-1},k,R_{i-1},B_{i-1})$ in step 6). Hence, after observing that $|E_H|=|E_{H_i}|+(r-2)\leq |V_{G_i}|-1+k+(r-2)=|V_{G_{i-1}}|-1+k$, we conclude that $H$ is a solution for $(G_{i-1},k,R_{i-1},B_{i-1})$. We observe that $H_i$ cannot be a tree, as this would imply that $H$ is a tree, which is not possible as $(G_{i-1},k,R_{i-1},B_{i-1})$ does not have such a solution. We are left to show that if the trimming rule returned a no-answer for $(G_\ell,k,R_\ell,B_\ell)$, then $(G_0,k,R_0,B_0)$ has no solution. Due to the above, this comes down to showing that $(G_\ell,k,R_\ell,B_\ell)$ has no solution. Suppose that the trimming rule returned a no-answer for $(G_\ell,k,R_\ell,B_\ell)$. Then this must have happened in step 2, 3 or 6, thus after step 1. Hence, there exists a pair of adjacent vertices $S=\{u_1,u_2\}$ in $G_\ell$, such that one connected component of $G_\ell-S$ has vertex set $\{u_3,\ldots,u_r\}$ and $\{u_1,\ldots,u_r\}$ is a clique. First assume that $S$ is not a separator of $G_\ell$, that is, $G_\ell$ is a complete graph with vertex set $\{u_1,\ldots,u_r\}$. 
Then $N_G[u_1]=N_G[u_2]$ (and the no-answer given by the trimming rule happens in step 2). In order to obtain a contradiction, assume that $(G_\ell,k,R_\ell,B_\ell)$ has a solution $H$. Any star on $|V_{G_\ell}|$ vertices is a square root of $G_\ell$ with at most $|V_{G_\ell}|-1+k$ edges. However, $H$ cannot be such a star, as $(G_\ell,k,R_\ell,B_\ell)$ has no solution that is a tree. Hence, $R_\ell\neq\emptyset$ or $B_\ell\neq\emptyset$. Recall that $B_0=R_0=\emptyset$. Hence, $\ell\geq 1$, and non-emptiness of $R_\ell$ or $B_\ell$ must have been obtained in a previous call of the trimming rule, say in the $(h+1)$th call of the trimming rule for some $0\leq h\leq \ell-1$. By definition of steps 5 and 6, we find that $B_h\neq \emptyset$ implies that $R_h\neq \emptyset$. Hence, $R_h\neq \emptyset$. Let $u_iu_j\in R_h$. By steps 5 and 6, this edge has an end-vertex, say $u_i$, such that $u_iu_s\in B_\ell$ for all $s\in\{1,\ldots,r\}\setminus \{i,j\}$. Consequently, $u_ju_s\in E_H$ for all $s\in\{1,\ldots,r\}\setminus \{j\}$. Because the star with central vertex $u_j$ and leaves $V_{G_\ell}\setminus\{u_j\}$ is not a solution for $(G_\ell,k,R_\ell,B_\ell)$, there must be an edge $u_su_t\in R_\ell$ with $s,t\in\{1,\ldots,r\}\setminus \{j\}$. However then, due to steps 5 and 6, $u_ju_s\in B_\ell$ or $u_ju_t\in B_\ell$, that is, at least one of these edges cannot be in $H$; a contradiction. Now assume that $S$ is a separator of $G_\ell$. Because $G_\ell$ is 2-connected, both $u_1$ and $u_2$ have at least one neighbor in $V_{G_\ell}\setminus\{u_1,\ldots,u_r\}$. Hence $\{u_1,u_2\}$ is a minimal separator (and we may apply Lemma \[lem:leaves-two\] in the remainder). Recall that the trimming rule only returns a no-answer in steps 2, 3, or 6. We consider each of these three cases separately. [**Case 1.**]{} The no-answer is given in step 2. Then $N_G[u_1]=N_G[u_2]$. 
By Lemma \[lem:leaves-two\] i) and ii), $G_\ell$ is the union of two cliques $\{u_1,\ldots,u_r\}$ and $\{u_1,u_2,x_1,\ldots,x_p\}$ where $\{x_1,\ldots,x_p\}=N_G(u_1)\setminus\{u_2,\ldots,u_r\}$. In order to obtain a contradiction, suppose that $(G_\ell,k,R_\ell,B_\ell)$ has a solution $H$. By Lemma \[lem:leaves-two\] i) and ii), we may assume without loss of generality that $u_1u_2,\ldots,u_1u_r\in E_H$, $u_2u_3,\ldots,u_2u_r\notin E_H$, $u_1x_1,\ldots,u_1x_p\notin E_H$ and $u_2x_1,\ldots,u_2x_p\in E_H$. Recall that $(G_\ell,k,R_\ell,B_\ell)$ has no solution that is a tree. Hence, there exists an edge $u_iu_j\in R_\ell$ for some $i,j\in \{2,\ldots,r\}$ or an edge $x_ix_j\in R_\ell$ for some $i,j\in \{1,\ldots,p\}$. By symmetry, we only need to consider the case $u_iu_j\in R_\ell$. This edge was placed in $R_\ell$ in some previous call of the trimming rule. However, due to steps 5 and 6 performed in that call, we find that $u_iu_1\in B_\ell$ or $u_ju_1\in B_\ell$, that is, at least one of these two edges cannot be in $H$; a contradiction. [**Case 2.**]{} The no-answer is given in step 3. Then we have $N_G[u_1]\setminus N_G[u_2]\neq\emptyset$ and $N_G[u_2]\setminus N_G[u_1]\neq\emptyset$. Due to Lemma \[lem:leaves-two\] i) and iii), $(G_\ell,k,R_\ell,B_\ell)$ has no solution. [**Case 3.**]{} The no-answer is given in step 6. Then $R_\ell\cap B_\ell'\neq \emptyset$ or $R'_\ell\cap B_\ell\neq \emptyset$. By step 4, we may assume that $N_G[u_1]\setminus N_G[u_2]=\emptyset$ and that $N_G[u_2]\setminus N_G[u_1]\neq\emptyset$. In order to obtain a contradiction, suppose that $(G_\ell,k,R_\ell,B_\ell)$ has a solution $H$. By Lemma \[lem:leaves-two\] iii), $R'_\ell=\{u_2u_1,\ldots,u_ru_1\} \subseteq E_H$. Hence $R'_\ell\cap B_\ell=\emptyset$, which means that $R_\ell\cap B_\ell'\neq \emptyset$. Let $\{x_1,\ldots,x_p\}=N_G(u_1)\setminus\{u_1,\ldots,u_r\}$. Then we have that $B_\ell'=\{u_iu_j\; |\; 2\leq i<j\leq r\}\cup\{u_1x_1,\ldots,u_1x_p\}$. 
By the same arguments as used in Case 1, we find that $u_iu_j\notin R_\ell$ for all $2\leq i<j\leq r$. By Lemma \[lem:leaves-two\] iii), we find that $E_H$, and hence $R_\ell$, does not contain the edges $u_1x_1,\ldots,u_1x_p$. We conclude that $R_\ell\cap B_\ell'=\emptyset$; a contradiction. Lemma \[lem:trimming\] shows that the trimming rule is safe, that is, either we have found that $(G,k,\emptyset,\emptyset)$ has no solution, or we may continue with the instance $(G_\ell,k,R_\ell,B_\ell)$ instead. Suppose the latter case holds. Recall that $(G_\ell,k,R_\ell,B_\ell)$ has no set $S$ as specified in step 1, as otherwise we would have applied the trimming rule once more. To simplify notation, we write $(G,k,R,B)=(G_\ell,k,R_\ell,B_\ell)$. We need the following properties that hold for every solution of $(G,k,R,B)$ (should $(G,k,R,B)$ have a solution).

\[lem:trimmed\] Any solution $H$ of $(G,k,R,B)$ satisfies the following properties:

- the neighbor of every pendant vertex of $H$ has at least two non-pendant neighbors in $H$;

- only edges of $G$ incident to pendant vertices of $H$ can be in $R$ or $B$;

- if a pendant vertex $v$ of $H$ is incident to an edge of $R$ in $G$, then all other edges of $G$ that are incident to $v$ are in $B$.

In order to show i), suppose that $H$ is a solution of an instance $(G,k,R,B)$ such that $H$ contains a pendant vertex $u$ adjacent to a vertex $v$. If $d_H(v)=1$, then $H$ is isomorphic to $K_2$, which is not possible as $(G,k,R,B)$ has no solution that is a tree. Hence $d_H(v)\geq 2$ and $v$ has at least one neighbor other than $u$. If all neighbors of $v$ are pendant, then $H$ is a tree; a contradiction. Hence, $v$ has at least one non-pendant neighbor. If $v$ has a unique non-pendant neighbor $w$, then by Lemma \[lem:leaves-one\], $G-\{v,w\}$ contains a connected component induced by the pendant neighbors of $v$ whose vertices together with $v$ and $w$ form a clique in $G$.
Hence, we can apply the trimming rule on $S=\{v,w\}$, which is a contradiction. Properties ii) and iii) follow from the construction of $R$ and $B$ in steps 4 and 5 of the trimming rule. We now exhaustively apply the following rule on $(G,k,R,B)$. This rule consists of four steps that must be performed in increasing order.

[**Path Reduction Rule**]{}

1. Find an $F$-triple $S=\{u_1,u_2,u_3\}$.

2. Set $R'=\{u_2u_1,u_2u_3,\ldots,u_2u_r\}$ and $B'=\{x_1u_2,\ldots,x_pu_2\}\cup\{y_1u_2,\ldots,y_qu_2\}\cup\{u_1u_3,\ldots,u_1u_r\}\cup\{u_3u_4,\ldots,u_3u_r\}$ (note that the set $\{u_3u_4,\ldots,u_3u_r\}=\emptyset$ if $r=3$).

3. If $R\cap B'\neq\emptyset$ or $R'\cap B\neq\emptyset$, then stop and return a no-answer.

4. Delete $u_2,u_4,\ldots,u_r$ from $G$. Delete all edges incident to $u_2,u_4,\ldots,u_r$ from $R$ and $B$. If $u_1u_3\in B$, then delete $u_1u_3$ from $B$. Add $u_1u_3$ to $R$. Add $x_1u_3,\ldots,x_pu_3$ and $y_1u_1,\ldots,y_qu_1$ to $G$. Put these edges in $B$.

Exhaustively applying the path reduction rule yields a sequence of instances $(G_0,k,R_0,B_0),\ldots, (G_\ell,k,R_\ell,B_\ell)$ of [Tree $+\;k$ Edges Square Root with Labels]{} for some integer $\ell\geq 0$, where $(G_0,k,R_0,B_0)=(G,k,R,B)$ and where $(G_\ell,k,R_\ell,B_\ell)$ is an instance for which we have either returned a no-answer (in step 3) or for which there does not exist an $F$-triple $S$. For $0\leq i\leq \ell$ we denote the sets $R'$ and $B'$ constructed in the $(i+1)$th call of the path reduction rule by $R_i'$ and $B'_i$, respectively. We need the following lemma, which we will use in several places. \[l-back\] Let $1\leq i\leq \ell$ and $\{u_1,u_2,u_3\}$ be the $F$-triple that yielded instance $(G_i,k,R_i,B_i)$.
If $H_i$ is a solution for $(G_i,k,R_i,B_i)$, then $u_1u_3\in E_{H_i}$ and the graph $H_{i-1}$ obtained from $H_i$ by removing the edge $u_1u_3$ and by adding $u_2$ and vertices $u_4,\ldots,u_r$ (if $r\geq 4$) together with edges $u_2u_1,u_2u_3,\ldots,u_2u_r$ is a solution for $(G_{i-1},k,R_{i-1}, B_{i-1})$. We find that $u_1u_3$ is an edge in $H_i$, because $u_1u_3\in R_i$ due to step 4 of the last call of the path reduction rule. The graph $H_{i-1}$ is not only a square root of $G_{i-1}$ but even a solution for $(G_{i-1},k,R_{i-1},B_{i-1})$ for the following reasons. First, $H_{i-1}$ has at most $|V_{G_{i-1}}|-1+k$ edges. Second, $H_{i-1}$ contains no edge of $B_{i-1}$ as the added edges $u_2u_1,u_2u_3,\ldots,u_2u_r$ are all in $R_{i-1}'$ and $R_{i-1}'\cap B_{i-1}=\emptyset$. Third, $H_{i-1}$ contains all the edges of $R_{i-1}$, which can be seen as follows. Suppose that $H_{i-1}$ misses an edge of $R_{i-1}$. Then this edge must be in $\{x_1u_2,\ldots,x_pu_2\}\cup\{y_1u_2,\ldots,y_qu_2\}\cup\{u_1u_3,\ldots,u_1u_r\}\cup\{u_3u_4,\ldots,u_3u_r\}$. However, this set is equal to $B'_{i-1}$ and $R_{i-1}\cap B_{i-1}'=\emptyset$; a contradiction. We also need the following lemma about true twins in $G_0,\ldots,G_\ell$ that we will use later as well. \[l-twins\] Let $1\leq i\leq \ell$ and $\{u_1,u_2,u_3\}$ be the $F$-triple that yielded instance $(G_i,k,R_i,B_i)$. Then any true twins $v,w\in V_{G_i}\setminus\{u_1,u_3\}$ in $G_i$ are true twins in $G_{i-1}$. Suppose that $G_i$ has true twins $v,w\in V_{G_i}\setminus\{u_1,u_3\}$ that are not true twins in $G_{i-1}$. Consider the corresponding $F$-graph that yielded the instance $(G_i,k,R_i,B_i)$. Because $v,w$ are not true twins in $G_{i-1}$, the neighborhood of $v$ or $w$ is modified by the path reduction rule. We may assume without loss of generality that the neighborhood of $v$ is changed.
Note that neither $v=u_2$ nor $v\in \{u_4,\ldots,u_r\}$ if $r\geq 4$, because these vertices have been removed in step 4 of the path reduction rule when $G_i$ was constructed. As $v\notin \{u_1,u_3\}$ either, we find that $v\in\{x_1,\ldots,x_p\}\cup\{y_1,\ldots,y_q\}$. By symmetry we may assume that $v\in\{x_1,\ldots,x_p\}$. We observe that $v$ is adjacent to both $u_1$ and $u_3$ in $G_i$. Because the neighborhood of each $x_i$ is modified in the same way (namely by the removal of $u_2$ and the addition of $u_3$), we find that $w\notin\{x_1,\ldots,x_p\}$. Because $v$ and $w$ are true twins, they are adjacent. Because no two vertices $x_i$ and $y_j$ are adjacent in $G_i$, we then obtain that $w\notin\{y_1,\ldots,y_q\}$. We conclude that the neighborhood of $w$ is not modified by the application of the path reduction rule. Because $v$ is adjacent to $u_1$ and $u_3$ in $G_i$ and $v,w$ are true twins in $G_i$, this means that $w$ is adjacent to $u_1$ and $u_3$ in $G_{i-1}$ already. However, by definition of an $F$-graph, $N_{G_{i-1}}(u_1)\cap N_{G_{i-1}}(u_3)=\{u_2,u_4,\ldots,u_r\}$, and $u_2,u_4,\ldots,u_r$ are not in $G_i$ as they were removed by the path reduction rule; a contradiction. The next lemma is the analog of Lemma \[lem:trimming\] for the path reduction rule. \[l-reduction\] The instance $(G_\ell,k,R_\ell,B_\ell)$ has no solution that is a tree, and $G_\ell$ is $2$-connected. Moreover, $(G_\ell,k,R_\ell,B_\ell)$ has a solution if and only if $(G_0,k,R_0,B_0)$ has a solution. If the path reduction rule returned a no-answer for $(G_\ell,k,R_\ell,B_\ell)$, then $(G_0,k,R_0,B_0)$ has no solution. For $0\leq i\leq \ell$, we use induction to show that the graph $G_i$ is $2$-connected and that $(G_i,k,R_i,B_i)$ has no solution that is a tree. Moreover, for all $1\leq i\leq \ell$, we show that $(G_i,k,R_i,B_i)$ has a solution if and only if $(G_{i-1},k,R_{i-1},B_{i-1})$ has a solution.
Finally, we prove that if the path reduction rule returned a no-answer for $(G_\ell,k,R_\ell,B_\ell)$, then $(G_0,k,R_0,B_0)$ has no solution. If $i=0$, then $G_i$ is 2-connected and $(G_0,k,R_0,B_0)$ has no solution that is a tree by Lemma \[lem:trimming\]. Now suppose that $1\leq i\leq \ell$. By our induction hypothesis, we may assume that $G_{i-1}$ is 2-connected and that $(G_{i-1},k,R_{i-1},B_{i-1})$ has no solution that is a tree. Because the path reduction rule applied on $(G_{i-1},k,R_{i-1},B_{i-1})$ yielded a new instance $(G_i,k,R_i,B_i)$, the graph $G_{i-1}$ has an $F$-triple $S=\{u_1,u_2,u_3\}$. Because $G_{i-1}$ is 2-connected, $G_i$ is 2-connected; in particular note that $p\geq 1$ and $q\geq 1$ by definition of an $F$-triple. First suppose that $H_{i-1}$ is a solution for $(G_{i-1},k,R_{i-1},B_{i-1})$. We claim that $H_{i-1}$ contains no edge $u_su_t\in R_{i-1}$ with $4\leq s<t\leq r$. We prove this claim by contradiction: let $u_su_t\in E_{H_{i-1}}\cap R_{i-1}$ for some $4\leq s<t\leq r$. Suppose that $u_su_t\in R_0$. We may apply Lemma \[lem:trimmed\] as $(G_0,k,R_0,B_0)$ has a solution $H_0$; if $i\geq 1$ this fact follows from the induction hypothesis. By Lemma \[lem:trimmed\] we find that either $u_s$ is a pendant vertex in $H_0$ with $u_t$ as its (unique) neighbor, or the other way around. We may assume without loss of generality that the first case holds, that is, $u_s$ is pendant in $H_0$ and has $u_t$ as its neighbor. Note that $N_{G_0}[u_s]\subseteq N_{G_0}[u_t]$. We claim that $N_{G_h}[u_s]\subseteq N_{G_h}[u_t]$ for all $0\leq h\leq i-1$. To obtain a contradiction, suppose not. Then at some point $u_s$ will be made adjacent to a vertex $v$ not adjacent to $u_t$ for the first time in step 4 of some call of the path reduction rule. Let $S=\{u_1',u_2',u_3'\}$ be the corresponding $F$-triple.
Then we may assume without loss of generality that either $u_s\notin \{u_1',u_2',u_3'\}$ is adjacent to $u_1'$ and $u_2'$ but not to $u_3'=v$, or that $v\notin \{u_1',u_2',u_3'\}$ is adjacent to $u_1',u_2'$ but not to $u_3'=u_s$. In the first case, $u_t$ is not in $\{u_1',u_2'\}$, but must be adjacent to $u_1'$ and $u_2'$ by our assumption, and hence, the edge $u_tu_3'=u_tv$ will be added in the same step; a contradiction. In the second case, as $u_s$ is adjacent to $u_1'$ and $u_2'$, also $u_t$ is adjacent to $u_1'$ and $u_2'$ (again by our assumption). Because $u_t$ does not get removed in this step (as $u_t$ belongs to $G_{i-1}$), this violates the definition of an $F$-triple. We conclude that $N_{G_h}[u_s]\subseteq N_{G_h}[u_t]$ for all $0\leq h\leq i-1$. We first assume that $u_su_2$ is an edge in $G_0$. Step 4 of the path reduction rule only moves an edge $u_1'u_3'$ from a $B$-set to an $R$-set if $u_1'$ and $u_3'$ are outer vertices of an $F$-triple. In that case all their common neighbors will be removed from the graph by the definition of an $F$-triple. Because $N_{G_h}[u_s]\subseteq N_{G_h}[u_t]$ for all $0\leq h\leq i-1$, we find that $u_t$ is a common neighbor of $u_2$ and $u_s$ in $G_h$ for all $0\leq h\leq i-1$; in particular $u_t$ belongs to $G_{i-1}$. Hence, the edge $u_su_2$ will never be moved from $B_h$ to $R_h$ in step 4 of the $(h+1)$th call of the path reduction rule for some $0\leq h\leq i-1$. If $u_su_2$ is not an edge in $G_0$, then at some point it will be an edge due to step 4 of some call of the path reduction rule, say the $(h^*+1)$th call for some $0\leq h^*\leq i-1$. In the same step, $u_su_2$ will be placed in the set $B_{h^*}$. Then, again because $N_{G_h}[u_s]\subseteq N_{G_h}[u_t]$ for all $0\leq h\leq i-1$, the edge $u_su_2$ will never be moved from $B_{h^*}$ to a set $R_h$ for some $h^*< h\leq i-1$. Hence, in both cases, we find that $u_su_2\in B_{i-1}$ even if $i\geq 1$. 
As $u_su_2\in R'_{i-1}$ (due to step 2 in the $i$th call), we find that $R'_{i-1}\cap B_{i-1}\neq \emptyset$. Hence, the path reduction rule would return a no-answer for $(G_{i-1},k,R_{i-1},B_{i-1})$ in step 3, and consequently the instance $(G_i,k,R_i,B_i)$ would not exist; a contradiction. Now suppose that $u_su_t$ was placed in some set $R_h$ for some $1\leq h\leq i-1$. Properties ii) and iii) of an $F$-graph together with step 4 of the path reduction rule imply the following: if $u_s$ and $u_t$ form a triangle with some vertex $z$, then $u_sz\in B_h$ or $u_tz\in B_h$. Moreover, in the case in which $z\in V_{G_{i-1}}$, this property is not violated by any subsequent intermediate calls of the path reduction rule. Hence, if $u_su_t\in R_{i-1}$, then $u_su_2\in B_{i-1}$ or $u_tu_2\in B_{i-1}$, and as $\{u_su_2,u_tu_2\}\subseteq R'_{i-1}$ as well, we derive the same contradiction as before. We conclude that $H_{i-1}$ contains no edge $u_su_t\in R_{i-1}$ with $4\leq s<t\leq r$. Also, by Lemma \[lem:path-two\], we may assume without loss of generality that $H_{i-1}$ contains no edge $u_su_t\notin R_{i-1}$ with $4\leq s<t\leq r$; otherwise we could remove such an edge from $H_{i-1}$, and the resulting graph would still be a solution for $(G_{i-1},k,R_{i-1},B_{i-1})$. Consequently, $u_4,\ldots,u_r$ are pendant vertices of $H_{i-1}$. This means that the graph $H$ obtained from $H_{i-1}$ by deleting vertices $u_2,u_4,\ldots,u_r$ and adding the edge $u_1u_3$ is not only a square root of $G_i$ with at most $|V_{G_i}|-1+k$ edges but even a solution for $(G_i,k,R_i,B_i)$. Now suppose that $H_i$ is a solution for $(G_i,k,R_i,B_i)$. By Lemma \[l-back\], the graph $H$ obtained from $H_i$ by removing the edge $u_1u_3$ and by adding $u_2$ and vertices $u_4,\ldots,u_r$ (if $r\geq 4$) together with edges $u_2u_1,u_2u_3,\ldots,u_2u_r$ is a solution for $(G_{i-1},k,R_{i-1}, B_{i-1})$. 
We observe that $H_i$ cannot be a tree, as this would imply that $H$ is a tree, which is not possible as $(G_{i-1},k,R_{i-1},B_{i-1})$ does not have such a solution by the induction hypothesis. Finally, suppose that the path reduction rule returned a no-answer for $(G_\ell,k,R_\ell,B_\ell)$. We must show that $(G_0,k,R_0,B_0)$ has no solution. Due to the above, this comes down to showing that $(G_\ell,k,R_\ell,B_\ell)$ has no solution. The only step in which the path reduction rule can return a no-answer is step 3, meaning that $G_\ell$ has an $F$-triple $S=\{u_1,u_2,u_3\}$ such that $R_\ell\cap B_\ell'\neq\emptyset$ or $R_\ell'\cap B_\ell\neq\emptyset$. In order to obtain a contradiction, suppose that $(G_\ell,k,R_\ell,B_\ell)$ has a solution $H$. By Lemma \[lem:path-two\], the graph $F$ shown in Figure \[fig:path\] is a subgraph of $H$ such that $d_H(u_2)=r-1$, $\{x_1,\ldots,x_p\}=N_H(u_1)\setminus\{u_2\}$ and $\{y_1,\ldots,y_q\}=N_H(u_3)\setminus\{u_2\}$. Consequently, $R'_\ell=\{u_2u_1,u_2u_3,\ldots,u_2u_r\}\subseteq E_H$, and hence $R'_\ell\cap B_\ell=\emptyset$, and moreover, $E_H\cap B'_\ell= E_H\cap (\{x_1u_2,\ldots,x_pu_2\}\cup\{y_1u_2,\ldots,y_qu_2\}\cup\{u_1u_3,\ldots,u_1u_r\}\cup\{u_3u_4,\ldots,u_3u_r\})=\emptyset$, and hence $R_\ell\cap B'_\ell=\emptyset$; a contradiction. Lemma \[l-reduction\] shows that the path reduction rule is safe, that is, either we have found that $(G_0,k,R_0,B_0)$ has no solution, or we may continue with the instance $(G_\ell,k,R_\ell,B_\ell)$ instead. Suppose the latter case holds. Recall that $(G_\ell,k,R_\ell,B_\ell)$ has no $F$-triple, as otherwise we would have applied the path reduction rule once more. Also recall that $R_0$ is the set $R$ as it was immediately after the exhaustive application of the trimming rule. We write $R^1=R_0\cap R_\ell$ and $R^2=R_\ell\setminus R_0$. To simplify notation, from now on, we also write $(G,k,R,B)=(G_\ell,k,R_\ell,B_\ell)$; note that $R=R^1\cup R^2$.
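For intuition, the bookkeeping in one call of the path reduction rule (steps 2–4) can be sketched in code. This is an illustrative sketch only, with a representation and helper names of our own choosing (an adjacency dictionary for $G$, edges stored as frozensets, and the $F$-triple together with $u_4,\ldots,u_r$, $x_1,\ldots,x_p$, $y_1,\ldots,y_q$ assumed to be identified already); it is not an implementation from the paper.

```python
def apply_path_reduction(G, R, B, u, xs, ys):
    """One call of the path reduction rule (steps 2-4), sketched.

    G      : dict mapping each vertex to the set of its neighbours
    R, B   : sets of edges (frozensets) that must / must not be in a solution
    u      : [u1, u2, u3, ..., ur], the vertices u_1, ..., u_r of the F-graph
    xs, ys : the private neighbours x_1,...,x_p of u1 and y_1,...,y_q of u3

    Returns the updated (G, R, B), or None for a no-answer (step 3).
    """
    e = lambda a, b: frozenset((a, b))
    u1, u2, u3, rest = u[0], u[1], u[2], u[3:]

    # Step 2: labels forced by the F-graph structure.
    R_new = {e(u2, v) for v in [u1, u3] + rest}
    B_new = ({e(x, u2) for x in xs} | {e(y, u2) for y in ys}
             | {e(u1, v) for v in [u3] + rest} | {e(u3, v) for v in rest})

    # Step 3: conflicting labels mean the instance has no solution.
    if R & B_new or R_new & B:
        return None

    # Step 4: delete u2, u4, ..., ur together with their incident edges.
    for v in [u2] + rest:
        for w in G.pop(v):
            if w in G:
                G[w].discard(v)
        R = {f for f in R if v not in f}
        B = {f for f in B if v not in f}
    B.discard(e(u1, u3))            # u1u3 moves from B (if present) to R
    R.add(e(u1, u3))
    for x in xs:                    # new edges of G, all placed in B
        G[x].add(u3); G[u3].add(x); B.add(e(x, u3))
    for y in ys:
        G[y].add(u1); G[u1].add(y); B.add(e(y, u1))
    return G, R, B
```

On the square of the path $x_1u_1u_2u_3y_1$, for example, the call removes $u_2$, forces the edge $u_1u_3$, and forbids $x_1u_3$ and $y_1u_1$, exactly as step 4 prescribes.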
We need the following properties that hold for every solution of $(G,k,R,B)$ (should $(G,k,R,B)$ have a solution). We call an induced cycle $C$ in a graph $H$ [*semi-pendant*]{} if all but at most one of the vertices of $C$ are only adjacent to pendant vertices of $H$ and their neighbors on $C$. Similarly, we call an induced path $P$ in a graph $H$ [*semi-pendant*]{} if all internal vertices of $P$ are only adjacent to pendant vertices of $H$ and their neighbors on $P$.

\[lem:reduced\] Any solution $H$ of $(G,k,R,B)$ has the following properties:

- the neighbor of every pendant vertex of $H$ has at least two non-pendant neighbors in $H$;

- only edges of $G$ incident to pendant vertices of $H$ can be in $R^1$, and if a pendant vertex $v$ of $H$ is incident to an edge of $R$, then all other edges of $G$ that are incident to $v$ are in $B$;

- no edge of $R^2$ is incident to a pendant vertex of $H$;

- the length of every semi-pendant path in $H$ is at most $5$;

- the length of every semi-pendant cycle in $H$ is at most $6$.

We prove that property i) holds by contradiction. Suppose that $H$ contains a vertex $v$ that is the (unique) neighbor of a pendant vertex $u$, such that $v$ has at most one non-pendant neighbor in $H$. If all neighbors of $v$ in $H$ are pendant, then $H$ is a tree. However, this would contradict Lemma \[l-reduction\]. Hence, $v$ has a unique non-pendant neighbor in $H$. Recall that $H$ is a solution for $(G_\ell,k,R_\ell,B_\ell)$. Note that if $v$ is an outer vertex of the corresponding $F$-triple, then Lemma \[l-back\] tells us that $(G_{\ell-1},k,R_{\ell-1},B_{\ell-1})$ has a solution $H_{\ell-1}$ in which $v$ is a non-pendant vertex that has at least one pendant neighbor and that has a unique non-pendant neighbor. Hence, by applying Lemma \[l-back\] inductively, we obtain that $(G_0,k,R_0,B_0)$ has a solution $H_0$ containing a vertex with exactly the same property. This contradicts Lemma \[lem:trimmed\] i). We conclude that property i) holds.
We now show property ii). By Lemma \[lem:trimmed\], every edge of $G_0$ that is in $R_0$ is incident to a pendant vertex $u$ of any solution for $(G_0,k,R_0,B_0)$ such that all the other edges of $u$ belong to $B_0$. We observe that, when applying the path reduction rule, $u$ will neither be in an $F$-triple nor removed from the graph, but $u$ could be a vertex of $x$-type or $y$-type. Hence, the path reduction rule may change the neighbors of $u$, but if so, any new edges incident to it will be placed in $B$ (and stay in $B$ afterward). Consequently, $u$ must be a pendant vertex in any solution for $(G,k,R,B)$ $(=(G_\ell,k,R_\ell,B_\ell))$ as well. We conclude that ii) holds. We now prove property iii). Recall that we applied the path reduction rule only after first applying the trimming rule exhaustively. When we apply the path reduction rule on an $F$-triple $\{u_1,u_2,u_3\}$, then afterward $u_1$ and $u_3$ have degree at least 2 in any solution for the resulting instance, which can be seen as follows. The edge $u_1u_3$ is added to $R^2\subseteq R$, and hence belongs to any solution. We also have that $u_1$ is adjacent to $x_1$ in $G$, whereas the edge $u_3x_1$ belongs to $B$. This means that $u_1$ cannot be made adjacent to $x_1$ via the path $u_1u_3x_1$ in $H$, and as such must have at least one other neighbor in $H$. For the same reason $u_3$, which is adjacent to $y_1$ in $G$ whereas $u_1y_1\in B$, must have another neighbor in $H$ besides $u_1$. As a consequence, any edge in $R^2$ cannot be incident to a pendant vertex of $H$, that is, we have shown property iii). We now prove property iv). Let $P$ be a semi-pendant path of length at least 6 in $H$. By definition, $P$ is an induced path. Hence, we can take any three consecutive vertices of $P$ as the three vertices $u_1,u_2,u_3$ in Lemma \[lem:path-one\]. By applying this lemma, we find that $G$ is an $F$-graph, implying that we could have applied the path reduction rule once more; a contradiction.
Property v) can be proven by using the same arguments. We need the following lemma that holds in case a solution exists for $(G,k,R,B)$. \[l-hstarbounded\] The number of non-pendant vertices of any solution for $(G,k,R,B)$ is at most $15k-14$. Suppose $(G,k,R,B)$ has a solution $H$. Let $Z$ be the set of pendant vertices of $H$, and let $H^*=H-Z$. We need to show that $V_{H^*}$ has at most $15k-14$ vertices. Let $V'$ be the set of vertices that have degree at least $3$ in $H^*$, and let $V''$ be the set of vertices of degree 2 in $H^*$. By Lemma \[lem:reduced\] i) every vertex of $H$ that is adjacent to a pendant vertex of $H$ has degree at least $2$ in $H^*$. Hence, $H^*$ has no vertices of degree at most 1, that is, $V_{H^*}=V'\cup V''$. Because $H$ is a solution for $(G,k,R,B)$, we have that $|E_H|\leq |V_G|-1+k=|V_H|-1+k$. This means that $$\begin{array}{lcl} |V'|+|V''|-1+k &= &|V_H|-|Z|-1+k\\[1mm] &\geq &|E_H|-|Z|\\[1mm] &= &|E_{H^*}|\\[1mm] &= &\frac{1}{2}\sum_{v}d_{H^*}(v)\\[2mm] &\geq &\frac{1}{2}(3|V'|+2|V''|). \end{array}$$ Hence, $|V'|\leq 2k-2$. Let $\alpha$ be the number of paths in $H^*$ that only have internal vertices of degree 2; note that by Lemma \[lem:reduced\] iv) the length of such paths is at most 5. Let $\beta$ be the number of cycles in $H^*$ that have exactly one vertex of degree at least 3; note that by Lemma \[lem:reduced\] v) the length of such cycles is at most 6. Because $|E_{H^*}|\leq |V'|+|V''|-1+k$, we find that $\alpha+\beta\leq 2k-2-1+k=3k-3$ and that $\beta\leq k$. Hence, $|V''|\leq 5k+4((3k-3)-k)=13k-12$. Consequently, $H^*$ has at most $2k-2+13k-12=15k-14$ vertices. We are now ready to state our final reduction rule. The goal of this rule is to apply it once in order to deduce either that $(G,k,R,B)$ has no solution or to derive a new instance of bounded size. 
A [*true twin partition*]{} of a set of vertices $S$ of a graph $G$ is a partition $S_1,\ldots,S_t$ of $S$ such that for all $u,v\in S$ and all $1\leq i\leq t$ we have that $u$ and $v$ are in $S_i$ if and only if $u$ and $v$ are true twins in $G$. If $S$ consists of simplicial vertices only, we observe that there is no edge between any two vertices that belong to different sets $S_i$ and $S_j$.

[**Simplicial Vertex Reduction Rule**]{}

1. Find the set $S$ of all simplicial vertices of $G$ that are not incident to the edges of $R^2$, and moreover, that have all but one of their incident edges in $B$ should they be incident to an edge of $R^1$.

2. If $|V_G\setminus S|>15k-14$, then stop and return a no-answer.

3. Construct the true twin partition $S_1,\ldots,S_t$ of $S$. Let $X_1,\ldots,X_t$ be the sets of vertices incident to an edge of $R^1$ in $S_1,\ldots,S_t$, respectively.

4. If $t> 15k-14$, then stop and return a no-answer.

5. If there exists a set $X_i$ such that the edges of $R^1$ incident to a vertex of $X_i$ have no common end-vertex, then stop and return a no-answer.

6. If there exists a set $S_i$ such that $|S_i\setminus X_i|\geq 15k-13$ and such that there are three vertices $u\in X_i$, $v\in N_G(u)$ and $x\in S_i\setminus X_i$ with $uv\in R^1$ and $xv\in B$, then stop and return a no-answer.

7. For $i=1,\ldots,t$, if $|X_i|>1$, then take $|X_i|-1$ arbitrary vertices of $X_i$ and delete them both from $G$ and from $S_i$; also delete the edges of $R$ and $B$ that are incident to these vertices.

8. For $i=1,\ldots,t$, if $|S_i|>15k-13$, then delete $|S_i|-15k+13$ arbitrary vertices of $S_i\setminus X_i$ from $G$; also delete the edges of $R$ and $B$ that are incident to these vertices.

Applying the simplicial vertex reduction rule on $(G,k,R,B)$ either yields a no-answer (in step 2, 4, 5 or 6) or a new instance $(\hat{G},k,\hat{R},\hat{B})$ of [Tree $+\;k$ Edges Square Root with Labels]{}.
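Steps 1 and 3 of the rule hinge on detecting simplicial vertices and computing the true twin partition. Both are elementary; the following minimal sketch (helper names and the adjacency-dictionary representation are our own, not from the paper) illustrates them, ignoring the additional $R^1$/$R^2$/$B$ side conditions of step 1.

```python
from itertools import combinations

def simplicial_vertices(G):
    """Vertices whose neighbourhood induces a clique in G."""
    return {v for v in G
            if all(b in G[a] for a, b in combinations(G[v], 2))}

def true_twin_partition(G, S):
    """Partition S into classes of true twins: u and v are true twins
    iff they have the same closed neighbourhood N[u] = N[v]."""
    classes = {}
    for v in S:
        key = frozenset(G[v] | {v})      # closed neighbourhood N[v]
        classes.setdefault(key, set()).add(v)
    return list(classes.values())
```

Grouping by closed neighbourhood immediately yields the partition, and, as observed above, when $S$ consists of simplicial vertices no edge of $G$ joins two different classes.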
We will show that if $\hat{G}$ exists, then its size is bounded by a quadratic function of $k$. For doing so we first need the following two lemmas. \[l-claim1\] For $i=1,\ldots,t$, no vertex of $S_i\setminus X_i$ is incident to an edge in $R$. By definition of $S_i$, no vertex of $S_i$, and hence no vertex of $S_i\setminus X_i$, is incident to an edge in $R^2$. By definition of $X_i$, no vertex in $S_i\setminus X_i$ is incident to an edge in $R^1$. Because $R=R^1\cup R^2$, we have proven Lemma \[l-claim1\]. For $x\in V_G$, we let $B(x)$ denote the set of edges of $B$ incident to $x$. \[l-claim2\] $B(x)=B(y)$ for all $x,y\in S_i\setminus X_i$. Let $x,y\in S_i\setminus X_i$ and let $xz\in B$ for some $z\in V_G$. We first show that $y\neq z$ and we then prove that $yz\in B$. In order to obtain a contradiction, assume that $y=z$. Then $xy$ was included in $B$ either by an application of the trimming rule or by an application of the path reduction rule. In both cases, $xy$ was also made adjacent to an edge of $R$. This edge may be deleted later on. Deleting an edge $e$ from $R$ happens either in step 6 of the trimming rule or in step 4 of the path reduction rule. However, both rules add a new edge $e'$ to $R$ that is adjacent to all the edges that were previously adjacent to $e$ and that were not deleted by the two rules. Hence, $xy$ is still adjacent to an edge of $R$ in $G$. In other words, $x$ or $y$ is incident to an edge of $R$ in $G$. Because $x$ and $y$ belong to $S_i\setminus X_i$, this is not possible due to Lemma \[l-claim1\]. Hence, $y\neq z$. In order to show that $yz\in B$, we again use the observation that whenever the trimming or path reduction rule deletes an edge $e\in R$, the rule adds a new edge $e'$ in $R$ such that $e'$ is adjacent to all the edges $uv$ that were previously adjacent to $e$ and that were not deleted by the rules. 
In this case we make the extra observation that if a vertex $u$ is an end-vertex of $e$ that is not deleted by the rule, then $u$ is an end-vertex of $e'$. Because the vertices in $S_i\setminus X_i$ are not incident to any edges in $R$ by Lemma \[l-claim1\], we find that $z$ was incident to an edge of $R$ after applying the trimming rule or path reduction rule that added the edge $xz$ to $B$. We also observe that an edge in $B$ is only deleted from $B$ if one of its end-vertices is deleted unless it is added to $R$ by the path reduction rule. This means that we can argue as follows. First suppose that $xz$ was added to $B$ due to an application of the trimming rule. If $y$ was adjacent to $z$ when the rule was applied, then $yz$ was included in $B$ as well by the definition of this rule. If $y$ was made adjacent to $z$ by the path reduction rule afterwards, then $yz\in B$ by the definition of the path reduction rule. Now suppose that $xz$ was added to $B$ due to an application of the path reduction rule. By definition of this rule, $x$ and $z$ were not adjacent to each other before. Suppose that $yz\notin B$. Then $xy,yz$ are edges of the original input graph of the [Tree $+\;k$ Edges Square Root]{} problem. Because $xz$ was not such an edge, $x$ and $y$ only became true twins due to an application of the path reduction rule. Then, by Lemma \[l-twins\], $x$ or $y$ must be an outer vertex of some $F$-triple, that is, at least one of these two vertices must be incident to an edge of $R$. Then there is an edge of $R$ incident to at least one of these two vertices after the exhaustive application of the path reduction rule. Because $x$ and $y$ are in $S_i\setminus X_i$, this is a contradiction to Lemma \[l-claim1\]. Hence, $yz\in B$. This completes the proof of Lemma \[l-claim2\]. We prove the following lemma, which is our final lemma; in particular note that if $\hat{G}$ exists then its size is bounded by a quadratic function of $k$. 
\[lem:final\] If the simplicial vertex reduction rule returned a no-answer for $(G,k,R,B)$, then $(G,k,R,B)$ has no solution. Otherwise, the new instance $(\hat{G},k,\hat{R},\hat{B})$ has a solution if and only if $(G,k,R,B)$ has a solution. Moreover, $\hat{G}$ has at most $(15k-14)(15k-12)$ vertices. We start by showing that $(G,k,R,B)$ has no solution if the simplicial vertex reduction rule returned a no-answer for $(G,k,R,B)$. This can happen in step 2, 4, 5 or 6, each of which we discuss in a separate case. [**Case 1.**]{} The no-answer is given in step 2. Suppose $(G,k,R,B)$ has a solution $H$. We will prove that $|V_G\setminus S|\leq 15k-14$, which means that returning a no-answer is correct if $|V_G\setminus S|>15k-14$. Let $Z$ be the set of pendant vertices of $H$, and let $H^*=H-Z$. By Observation \[obs:leaves\] i), vertices in $Z$ are simplicial vertices of $G$. Then, by Lemma \[lem:reduced\] ii) and iii), we find that $Z\subseteq S$. Hence, $|V_G\setminus S|=|V_G|-|S|=|V_H|- |S|\leq |V_H|-|Z|=|V_{H^*}|\leq 15k-14$, where the last inequality follows from Lemma \[l-hstarbounded\]. [**Case 2.**]{} The no-answer is given in step 4. Suppose $(G,k,R,B)$ has a solution $H$. We will prove that $t \leq 15k-14$, which means that returning a no-answer is correct if $t>15k-14$. Let $H^*$ be the graph obtained from $H$ after removing all pendant vertices of $H$. Then $|V_{H^*}|\leq 15k-14$ by Lemma \[l-hstarbounded\]. If a set $S_i$ contains a pendant vertex $u$ of $H$, then $u$ is adjacent to a vertex $v$ of $H^*$. Then, by Observation \[obs:leaves\] ii), $v$ is not adjacent to pendant vertices of $H$ in any $S_j$ with $j\neq i$. Otherwise $S_i$ consists of non-pendant vertices of $H$, that is, vertices of $H^*$; being nonempty $S_i$ contains at least one vertex of $H^*$. We conclude that every set in the true twin partition of $S$ corresponds to at least one unique vertex of $H^*$. 
If their total number $t>15k-14$, this means that $|V_{H^*}|>15k-14$; a contradiction. Hence, $t\leq 15k-14$, as we had to show. [**Case 3.**]{} The no-answer is given in step 5. Suppose that $(G,k,R,B)$ has a solution $H$. We will prove that the edges of $R^1$ incident to a set $X_i$ have a common end-vertex for $i=1,\ldots,t$, which means that returning a no-answer is correct should this not be the case. In order to obtain a contradiction, suppose that some set $X_i$ contains two vertices $u$ and $v$ that are incident to edges $uu', vv'\in R^1$ with $u'\neq v'$. By Lemma \[lem:reduced\] ii), we find that $uu'$ and $vv'$ are incident to pendant vertices of $H$. By Observation \[obs:leaves\] iii), these pendant vertices are not adjacent in $G$. However, from the definition of $S_i$ we deduce that $u,v,u',v'$ are mutually adjacent; a contradiction. This completes Case 3. [**Case 4.**]{} The no-answer is given in step 6. Then there exists a set $S_i$ such that $|S_i\setminus X_i|\geq 15k-13$ and such that there are three vertices $u\in X_i$, $v\in N_G(u)$ and $x\in S_i\setminus X_i$ with $uv\in R^1$ and $xv\in B$. In order to obtain a contradiction, assume that $(G,k,R,B)$ has a solution $H$. By Lemma \[l-hstarbounded\], $H$ has at most $15k-14$ non-pendant vertices. Because $|S_i\setminus X_i|\geq 15k-13$, this means that at least one vertex $y\in S_i\setminus X_i$ is a pendant vertex of $H$. Also, $u\in X_i$ is a pendant vertex of $H$ that has $v$ as its unique neighbor, because $uv\in R^1$ and all other edges incident to $u$ belong to $B$ by definition of $S$. If $y=x$, then $v$ is not adjacent to $y$ in $H$, because $xv\in B$. If $y\neq x$, then $v$ is not adjacent to $y$ in $H$ either, because $xv\in B$ and $B(x)=B(y)$ (due to Lemma \[l-claim2\]) imply $yv\in B$. We conclude that $u$ and $y$ are pendant vertices of $H$ adjacent to different vertices. However, from Observation \[obs:leaves\] iii) we derive that $u$ and $y$ are not adjacent in $G$. 
This is a contradiction, because $u$ and $y$ are true twins in $G$ by definition of $S_i$. This completes Case 4. From now on assume that the simplicial vertex reduction rule did not return a no-answer after performing step 6. Let $(G',k,R',B')$ be the instance created after applying step 7 to some set $X_i=\{x_1,\ldots,x_{\ell}\}$ with $\ell\geq 2$, that is, $G'$ is the graph obtained from $G$ after deleting $x_2,\ldots,x_\ell$, whereas $R'$ and $B'$ are the sets obtained from $R$ and $B$, respectively, after deleting edges incident to $x_2,\ldots,x_\ell$ from them. We claim that $(G',k,R',B')$ has a solution if and only if $(G,k,R,B)$ has a solution. Before we prove this claim, we first observe that in any solution $H$ for $(G,k,R,B)$ the vertices $x_1,\ldots,x_\ell$ are pendant vertices in $H$. This is because $x_1,\ldots,x_\ell$ are incident to exactly one edge in $R^1$, whereas all the other edges incident to them belong to $B$. Moreover, $x_1,\ldots,x_\ell$ have a (unique) common neighbor in $H$, as otherwise a no-answer would have been returned in step 5. We let $v$ denote this common neighbor. Similarly, $x_1$ is a pendant vertex that has $v$ as its (unique) neighbor in any solution $H'$ for $(G',k,R',B')$. First suppose that $(G',k,R',B')$ has a solution $H'$. Then the graph obtained from $H'$ by adding the vertices $x_2,\ldots,x_{\ell}$ and the edges $x_2v,\ldots,x_{\ell}v$ is a square root of $G$ by Lemma \[lem:twins\] i). By definition of $R'$, $B'$ and the set $X_i$ (all of whose vertices are incident to one edge of $R^1\subseteq R$ and to edges in $B$) it is a solution for $(G,k,R,B)$ as well. Now suppose that $(G,k,R,B)$ has a solution $H$. Then the graph obtained from $H$ after deleting $x_2,\ldots,x_{\ell}$ is a square root of $G'$ by Lemma \[lem:twins\] ii). By definition of $R'$ and $B'$, it is a solution for $(G',k,R',B')$ as well. 
We denote the instance resulting from step 7 by $(G,k,R,B)$ again and observe that every $X_i$ now contains at most one vertex. It remains to consider what happens at step 8. We let $(G',k,R',B')$ be the instance created after applying step 8 to some set $S_i$ with $|S_i|>15k-13$, that is, $G'$ is the graph obtained from $G$ after deleting a set $T$ of $|S_i|-15k+13\geq 1$ arbitrary vertices from $S_i\setminus X_i$ (note that this is possible as $|X_i|\leq 1$), whereas $R'$ and $B'$ are the sets obtained from $R$ and $B$, respectively, after deleting the edges that are incident to vertices of $T$. We claim that $(G',k,R',B')$ has a solution if and only if $(G,k,R,B)$ has a solution. First suppose that $(G',k,R',B')$ has a solution $H'$. Because we could not apply the trimming and path reduction rules for $(G,k,R,B)$, we cannot apply these rules for $(G',k,R',B')$ either. Then, by using the same arguments that we applied for $(G,k,R,B)$ in the proof of Lemma \[l-hstarbounded\], we find that $H'$ contains at most $15k-14$ non-pendant vertices. Note that $H'$ contains at least $15k-13$ vertices, which are all in $S_i$. Hence, $H'$ has at least one pendant vertex $x$ that belongs to $S_i$. Let $v$ be the (unique) vertex adjacent to $x$ in $H'$. Then the graph $H$ obtained from $H'$ by adding the vertices of $T$ and their edges incident to $v$ is a square root of $G$ by Lemma \[lem:twins\] i). We argue that $H$ is a solution for $(G,k,R,B)$ as well. Because the vertices of $T\subseteq S_i\setminus X_i$ are not incident to the edges of $R$ due to Lemma \[l-claim1\], we have to show that none of the $|T|$ edges that we added in order to obtain $H$ belong to $B$. If $x\in S_i\setminus X_i$, then $xv\notin B$ and because $B(x)=B(y)$ for all $y\in S_i\setminus X_i$, we have that $yv\notin B$ for all $y\in T$. Assume that $x\in X_i$. Recall that $|X_i|\leq 1$ after step 7. Because $|S_i|>15k-13$ after step 7, $|S_i\setminus X_i|\geq 15k-13$. 
Then $yv\notin B$ for all $y\in S_i\setminus X_i$ as otherwise the algorithm would have produced a no-answer at step 6. Now suppose that $(G,k,R,B)$ has a solution $H$. By Lemma \[l-hstarbounded\], the graph $H$ contains at most $15k-14$ non-pendant vertices. Hence, $H$ has at least $|S_i|-15k+14\geq 15k-12-15k+14=2$ pendant vertices. Because vertices in $S_i\setminus X_i$ are true twins not incident to edges of $R$ and $B(x)=B(y)$ for any $x,y\in S_i\setminus X_i$, we may assume without loss of generality that the vertices of $T$ are amongst these pendant vertices of $H$. If $X_i=\{x\}\neq\emptyset$, then $x$ is a pendant vertex in $H$ incident to a unique edge $xv\in R^1$. By Observation \[obs:leaves\], all pendant vertices of $H$ that are in $S_i$ are adjacent to $v$ in $H$. Then the graph obtained from $H$ after deleting the vertices of $T$ is a square root of $G'$ by Lemma \[lem:twins\] ii). By definition of $R'$ and $B'$, it is a solution for $(G',k,R',B')$ as well. If $X_i=\emptyset$, then all pendant vertices of $H$ that are in $S_i$ are adjacent to some $v$ in $H$ by Observation \[obs:leaves\]. Then, by Lemma \[lem:twins\] (ii), the graph obtained from $H$ by deleting the vertices of $T$ is a square root of $G'$. By definition of $R'$ and $B'$, it is a solution for $(G',k,R',B')$ as well. From the above it follows that the instance $(\hat{G},k,\hat{R},\hat{B})$ obtained after step 8 has a solution if and only if $(G,k,R,B)$ has a solution. In order to complete the proof, we must show that $\hat{G}$ has at most $(15k-14)(15k-12)$ vertices. Each $S_i$ has at most $15k-13$ vertices due to step 8, and we also have $t\leq 15k-14$ due to step 4. Hence $|S|\leq (15k-14)(15k-13)$. As the number of vertices in $V_G\setminus S$ is at most $15k-14$ due to step 2, we obtain that $|V_{\hat{G}}|\leq (15k-14)(15k-13)+15k-14= (15k-14)(15k-12)$, as required. 
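The final count combines the bound $|S_i|\leq 15k-13$ per class with $t\leq 15k-14$ classes and at most $15k-14$ vertices outside $S$; the closing algebra is just the factoring $ab+a=a(b+1)$. A quick sanity check of this identity (purely illustrative, not part of the algorithm):

```python
# Verify the vertex-count algebra used above:
# (15k-14)(15k-13) + (15k-14) = (15k-14)(15k-12), since a*b + a = a*(b+1).
for k in range(1, 100):
    a = 15 * k - 14
    assert a * (15 * k - 13) + a == a * (15 * k - 12)
print("bound verified for k = 1..99")
```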
Solving the Labeled Variant and Running Time Analysis {#s-solving} ----------------------------------------------------- Let $n$ and $m$ denote the number of vertices and edges of the graph $G$ of the original instance $(G,k)$ of [Tree $+\;k$ Edges Square Root]{}. In order to complete the proof of Theorem \[thm:tree-few-edges\], we first note that the trimming and path reduction rules are applied at most $n$ times to construct the instance $(\hat{G},k,\hat{R},\hat{B})$. Each application of the trimming rule can be done in time $O(n^2m)$ and each application of the path reduction rule takes time $O(n^3m)$. Finally, the simplicial vertex reduction rule can be done in time $O(nm)$. Hence, our kernelization algorithm runs in time $O(n^4m)$, and it remains to solve the obtained reduced instance $(\hat{G},k,\hat{R},\hat{B})$. Because $\hat{G}$ has at most $(15k-14)(15k-12)$ vertices, $\hat{G}$ has at most $\frac{1}{2}(15k-14)(15k-12)( (15k-14)(15k-12)-1)=O(k^4)$ edges. Therefore, we can solve [Tree $+\;k$ Edges Square Root with Labels]{} for instance $(\hat{G},k,\hat{R},\hat{B})$ in time $2^{O(k^4)}$; we consider all edge subsets of $\hat{G}$ that have size at most $|V_{\hat{G}}|-1+k$ and use brute force. We conclude that the total running time of our algorithm is $2^{O(k^4)}+O(n^4m)$, as required. We finish this section with the following remarks. First, recall that our quadratic kernel is a generalized kernel for the [Tree $+\;k$ Edges Square Root]{} problem. We believe that a quadratic kernel exists for this problem as well by using a similar reduction. However, proving this seemed to be more technical and also to yield a graph with more than $(15k-14)(15k-12)$ vertices. We therefore chose to prove our [[FPT]{}]{} result by using a reduction leading to a generalized kernel. 
Second, it should also be noted that our generalized kernel for [Tree $+\;k$ Edges Square Root]{} does not imply a kernel for [Tree $+\;k$ Edges Square Root with Labels]{}, because our reduction rules require that the original instance is unlabeled. We do not know whether the (more general) problem [Tree $+\;k$ Edges Square Root with Labels]{} is [[FPT]{}]{} as well. The Maximum Square Root Problem {#s-max} =============================== Recall that the [Maximum Square Root]{} problem is that of testing whether a given graph $G$ with $m$ edges has a square root with at least $s$ edges for some given integer $s$. In this section we give an [[FPT]{}]{} algorithm for this problem with parameter $k=m-s$. In other words, we show that the problem of deciding whether a graph $G$ has a square root that can be obtained by removing at most $k$ edges of $G$ is fixed-parameter tractable when parameterized by $k$. We also present an exact algorithm for the [Maximum Square Root]{} problem. Both algorithms are based on the observation that in order to construct a square root $H$ from a given graph $G$, we must delete at least one of every pair of adjacent edges that do not belong to a triangle in $G$. We therefore construct an auxiliary graph ${\cal P}(G)$ that has vertex set $E_G$ and an edge between two vertices $e_1$ and $e_2$ if and only if $e_1=xy$ and $e_2=yz$ for three distinct vertices $x,y,z\in V_G$ with $xz\notin E_G$. Observe that ${\cal P}(G)$ is a spanning subgraph of the line graph of $G$. We need the following lemma. \[lem:charact\] Let $H$ be a spanning subgraph of a graph $G$. Then $H$ is a square root of $G$ if and only if $E_H$ is an independent set of ${\cal P}(G)$ and every two adjacent vertices in $G$ are at distance at most $2$ in $H$. First suppose that $H$ is a square root of $G$. By definition, every two adjacent vertices in $G$ are of distance at most 2 in $H$. 
In order to show that $E_H$ is an independent set in ${\cal P}(G)$, assume that two edges $e_1,e_2\in E_H$ are adjacent vertices in ${\cal P}(G)$. Then $e_1=xy$ and $e_2=yz$ for three distinct vertices $x,y,z\in V_G$ with $xz\notin E_G$. This means that $x$ and $z$ are at distance 2 in $H$, implying that $xz\in E_G$, which is a contradiction. Now suppose that $E_H$ is an independent set of ${\cal P}(G)$ and that every two adjacent vertices in $G$ are at distance at most $2$ in $H$. In order to show that $H$ is a square root of $G$, it suffices to show that every two non-adjacent vertices in $G$ have distance at least 3 in $H$. Let $u$ and $v$ be two non-adjacent vertices in $G$ that have distance at most 2 in $H$. Then there exists a vertex $z\notin \{u,v\}$ such that $uz,vz\in E_H$. Then $e_1=uz$ and $e_2=vz$ are adjacent in ${\cal P}(G)$, contradicting the independence of $E_H$ in ${\cal P}(G)$. We use Lemma \[lem:charact\] to prove Propositions \[p-fpt\] and \[p-exact\]. Here, we use the $O^*$-notation to suppress any polynomial factors. A [*vertex cover*]{} is a subset $U\subseteq V$ such that every edge is incident with at least one vertex of $U$. The [Vertex Cover]{} problem is that of testing whether a given graph has a vertex cover of size at most $p$ for a given integer $p$. In Proposition \[p-fpt\] we prove that there is an $O^*(2^k)$ time algorithm to decide whether a given graph $G$ has a square root $H$ such that $|E_G\setminus E_H|\le k$. \[p-fpt\] [Maximum Square Root]{} can be solved in time $O^*(2^k)$. Let $G$ be a graph with $n$ vertices and $m$ edges, and let $k\geq 0$ be an integer. By Lemma \[lem:charact\] it suffices to check whether ${\cal P}(G)$ has a vertex cover $U$ of size at most $k$ such that $H_U=(V_G,E_G\setminus U)$ is a square root of $G$. All vertex covers of size at most $k$ of a graph can be enumerated by adapting the standard $O^*(2^k)$ branching algorithm for the [Vertex Cover]{} problem (see for example [@DowneyF99]).
It requires $O(m^2)$ time to compute ${\cal P}(G)$ and $O(nm)$ time to check whether a graph $H_U$ is a square root of $G$. Hence the overall running time of our algorithm is $O^*(2^k)$. We observe that [Maximum Square Root]{} has a linear kernel for connected graphs. This immediately follows from a result of Aingworth, Motwani and Harary [@AingworthMH98], who proved that if $H$ is a square root of a connected $n$-vertex graph $G\neq K_n$, then $|E_G\setminus E_H|\geq n-2$. Hence, $n\leq k+2$ for every yes-instance $(G,k)$ of [Maximum Square Root]{} with $G\neq K_n$ (trivially, $K_n$ is its own square root). Note that this kernel does not lead to a faster running time than $O^*(2^k)$. In Proposition \[p-exact\] we present our exact algorithm, which not only solves the decision problem but in fact determines a square root of a given graph with the maximum number of edges. \[p-exact\] [Maximum Square Root]{} can be solved in time $O^*(3^{m/3})$ on graphs with $m$ edges. Let $G$ be a graph with $n$ vertices and $m$ edges, and let $k\geq 0$ be an integer. We compute the graph ${\cal P}(G)$, enumerate all maximal independent sets $I$ of ${\cal P}(G)$, and verify for each $I\subseteq E_G$ whether $G$ is the square of the graph $H_I=(V_G,I)$. Out of those graphs $H_I$ that are square roots of $G$, return the one with the maximum number of edges; if no such graph $H_I$ has been found, then $G$ has no square roots. Correctness follows from Lemma \[lem:charact\]. Recall that ${\cal P}(G)$ can be computed in time $O(m^2)$. All the maximal independent sets of the $m$-vertex graph ${\cal P}(G)$ can be enumerated in time $O^*(3^{m/3})$ using the polynomial delay algorithm of Tsukiyama et al. [@TsukiyamaIAS77], since ${\cal P}(G)$ has at most $3^{m/3}$ maximal independent sets [@MoonM65]. Finally, recall that for each maximal independent set $I$, we can check in time $O(nm)$ whether $(H_I)^2=G$. Hence the overall running time of our algorithm is $O^*(3^{m/3})$.
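The characterization of Lemma \[lem:charact\] is easy to put into code. Below is a minimal Python sketch (our own illustration, not the authors' implementation, and with no attempt at the stated running times): it builds the auxiliary graph ${\cal P}(G)$, tests the two conditions of the lemma, and checks them on the square of a path.

```python
from itertools import combinations

def neighbours(V, E):
    """Adjacency sets of the graph (V, E)."""
    adj = {v: set() for v in V}
    for u, v in E:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def square(V, E):
    """Edge set of H^2: join every pair of vertices at distance <= 2 in H."""
    adj = neighbours(V, E)
    return {tuple(sorted((u, v))) for u, v in combinations(sorted(V), 2)
            if v in adj[u] or adj[u] & adj[v]}

def aux_graph(V, E):
    """P(G): vertices are the edges of G; e1 = xy and e2 = yz are adjacent
    iff x, y, z are distinct and xz is not an edge of G."""
    adj = neighbours(V, E)
    edges = [tuple(sorted(e)) for e in E]
    P = set()
    for e1, e2 in combinations(edges, 2):
        common = set(e1) & set(e2)
        if len(common) == 1:
            (x,) = set(e1) - common
            (z,) = set(e2) - common
            if z not in adj[x]:
                P.add((e1, e2))
    return P

def is_square_root(V, E_H, E_G):
    """Lemma charact: a spanning subgraph H of G is a square root of G iff
    E_H is independent in P(G) and every edge of G joins vertices that are
    at distance at most 2 in H."""
    eh = {tuple(sorted(e)) for e in E_H}
    independent = all(not (e1 in eh and e2 in eh)
                      for e1, e2 in aux_graph(V, E_G))
    adj = neighbours(V, E_H)
    close = all(v in adj[u] or adj[u] & adj[v] for u, v in E_G)
    return independent and close

# Demo: G is the square of the path 1-2-3-4.
V = {1, 2, 3, 4}
E_H = {(1, 2), (2, 3), (3, 4)}
E_G = square(V, E_H)
print(sorted(E_G))                  # [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
print(is_square_root(V, E_H, E_G))  # True
```

A brute-force solver in the spirit of Proposition \[p-exact\] would simply run `is_square_root` over the (maximal) independent sets of ${\cal P}(G)$ and keep the largest edge set that passes.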
Open Problems {#s-con} ============= We conclude our paper with two open problems. First, is it also possible to construct an exact algorithm for [Minimum Square Root]{} that is better than the trivial exact algorithm? Second, recall that if $H$ is a square root of a connected $n$-vertex graph $G\neq K_n$, then $|E_G\setminus E_H|\geq n-2$ [@AingworthMH98]. Is it [[FPT]{}]{} to decide whether a connected $n$-vertex graph $G\neq K_n$ has a square root that can be obtained by removing at most $n-2+k$ edges, or equivalently, whether a connected $n$-vertex graph $G\neq K_n$ has a square root with at least $|E_G|-n+2-k$ edges, when parameterized by $k$? In particular, can it be decided in polynomial time whether a connected graph $G$ has a square root with *exactly* $|E_G|-|V_G|+2$ edges? [10]{} , [*Uniqueness of graph square roots of girth six*]{}, Electr. J. Comb., 18 (2011). , [*The difference between a graph and its square*]{}, Util. Math., 54 (1998), pp. 223–228. , [*Solving max-[*r*]{}-sat above a tight lower bound*]{}, Algorithmica, 61 (2011), pp. 638–655. , [*Sparse square roots*]{}, in WG 2013, vol. 8165 of Lecture Notes Comp. Sci., Springer, 2013, pp. 177–188. , [*Graph theory*]{}, vol. 173 of Graduate Texts in Mathematics, Springer, Heidelberg, fourth ed., 2010. , [*Parameterized complexity*]{}, Monographs in Computer Science, Springer-Verlag, New York, 1999. , [*Square-root finding problem in graphs, a complete dichotomy theorem*]{}, CoRR, abs/1210.7684 (2012). , [*Complexity of finding graph roots with girth conditions*]{}, Algorithmica, 62 (2012), pp. 38–53. , [*Parameterized Complexity Theory*]{}, Texts in Theoretical Computer Science. An EATCS Series, Springer-Verlag, Berlin, 2006. , [*Bipartite roots of graphs*]{}, ACM Transactions on Algorithms, 2 (2006), pp. 178–208. , [*Recognizing powers of proper interval, split, and chordal graph*]{}, SIAM J. Discrete Math., 18 (2004), pp. 83–102. 
, [*The square of a block graph*]{}, Discrete Mathematics, 310 (2010), pp. 734–741. ———, [*A good characterization of squares of strongly chordal split graphs*]{}, Inf. Process. Lett., 111 (2011), pp. 120–123. , [*Algorithms for square roots of graphs*]{}, SIAM J. Discrete Math., 8 (1995), pp. 99–118. , [*Computing square roots of trivially perfect and threshold graphs*]{}, Discrete Applied Mathematics, in press. , [*On cliques in graphs*]{}, Israel J. Math., 3 (1965), pp. 23–28. , [*Computing roots of graphs is hard*]{}, Discrete Applied Mathematics, 54 (1994), pp. 81–88. , [*The square root of a graph*]{}, J. Combinatorial Theory, 2 (1967), pp. 290–295. , [*Invitation to fixed-parameter algorithms*]{}, vol. 31 of Oxford Lecture Series in Mathematics and its Applications, Oxford University Press, Oxford, 2006. , [*The square of a tree*]{}, Bell System Tech. J, 39 (1960), pp. 641–647. , [*A new algorithm for generating all the maximal independent sets*]{}, SIAM J. Comput., 6 (1977), pp. 505–517. [^1]: Laboratoire d’Informatique Théorique et Appliquée, Université de Lorraine, 57045 Metz Cedex 01, France, `{manfred.cochefert, jean-francois.couturier, dieter.kratsch}@univ-lorraine.fr` [^2]: Department of Informatics, University of Bergen, PB 7803, 5020 Bergen, Norway, `petr.golovach@ii.uib.no` [^3]: School of Engineering and Computing Sciences, Durham University, Science Laboratories, South Road, Durham DH1 3LE, UK, `daniel.paulusma@durham.ac.uk` [^4]: The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement n. 267959. The research also has been supported by EPSRC (EP/G043434/1) and ANR Blanc AGAPE (ANR-09-BLAN-0159-03). A preliminary version of this paper appeared as an extended abstract in the proceedings of WG 2013 [@CochefertCGKP13]. [^5]: We restrict ourselves to connected graphs for simplicity.
We may do this for the following reason. For disconnected $n$-vertex graphs with $\ell\geq 2$ connected components the natural parameter is $k=s-(n-\ell)$ instead of $k=s-(n-1)$. Because a square root of a graph is the disjoint union of square roots of its connected components, our [[FPT]{}]{} result for connected graphs immediately carries over to disconnected graphs if we choose as parameter $k=s-(n-\ell)$ instead.
--- abstract: 'We present Very Long Baseline Array (VLBA) observations of the Gigamaser galaxy TXS2226[-]{}184 at 1.3 and 5 GHz. These observations reveal the parsec-scale radio structure of this Seyfert galaxy with exceptionally luminous water maser emission. The source is found to be extended on scales of 10-100 pc with some embedded compact sources, but has no readily identifiable flat-spectrum active nucleus. This morphology resembles that of the nearby compact starburst galaxy Mrk 273, although no significant FIR emission has been detected to support the starburst scenario. The narrow (125 km s$^{-1}$) H I absorption in TXS2226[-]{}184 discovered with the VLA is also detected with the VLBA. This H I absorption is distributed across the extended emission, probably co-spatial with the water masers. The broad (420 km s$^{-1}$) line seen by the VLA is not detected, suggesting that it arises from more extended gas which is absorbing the emission beyond the central tens of parsecs.' author: - 'G. B. Taylor, A. B. Peck, J. S. Ulvestad, & C. P. O’Dea' title: Exploring the Nucleus of the Gigamaser Galaxy TXS2226[-]{}184 --- Introduction ============ Galaxies with luminous water masers in their nuclear regions have been keenly sought after in recent years (e.g., Braatz, Wilson & Henkel 1997) following the discovery of the masers in close orbit around the nucleus of NGC 4258 (Miyoshi et al. 1995, Greenhill et al. 1995). This system provided some of the earliest and strongest evidence for the presence of a supermassive black hole in the nucleus and led to a direct distance measurement to NGC 4258 which has helped to refine the extragalactic distance scale (Herrnstein et al. 1999). The water maser emission in TXS2226$-$184 was discovered by Koekemoer et al. (1995) using the Effelsberg telescope. This system hosts the most luminous known extragalactic H$_2$O maser source, a so-called “gigamaser”, with an isotropic luminosity in the 22 GHz line of 6100 L$_\odot$ (Koekemoer et al. 1995).
The water maser emission from TXS2226$-$184 is fairly broad, with a FWHM of 88 km s$^{-1}$, in contrast to most known extragalactic water masers with linewidths of only a few km s$^{-1}$. VLBI observations by Ball et al. (2004, in preparation) reveal that the masers are distributed in clumps that trace a disk oriented in position angle $-$65$^\circ$, tilted about 30 degrees away from the major axis of the galaxy. One blueshifted maser appears significantly offset from this disk, which may indicate that it is associated with the jet. Recent HST observations by Falcke et al. (2000) classify the galaxy as a highly inclined spiral and reveal a dust lane cutting across the nucleus. Falcke et al. (2000) also present VLA observations at 8.4 GHz showing that the radio emission is compact ($<$1$''$), symmetric, and has an axis perpendicular to the dust lane. No larger-scale diffuse emission is present to the sensitivity limits of the NRAO VLA Sky Survey (NVSS) at about 2 mJy beam$^{-1}$ (Fig. 2; Condon et al. 1998). There is a fair amount of overlap ($\sim$40%) between extragalactic H$_2$O maser systems and those with H I found in absorption (Taylor et al. 2002). The H I absorption can be used to study the dynamics near the central engine of active galaxies, illuminating the process of accretion and jet propagation near a massive black hole. For this reason Taylor et al. (2002) undertook a study of the H I gas in TXS2226$-$184 with the VLA and discovered that it consists of two components – one with a width of 125 km s$^{-1}$, and another broader feature of width 420 km s$^{-1}$. Both velocity components are found toward the compact radio source in the nucleus of the galaxy, co-spatial within the uncertainties with the water masers. Taylor et al. suggested that the narrow line might be indicative of an interaction between the radio jet and the surrounding material. In this study we make use of VLBI observations that resolve the radio continuum in the central 0.5 kpc, and also spatially resolve the absorption.
We use this information to better characterize the nature of this system and to determine the powering mechanism for the exceptionally luminous maser emission. Throughout this discussion, we assume H$_{0}$=71 km s$^{-1}$ Mpc$^{-1}$, $\Omega_M$ = 0.27, and $\Omega_{\Lambda}$= 0.73, resulting in a linear to angular scale ratio of 0.496 kpc arcsecond$^{-1}$ [^1]. The VLBA Observations ===================== The observations were made with the National Radio Astronomy Observatory[^2] Very Long Baseline Array (VLBA) and Robert C. Byrd Green Bank Telescope (GBT) at a center frequency of 1386 MHz on 2002 December 14 and 15. A total of 8.4 hours were obtained on source using 256 channels across a 16 MHz band to provide a resolution of 14 km s$^{-1}$. Both right and left circular polarizations were observed. Phase calibration was obtained by short (1 min) observations of the nearby (1.98 degrees distant), moderately strong (0.25 Jy) calibrator J2236[-]{}1706 every 3 minutes. Bandpass calibration was provided by observations of J2253+1608. Observations at 4982 MHz were carried out with the VLBA alone on 2002 December 16. A total of 2.2 hours were obtained on source with a 32 MHz bandwidth observing in right circular polarization only. Phase referencing was once again performed by switching to J2236[-]{}1706 every 3 minutes for 1 minute. We also checked the atmospheric coherence by observing J2236$-$1433 once every 38 minutes. Results ======= MERLIN Continuum Images ----------------------- Data at 5 GHz from observations of the TXS2226$-$184 field were obtained from the Multi-Element Radio-Linked Interferometer Network (MERLIN) archive. These data consisted of two observing sessions, one on 1999 February 3, and another on 1999 March 3. In each session 6 antennas participated for a total of 4.2 and 4.9 hours on source in February and March respectively. Calibration of these data was performed in the standard fashion in AIPS using the nearby calibrator J2232[-]{}1659 observed every 7 minutes.
Data from the two days were combined and then imaged using [Difmap]{}. The rms noise in the total intensity image is 0.085 mJy beam$^{-1}$. At the MERLIN resolution of 40 $\times$ 179 mas, TXS2226$-$184 is dominated by a compact core, but shows jet-like extensions to the northwest and southeast (Fig. 1). The compact core has a peak flux density at 5 GHz of 13.6 mJy. The orientation of the jets at 144$^\circ$ is in excellent agreement with the elongation angle of 145$^\circ$ seen in the 22 GHz VLA observations (Taylor et al. 2002) with resolution 390 $\times$ 210 mas. In the MERLIN image we find a total of 31.1 $\pm$ 0.93 mJy compared to 32.2 $\pm$ 0.98 detected by the VLA (Taylor et al. 2002). A little less than half of this emission, 13.6 mJy, is in the compact core with a size of $<$45 mas. VLBA Continuum Images --------------------- Starting with the phase referenced VLBA+GBT image at 1.4 GHz from the standard calibration in AIPS, we performed phase self-calibration in [Difmap]{}. In Fig. 2 we show the 1.4 GHz continuum emission from TXS2226$-$184 at resolutions of 20 $\times$ 12 mas and 45 mas. Both images were made using natural weighting and applying a taper to downweight the longest spacings. Only 35 mJy, about half the flux density of the 73.3 mJy compact VLA core, is successfully recovered in these images. Besides a bright and resolved region of diameter 0.1 arcseconds, there is diffuse emission to the northwest along a position angle of $-$36$^\circ$. This agrees well with the VLA and MERLIN orientations on somewhat larger scales (see Fig. 1). The milliarcsecond-scale emission is broken up into clumps, most likely as a result of difficulties of the clean algorithm in recovering the extended emission. A multi-resolution clean was attempted in AIPS, but did not recover more flux density or produce a better image. The MERLIN image at 5 GHz shows diffuse emission on scales of $\sim$0.3 arcseconds that we do not resolve with the VLA, but probably over-resolve with the VLBA.
The rms noise in the VLBA continuum image is 50 microJy/beam in the high resolution image and 100 microJy/beam in the 45 mas image owing to the heavier taper applied. The rms noise at 5 GHz is 100 microJy/beam, and the peak in the map is 500 microJy. While there is a suggestion of some flux density on the shortest baselines, no sources are reliably detected. From our phase referencing check source, J2236$-$1433, we estimate the coherence to be greater than 50%. We place an upper limit of 1 mJy beam$^{-1}$ as the strongest unresolved source that could go undetected at 5 GHz. An attempt to combine the VLBA and MERLIN data was made, but owing to the lack of signal on the VLBA baselines, this was unsuccessful. The H I Absorption ------------------ In Fig. 3 we show H I spectra for four of the brightest regions in the high resolution image. The continuum level has been subtracted from the spectral line cube. The spectra have been Hanning smoothed and averaged to a velocity resolution of 23.98 km s$^{-1}$. The absorption peaks in all spectra at 7500 km s$^{-1}$ in the heliocentric frame. In the region of highest SNR the absorption has a depth of 2.6 mJy, and a FWHM of 125 km s$^{-1}$. There is a hint of an additional component around 7700 km s$^{-1}$ in all but the weakest profile. We have created an image of the integrated optical depth over the line by averaging 24 channels centered on the peak absorption using the cube with 45 mas spatial resolution (Fig. 4). We find a marginally significant increase in the optical depth from 0.15 $\pm$ 0.04 to 0.34 $\pm$ 0.08. Finally, we have searched for gradients in velocity by averaging in the direction perpendicular to the major axis of the source. To do this we rotated the cube spatially by $-$55$^\circ$ and then averaged over the source in declination. A position-velocity plot from this averaged cube is shown in Fig. 5. No significant gradient in velocity is seen. Location of the Neutral Hydrogen Gas ------------------------------------ Based on the VLA observations, Taylor et al.
concluded that the $\sim$420 km s$^{-1}$ wide absorption feature with a depth of 5.6 mJy probably results from neutral material associated with the atomic and molecular torus thought to feed the active nucleus, and that the deeper $\sim$125 km s$^{-1}$ wide line in TXS2226$-$184 could be indicative of an interaction between the radio jet and surrounding material. If the supposition about the nature of the broad line were correct, then a compact nucleus with at least 5.6 mJy should have been visible in the 1.4 GHz continuum image. The peak in the full resolution image (not shown) is only 2.4 mJy. From the absence of any broad-line absorption in our VLBI spectra we conclude that the broad line originates from emission on spatial scales of $\sim$0.3 arcsec that are not probed by our VLBA observations. Similarly, a 1000 km s$^{-1}$ wide blue-shifted line seen in the WSRT spectrum towards 3C293 (Morganti et al. 2003) was not detected on the parsec-scale by Beswick et al. (2004). Another example of broad lines resolving out on the parsec-scale can be found in 4C12.50 (Morganti et al. 2004). Morganti et al. (2003) speculate that the broad line arises from outflow of gas on kiloparsec-scales, and the same situation could be responsible for the broad line in TXS2226$-$184. The narrow (125 km s$^{-1}$) and deeper (12.3 mJy for the VLA) H I line in TXS2226$-$184 has a similar width and velocity dispersion to the water masers and could originate from the same region. The lack of any significant velocity gradient in the H I observations is consistent with this interpretation. Another possibility is that we are seeing multiple H I clouds from throughout the spiral host galaxy along the line-of-sight toward the nucleus. In this case the H I gas would not be associated with the nucleus or the water masers. The marginally significant change in H I opacity across the source (see Fig. 4) argues against this possibility since the average H I absorption from the host galaxy is unlikely to change on scales as small as 100 pc.
Nature of the Continuum Emission and Water Masers ================================================== Observationally, extragalactic water masers can be separated into three categories based on their line widths, luminosities and distribution of masing regions. One class, with narrow linewidths of a few km s$^{-1}$, and isotropic luminosities $\sim$100 L$_\odot$, is most frequently associated with the accretion disk around an AGN (e.g., NGC 4258, Miyoshi et al. 1995). The second, possibly less common, class of masers is characterized by broader linewidths of $\sim$100 km s$^{-1}$, and luminosities in the range 100-3000 L$_\odot$ (e.g., Mrk 348, Peck et al. 2003; NGC 1052, Claussen et al. 1998). The third, recently identified class are weaker, $<$10 L$_\odot$, masers with narrow linewidths (e.g., NGC 2146, Tarchi et al. 2002). The criteria for characterizing the different classes of masers are outlined in Peck et al. (2004). Assuming that the masing region completely covers the radio emitting core at 22 GHz, Koekemoer et al. (1995) found that the required maser amplification in TXS2226$-$184 is a factor of $\sim$20. If the maser covers less of the continuum source then the amplification must be even greater. In this section we consider three scenarios to explain the nature of the radio emission and the extragalactic water masers in TXS2226$-$184. Case 1: A Compact Starburst --------------------------- The clumpy, steep-spectrum radio morphology of TXS2226$-$184 on the parsec scale (Fig. 2) is reminiscent of the starburst galaxies Mrk 231 and Mrk 273 (Carilli, Wrobel, & Ulvestad 1998, Carilli & Taylor 2000), which also contain active galactic nuclei. Given the steep spectrum of the overall emission in TXS2226$-$184, the individual components could be radio supernovae, or clusters of SNe. The H$_2$O emission and absorption likewise are found to occur on similar spatial scales in these prominent starburst systems. A significant problem with the starburst scenario is the low infrared luminosity.
As listed in the NASA/IPAC Extragalactic Database (NED), TXS2226$-$184 is not detected by IRAS at 12, 25 and 100 microns, and is only 309 $\pm$ 59 mJy at 60 microns. Given the distance to the source this corresponds to a power at 60 microns of 4.15 $\times$ 10$^{23}$ W/Hz. Scaling the starburst emission models of Rice et al. (1988) we get $L_{IR} = 7.5 \times 10^9 L_{\odot}$. This corresponds to a star formation rate of just 0.178 M$_{\odot}$/yr (Heckman et al. 1990), far below what is found in even the weakest starburst galaxies. One can also ask the question of how well this system obeys the well established radio-far IR correlation (Condon 1992). The integrated radio power at 1.4 GHz is $P_{1.4} = 1.01 \times 10^{23}$ W/Hz. We find a logarithmic ratio of FIR to radio power, Q=$-$2.3, compared to the average value of $+$2.3 $\pm$ 0.2 (Condon 1992). At present, given the anomalous nature of these findings, we cannot rule out the possibility that there is some error in the IRAS measurement, so we plan to undertake new IR observations to confirm the value. That we could be seeing individual supernovae in TXS2226$-$184 is unlikely since Cas A would have a flux density of only $\sim$1 $\mu$Jy at the distance of TXS2226$-$184, although young radio supernovae might have powers 100 – 1000 times that of Cas A (Weiler et al. 2002; Neff et al. 2004). However, we would expect them to have some reasonable flux at 5 GHz if they are young enough to have such high brightness temperature, so the lack of any detection at 5 GHz argues against the supernova hypothesis. The distribution of the radio emission perpendicular to the major axis is also not readily explained by SNe. It is possible that this source could be a “post-starburst” object in which the diffuse radio emission was created by a starburst. Koekemoer et al. (1995) refer to unpublished observations that suggest an optical continuum dominated by a “post-starburst” stellar population.
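The 60-micron power quoted above follows directly from the IRAS flux density and the adopted distance. The sketch below is our own check, using the low-redshift approximation $D \simeq cz/H_0$ (with the $H_0$ of Section 1 and the $\sim$7500 km s$^{-1}$ systemic velocity from the H I spectra) rather than the exact cosmological distance; it reproduces the number to within a few per cent.

```python
MPC_M = 3.0857e22                 # metres per megaparsec
JY = 1e-26                        # W m^-2 Hz^-1 per jansky
PI = 3.141592653589793

D = (7500.0 / 71.0) * MPC_M       # ~106 Mpc, low-z approximation cz/H0
S60 = 0.309 * JY                  # IRAS 60-micron flux density (309 mJy)
P60 = 4 * PI * D**2 * S60         # isotropic monochromatic power
print(f"P(60um) ~ {P60:.2e} W/Hz")   # ~4.1e23 W/Hz, cf. 4.15e23 in the text
```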
Assuming minimum energy conditions apply, the fairly high field derived, $B_{me}$ = 630 $\mu$G, together with the observed spectral index of $-$0.66 between 1.4 and 5 GHz, implies an age of the radio source of only 14,000 years (Myers & Spangler 1985). If the radio source is considerably older then some reacceleration process must be at work. Over the $\sim$10$^8$ years since the starburst one might expect the radio source to have become much less collimated than we currently observe. We conclude that the small amount of star formation taking place in TXS2226$-$184 is unlikely to produce the majority of the radio continuum emission. It is also difficult to see how this low amount of star formation could provide the shocks necessary to create the conditions for masing. Case 2: Amplification of a Background AGN ----------------------------------------- The parsec-scale radio emission is elongated in the same direction as the kiloparsec-scale emission, and is perpendicular to the major axis of the host galaxy and the inner dust disk (Falcke et al. 2000). The symmetric structure and tight collimation of the outflow in TXS2226$-$184 suggests that the radio continuum is produced by jets oriented at a large angle to the line-of-sight propagating perpendicular to the accretion disk. Falcke et al. also find that the radio emission is aligned with H$\alpha$ + \[N II\] emission and suggest that this Narrow Line Region (NLR) is produced by the interaction between the radio jet and the interstellar medium (Falcke, Wilson & Simpson 1998). Koekemoer et al. (1995) note that the optical spectrum of the nucleus has line ratios typical of a low-ionization nuclear emission region (LINER) spectrum. The optical spectrum and the presence of jets suggest the presence of an active nucleus. An examination of the radio continuum emission reveals no obvious compact, flat spectrum source readily identifiable with the nucleus.
At 5 GHz we can use the limiting flux density of 1 mJy to put an upper limit on the core radio power at 5 GHz of $<$ 1.4 $\times$ 10$^{21}$ W/Hz. This core power is lower than all but a few low-luminosity radio sources in the Complete Bologna Sky Survey (Giovannini et al. 2004), despite the fact that the radio power of $P_{0.365} = 2.72 \times 10^{23}$ W/Hz at 365 MHz is only about an order of magnitude below the average total power in a complete sample of radio galaxies. From this we can conclude that the core is extremely underluminous. This could imply subrelativistic ejection and/or an angle very close to the plane of the sky. Alternatively, the core could be heavily obscured by free-free absorption. Since the free-free absorption from an external screen decreases exponentially with frequency, further sensitive, high-frequency VLBI observations might detect an obscured, flat-spectrum nucleus. Such a heavily absorbed nucleus might also make a significant contribution to the total flux density at high frequencies. There is no indication of any flattening in the spectrum up to 22 GHz (Taylor et al. 2002), from which we place a limit on the contribution of any flat-spectrum component of $<$10% or $<0.7$ mJy. Ball et al. (2004, in preparation) place a limit of $<$ 2 mJy on any compact continuum emission at 22 GHz. If the masers are limited to the central few parsecs around a low-power core, then the amplification needed will be $>$600. Alternatively, the X-ray illumination provided by the AGN may create a dissociation region and power the maser directly (Neufeld, Maloney, & Conger 1994). Koekemoer et al. (1995) find that this model can account for the maser luminosity of  if the illuminated disk is 2.5 – 7.5 pc in radius. Ball et al. (2004, in preparation) find a disk of radius 5 pc, but do not rule out the possibility that some of the components are associated with the radio jet.
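The point about free-free absorption falling steeply with frequency can be made concrete: for an external screen, $\tau_\nu \propto \nu^{-2.1}$, so a core that is completely opaque at 1.4 GHz can be largely transparent at 22 GHz. A minimal sketch; the optical-depth normalization below is an illustrative assumption, not a measured value for this source.

```python
import math

def transmission(tau_ref, nu_ghz, nu_ref_ghz=1.4, alpha=2.1):
    """Fraction of flux escaping an external free-free screen with tau ~ nu^-2.1."""
    tau = tau_ref * (nu_ghz / nu_ref_ghz) ** (-alpha)
    return math.exp(-tau)

# Assume (for illustration only) tau = 100 at 1.4 GHz: the core is
# then completely hidden at low frequency, still heavily absorbed at
# 5 GHz, but most of its flux escapes at 22 GHz.
t_5 = transmission(100.0, 5.0)
t_22 = transmission(100.0, 22.0)
```

This is why sensitive, high-frequency VLBI is the natural way to search for an obscured flat-spectrum nucleus.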
Case 3: Shocks Driven by a Jet
------------------------------

As already established in the previous section, it is likely that there are bidirectional jets in . The peak radio flux density at 1.4 GHz in  is 3.6 mJy. The brightness temperature of this emission is 1.4 $\times 10^7$ K. This, together with the steep spectrum of the integrated emission ($\alpha = -0.66$ from Taylor et al. 2002, where $S_\nu \propto \nu^\alpha$), clearly indicates non-thermal emission. A possible explanation for the luminous water masers in  is shocks produced by a jet driving into a molecular cloud, though if this is to explain all the masers then the jet needs to bend through 60 degrees in order to line up with the jet seen on scales of 10-1000 pc (Ball et al. 2004, in preparation). The situation may be comparable to that in Mrk 348, a Seyfert galaxy at a distance of 62.5 Mpc which hosts an  megamaser (Peck et al. 2003). In Mrk 348, the maser appears to be caused by the radio jet impacting a molecular cloud within the central few parsecs of the galaxy. This interaction gives rise to an expanding bow shock driven into the cloud, which has a velocity between 135  and 0.5c in the direction of jet propagation, and between 35  and 300  at various points along the oblique edges (Peck et al. 2003). This shock generates a region of very high temperature ($\le$10$^5$ K), which dissociates the molecular gas and to some extent shatters the dust grains expected to be present and/or evaporates their icy mantles. Immediately following this shock, H$_2$ begins forming on the surviving dust grains when the temperature has dropped to $\sim$1000 K, and this in turn provides sufficient heating to stabilize the temperature at $\sim$400 K, with gas densities of $\sim$10$^8$ cm$^{-3}$ (e.g., Mauersberger, Henkel & Wilson 1987; Elitzur 1995).
In Mrk 348, the pc-scale radio jet is significantly brighter than that in , but it is possible that a weaker jet could result in stronger amplification than a powerful jet, given that it is the regions of oblique shocks, with velocities lower than 300 , which provide post-shock conditions leading to a significant volume of masing gas. This possibility has also been explored for another Seyfert galaxy, NGC 1068. Linear radio emission of similar luminosity, extending over $\sim$1”, is seen in NGC 1068 (Gallimore, Baum, & O’Dea 1996). In this well-studied source, Gallimore et al. find a bending jet and local flattening of the spectral index, which lends support to this model. One of the chief diagnostics of this type of maser is the shape and width of the emission line (Peck et al. 2004), given that jet and accretion disk geometry can be very difficult to decipher at large distances using the distribution of  spots and  absorption, even with extremely high angular resolution. The FWHM of the maser emission in  is 88 , much closer to the 130  FWHM measured in Mrk 348 than to the narrower lines associated with masers known to arise in accretion disks. Nonetheless, we look forward to higher dynamic range VLBA and Expanded VLA images to assist in testing this model in .

Conclusions
===========

Our observations show that on parsec scales the continuum emission from an active nucleus in  is extremely weak. Most of the emission originates from a jet which extends over $\sim$100 pc.  absorption is detected against this jet with a marginal change in opacity, but no sign of rotation or outflow. The water masers may amplify either a weak nucleus or extended jet emission. In either case, high amplification factors ($>600$) are needed. Given the extreme luminosity of the water masers in  and their broad line widths, it is tempting to associate this and similar systems (e.g., Mrk 348, NGC 1052) with jets that drive shocks into molecular clouds.
Our high-resolution radio continuum observations are consistent with this picture. Identifications of more systems, and detailed studies of time variability in the masers and continuum emission, are needed. VLBI observations of the water masers will help in understanding the nature of this system, though the preliminary analysis indicates that the situation is complicated, with both disk and jet masers present (Ball et al. 2004, in preparation). Single-dish observations could also yield important clues by looking for rapid variations in the flux density of the maser line, and correlating them with changes in the continuum flux density to test the idea that the masers amplify the background continuum. We thank Y. Pihlstroem and an anonymous referee for insightful suggestions. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, Caltech, under contract with NASA. This research has also made use of NASA’s Astrophysics Data System Abstract Service. Ball, G. H., Greenhill, L. J., Moran, J. M., Henkel, C., & Zaw, I. 2004, in preparation Beswick, R. J., Peck, A. B., Taylor, G. B., & Giovannini, G. 2004, MNRAS, in press Braatz, J. A., Wilson, A. S., & Henkel, C. 1996, ApJS, 106, 51 Braatz, J. A., Wilson, A. S., & Henkel, C. 1997, ApJS, 110, 321 Carilli, C. L., Wrobel, J. M., & Ulvestad, J. S. 1998, AJ, 115, 928 Carilli, C. L., & Taylor, G. B. 2000, ApJ, 532, L95 Claussen, M. J., Diamond, P. J., Braatz, J. A., Wilson, A. S., & Henkel, C. 1998, ApJ, 500, L129 Condon, J. J. 1992, ARA&A, 30, 575 Condon, J. J., Cotton, W. D., Greisen, E. W., Yin, Q. F., Perley, R. A., Taylor, G. B., & Broderick, J. J. 1998, AJ, 115, 1693 Elitzur, M. 1995, RMxAC, 1, 85 Falcke, H., Wilson, A. S., & Simpson, C. 1998, ApJ, 502, 199 Falcke, H., Wilson, A. S., Henkel, C., Brunthaler, A., & Braatz, J. A. 2000, ApJ, 530, L13 Gallimore, J. F., Baum, S. A., & O’Dea, C. P.
1996, ApJ, 464, 198 Giovannini, G., Taylor, G. B., Cotton, W. D., Feretti, L., Lara, L., & Venturi, T. 2004, in preparation Greenhill, L. J., Henkel, C., Becker, R., Wilson, T. L., & Wouterloot, J. G. A. 1995, A&A, 304, 21 Heckman, T. M., Armus, L., & Miley, G. K. 1990, ApJS, 74, 833 Herrnstein, J. R., et al. 1999, Nature, 400, 539 Koekemoer, A. M., Henkel, C., Greenhill, L. J., Dey, A., van Breugel, W., Codella, C., & Antonucci, R. 1995, Nature, 378, 697 Mauersberger, R., Henkel, C., & Wilson, T. L. 1987, , 173, 352 Morganti, R., Oosterloo, T. A., Emonts, B. H. C., van der Hulst, J. M., & Tadhunter, C. M. 2003, ApJ, 593, L69 Morganti, R., et al. 2004, A&A, in press Miyoshi, M., Moran, J., Herrnstein, J., Greenhill, L., Nakai, N., Diamond, P., & Inoue, M. 1995, Nature, 373, 127 Myers, S. T., & Spangler, S. R. 1985, ApJ, 291, 52 Neff, S. G., Ulvestad, J. S., & Jeno, S. H. 2004, ApJ, in press Neufeld, D. A., Maloney, P. R., & Conger, S. 1994, ApJ, 436, L127 Peck, A. B., Tarchi, A., Henkel, C., et al. 2004, in preparation Peck, A. B., Henkel, C., Ulvestad, J. S., et al. 2003, ApJ, 590, 149 Rice, W., Lonsdale, C. J., Soifer, B. T., Neugebauer, G., Kopan, E. L., Lloyd, L. A., de Jong, T., & Habing, H. J. 1988, ApJS, 68, 91 Tarchi, A., Henkel, C., Peck, A. B., Nagar, N., Moscadelli, L., & Menten, K. M. 2003, in “The Neutral ISM in Starburst Galaxies”, eds. S. Alto et al., astro-ph/0309446 Taylor et al. 2002, ApJ, 574, 88 Weiler, K. W., Panagia, N., Montes, M. J., & Sramek, R. A. 2002, ARA&A, 40, 387 [^1]: Derived using E. L. Wright’s cosmology calculator at http://www.astro.ucla.edu/~wright/CosmoCalc.html. [^2]: The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation.
--- abstract: | We consider the Rosenfeld-Gröbner algorithm for computing a regular decomposition of a radical differential ideal generated by a set of ordinary differential polynomials in $n$ indeterminates. For a set of ordinary differential polynomials $F$, let $M(F)$ be the sum of maximal orders of differential indeterminates occurring in $F$. We propose a modification of the Rosenfeld-Gröbner algorithm, in which for every intermediate polynomial system $F$, the bound $M(F){\leqslant}(n-1)!M(F_0)$ holds, where $F_0$ is the initial set of generators of the radical ideal. In particular, the resulting regular systems satisfy the bound. Since regular ideals can be decomposed into characterizable components algebraically, the bound also holds for the orders of derivatives occurring in a characteristic decomposition of a radical differential ideal. We also give an algorithm for converting a characteristic decomposition of a radical differential ideal from one ranking into another. This algorithm performs all differentiations in the beginning and then uses a purely algebraic decomposition algorithm. address: - | University of Western Ontario\ Department of Computer Science\ London, Ontario, Canada N6A 5B7 - | Moscow State University\ Department of Mechanics and Mathematics\ Leninskie gory, Moscow, Russia, 119992 - | University of Western Ontario\ Department of Computer Science\ London, Ontario, Canada N6A 5B7 - | North Carolina State University\ Department of Mathematics\ Raleigh, NC 27695-8205, USA author: - Oleg Golubitsky - Marina Kondratieva - Marc Moreno Maza - Alexey Ovchinnikov bibliography: - 'rg.bib' title: Bounds for algorithms in differential algebra --- differential algebra ,characteristic sets ,radical differential ideals ,decomposition into regular components 12H05 ,13N10 ,13P10 Introduction ============ This paper is about constructive differential algebra. We study algorithms dealing with algebraic differential equations. 
Many different problems belong to this area: one can, for instance, test membership in a radical differential ideal or compute the Kolchin dimension polynomial. The kind of algorithms we are dealing with are decomposition algorithms for radical differential ideals. Broadly, there are two such algorithms, each with variations. The Ritt-Kolchin algorithm computes a prime decomposition of a radical differential ideal, where each prime component is represented by its characteristic set. This algorithm is based on important results in differential algebra (see [@Rit; @Kol]), such as the Basis Theorem, the Prime Decomposition Theorem for radical differential ideals, the differential version of Hilbert’s theorem of zeroes, and the Rosenfeld Lemma. It also relies on the solution of the so-called factorization problem: given an autoreduced set, determine whether the corresponding algebraic saturated ideal is prime and, if it is not, find two polynomials outside of the ideal whose product belongs to the ideal. Due to the complexity of the factorization problem, it was desirable to avoid it, which was done in the Rosenfeld-Gröbner algorithm proposed in [@Bou1]. Instead of decomposing a given radical differential ideal into prime components, this algorithm represents it as an intersection of regular differential ideals, also introduced in [@Bou1]; the correctness of the algorithm, in addition to the above-mentioned theorems, is provided by the Lazard Lemma, which states that regular ideals are radical. Different proofs of this lemma can be found in [@Bou2; @Mor; @Fac; @BLM06]. The Rosenfeld-Gröbner algorithm is, to our knowledge, the first decomposition algorithm in differential algebra that has actually been implemented. It forms an integral part of the [diffalg]{} package in the computer algebra system Maple. Updates of this package are available at [http://www-sop.inria.fr/cafe/Evelyne.Hubert/diffalg/]{}.
A more efficient implementation of this algorithm in the C language can be found at the website [http://www.lifl.fr/$\sim$boulier/BLAD/]{}. Various improvements of the Rosenfeld-Gröbner algorithm have been proposed in [@Bou2; @Fac; @Dif; @Imp; @Und]. They all avoid the factorization problem and for this reason are called factorization-free methods in differential algebra. However, no theoretical bound for the computational complexity of any of these algorithms is known. We make a first step towards the goal of estimating this complexity: we bound the orders of differential polynomials appearing in the computations. The main results of this work are proven only for the [**ordinary**]{} case. We consider the following [**two**]{} bounding problems. The [**first**]{} problem is to bound the orders of all intermediate polynomials and the output of the Rosenfeld-Gröbner algorithm. In order to obtain such a bound in Proposition \[p:finalalgorithm\], we have slightly modified this algorithm (see Algorithms \[RGBound\] and \[RGBoundRI\]). It would be desirable to have a bound telling us how many times we need to differentiate the original system at the beginning of the algorithm, so that the rest of the computation can be performed by a purely algebraic decomposition algorithm. Since complexity estimates are known for algebraic decomposition algorithms (see [@Agnes]), such a bound would yield a complexity estimate for the differential decomposition as well. In this paper, however, we do not provide such a bound and, moreover, conjecture that finding it would amount to solving the Ritt problem [@Rit]. We leave the discovery of such a bound and/or the proof of this conjecture for future research. Nevertheless, for the [**second**]{} type of algorithm considered in this paper we do obtain such a bound. Namely, we can tell how many times one needs to differentiate elements of a [*given characteristic set*]{} of a characterizable differential ideal w.r.t.
[*one*]{} differential ranking, in order to obtain a characteristic [*decomposition*]{} of this ideal w.r.t. [*another*]{} ranking. In other words, we give a bound for the conversion algorithm (Algorithm \[a:conversionalgorithm\]) for a characterizable ideal from one ranking to another (see [@Kaehler; @PARDI; @DGW] for other conversion algorithms applicable to prime differential ideals). We emphasize that the input ideal does not have to be characterizable w.r.t. the target ranking. We show how to obtain its new characteristic decomposition by first differentiating the input characteristic set and then applying only algebraic operations (i.e., a purely algebraic decomposition algorithm). The paper is organized as follows. We give an introduction to differential algebra in Section \[sec:diffalgintro\]. Then we describe the original Rosenfeld-Gröbner algorithm in Section \[sec:rg\]. Section \[sec:modifiedRG\] is devoted to the bound on the orders of derivatives computed by a modified version of the Rosenfeld-Gröbner algorithm. After that, we show how to transform a characteristic set of a characterizable differential ideal into a characteristic decomposition of this ideal w.r.t. another differential ranking. We first do this for prime differential ideals (Section \[sec:prime\]) and then treat the characterizable case in Section \[sec:characterizable\].

Definitions and notation {#sec:diffalgintro}
========================

Differential algebra studies systems of polynomial partial differential equations from the algebraic point of view. The approach is based on the concept of a differential ring introduced by Ritt. Recent tutorials on the constructive theory of differential ideals are presented in [@BouChaptire; @BouThese; @Dif; @Sit]. A differential ring is a commutative ring with unity endowed with a set of derivations $\Delta = \{\delta_1,\ldots,\delta_m\}$, which commute pairwise. The case of $\Delta = \{\delta\}$ is called [*ordinary*]{}.
If $R$ is an ordinary differential ring and $y \in R$, we denote $\delta^ky$ by $y^{(k)}$. Construct the multiplicative monoid $\Theta = \left\{\delta_1^{k_1}\delta_2^{k_2}\cdots\delta_m^{k_m}\;\big|\; k_i {\geqslant}0\right\}$ of [*derivative operators*]{}. Let $Y=\{y_1,\ldots,y_n\}$ be a set whose elements are called [*differential indeterminates*]{}. The elements of the set $\Theta Y=\{\theta y\;|\;\theta\in\Theta,\;y\in Y\}$ are called [*derivatives*]{}. Derivative operators from $\Theta$ act on derivatives as $\theta_1(\theta_2y_i) = (\theta_1\theta_2)y_i$ for all $\theta_1,\theta_2 \in \Theta$ and $1 {\leqslant}i {\leqslant}n$. The ring of [*differential polynomials*]{} in differential indeterminates $Y$ over a differential field ${{\mathbf k}}$ is the ring of commutative polynomials with coefficients in ${{\mathbf k}}$ in the infinite set of variables $\Theta Y$ (see [@Kol; @Pan; @Rit]). This ring is denoted ${{\mathbf k}}\{y_1,\dots,y_n\}$ or ${{\mathbf k}}\{Y\}$. We consider the case of $\operatorname{char}{{\mathbf k}}=0$ only. An ideal $I$ in ${{\mathbf k}}\{Y\}$ is called [*differential*]{} if for all $f\in I$ and $\delta\in\Delta$, $\delta f\in I$. We denote differential polynomials by $f, g, h, \ldots$ and use letters $I, J, \p$ for ideals. Let $F \subset {{\mathbf k}}\{y_1,\dots,y_n\}$ be a set of differential polynomials. For the differential and radical differential ideals generated by $F$ in ${{\mathbf k}}\{y_1,\dots,y_n\}$, we use the notations $[F]$ and $\{F\}$, respectively. We need the notion of reduction for algorithmic computations. First, we introduce a [*ranking*]{} on the set of derivatives. A ranking [@Kol] is a total order $>$ on the set $\Theta Y$ satisfying the following conditions for all $\theta\in\Theta$ and $u,v\in\Theta Y$: 1. $\theta u {\geqslant}u,$ 2.
$u {\geqslant}v \Longrightarrow \theta u {\geqslant}\theta v.$ Let $u$ be a derivative, that is, $u = \theta y_j$ for a derivative operator $$\theta = \delta_1^{k_1}\delta_2^{k_2}\cdots\delta_m^{k_m} \in \Theta$$ and $1{\leqslant}j {\leqslant}n$. The [*order*]{} of $u$ is defined as $$\operatorname{ord}u=\operatorname{ord}\theta=k_1+\ldots+k_m.$$ If $f$ is a differential polynomial, $f\not\in{{\mathbf k}}$, then $\operatorname{ord}f$ denotes the maximal order of derivatives appearing effectively in $f$. A ranking $>$ is called [*orderly*]{} iff $\operatorname{ord}u > \operatorname{ord}v$ implies $u > v$ for all derivatives $u$ and $v$. A ranking $>_{el}$ is called an [*elimination*]{} ranking iff $y_i >_{el} y_j$ implies $\theta_1y_i >_{el} \theta_2y_j$ for all $\theta_1, \theta_2 \in \Theta$. Let a ranking $<$ be fixed. The derivative $\theta y_j$ of the highest rank appearing in a differential polynomial $f \in {{\mathbf k}}\{y_1,\dots,y_n\} \setminus {{\mathbf k}}$ is called the [*leader*]{} of $f$. We denote the leader by ${\mathop{\rm ld}\nolimits}f$ or ${{\bf u}}_f$. The indeterminate $y_j$ is called the [*leading variable*]{} of $f$ and denoted by ${\mathop{\rm lv}\nolimits}f.$ Represent $f$ as a univariate polynomial in ${{\bf u}}_f$: $$f = {{\bf i}}_f {{\bf u}}_f^d + a_1 {{\bf u}}_f^{d-1} + \ldots + a_d.$$ The monomial ${{\bf u}}_f^d$ is called the [*rank*]{} of $f$ and is denoted by ${\mathop{\rm rk}\nolimits}f.$ Extend the ranking relation on derivatives to ranks: $u_1^{d_1}>u_2^{d_2}$ iff either $u_1>u_2$ or $u_1=u_2$ and $d_1>d_2$. The polynomial ${{\bf i}}_f$ is called the [*initial*]{} of $f$. Apply any $\delta \in \Delta$ to $f$: $$\delta f = \frac{\partial f}{\partial {{\bf u}}_f}\delta {{\bf u}}_f + \delta {{\bf i}}_f {{\bf u}}_f^d + \delta a_1 {{\bf u}}_f^{d-1}+\ldots + \delta a_d.$$ The leader of $\delta f$ is $\delta {{\bf u}}_f$ and the initial of $\delta f$ is called the [*separant*]{} of $f$, denoted ${{\bf s}}_f$.
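The notions of ranking, leader, initial, and separant are easy to experiment with in a computer algebra system. The sketch below is a minimal ordinary-case illustration in Python with sympy; the encoding of the derivative $y_j^{(k)}$ as the symbol `y{j}_{k}` and the key functions for the two rankings are conventions of this sketch, not notation from the paper.

```python
import sympy as sp

def d(j, k):
    """The derivative y_j^(k), encoded as the symbol y{j}_{k}."""
    return sp.Symbol(f"y{j}_{k}")

def orderly_key(j, k):   # orderly ranking: compare orders first
    return (k, j)

def elim_key(j, k):      # elimination ranking with y1 < y2 < ...
    return (j, k)

def leader(f, key):
    """Highest-ranked derivative occurring in f under the given ranking."""
    occ = [(int(s.name[1:].split("_")[0]), int(s.name.split("_")[1]))
           for s in f.free_symbols if s.name.startswith("y")]
    j, k = max(occ, key=lambda jk: key(*jk))
    return d(j, k)

def initial_and_separant(f, key):
    """View f as univariate in its leader u_f; return (i_f, s_f)."""
    u = leader(f, key)
    return sp.Poly(f, u).LC(), sp.diff(f, u)

# Example: f = y1'' * y2 + (y1')^2 under the orderly ranking.
f = d(1, 2) * d(2, 0) + d(1, 1) ** 2
i_f, s_f = initial_and_separant(f, orderly_key)
# The leader is y1''; since f is linear in y1'', both the initial
# and the separant equal y2.
```

Under the elimination ranking `elim_key` the leader of the same $f$ is instead $y_2$ itself, illustrating how the decomposition depends on the chosen ranking.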
If $\theta\in\Theta\setminus\{1\}$, then $\theta f$ is called a [*proper derivative*]{} of $f$. Note that the initial of any proper derivative of $f$ is equal to ${{\bf s}}_f$. We say that a differential polynomial $f$ is [*partially reduced*]{} w.r.t. $g$ iff no proper derivative of ${{\bf u}}_g$ appears in $f$. A differential polynomial $f$ is [*algebraically reduced*]{} w.r.t. $g$ iff $\deg_{{{\bf u}}_g} f<\deg_{{{\bf u}}_g}g$. A differential polynomial $f$ is [*reduced*]{} w.r.t. a differential polynomial $g$ iff $f$ is partially and algebraically reduced w.r.t. $g$. Consider any subset $\mathbb{A} \subset {{\mathbf k}}\{y_1,\ldots,y_n\}\setminus {{\mathbf k}}$. We say that $\mathbb{A}$ is autoreduced (respectively, algebraically autoreduced) iff each element of $\mathbb{A}$ is reduced (respectively, algebraically reduced) w.r.t. all the others. Every autoreduced set is finite [@Kol Chapter I, Section 9] (but an algebraically autoreduced set in a ring of differential polynomials may be infinite). For autoreduced sets we use capital letters $\mathbb{A, B, C,}$ …and notation $\mathbb{A} = A_1,\ldots,A_p$ to specify the list of the elements of $\mathbb{A}$ arranged in order of increasing rank. We denote the sets of initials and separants of elements of $\mathbb{A}$ by ${{\bf i}}_\mathbb{A}$ and ${{\bf s}}_\mathbb{A}$, respectively. Let $H_A={{\bf i}}_\mathbb{A}\cup {{\bf s}}_\mathbb{A}$. Let $S$ be a finite set of differential polynomials. Denote by $S^\infty$ the multiplicative set containing $1$ and generated by $S$. Let $I$ be an ideal in a commutative ring $R$. The [*saturated ideal*]{} $I:S^\infty$ is defined as $\{a \in R\:|\:\exists s \in S^\infty: sa \in I\}$. If $I$ is a differential ideal then $I:S^\infty$ is also a differential ideal (see [@Kol]). Consider two polynomials $f$ and $g$ in ${{\mathbf k}}\{y_1,\ldots, y_n\}$. Let $I$ be the differential ideal generated by $g$. 
Applying a finite number of pseudo-divisions, one can compute a [*differential partial remainder*]{} $f_1$ and a [*differential remainder*]{} $f_2$ of $f$ w.r.t. $g$ such that there exist $s \in S_g^\infty$ and $h \in H_g^\infty$ satisfying $sf \equiv f_1$ and $hf \equiv f_2 \mod I$ with $f_1$ and $f_2$ partially reduced and reduced w.r.t. $g$, respectively (see [@Fac] for definitions and the algorithm for computing remainders). We denote by $\operatorname{\sf d-rem}(f,{{\mathbb A}})$ the differential remainder of a polynomial $f$ w.r.t. an autoreduced set ${{\mathbb A}}$. Let $\mathbb{A} = A_1,\ldots,A_r$ and $\mathbb{B} = B_1,\ldots,B_s$ be (algebraically) autoreduced sets. We say that $\mathbb{A}$ has lower rank than $\mathbb{B}$ if - there exists $k {\leqslant}r, s$ such that $\operatorname{rk}A_i$ = $\operatorname{rk}B_i$ for $1 {\leqslant}i < k,$ and $\operatorname{rk}A_k < \operatorname{rk}B_k$, - or if $r > s$ and $\operatorname{rk}A_i = \operatorname{rk}B_i$ for $1 {\leqslant}i {\leqslant}s$. We say that $\operatorname{rk}\mathbb{A} = \operatorname{rk}\mathbb{B}$ iff $r=s$ and $\operatorname{rk}A_i = \operatorname{rk}B_i$ for $1 {\leqslant}i {\leqslant}r$. The following notion of a characteristic set in [*Kolchin’s sense*]{} in characteristic zero is crucial in our further discussions. It was first introduced by Ritt for prime differential ideals, and then extended by Kolchin to arbitrary differential ideals. [@Kol page 82] An autoreduced subset of the lowest rank in a set $X\subset {{\mathbf k}}\{Y\}$ is called a [*characteristic set*]{} of $X$. We call these sets [*Kolchin characteristic sets*]{} to avoid confusion with other notions, e.g., in [@Fac; @Dif] characteristic sets are used in Kolchin’s sense and in some other senses. A characteristic set in Kolchin’s sense exists for any set $X\subset{{\mathbf k}}\{Y\}$ due to the fact that every family of autoreduced sets contains one of the least rank (see [@Kol]). 
As mentioned in [@Kol Lemma 8, page 82], in the case of $\operatorname{char}{{\mathbf k}} = 0$, a set $\mathbb{A}$ is a characteristic set of a proper differential ideal $I$ iff each element of $I$ reduces to zero w.r.t. $\mathbb{A}$. Moreover, the leaders, and the corresponding degrees in these leaders, of any two characteristic sets of $I$ coincide. [@Fac Definition 2.6] A differential ideal $I$ in ${{\mathbf k}}\{y_1,\ldots,y_n\}$ is said to be [*characterizable*]{} if there exists a characteristic set $\mathbb{A}$ of $I$ in Kolchin’s sense such that $I = [\mathbb{A}]:H_\mathbb{A}^\infty.$ We call any such characteristic set $\mathbb{A}$ a [*characterizing*]{} set of $I$. Characterizable ideals are radical [@Fac Theorem 4.4].

Rosenfeld-Gröbner algorithm for the ordinary case {#sec:rg}
=================================================

A system of ordinary differential equations and inequalities ${{\mathbb A}}=0,H\neq 0$, where ${{\mathbb A}},H\subset{{\mathbf k}}\{Y\}$, is called regular (see [@Bou1]), if ${{\mathbb A}}$ is autoreduced, $H$ is partially reduced w.r.t. ${{\mathbb A}}$, and $H\supseteq H_{{\mathbb A}}$, where $H_{{\mathbb A}}$ is the set of initials and separants of elements of ${{\mathbb A}}$ (in the partial differential case it is also required that the set ${{\mathbb A}}$ be coherent, but in the ordinary case this condition holds for any autoreduced set ${{\mathbb A}}$). For a regular system ${{\mathbb A}},H$, the differential ideal $[{{\mathbb A}}]:H^\infty$ is also called regular. Every regular ideal is radical (see [@Bou1]), and, according to the Rosenfeld Lemma, $f\in [{{\mathbb A}}]:H^\infty$ if and only if the partial remainder of $f$ w.r.t. ${{\mathbb A}}$ belongs to the algebraic ideal $({{\mathbb A}}):H^\infty$.
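Ritt's comparison of autoreduced sets by rank, defined above, can be stated compactly in code. In the sketch below a rank is encoded as a pair `(leader_key, degree)` whose natural tuple order matches the ranking; this encoding is an assumption of the sketch.

```python
def lower_rank(A, B):
    """Ritt's order on autoreduced sets, given as lists of ranks sorted in
    increasing order.  Each rank is a tuple (leader_key, degree), so the
    built-in tuple comparison realizes the rank comparison.
    Returns True iff A has strictly lower rank than B."""
    for a, b in zip(A, B):
        if a != b:
            return a < b          # first differing rank decides
    # If one list is a prefix of the other, the LONGER set has lower rank.
    return len(A) > len(B)

# Ranks encoded as (leader_key, degree); smaller tuple = lower rank.
A = [((0, 1), 1), ((1, 0), 2)]
B = [((0, 1), 1), ((1, 1), 1)]
# A is lower than B: the second ranks differ and (1, 0) < (1, 1).
```

A characteristic set of a finite set $F$ can then be found by enumerating autoreduced subsets of $F$ and taking a minimum under `lower_rank`.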
The Rosenfeld-[Gröbner]{} algorithm proposed in [@Bou1; @Bou2] computes a regular decomposition of a given radical differential ideal $\{F\}$, i.e., a representation $$\{F\}=\bigcap_{i=1}^k[{{\mathbb A}}_i]:H_i^\infty,$$ where $[{{\mathbb A}}_i]:H_i^\infty$ are regular differential ideals. We begin with the following version of the Rosenfeld-Gröbner algorithm. It is very similar to the original algorithm presented in [@Bou1], except for the fact that we are in the ordinary case and need not deal with coherence. We also note that some of the regular systems computed by the version of the algorithm presented here may correspond to unit ideals; this can be checked later on by means of Gröbner basis computations as in [@Bou1] or via polynomial GCD computations modulo regular chains as in [@Bou3]. Finally, we follow the suggestion given in [@Dif Improvements, page 73]: it is recommended to reduce the multiplicative set $H$ of initials and separants. If it turns out that one of them reduces to zero, then the corresponding saturated component contains $1$ and therefore need not be considered. We implement these ideas in Algorithm \[RGconv\]. 
\[RGconv\] [Rosenfeld-Gröbner]{}$(F_0,H_0)$

[Input:]{} finite sets of differential polynomials $F_0,H_0$ and a differential ranking
[Output:]{} a finite set $T$ of regular systems such that $\{F_0\}:H_0^\infty=\bigcap\limits_{({{\mathbb A}},H)\in T}[{{\mathbb A}}]:H^\infty$

$T:=\varnothing$, $\;\;\;U:=\{(F_0,H_0)\}$
[**while**]{} $U\neq\varnothing$ [**do**]{}
    Take and remove any $(F,H)\in U$
    ${{\mathbb C}}:=$ characteristic set of $F$
    $\bar F:=\operatorname{\sf d-rem}(F\setminus{{\mathbb C}},{{\mathbb C}})\setminus\{0\}$
    $\bar H:=\operatorname{\sf d-rem}(H,{{\mathbb C}})\cup H_{{\mathbb C}}$
    [**if**]{} $\bar F\cap{{\mathbf k}}=\varnothing$ [**and**]{} $0\not\in\bar H$ [**then**]{}
        [**if**]{} $\bar F=\varnothing$ [**then**]{} $T:=T\,\cup\,\{({{\mathbb C}},\bar H)\}$
        [**else**]{} $U:=U\cup\{(\bar F\cup{{\mathbb C}},\bar H)\}$
        [**end if**]{}
    [**end if**]{}
    $U:=U \cup\{(F\cup\{h\},H)\;|\;h\in H_{{\mathbb C}},\;h\not\in{{\mathbf k}}\cup H\}$
[**end while**]{}
[**return**]{} $T$

Given a set $F$ of differential polynomials, the Rosenfeld-Gröbner algorithm first computes a characteristic set ${{\mathbb C}}$ of $F$, i.e., an autoreduced subset of $F$ of the least rank. It may happen that ${\mathop{\rm lv}\nolimits}{{\mathbb C}}\subsetneq{\mathop{\rm lv}\nolimits}F$ (for example, take $F=\{x+y,y\}$ w.r.t. a ranking such that $x>y$).
In other words, inclusion $F_1\subset F_2$ does not imply that for the corresponding characteristic sets ${{\mathbb C}}_1$ and ${{\mathbb C}}_2$, we have ${{\mathbb C}}_1\subseteq{{\mathbb C}}_2$. We need the latter property, in order to obtain the bound, so we are going to relax the requirement that ${{\mathbb C}}$ is autoreduced. A subset ${{\mathbb C}}$ of ${{\mathbf k}}\{Y\}\setminus{{\mathbf k}}$ is called a weak d-triangular set [@Dif Definition 3.7], if the set of its leaders ${\mathop{\rm ld}\nolimits}{{\mathbb C}}$ is autoreduced. In the ordinary case, ${{\mathbb C}}$ is a weak d-triangular set if and only if the leading differential indeterminates ${\mathop{\rm lv}\nolimits}f$, $f\in{{\mathbb C}}$, are all distinct. A partially autoreduced weak d-triangular set is called d-triangular [@Dif Definition 3.7]. For a polynomial $f$ and a weak d-triangular set ${{\mathbb C}}$, the pseudo-remainder $\operatorname{\sf d-rem}(f,{{\mathbb C}})$ is defined via [@Dif Algorithm 3.13]. We will replace the reduction of $F$ w.r.t. an autoreduced set in the Rosenfeld-Gröbner algorithm by that w.r.t. a weak d-triangular set. We note that the version of the algorithm presented in [@Dif Section 6] (Algorithms 6.8, 6.10, and 6.11) also computes differential pseudo-remainders w.r.t. weak d-triangular sets. Since the output regular systems must be partially autoreduced, at the very end, partial autoreduction of the weak d-triangular set ${{\mathbb C}}$ via [@Dif Algorithm 6.8] is carried out. Alternatively, one could perform partial autoreduction every time a weak d-triangular set is updated. In the following section, we show how to perform this autoreduction, as well as computation of differential pseudo-remainders, so that the inequality $$M(F\cup H){\leqslant}(n-1)!M(F_0\cup H_0)$$ is preserved (see formula below). 
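In the ordinary case the weak d-triangular condition reduces to a one-line test: the leading differential indeterminates must be pairwise distinct. A minimal sketch (the encoding of a leader as a pair of indeterminate and order is an assumption of the sketch):

```python
def is_weak_d_triangular(leaders):
    """Ordinary case: a set C is weak d-triangular iff the leading
    indeterminates lv(f), f in C, are pairwise distinct.
    Each leader is given as a pair (indeterminate, order)."""
    lv = [j for (j, k) in leaders]
    return len(lv) == len(set(lv))

# Leaders x'' and y' form a weak d-triangular set; x'' and x' do not,
# since both have leading indeterminate x.
ok = is_weak_d_triangular([("x", 2), ("y", 1)])
bad = is_weak_d_triangular([("x", 2), ("x", 1)])
```

Note that this is strictly weaker than autoreducedness: $x''$ and $x'$ fail the test, while $x''$ and $y'+x'$ pass it even though the second polynomial is not partially reduced w.r.t. the first.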
Modified Rosenfeld-[Gröbner]{} algorithm {#sec:modifiedRG}
========================================

For a set of differential polynomials $F$, let $m_i(F)$ be the maximal order of the differential indeterminate $y_i\in Y$ occurring in $F$. If $y_i$ does not occur in $F$, we set $m_i(F)=0$. Let $$\begin{aligned} \label{BoundDefinition} M(F)=\sum_{i=1}^n m_i(F).\end{aligned}$$ We propose a modification of the Rosenfeld-[Gröbner]{} algorithm (see Algorithm \[RGBound\] below), in which for every intermediate system $(F,{{\mathbb C}},H)\in U$, the bound $$\begin{aligned} \label{formula:bound} M(F\cup{{\mathbb C}}\cup H){\leqslant}(n-1)!M(F_0\cup H_0)\end{aligned}$$ holds, where $F_0=0,H_0\neq 0$ is the input system of equations and inequalities corresponding to the radical differential ideal $\{F_0\}:H_0^\infty$. The bound  contains the factor $(n-1)!$, which is trivial when the number of variables is $1$ or $2$. For $n=2$, Ritt proved the Jacobi bound for $|F_0| = 2$ and empty $H_0$ by direct computation, and his result does not contain such a factor either. To build intuition for the case $n = 3$, consider a particular example. Let $F_0 = x+y+z,$ $x'$ with the elimination ranking $x>y>z.$ Then $m_x = 1,$ $m_y = m_z = 0$ and $$M(F_0) = 1+0+0 = 1.$$ In order to find a characteristic set of the prime differential ideal $[F_0]$, we reduce $x'$ w.r.t. $x+y+z$ and get $y'+z'.$ The output consists of two polynomials: $${{\mathbb C}}= y'+z', x+y+z.$$ We have $m_x({{\mathbb C}}) = 0,$ $m_y({{\mathbb C}}) = 1,$ and $m_z({{\mathbb C}}) = 1.$ Hence, $$M({{\mathbb C}}) = 0+1+1 = 2 > 1.$$ But $(n-1)! = (3-1)! = 2! = 2$ and $2 {\leqslant}2\cdot 1.$

Final algorithm and proof of the bound {#sec:finalalgorithm}
--------------------------------------

We are ready to present a modified version of the Rosenfeld-Gröbner algorithm that satisfies the bound. The only place where the orders of derivatives may grow is the pseudoreduction w.r.t.
an autoreduced set ${{\mathbb C}}$. Of course, only the orders of non-leading differential indeterminates may grow, while the orders of the leading ones decrease as a result of reduction (or stay the same if the reduction turns out to be algebraic, but then the orders of non-leading indeterminates do not grow either). By associating different weights with leading and non-leading indeterminates, we will ensure that the weighted sum of their orders does not increase as a result of reduction. These weights come from the bound in the algorithm [Differentiate&Autoreduce]{}. If the set of leading indeterminates changes, so do the weights. However, if we estimate in advance the number of times the set of leading indeterminates can change throughout the algorithm, we can still obtain an overall bound on the orders. For the original Rosenfeld-Gröbner algorithm, it is not that easy to carry out such an estimate, because some indeterminates may disappear and later reappear among the leading indeterminates of the characteristic set ${{\mathbb C}}$. For example, let $F = \{y+z,$ $x,$ $x^2+z\},$ with the elimination ranking $x>y>z.$ - We choose its characteristic set as ${{\mathbb C}}:= \{y+z,$ $x\}.$ - The leading variables of ${{\mathbb C}}$ are $\{y,x\}.$ - We put $\bar F := \operatorname{\sf d-rem}(F\setminus {{\mathbb C}}, {{\mathbb C}}) = \{z\}.$ - $F_\mathrm{new} := \bar F \cup {{\mathbb C}}= \{z,$ $y+z,$ $x\}.$ - As radical differential ideals: $$\left\{y+z,x,x^2+z\right\} = [z,y+z,x]:1^\infty\cap\left\{y+z,x,x^2+z,1\right\}.$$ - The new ${{\mathbb C}}=\{z,$ $x\}$ is computed from $F_\mathrm{new}$ and the leading variables have changed! - … - Finally, $$\left\{y+z,x,x^2+z\right\} = [z,y,x]:1^\infty = [z,y,x]$$ and we see that the leaders $y$ and $x$ have come back. 
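The quantity $M$ that the bound controls is easy to compute mechanically. Here is a Python sketch, checked against the $n=3$ worked example above; the representation of a differential polynomial by the dict of its maximal orders is an assumption made for illustration:

```python
from math import factorial

def M(polys, indets):
    """M(F) = sum over y in Y of m_y(F), the maximal order of y in F
    (zero if y does not occur).  Each polynomial is modelled as a
    dict {indeterminate: maximal order of that indeterminate in it}."""
    return sum(max((p.get(y, 0) for p in polys), default=0) for y in indets)

Y = ["x", "y", "z"]
F0 = [{"x": 0, "y": 0, "z": 0},  # x + y + z
      {"x": 1}]                  # x'
C  = [{"y": 1, "z": 1},          # y' + z'
      {"x": 0, "y": 0, "z": 0}]  # x + y + z

n = len(Y)
assert M(F0, Y) == 1 and M(C, Y) == 2
assert M(C, Y) <= factorial(n - 1) * M(F0, Y)   # 2 <= 2 * 1
```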
Consider another example. Let $F = \{zy,$ $x,$ $x^2+z\},$ with the elimination ranking $x>y>z.$ - We choose its characteristic set as ${{\mathbb C}}:= \{zy,$ $x\}.$ - The leading variables of ${{\mathbb C}}$ are $\{y,x\}.$ - We put $\bar F := \operatorname{\sf d-rem}(F\setminus {{\mathbb C}}, {{\mathbb C}}) = \{z\}.$ - $F_\mathrm{new} := \bar F \cup {{\mathbb C}}= \{z,$ $zy,$ $x\}.$ - As radical differential ideals: $$\left\{zy,x,x^2+z\right\} = [z,zy,x]:z^\infty\cap\left\{zy,x,x^2+z,z\right\}.$$ - The new ${{\mathbb C}}=\{z,$ $x\}$ is computed from $F_\mathrm{new}$ and the leading variables have also changed! - But the first component is trivial: $1 \in [z,zy,x]:z^\infty.$ The first situation can be remedied by properly relaxing the requirement that ${{\mathbb C}}$ is autoreduced, while the second one can be detected, after which further computations in this branch of the Rosenfeld-Gröbner algorithm are not necessary. As a result, we obtain an algorithm in which, as long as an indeterminate appears among the leading indeterminates of the set ${{\mathbb C}}$ w.r.t. which we reduce, it stays there until the end. As mentioned above, we are going to replace the computation of the characteristic set by that of a weak d-triangular subset. It is tempting to simply compute a weak d-triangular subset of the least rank, since this computation is inexpensive and it would give us the desired property that the leading indeterminates do not disappear. However, termination of the algorithm is then not guaranteed. For example, take the system $F=\{x,xy\}$ in ${{\mathbf k}}\{x,y\}$, and let $x<y$. The weak d-triangular subset of $F$ of the least rank is $F$ itself. Thus, we obtain a component $\{x,xy\}:x^\infty=(1)$ and another component $\{x,xy,{{\bf i}}_{xy}\}$. However, ${{\bf i}}_{xy}=x$, hence we arrive at the same set $F$ that was given in the input, and the algorithm runs forever. 
The reason for the above behavior is that the initials of a weak d-triangular set ${{\mathbb C}}$, as opposed to an autoreduced set, need not be reduced w.r.t. ${{\mathbb C}}$. Thus by adding these initials we do not necessarily decrease the rank. The solution comes from the idea of [@Bou2 Section 5], [@Dif Algorithm 6.11], and [@Imp Algorithm 4.1] to construct the weak d-triangular set ${{\mathbb C}}$ gradually, so that each next polynomial $f$ to be added to ${{\mathbb C}}$ is reduced w.r.t. ${{\mathbb C}}$ (thus, we can also safely add the initial and separant of $f$ and guarantee that the rank decreases). In order to be able to construct the set ${{\mathbb C}}$ gradually, similarly to [@Dif], we store it as a separate component of the triples $(F,{{\mathbb C}},H)\in U$. The last modification that we are going to do is the replacement of the differential pseudo-reduction w.r.t. ${{\mathbb C}}$ by the algebraic pseudo-reduction w.r.t. $\B$, which is computed from ${{\mathbb C}}$ by Algorithm [Differentiate&Autoreduce]{}. As a result, we obtain Algorithm [RGBound]{}. 
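For a single reducer, the purely algebraic pseudo-reduction that replaces the differential one is the classical pseudo-remainder, which sympy exposes as `prem`. A toy check of the defining identity $\mathrm{lc}(g)^{\deg f-\deg g+1}\, f = q\,g + r$ (the polynomials below are ours, chosen so that the initial $y$ is not invertible):

```python
from sympy import symbols, prem, expand

x, y = symbols("x y")
f = x**2 + y
g = y*x + 1          # initial (leading coefficient in x) is y

r = prem(f, g, x)    # pseudo-remainder of f by g w.r.t. x
assert r == y**3 + 1

# Defining identity: lc(g)^(deg f - deg g + 1) * f = q*g + r
q = y*x - 1
assert expand(y**2 * f - q*g) == r
```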
[RGBound]{}\[RGBound\]$(F_0,H_0)$\ [l]{} [Input:]{} finite sets of differential polynomials $F_0\neq\varnothing$ and $H_0$,\ and a differential ranking\ [Output:]{} a finite set $T$ of regular systems such that\ $\qquad\qquad\{F_0\}:H_0^\infty=\bigcap\limits_{({{\mathbb A}},H)\in T}[{{\mathbb A}}]:H^\infty\;$ and\ $\qquad M({{\mathbb A}}\cup H){\leqslant}(n-1)!M(F_0\cup H_0)$ for $({{\mathbb A}},H)\in T.$\ \ [   ]{}$T:=\varnothing$, $\;\;\;U:=\{(F_0,\varnothing,H_0)\}$\ [   ]{}[**while**]{} $U\neq\varnothing$ [**do**]{}\ [   ]{}[   ]{}Take and remove any $(F,{{\mathbb C}},H)\in U$\ [   ]{}[   ]{}$f:=$ an element of $F$ of the least rank\ [   ]{}[   ]{}$D:=\{C\in{{\mathbb C}}\;|\;{\mathop{\rm lv}\nolimits}C={\mathop{\rm lv}\nolimits}f\}$\ [   ]{}[   ]{}$G:=F\cup D\setminus\{f\}$\ [   ]{}[   ]{}$\bar{{\mathbb C}}:={{\mathbb C}}\setminus D\cup\{f\}$\ [   ]{}[   ]{}$\B:=$[Differentiate&Autoreduce]{} $\left(\bar{{\mathbb C}}, \left\{m_y(G\cup\bar{{\mathbb C}}\cup H)\;|\;y\in{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}\right\}\right)$\ [   ]{}[   ]{}[**if**]{} $\B\neq\{1\}$ [**then**]{}\ [   ]{}[   ]{}[   ]{}$\bar F:={\mathop{\sf algrem}\nolimits}(G,\B)\setminus\{0\}$\ [   ]{}[   ]{}[   ]{}$\bar H:={\mathop{\sf algrem}\nolimits}(H,\B)\cup H_\B$\ [   ]{}[   ]{}[   ]{}[**if**]{} $\bar F\cap{{\mathbf k}}=\varnothing$ [**and**]{} $0\not\in\bar H$ [**then**]{}\ [   ]{}[   ]{}[   ]{}[   ]{}[**if**]{} $\bar F=\varnothing$ [**then**]{} $T:=T\,\cup\,\left\{\left(\B^0,\bar H\right)\right\}$\ [   ]{}[   ]{}[   ]{}[   ]{}[   ]{}[**else**]{} $U:=U\cup\left\{\left(\bar F,\B^0,\bar H\right)\right\}$\ [   ]{}[   ]{}[   ]{}[   ]{}[**end if**]{}\ [   ]{}[   ]{}[   ]{}[**end if**]{}\ [   ]{}[   ]{}[**end if**]{}\ [   ]{}[   ]{}[**if**]{} ${{\bf s}}_f\not\in{{\mathbf k}}$ [**then**]{}\ [   ]{}[   ]{}[   ]{}$U:=U \cup\{(F\cup\{{{\bf s}}_f\},{{\mathbb C}},H)\}$\ [   ]{}[   ]{}[   ]{}[**if**]{} ${{\bf i}}_f\not\in{{\mathbf k}}$ [**then**]{} $U:=U\cup\{(F\cup\{{{\bf i}}_f\},{{\mathbb C}},H)\}$ 
[**end if**]{}\ [   ]{}[   ]{}[**end if**]{}\ [   ]{}[**end while**]{}\ [   ]{}[**return**]{} $T$ In the proof of the bound, a key role is played by the quantity $M_Z(F)$, which is defined for a finite set $F$ of differential polynomials and a proper subset $Z\subsetneq Y$. Assume that $|Z|=k<n$. As before, for a differential indeterminate $y\in Y$, $m_y(F)$ denotes the highest order of a derivative of $y$ occurring in $F$, or zero, if $y$ does not occur in $F$. Then $$M_Z(F):=(n-k)\sum_{y\in Z} m_y(F)+\sum_{y\in Y\setminus Z} m_y(F).$$ We also recall the notation $$M(F)=\sum_{y\in Y} m_y(F).$$ \[p:finalalgorithm\] Algorithm \[RGBound\] is correct and terminates. We prove the following invariants of the [**while**]{}-loop: - (I1) $\{F_0\}:H_0^\infty=\bigcap_{(F,{{\mathbb C}},H)\in U}\{F\cup{{\mathbb C}}\}:H^\infty\cap\bigcap_{({{\mathbb A}},H)\in T}[{{\mathbb A}}]:H^\infty$ - For all $(F,{{\mathbb C}},H)\in U$, - (I2) ${{\mathbb C}}$ is d-triangular, - (I3) $F\neq\varnothing$ is reduced w.r.t. ${{\mathbb C}}$, - (I4) $H_{{\mathbb C}}\subset H$, - (I5) Let $l=|{\mathop{\rm lv}\nolimits}{{\mathbb C}}|$. Then, if $l<n$, $$M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(F\cup{{\mathbb C}}\cup H){\leqslant}(n-1)\ldots(n-l)\cdot M(F_0\cup H_0),$$ otherwise $$M(F\cup{{\mathbb C}}\cup H){\leqslant}(n-1)!\cdot M(F_0\cup H_0).$$ The invariants hold for the initial triple $(F_0,\varnothing,H_0)$. Assuming that they hold at the beginning of an iteration of the [**while**]{} loop, we will show that the invariants also hold at the end of the iteration. Let $(F,{{\mathbb C}},H)$ be the triple taken and removed from $U$. Since $F\neq\varnothing$, we can compute an element $f\in F$ of the least rank. Then $f$, as an element of $F$, is reduced w.r.t. ${{\mathbb C}}$. 
Applying [@Dif Proposition 6.6], we have $$\{F\cup{{\mathbb C}}\}:H^\infty=\{F\cup{{\mathbb C}}\}:(H\cup H_f)^\infty\cap \{F\cup\{{{\bf i}}_f\}\cup{{\mathbb C}}\}:H^\infty\cap \{F\cup\{{{\bf s}}_f\}\cup{{\mathbb C}}\}:H^\infty.$$ We note that, since ${\mathop{\rm rk}\nolimits}{{\bf i}}_f<{\mathop{\rm rk}\nolimits}f$ and ${\mathop{\rm rk}\nolimits}{{\bf s}}_f<{\mathop{\rm rk}\nolimits}f$, polynomials ${{\bf i}}_f$ and ${{\bf s}}_f$ are, respectively, the elements of $F\cup\{{{\bf i}}_f\}$ and $F\cup\{{{\bf s}}_f\}$ of the least rank (and, to repeat, their ranks are less than the rank of the least element of $F$). Moreover, since in the last two triples $(F\cup\{{{\bf i}}_f\},{{\mathbb C}},H), (F\cup\{{{\bf s}}_f\},{{\mathbb C}},H)$ only the first component has changed, invariants I2–I5 are preserved for them. For the proof of invariant I1, it remains to show that $$\label{toshow} \{F\cup{{\mathbb C}}\}:(H\cup H_f)^\infty=\left\{\begin{array}{ll} \left[\B^0\right]:\bar H^\infty,& \bar F=\varnothing\\ \left\{\bar F\cup\B^0\right\}:\bar H^\infty,&{\rm otherwise}. \end{array}\right.$$ Given that ${{\mathbb C}}$ is d-triangular, the three assignments following the computation of $f$ ensure that $\bar{{\mathbb C}}$ is a weak d-triangular set of rank strictly less than that of ${{\mathbb C}}$, because the polynomial $f$ is reduced w.r.t. ${{\mathbb C}}$ and we remove from ${{\mathbb C}}$ all elements whose leading variable coincides with that of $f$. We note that $$G\cup\bar{{\mathbb C}}= (F\cup D\setminus\{f\})\cup({{\mathbb C}}\setminus D)\cup\{f\} = F\cup{{\mathbb C}}.$$ Since $H_{{\mathbb C}}\subset H$, we also have $H\cup H_f=H\cup H_{\bar{{\mathbb C}}}$. Therefore, $$\label{step1} \{F\cup{{\mathbb C}}\}:(H\cup H_f)^\infty=\left\{G\cup\bar{{\mathbb C}}\right\}:\left(H\cup H_{\bar{{\mathbb C}}}\right)^\infty.$$ Next, we use the properties of the set $\B$ ensured by Algorithm [Differentiate&Autoreduce]{}. 
Since $H_\B\subset H_{\bar{{\mathbb C}}}^\infty+\left[\bar{{\mathbb C}}\right]$, applying Lemma \[l:Hubert\] with $K=H_\B$, we obtain $$\label{step2} \left\{G\cup\bar{{\mathbb C}}\right\}:\left(H\cup H_{\bar{{\mathbb C}}}\right)^\infty= \left\{G\cup\bar{{\mathbb C}}\right\}:\left(H\cup H_{\bar{{\mathbb C}}}\cup H_{\B}\right)^\infty.$$ The inclusions $\B\subset\left[\bar{{\mathbb C}}\right]$ and $\bar{{\mathbb C}}\subset[\B]:H_\B^\infty$ imply that $$\label{step3} \left\{G\cup\bar{{\mathbb C}}\right\}:\left(H\cup H_{\bar{{\mathbb C}}}\cup H_{\B}\right)^\infty= \{G\cup\B\}:\left(H\cup H_{\bar{{\mathbb C}}}\cup H_{\B}\right)^\infty.$$ Using the fact that $H_{\bar{{\mathbb C}}}\subset \left(H_\B^\infty+[\B]\right):H_\B^\infty$ (see Algorithm \[DifferentiateAutoreduce\]) and applying Lemma \[l:Hubert\] with $K=H_{\bar{{\mathbb C}}}$, we get $$\label{step4} \{G\cup\B\}:\left(H\cup H_{\bar{{\mathbb C}}}\cup H_{\B}\right)^\infty= \{G\cup\B\}:(H\cup H_{\B})^\infty.$$ It follows from the definition of the algebraic pseudo-remainder ([algrem]{}) that $$\label{step5} \{G\cup\B\}:(H\cup H_{\B})^\infty= \{\bar F\cup\B\}:\bar H^\infty.$$ Indeed, $\{G\cup\B\}:(H\cup H_\B)^\infty = \{\bar F\cup\B\}:(H\cup H_\B)^\infty.$ Take now any $f\in \left\{\bar F\cup\B\right\}:(H\cup H_\B)^\infty$. There exists $h \in (H\cup H_\B)^\infty$ such that $h\cdot f \in \left\{\bar F\cup\B\right\}.$ If $\bar h$ is a remainder of $h$ w.r.t. $\B$ then there exists $h' \in H_\B^\infty$ with $h'h - \bar h \in (\B).$ Hence, $$\bar h f \in \left\{\bar F\cup\B\right\}$$ and $$f \in \left\{\bar F\cup\B\right\}:\bar H^\infty.$$ The reverse inclusion is done in a similar way. Since $\B\subset\left[\B^0\right]$, we obtain that $\left\{\bar F\cup\B\right\}:\bar H^\infty = \left\{\bar F\cup\B^0\right\}:\bar H^\infty$. The set $\B^0$ is d-triangular, its rank is equal to that of $\bar{{\mathbb C}}$, set $\bar H$ is partially reduced w.r.t. $\B^0$ and contains $H_\B^0$, and $\bar F$ is reduced w.r.t. $\B^0$. 
Moreover, if $\bar F=\varnothing$, we obtain the regular system $\left(\B^0,\bar H\right)$, which corresponds to the radical differential ideal $\left[\B^0\right]:\bar H^\infty=\left\{\bar F\cup\B^0\right\}:\bar H^\infty.$ Thus, we have proved the decomposition above and demonstrated that invariants I2–I4 hold for the triple $\left(\bar F,\B^0,\bar H\right)$. Termination of the algorithm is proved as follows. At each iteration of the [**while**]{}-loop, the triple $(F,{{\mathbb C}},H)\in U$ is replaced by at most three triples $\left(\bar F,\B^0,\bar H\right)$, $(F\cup\{{{\bf i}}_f\},{{\mathbb C}},H)$, and $(F\cup\{{{\bf s}}_f\},{{\mathbb C}},H)$. Define a relation $\prec$ on the set of all triples $(F,{{\mathbb C}},H)$ satisfying I2–I4: let $(F',{{\mathbb C}}',H')\prec (F,{{\mathbb C}},H)$ if and only if either ${\mathop{\rm rk}\nolimits}{{\mathbb C}}'<{\mathop{\rm rk}\nolimits}{{\mathbb C}}$, or ${{\mathbb C}}'={{\mathbb C}}$ and the element of the least rank in $F'$ is strictly less than that in $F$. Then $\prec$ is a lexicographic product of two well-orders, which is a well-order. We have shown that in the first triple we have ${\mathop{\rm rk}\nolimits}\B^0<{\mathop{\rm rk}\nolimits}{{\mathbb C}}$; in the last two triples the second component ${{\mathbb C}}$ remains the same, but the elements of the least rank of $F\cup\{{{\bf i}}_f\}$ and $F\cup\{{{\bf s}}_f\}$ are strictly less than the element of $F$ of the least rank. That is, each of the three triples is less than $(F,{{\mathbb C}},H)$ w.r.t. the well-order $\prec$. Therefore, all triples computed by the algorithm can be arranged in a ternary tree, in which $(F_0,\varnothing,H_0)$ is the root, and every path starting from the root is finite. Let $\lambda$ be the maximal length of such a path. Then the number of vertices in the tree does not exceed $\sum_{i=0}^\lambda 3^i$. Thus, the tree is finite, whence the algorithm terminates. Finally, we show that invariant I5 holds for the triple $\left(\bar F,\B^0,\bar H\right)$. 
We assume that $|{\mathop{\rm lv}\nolimits}{{\mathbb C}}|=l$. Two cases are possible: 1. ${\mathop{\rm lv}\nolimits}f\in{\mathop{\rm lv}\nolimits}{{\mathbb C}}$. Then ${\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}={\mathop{\rm lv}\nolimits}{{\mathbb C}}$ and for any finite set of polynomials $K$, if $l<n$, we have $$\begin{aligned} \label{trans0} M_{{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}(K)=M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(K). \end{aligned}$$ 2. ${\mathop{\rm lv}\nolimits}f\not\in{\mathop{\rm lv}\nolimits}{{\mathbb C}}$. Then ${\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}={\mathop{\rm lv}\nolimits}{{\mathbb C}}\cup\{{\mathop{\rm lv}\nolimits}f\}$ and $|{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}|=l+1$. If $l+1<n$, we observe that $$\label{trans1} \begin{array}{l} M_{{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}(K)=\\ \;\; =(n-l-1)\sum\limits_{y\in{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}m_y(K)+\sum\limits_{y\not\in{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}m_y(K)=\\ \;\; =(n-l-1)\sum\limits_{y\in{\mathop{\rm lv}\nolimits}{{\mathbb C}}}m_y(K)+(n-l-1)\cdot m_{{\mathop{\rm lv}\nolimits}f}(K)+ \sum\limits_{y\not\in{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}m_y(K)=\\ \;\; =(n-l-1)\sum\limits_{y\in{\mathop{\rm lv}\nolimits}{{\mathbb C}}}m_y(K)+(n-l-2)\cdot m_{{\mathop{\rm lv}\nolimits}f}(K)+ \\ \quad\quad +\left(m_{{\mathop{\rm lv}\nolimits}f}(K) + \sum\limits_{y\not\in{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}m_y(K)\right)=\\ \;\; =(n-l-1)\sum\limits_{y\in{\mathop{\rm lv}\nolimits}{{\mathbb C}}}m_y(K)+\sum\limits_{y\not\in{\mathop{\rm lv}\nolimits}{{\mathbb C}}}m_y(K)+ (n-l-2)\cdot m_{{\mathop{\rm lv}\nolimits}f}(K){\leqslant}\\ \;\; {\leqslant}(n-l)\sum\limits_{y\in{\mathop{\rm lv}\nolimits}{{\mathbb C}}}m_y(K)+\sum\limits_{y\not\in{\mathop{\rm lv}\nolimits}{{\mathbb C}}}m_y(K)+ (n-l-2)\cdot M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(K)=\\ \;\; =(n-l-1)\cdot M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(K) \end{array}$$ (here we have used the fact 
that $m_{{\mathop{\rm lv}\nolimits}f}(K){\leqslant}M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(K)$). If $|{\mathop{\rm lv}\nolimits}{{\mathbb C}}|<n$ and $|{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}|=n$, we simply note that $$\label{trans2} M(K){\leqslant}M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(K).$$ Assume for simplicity that $${\mathop{\rm ld}\nolimits}\bar{{\mathbb C}}=\left\{y_1^{(d_1)},\ldots,y_k^{(d_k)}\right\},$$ where $k=l$ or $k=l+1$. Since all derivatives of $y_i,$ $1 {\leqslant}i {\leqslant}k,$ present in $F\cup\B\cup H$ of order greater than $d_i$ can be found among ${\mathop{\rm rk}\nolimits}\B$, and since the elements of $\bar F$ and $\bar H\setminus H_\B$ are algebraic pseudo-remainders of $G$ and $H$ w.r.t. $\B$, we have $$\label{ineqR} m_i\left(\bar F\cup\B^0\cup\bar H\right) {\leqslant}\left\{ \begin{array}{ll} d_i,& 1{\leqslant}i{\leqslant}k\\ m_i(G\cup\B\cup H),& k<i{\leqslant}n. \end{array} \right.$$ Also, recall that $\B$ satisfies the inequality (see Algorithm \[DifferentiateAutoreduce\]) $$\label{ineqB} m_i(\B){\leqslant}m_i\left(G\cup\bar{{\mathbb C}}\cup H\right)+\sum\limits_{j=1}^k (m_j\left(G\cup\bar{{\mathbb C}}\cup H\right)-d_j),\ k<i{\leqslant}n.$$ Combining (\[ineqR\]) and (\[ineqB\]), we obtain that $$\begin{aligned} M_{{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}\left(\bar F\cup\B^0\cup\bar H\right) &= (n-k)\sum_{i=1}^kd_i + \sum_{i=k+1}^n m_i\left(\bar F\cup\B^0\cup\bar H\right) {\leqslant}\\ &{\leqslant}(n-k)\sum_{i=1}^kd_i + \sum_{i=k+1}^n m_i(G\cup\B\cup H) {\leqslant}\\ &{\leqslant}(n-k)\sum_{i=1}^kd_i + \sum_{i=k+1}^n m_i\left(G\cup\bar{{\mathbb C}}\cup H\right) +\\ &\quad\ + (n-k)\sum_{j=1}^k(m_j\left(G\cup\bar{{\mathbb C}}\cup H\right)-d_j)=\\ &= (n-k)\sum_{i=1}^km_i\left(G\cup\bar{{\mathbb C}}\cup H\right) + \sum_{i=k+1}^n m_i\left(G\cup\bar{{\mathbb C}}\cup H\right) =\\ &= M_{{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}\left(G\cup\bar{{\mathbb C}}\cup H\right)\end{aligned}$$ and if $k = n$ then $$M\left(\bar F\cup\B^0\cup\bar H\right) = \sum_{i=1}^n d_i 
+ 0 = M\left(G\cup\bar{{\mathbb C}}\cup H\right)$$ because ${\mathop{\rm rk}\nolimits}\bar{{\mathbb C}}= {\mathop{\rm rk}\nolimits}\B^0.$ Thus, $$\label{ineqtrans1} \begin{array}{ll} M_{{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}\left(\bar F\cup\B^0\cup\bar H\right){\leqslant}M_{{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}\left(G\cup\bar{{\mathbb C}}\cup H\right),&k<n\\ M\left(\bar F\cup\B^0\cup\bar H\right){\leqslant}M\left(G\cup\bar{{\mathbb C}}\cup H\right),&k=n. \end{array}$$ Now, applying the appropriate one of the three relations established above (according to the case) with $K=G\cup\bar{{\mathbb C}}\cup H=F\cup{{\mathbb C}}\cup H$, we get $$\label{ineqtrans2} \begin{array}{rlr} M_{{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}\left(\bar F\cup\B^0\cup\bar H\right)&{\leqslant}M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(F\cup{{\mathbb C}}\cup H),&l=k<n\\ M_{{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}\left(\bar F\cup\B^0\cup\bar H\right)&{\leqslant}M_{{\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}}\left(F\cup{{\mathbb C}}\cup H\right){\leqslant}\\ & {\leqslant}(n-l-1)\cdot M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(F\cup{{\mathbb C}}\cup H),& l<k<n\\ M\left(\bar F\cup\B^0\cup\bar H\right)&{\leqslant}M\left(G\cup\bar{{\mathbb C}}\cup H\right)=\\ &= M(F\cup{{\mathbb C}}\cup H){\leqslant}\\ & {\leqslant}M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(F\cup{{\mathbb C}}\cup H),& l<k=n\\ M\left(\bar F\cup\B^0\cup\bar H\right)&{\leqslant}M\left(G\cup\bar{{\mathbb C}}\cup H\right)=\\ &= M(F\cup{{\mathbb C}}\cup H),&l=k=n. \end{array}$$ By taking into account the fact that invariant I5 holds for the triple $(F,{{\mathbb C}},H)$, we thus obtain this invariant for the triple $\left(\bar F,\B^0,\bar H\right)$. 
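The bookkeeping above rests on two elementary monotonicity facts about the weighted sums: enlarging the leader set from ${\mathop{\rm lv}\nolimits}{{\mathbb C}}$ to ${\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}$ costs at most a factor of $n-l-1$ (when $l+1<n$), and $M(K){\leqslant}M_{{\mathop{\rm lv}\nolimits}{{\mathbb C}}}(K)$. Both can be sanity-checked by brute force; a Python sketch over random order vectors (the encoding of $K$ by the vector of its maximal orders is ours):

```python
import random

def M_Z(m, Z):
    """Weighted order sum: indeterminates in Z get weight n - |Z|,
    the others weight 1; m maps each indeterminate to its maximal order."""
    n, k = len(m), len(Z)
    return sum((n - k) * v if y in Z else v for y, v in m.items())

random.seed(0)
for _ in range(1000):
    n = random.randint(3, 7)
    m = {i: random.randint(0, 5) for i in range(n)}
    l = random.randint(1, n - 2)                   # ensures l + 1 < n
    lvC = set(random.sample(range(n), l))
    f = random.choice([i for i in range(n) if i not in lvC])
    # enlarging the leader set costs at most a factor of n - l - 1
    assert M_Z(m, lvC | {f}) <= (n - l - 1) * M_Z(m, lvC)
    # the plain sum M(K) never exceeds the weighted sum
    assert sum(m.values()) <= M_Z(m, lvC)
```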
To conclude the proof of the bound for the output regular systems $\left(\B^0,\bar H\right)$, we note that it is already given by the invariant I5 when $k=n$, while in case $k<n$ we use the inequality $M(K){\leqslant}M_Z(K)$: $$M\left(\B^0\cup\bar H\right) {\leqslant}M_{{\mathop{\rm lv}\nolimits}\B^0}\left(\B^0\cup\bar H\right){\leqslant}(n-1)!\cdot M(F_0\cup H_0).$$ Reduction-independent algorithm ------------------------------- In Algorithm \[RGBound\] we had to be very careful in the reduction process. The idea was to emulate differential reductions by doing enough differentiations first and then applying purely algebraic reduction. We take care of the orders of derivatives in the first process and do not need to worry about them during the second, purely algebraic step. Let us find out why such a two-step procedure was necessary. If we reduce w.r.t. an arbitrary d-triangular set, the result of reduction depends on the choice of the reduction path. Consider the following differential chain $${{\mathbb C}}= x(x-1),\ (x-1)y,\ z+y+tx$$ with the elimination ranking $t<x<y<z$ and the differential polynomial $$f = z'+y'.$$ We can reduce $f$ w.r.t. ${{\mathbb C}}$ in many different ways, and the remainders are very different: 1. $$\begin{CD} z'+y' @>{z'+y'+t'x+tx'}>> t'x+tx' @>{(2x-1)x'}>>t'(2x-1)x = 2t'x^2-t'x@>{}>> \\ @>{x^2-x}>>t'x =: f_1\\ \end{CD}$$ 2. $$\begin{CD} z'+y' @>{(x-1)y' + x'y}>> (x-1)z' - x'y @>{(x-1)y}>> (x-1)^2z'@>{}>> \\ @>{z'+y'+t'x+tx'}>>(x-1)^2(y'+t'x+tx')@>{}>> 0 =: f_2.\\ \end{CD}$$ We see that the remainder $f_1$ depends on the derivative $t'$, which occurs neither in $f_2$ nor in ${{\mathbb C}}.$ The reason for these widely differing answers is that the set ${{\mathbb C}}$ has a non-invertible initial. Speaking informally, if ${{\mathbb C}}$ is partially autoreduced and its initials and separants are invertible, then the result of reduction is more or less uniquely determined. More precisely, one can show that all results of reduction of a polynomial w.r.t. 
a d-triangular set with invertible initials and separants lie in a fixed Noetherian ring of algebraic polynomials. In particular, if one of the results of reduction satisfies a certain bound on the order of its derivatives, then any other result of reduction will satisfy this bound as well. Since we are not in a position to reduce w.r.t. a set with invertible initials and separants, we are going to state precisely and prove a slightly weaker statement. Within the scope of this section, let us call a polynomial $g$ a [*differential remainder*]{} of polynomial $f$ w.r.t. ${{\mathbb C}}$, if $g$ is reduced w.r.t. ${{\mathbb C}}$ and there exists $h\in H_{{\mathbb C}}^\infty$ such that $$hf-g\in[{{\mathbb C}}]:H_{{\mathbb C}}^\infty.$$ \[p:SmallRing\] Let ${{\mathbb C}}$ be a coherent[^1] d-triangular set of differential polynomials, $f$ a differential polynomial, and $g$ a differential remainder of $f$ w.r.t. ${{\mathbb C}}$. Let $X$ be the set of derivatives present in ${{\mathbb C}}$ and $g$. Let $\bar g$ be another differential remainder of $f$ w.r.t. ${{\mathbb C}}$. Assume that $\bar g$ is not in ${{\mathbf k}}[X]$, i.e., it admits a representation $\bar g = a_ku^k + \ldots + a_0$, where $u\not\in X$ and $a_0,\ldots,a_k$ are free of $u$. Then $$\bar g - a_0 \in ({{\mathbb C}}):H_{{\mathbb C}}^\infty.$$ In particular, $a_0$ is also a differential remainder of $f$ w.r.t. ${{\mathbb C}}$. Since $g$ and $\bar g$ are differential remainders of $f$ w.r.t. ${{\mathbb C}}$, they are both reduced w.r.t. ${{\mathbb C}}$, and there exist $h,\bar h\in H_{{\mathbb C}}^\infty$ such that $$h f - g \in [{{\mathbb C}}]:H_{{\mathbb C}}^\infty,\quad\bar h f - \bar g \in [{{\mathbb C}}]:H_{{\mathbb C}}^\infty.$$ Consider the differential polynomial $$\bar f := \bar h(hf-g) - h(\bar h f -\bar g) = h\bar g - \bar h g\in [{{\mathbb C}}]:H_{{\mathbb C}}^\infty.$$ Since ${{\mathbb C}}$ is a coherent d-triangular set, ideal $[{{\mathbb C}}]:H_{{\mathbb C}}^\infty$ is regular. 
The polynomial $\bar f$ is partially reduced w.r.t. ${{\mathbb C}}.$ Therefore, by the Rosenfeld Lemma, $\bar f\in({{\mathbb C}}):H_{{\mathbb C}}^\infty$. We have $$\bar f = (h\cdot a_k)u^k +\ldots + (h\cdot a_0 - \bar h\cdot g)$$ with $\bar h\cdot g$ contributing only to the free coefficient, because it does not depend on $u.$ Since $u$ does not appear in ${{\mathbb C}},$ the fact that $\bar f\in({{\mathbb C}}):H_{{\mathbb C}}^\infty$ implies that every coefficient of $\bar f$ belongs to this ideal. In particular, $h\cdot a_i$ belongs to $({{\mathbb C}}):H_{{\mathbb C}}^\infty$ for $1 {\leqslant}i {\leqslant}k,$ whence $a_i \in ({{\mathbb C}}):H_{{\mathbb C}}^\infty,$ $1 {\leqslant}i {\leqslant}k.$ Thus, $$\bar g - a_0=a_ku^k+\ldots+a_1u \in ({{\mathbb C}}):H_{{\mathbb C}}^\infty.$$ We are going to apply the above Proposition as follows. Let ${{\mathbb C}}$ and $f$ be as in its statement. Suppose we know that there exists a differential remainder $g$ of $f$ w.r.t. ${{\mathbb C}}$ that satisfies a certain bound $b$ on the order of derivatives occurring in it. We emphasize that we do not need to know $g$; the fact of its existence is sufficient. Compute [*any*]{} differential remainder $\bar g$ of $f$ w.r.t. ${{\mathbb C}}$. Then, if $\bar g$ does not satisfy the bound $b$, it must contain a derivative $u$ that does not satisfy this bound. By Proposition \[p:SmallRing\], the free coefficient of $\bar g$, when viewed as a polynomial in $u$, is also a differential remainder of $f$ w.r.t. ${{\mathbb C}}$. Replace $\bar g$ by its free coefficient; continue such replacements until $\bar g$ satisfies the bound $b$. 
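The repeated passage to the free coefficient amounts to discarding, in a single pass, every term that involves an out-of-bound derivative. A Python sketch of this truncation, over a monomial representation that is our own assumption for illustration:

```python
def truncate(monomials, bound):
    """Keep only the monomials lying in k[..., y_i, ..., y_i^(p_i), ...],
    i.e. those in which every derivative (y_i, d) satisfies
    d <= bound[y_i].  A monomial is modelled as a pair
    (coefficient, {(indeterminate, order): exponent})."""
    return [
        (c, m) for (c, m) in monomials
        if all(d <= bound[y] for (y, d) in m)
    ]

# f = 3*y''*z - 2*y' + 5, truncated with the bound p_y = 1, p_z = 0:
f = [(3, {("y", 2): 1, ("z", 0): 1}), (-2, {("y", 1): 1}), (5, {})]
assert truncate(f, {"y": 1, "z": 0}) == [(-2, {("y", 1): 1}), (5, {})]
```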
This yields an efficient procedure that computes a differential remainder satisfying the bound: [ Truncate]{}\[Tailpart\] $(f, \{p_i\})$\ ---------------------------------------------------------------------------------------------- [Input:]{} a differential polynomial $f$ and numbers $p_i{\geqslant}0$ [Output:]{} [**truncation**]{} of $f$, i.e., the sum of those terms of $f$ that belong to the polynomial ring $R={{\mathbf k}}[\ldots,y_i,\dots,y_i^{(p_i)},y_{i+1},\ldots]$ [   ]{}Let $f=\alpha_1+\ldots+\alpha_q$, where $\alpha_i$ are differential monomials [   ]{}$g:=0$ [   ]{}[**for**]{} $i:=1$ [**to**]{} $q$ [**do**]{} [   ]{}[   ]{}[**if**]{} $\alpha_i\in R$ [**then**]{} $g:=g+\alpha_i$ [   ]{}[**end for**]{} [   ]{}[**return**]{} $g$ ---------------------------------------------------------------------------------------------- We have proved the following \[Truncation\] Let ${{\mathbb C}}$ be a coherent d-triangular set of differential polynomials, and let $f$ be a differential polynomial. Let $p_i{\geqslant}m_i({{\mathbb C}})$, $i=1,\ldots,n$. Assume that there exists a differential remainder of $f$ w.r.t. ${{\mathbb C}}$, which contains no derivatives of differential indeterminate $y_i$ of order greater than $p_i$, $i=1,\ldots,n$. Let $g$ be any differential remainder of $f$ w.r.t. ${{\mathbb C}}$. Then [Truncate]{}$(g,\{p_i\})$ is a differential remainder of $f$ w.r.t. ${{\mathbb C}}$, in which the order of every differential indeterminate $y_i$ does not exceed $p_i$. We are going to modify Algorithm \[RGBound\], so that there is no necessity to perform differential pseudo-reduction in two steps, via prolongation and purely algebraic reduction. In the new Algorithm \[RGBoundRI\], it is assumed that procedure $\operatorname{\sf d-rem}$ computes any differential remainder in the above sense. The key idea is the following: whenever we find a differential remainder w.r.t. 
$\D$ that does not satisfy the expected bound $b$ (computed by Algorithm \[RGBoundRI\]), by Theorem \[Truncation\] we can simply truncate this remainder. In order to be able to apply Theorem \[Truncation\], we are going to prove the existence of a differential remainder satisfying $b$. In fact, we know that sets $\B$, $\bar F$, and $\bar H$ computed in Algorithm \[RGBound\] satisfy $b$; it remains to be shown that one can obtain differential remainders w.r.t. $\D$, given the elements of $\B$, $\bar F$, and $\bar H$. Note that we may assume ${\mathop{\rm rk}\nolimits}\D={\mathop{\rm rk}\nolimits}\bar{{\mathbb C}}$ (at the end of the [**for**]{}-loop), since otherwise all results of truncations are discarded by Algorithm \[RGBoundRI\]. To justify truncations in the [**for**]{}-loop of Algorithm \[RGBoundRI\], we consider $$\B = {\sf Differentiate\&Autoreduce}\left(\bar{{\mathbb C}}, \left\{m_y\left(\bar G\right)\:|\: y \in {\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}\right\}\right)$$ and show that, at the beginning of each iteration, there exist $B\in\B$ and $h\in H_\D^\infty$ such that $hB$ is a differential remainder of $C$ w.r.t. $\D$. This statement is a consequence of the following expanded invariant of the [**for**]{}-loop, which we are going to prove by induction on the number of iterations. Let $$\B_{<C}=\{f\in\B\;|\;{\mathop{\rm ld}\nolimits}f<{\mathop{\rm ld}\nolimits}C\},$$ $B = {\mathop{\sf algrem}\nolimits}(C,\B_{<C}),$ $E = \operatorname{\sf d-rem}(C,\D),$ and $D =$ [Truncate]{} $(E,b).$ Then $$\begin{aligned} h'\cdot C - hB \in [\D]:H_\D^\infty,\\ B\in[\D\cup\{D\}]:H_\D^\infty,\\ H_B \subset (H_D^\infty + [\D]):H_\D^\infty,\\\end{aligned}$$ for some $h,h' \in H_\D^\infty.$ The inductive base holds, since at the end of the first iteration we have $B=E=D=C$ and $\D=\{C\}$. 
For the inductive step, we have: $h_1\cdot C - B \in \left(\B_{<C}\right)$ for some $h_1 \in H_{\B_{<C}}^\infty.$ By the inductive assumption $[\B_{<C}] \subset [\D]:H_\D^\infty.$ Hence, $$h_1\cdot C - B \in [\D]:H_\D^\infty.$$ Also by the inductive assumption, $h_1 \in (H_\D^\infty + [\D]):H_\D^\infty.$ This means that there exist $h \in H_\D^\infty,$ $h' \in H_\D^\infty$ such that $h\cdot h_1 - h' \in [\D].$ Thus, $$h'\cdot C - h\cdot B \in [\D]:H_\D^\infty.$$ By definition of (algebraic) pseudo-remainder, we have $$B\in\left(\B_{<C}\cup\{C\}\right),\quad C\in (E+[\D]):H_\D^\infty.$$ By Lemma \[InitialBelongs\], taking into account the assumption ${\mathop{\rm rk}\nolimits}\D={\mathop{\rm rk}\nolimits}\bar{{\mathbb C}}$, we have: $$H_B\subset H_C\cdot H_{\B_{<C}}^\infty+\left(\B_{<C}\right),\quad H_C\subset (H_E+[\D]):H_\D^\infty.$$ By Proposition \[p:SmallRing\], $E\in D+(\D):H_\D^\infty$. By modifying slightly the proof of Lemma \[InitialBelongs\], we will show that this implies $H_E\subset H_D+(\D):H_\D^\infty$. Indeed, using the assumption ${\mathop{\rm rk}\nolimits}\D={\mathop{\rm rk}\nolimits}\bar{{\mathbb C}}$ (which holds at the end of the [**for**]{}-loop), we obtain ${\mathop{\rm rk}\nolimits}D={\mathop{\rm rk}\nolimits}C={\mathop{\rm rk}\nolimits}E$; since all leading differential indeterminates in $\bar{{\mathbb C}}$ are distinct, this, in particular, implies that $$v={\mathop{\rm ld}\nolimits}D={\mathop{\rm ld}\nolimits}E\not\in{\mathop{\rm ld}\nolimits}\D.$$ Now let $f_1,\ldots,f_k$ be any generators of the ideal $(\D):H_\D^\infty$, so that we have $$E-D=\sum_{i=1}^k \alpha_i f_i.$$ By viewing the above equality as one between two polynomials in $v$ and noting that $f_i$ do not involve $v$, we immediately obtain that ${{\bf i}}_E-{{\bf i}}_D\in (\D):H_\D^\infty$ and ${{\bf s}}_E-{{\bf s}}_D\in (\D):H_\D^\infty$. 
Combining the above statements, we obtain the required invariants at the end of the iteration: $$B\in\left(\B_{<C}\cup\{C\}\right)\subset ([\D]:H_\D^\infty\cup(E+[\D]):H_\D^\infty)\subset[\D\cup\{D\}]:H_\D^\infty$$ and $$H_B\subset H_C\cdot H_{\B_{<C}}^\infty+\left(\B_{<C}\right)\subset (H_E+[\D]):H_\D^\infty+[\D]:H_\D^\infty \subset (H_D+[\D]):H_\D^\infty.$$ The truncations applied in Algorithm \[RGBoundRI\] to compute sets $\bar F$ and $\bar H$ are justified by showing that differential remainders of $G$ and $H\cup H_f$ w.r.t. $\D$ that satisfy the bound $b$ exist and can be similarly obtained from the elements of sets $\bar F$ and $\bar H$ computed by Algorithm \[RGBound\]. We omit these details. [RGBound-Reduction-Independent]{}\[RGBoundRI\]$(F_0,H_0)$\ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [Input:]{} finite sets of differential polynomials $F_0\neq\varnothing$ and $H_0$, and a differential ranking [Output:]{} a finite set $T$ of regular systems such that $\qquad\qquad\{F_0\}:H_0^\infty=\bigcap\limits_{({{\mathbb A}},H)\in T}[{{\mathbb A}}]:H^\infty\;$ and $\qquad M({{\mathbb A}}\cup H){\leqslant}(n-1)!M(F_0\cup H_0)$ for $({{\mathbb A}},H)\in T.$ [   ]{}$T:=\varnothing$, $\;\;\;U:=\{(F_0,\varnothing,H_0)\}$ [   ]{}[**while**]{} $U\neq\varnothing$ [**do**]{} [   ]{}[   ]{}Take and remove any $(F,{{\mathbb C}},H)\in U$ [   ]{}[   ]{}$f:=$ an element of $F$ of the least rank [   ]{}[   ]{}$D:=\{C\in{{\mathbb C}}\;|\;{\mathop{\rm lv}\nolimits}C = {\mathop{\rm lv}\nolimits}f\}$ [   ]{}[   ]{}$G:=F\cup D\setminus \{f\}$ [   ]{}[   ]{}$\bar{{\mathbb C}}:={{\mathbb C}}\setminus D\cup\{f\}$ [   ]{}[   ]{}$\bar G:=G\cup\bar{{\mathbb C}}\cup H$ [   ]{}[   ]{}$b := \left\{m_y\left(\bar G\right)\:\big|\:y\in {\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}\right\}\cup 
\left\{m_z\left(\bar G\right)+\sum\limits_{y\in{\mathop{\rm lv}\nolimits}\bar C}\left(m_y\left(\bar G\right)-m_y\left({\mathop{\rm ld}\nolimits}\bar{{\mathbb C}}\right)\right)\:\big|\: z \notin {\mathop{\rm lv}\nolimits}\bar{{\mathbb C}}\right\}$ [   ]{}[   ]{}$\D := \varnothing$ [   ]{}[   ]{}[**for**]{} $C \in \bar{{\mathbb C}}$ increasingly [**do**]{} [   ]{}[   ]{}[   ]{}$\D := \D \cup \left\{{\sf Truncate} \left(\operatorname{\sf d-rem}\left(C,\D\right),b\right) \right\}$ [   ]{}[   ]{}[**if**]{} ${\mathop{\rm rk}\nolimits}\D = {\mathop{\rm rk}\nolimits}\bar{{\mathbb C}}$ [**then**]{} [   ]{}[   ]{}[   ]{}$\bar F:=$ [Truncate]{} $\left(\operatorname{\sf d-rem}(G,\D)\setminus\{0\},b\right)$ [   ]{}[   ]{}[   ]{}$\bar H:=$ [Truncate]{} $\left(\operatorname{\sf d-rem}(H\cup H_f,\D)\cup H_{\D},b\right)$ [   ]{}[   ]{}[   ]{}[**if**]{} $\bar F\cap{{\mathbf k}}=\varnothing$ [**and**]{} $0\not\in\bar H$ [   ]{}[   ]{}[   ]{}[   ]{}[**then**]{} $U := U\cup\left\{\bar F, \D, \bar H \right\}$ [   ]{}[   ]{}[   ]{}[   ]{}[**else**]{} $T:=T\,\cup\,\{(\D,\bar H)\}$ [   ]{}[   ]{}[   ]{}[**end if**]{} [   ]{}[   ]{}[**end if**]{} [   ]{}[   ]{}[**if**]{} ${{\bf s}}_f\not\in{{\mathbf k}}$ [**then**]{} [   ]{}[   ]{}[   ]{}$U:=U \cup\{(F\cup\{{{\bf s}}_f\},{{\mathbb C}},H)\}$ [   ]{}[   ]{}[   ]{}[**if**]{} ${{\bf i}}_f\not\in{{\mathbf k}}$ [**then**]{} $U:=U\cup\{(F\cup\{{{\bf i}}_f\},{{\mathbb C}},H)\}$ [**end if**]{} [   ]{}[   ]{}[**end if**]{} [   ]{}[**end while**]{} [   ]{}[**return**]{} $T$ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Algorithm \[RGBoundRI\] is correct and satisfies the bound. The proof of correctness, termination, and bound for Algorithm \[RGBoundRI\] is based on the same invariants of the [**while**]{}-loop that were used for Algorithm \[RGBound\]. 
The only new step we make is the [Truncate]{} algorithm whose application is justified above.

Conclusions
===========

By estimating the orders of derivatives, we have shown that, given a set of ordinary differential polynomials specifying a radical differential ideal $I$, one can construct a Noetherian ring of algebraic polynomials, in which the computation of a characteristic decomposition of $I$ is actually performed. This does not mean that the computation is completely algebraic: differentiations are allowed, but they never lead out of the constructed algebraic ring. For the problem of converting a characteristic decomposition of a radical differential ideal from one ranking to another, we have proposed an algorithm, which first differentiates the input polynomials sufficiently many times, and then performs the conversion completely algebraically, without using differentiation at all. The algorithm is applicable in the partial differential case, but the bound for the number of differentiations of the input polynomials is given for the ordinary case only. We conjecture that, if one can solve the first problem of computing a characteristic decomposition of a radical differential ideal from generators completely algebraically, i.e., by an algorithm that first differentiates the input polynomials sufficiently many times, and then computes the decomposition without using differentiations, then one can also solve the Ritt problem of computing an irredundant prime (or characteristic) decomposition of a radical differential ideal.

We thank Michael F. Singer, François Boulier, William Sit, Évelyne Hubert, Evgeniy Pankratiev, and the referees for their important suggestions.

[^1]: The adjective “coherent” makes the statement valid in the presence of partial derivatives; in the ordinary case, it can be ignored.
--- abstract: 'The effect of turbulence on snow precipitation is not incorporated into present weather forecasting models. Here we show evidence that turbulence is in fact a key influence on both fall speed and spatial distribution of settling snow. We consider three snow fall events under vastly different levels of atmospheric turbulence. We characterize the size and morphology of the snow particles, and we simultaneously image their velocity, acceleration, and relative concentration over vertical planes about 30 m$^2$ in area. We find that turbulence-driven settling enhancement explains otherwise contradictory trends between the particle size and velocity. The estimates of the Stokes number and the correlation between vertical velocity and local concentration indicate that the enhanced settling is rooted in the preferential sweeping mechanism. When the snow vertical velocity is large compared to the characteristic turbulence velocity, the crossing trajectories effect results in strong accelerations. When the conditions of preferential sweeping are met, the concentration field is highly non-uniform and clustering appears over a wide range of scales. These clusters, identified for the first time in a naturally occurring flow, display the signature features seen in canonical settings: power-law size distribution, fractal-like shape, vertical elongation, and large fall speed that increases with the cluster size. These findings demonstrate that the fundamental phenomenology of particle-laden turbulence can be leveraged towards a better predictive understanding of snow precipitation and ground snow accumulation. They also demonstrate how environmental flows can be used to investigate dispersed multiphase flows at Reynolds numbers not accessible in laboratory experiments or numerical simulations.' 
author:
- 'Cheng Li, Kaeul Lim, Tim Berk, Aliza Abraham, Michael Heisel, Michele Guala, Filippo Coletti, Jiarong Hong'
title: Settling and Clustering of Snow Particles in Atmospheric Turbulence
---

Introduction
============

The fall speed of snow, and frozen hydrometeors in general, is a crucial parameter for meteorological prediction [@Hong2004]. The spatio-temporal distribution of snow precipitation directly impacts the ground accumulation, which in turn influences the local hydrology, road conditions, vegetation development, avalanche danger, and mass balance of glaciers [@Lehning2008; @Scipion2013]. At a global level, the velocity at which ice and snow particles settle in the atmosphere is one of the most important determinants of climate sensitivity [@IPCC2014 Fifth assessment report]. Considering its importance, our understanding of snow particle settling is far from satisfactory, and the process remains poorly characterized [@Heymsfield2010a]. We will use the word settling as it is more common in fluid mechanics, although the atmospheric science literature often terms it sedimentation. Also, because we focus here on relatively small hydrometeors as opposed to large dendritic ones, we will generally refer to snow particles as opposed to snowflakes. A common approach is to parameterize the vertical velocity of the snow particles (or generic hydrometeor) $W_s$ as a power-law function of the characteristic diameter $d_s$ [@Locatelli1974]: $$W_s = a_w d_s^{b_w} \label{eq:eq1}$$ where $a_w$ and $b_w$ are empirical constants. A first difficulty lies in the specificity of the constants to the type of hydrometeors, which display a broad range of size, morphology, porosity, and riming that affect the balance between drag and gravity [@Pruppacher1997]. Fall speed relationships that include the object mass $m_s$ and frontal area $A_s$ enable the definition of a particle Reynolds number and drag coefficient, and show more generality [@Boehm1989; @Mitchell1996].
However, even these models are ultimately similar to equation \[eq:eq1\] in that they resort to a power-law dependence of $m_s$ and $A_s$ on $d_s$. Small snow particles and ice crystals (mm-sized or smaller) form the vast majority of the frozen precipitation in the atmosphere [@Pruppacher1997]. Field studies focused on these small particles report values of the coefficient $b_w \approx 0.25$ (although with significant variability, see @Locatelli1974 [@Tiira2016; @VonLerber2017]). From the fluid mechanics standpoint, this would seem a very weak relation between the fall speed and the diameter. Stokes drag implies $b_w = 2$, and while non-linear drag certainly affects the process, it is not expected to be dominant as the particle Reynolds number for those small hydrometeors is typically $\mathcal{O}(10)$. Important factors that contribute to the trend include: particle bulk density, which varies significantly with the level of riming and porosity [@Pruppacher1997]; particle anisotropy, which can be high for needles and crystal aggregates [@Dunnavan2019]; and in general the complexity of the snow particle morphology, especially for dendritic ice crystals, aggregates, and open geometries [@Westbrook2008; @Heymsfield2010a]. Morphological factors, however, are not expected to play major roles for small hydrometeors of compact shape, while the weak dependence with the diameter is still observed [@Tiira2016]. Therefore, it is evident that other environmental factors can influence the settling process besides the snow particle properties. The role of atmospheric turbulence on the snow fall speed has only recently been recognized. @Garrett2014 considered data from a field study and showed that, in high turbulence, the fall speed was seemingly insensitive to the snow particle diameter. 
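The weakness of this dependence can be made concrete with a minimal numerical sketch of equation \[eq:eq1\] (the prefactor $a_w$ is a made-up illustrative value; only the exponents matter for the comparison):

```python
# Sketch of the power-law fall-speed parameterization W_s = a_w * d_s**b_w
# (equation 1). a_w = 1 (in m/s per mm**b_w) is an assumed prefactor.

def fall_speed(d_s_mm, a_w=1.0, b_w=0.25):
    """Fall speed for a particle of diameter d_s_mm, in the units of a_w."""
    return a_w * d_s_mm ** b_w

# Doubling the diameter changes the fall speed by a factor 2**b_w:
ratio_field = fall_speed(2.0) / fall_speed(1.0)                      # ~1.19
ratio_stokes = fall_speed(2.0, b_w=2.0) / fall_speed(1.0, b_w=2.0)   # 4.0
```

With the field value $b_w \approx 0.25$, a doubled diameter raises the fall speed by only about 19%, whereas a Stokes-drag scaling ($b_w = 2$) would quadruple it.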
They simultaneously measured hydrometeor morphology and fall speed using a multi-camera system, and found that unrealistic density estimates were obtained by assuming that the observed vertical velocity coincided with the terminal velocity in quiescent air. While it is intuitive that air turbulence would broaden the distribution of snow particle velocities (as recently confirmed, @Garrett2015), one may also expect the effectively Gaussian velocity fluctuations to cancel out, leaving the average fall speed unaffected. This view, however, does not account for well-known phenomena in particle-laden flows. When small heavy particles fall through turbulence, their mean settling velocity can be significantly altered compared to still-fluid conditions [@Nielsen1993; @Wang1993; @Balachandar2010]. The fall can be hindered, e.g., if weakly inertial particles are trapped in vortices [@Tooby1977], or if fast-falling particles are slowed down by non-linear drag [@Mei1991] or loiter in upward regions of the flow [@Good2012]. More often, however, turbulence is found to enhance the settling through a process known as preferential sweeping [@Maxey1987; @Wang1993]: inertial particles favour downward regions of the flow, i.e., they oversample fluid with vertical velocity fluctuations aligned with the direction of gravity. This effect is especially strong (and dominant over other mechanisms that hinder the fall) when the particles have an aerodynamic response time $\tau_p$ comparable to the Kolmogorov time scale $\tau_\eta$; that is, when the Stokes number $St \equiv \tau_p/\tau_\eta = \mathcal{O}(1)$. Laboratory experiments have shown that in this case the mean vertical velocity can see a multi-fold increase [@Aliseda2002; @Good2014; @Huck2018; @Petersen2019]. Recently, @Nemes2017 imaged and tracked snow particles in the atmospheric surface layer, observed a substantial increase in settling velocity, and suggested that preferential sweeping was at play. 
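As a rough illustration of the numbers involved in such a Stokes-number estimate, the following sketch uses the Stokes response time $\tau_p = \rho_p d_s^2/(18\rho_f\nu)$; the effective particle density, dissipation rate, and air properties are assumed values, and Stokes drag is itself only an approximation for porous ice particles:

```python
# Order-of-magnitude estimate of St = tau_p / tau_eta for a small snow
# particle. All property values below are assumed for illustration
# (a sub-mm ice particle with effective density well below solid ice).

def stokes_number(d, rho_p, eps, rho_f=1.3, nu=1.4e-5):
    tau_p = rho_p * d ** 2 / (18.0 * rho_f * nu)  # particle response time (s)
    tau_eta = (nu / eps) ** 0.5                   # Kolmogorov time scale (s)
    return tau_p / tau_eta

# e.g. a 0.5 mm particle of effective density 100 kg/m^3,
# in turbulence with dissipation rate 19 cm^2/s^3:
St = stokes_number(d=0.5e-3, rho_p=100.0, eps=1.9e-3)  # ~0.9, i.e. O(1)
```

Such a particle lands in the $St = \mathcal{O}(1)$ regime where preferential sweeping is strongest.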
Another well-known behaviour exhibited by inertial particles in turbulence is the tendency to form clusters, especially when $St = \mathcal{O}(1)$ [@Eaton1994; @Monchaux2012; @Gustavsson2016]. Along with the ability of inertial particles to maintain significant relative velocity for vanishing separations, this effect is thought to enhance their collision rate [@Sundaram1997; @Bewley2013]. As such, clustering is expected to be consequential for a variety of natural phenomena, from atmospheric cloud dynamics [@Shaw2003; @Grabowski2013] to dust agglomeration in circumstellar nebulas [@Cuzzi2001]. Precipitating snow can be largely composed of aggregates formed by successive collisions of ice crystals [@Dunnavan2019]. Thus, if clustering of frozen hydrometeors does occur, it is likely to play an important role in determining their shape, size, and fall speed. To date, there is no direct evidence that snow particles cluster in the atmosphere, nor is it known what properties such clusters may possess, and the impact turbulence may have on the evolution of frozen precipitation remains speculative. Here we present and analyse data from three field studies where settling snow is illuminated and imaged over vertical planes ${\sim} \: 30$ m$^2$, using previously introduced approaches [@Hong2014; @Nemes2017]. We characterize the snow particle velocities and accelerations across a broad range of atmospheric conditions, and show that turbulence plays a dominant role in determining both mean and variance of the snow fall speed. Moreover, we document for the first time the appearance of clusters in the snow spatial distribution, describe their multi-scale geometry and assess their settling velocity.
The paper is organized as follows: the experimental setups to characterize the atmospheric conditions, snow particle properties, and large-scale velocity fields are described in §\[sec:sec2\]; the results in terms of snow particle size, velocity, acceleration, and concentration fields are reported in §\[sec:sec3\]; in §\[sec:sec4\] we draw conclusions and discuss future perspectives.

Methodology {#sec:sec2}
===========

Field experiment setups
-----------------------

The data presented in the current study were acquired in three field deployments conducted at the Eolos Wind Energy Research Field Station in Rosemount, MN, between 2016 and 2019. We will refer to them as Jan2016, Nov2018, and Jan2019. The station features a meteorological tower instrumented with wind velocity, temperature, and humidity sensors installed at elevations ranging from 7 m to 129 m. Four of these elevations (10, 30, 80 and 129 m) are instrumented with Campbell Scientific CSAT3 3D sonic anemometers with a sampling rate of 20 Hz. Detailed descriptions of the field station and instrument specifications are provided in @Hong2014 and @Toloui2014. For each deployment, the spatial distribution and motion of the settling snow is captured using super-large-scale particle image velocimetry (PIV) and particle tracking velocimetry (PTV) described in @Hong2014 and @Nemes2017, respectively. The size and shape of snow particles are simultaneously obtained using an in-house digital in-line holographic (DIH) system introduced in @Nemes2017. A minimum of 15000 holograms is captured for each dataset. Because the DIH setup is located just meters away from the PIV/PTV field of view, spatial variability between both measurements is deemed negligible. The key information for the PIV/PTV and DIH systems in each deployment is summarized in table \[tab:tab1\], and further details on the setups are given in the following. All three sets have fairly constant snowfall and wind intensity.
The PIV/PTV setup is similar to the one used in @Nemes2017. The illumination is provided by a 5-kW search light with a divergence $<0.3$ degrees and an initial beam diameter of 300 mm, shining on a curved mirror that redirects the beam vertically and expands it into a light sheet. The system is attached to a trailer for mobility in aligning the sheet with the wind direction and minimizing out-of-plane motion. The camera is placed on a tripod at a distance $L_{CL}$ from the light sheet with a tilt angle $\theta$ from the horizontal. The coordinate system (streamwise $x$, spanwise $y$, and vertical $z$, and the corresponding velocity components $u$, $v$, $w$) as well as the position and dimensions of the field of view (FOV) are illustrated in figure \[fig:fig1\].

--------- ----------- ----------- ------------------- ------------ ----------- ---------- ------------ ----------
Dataset   Duration    $z_{FOV}$   $H \times W$        Resolution   $\theta$    $L_{CL}$   Resolution   Volume
          (minutes)   (m)         (m$^2$)             (mm/pixel)   (degrees)   (m)        (µm/pixel)   (cm$^3$)
Jan2016   5           10.8        $7.1 \times 4.0$    5.6          21.1        25         24           18.8
Nov2018   17          9.1         $8.4 \times 4.7$    6.5          14.5        31         14           42
Jan2019   15          20.2        $14.7 \times 8.3$   12.0         19.9        53         14           42
--------- ----------- ----------- ------------------- ------------ ----------- ---------- ------------ ----------

: Summary of key parameters of particle image velocimetry (PIV), particle tracking velocimetry (PTV) and digital inline holography (DIH) measurement setups for each deployment dataset used in the present paper (see figure \[fig:fig1\]). All PIV/PTV datasets have the same acquisition rate of 120 fps.[]{data-label="tab:tab1"}

![Schematic of the measurement setup used in the deployments. The field of view (FOV) has width $W$, height $H$, and is centered at an elevation $z_{FOV}$ (see table \[tab:tab1\]). The other symbols are defined in the text.
[]{data-label="fig:fig1"}](Figure1.png) Figure \[fig:fig2\] shows sample images used for PIV/PTV in each deployment, which span a broad range of imaging conditions and have different FOVs (figure \[fig:fig2\]a–\[fig:fig2\]c). For consistency, a sampling region of 7 m $\times$ 4 m (matching the Jan2016 FOV) with the same 32 pixel $\times$ 32 pixel PIV interrogation window (figure \[fig:fig2\]d–\[fig:fig2\]f) is used to analyse all the datasets. Based on previous parameterisation of the boundary layer at the Eolos site [@Heisel2018], the selected regions are located in the logarithmic layer, well above the roughness sublayer and possible snow saltation layer [@Guala2008]. The particle image density is similar for Jan2016 and Nov2018, and significantly higher for Jan2019. Accordingly, both PIV and PTV are applied to Jan2016 and Nov2018, while only PIV (which can deal with high particle image densities) is performed on Jan2019 dataset. PIV provides Eulerian velocity fields (for atmospheric measurements, see @Hong2014 [@Toloui2014; @Heisel2018]), which we use to measure the snow fall speed, while PTV provides Lagrangian trajectories, which we use to calculate snow particle accelerations [@Nemes2017]. Detailed information on the image and data processing (e.g., distortion correction, background subtraction, PIV cross-correlation, Lagrangian particle tracking) was previously reported [@Hong2014; @Toloui2014; @Nemes2017; @Dasari2018]. The same images are also used to estimate the relative snow particle concentration, as described in §\[sec:sec3.3\]. ![Samples of raw snow particle images at the full field of view (a-c) used for PIV/PTV, and close-up on the 32 $\times$ 32 pixels PIV interrogation window (d-f) from datasets Jan2016 (a, d), Nov2018 (b, e), and Jan2019 (c, f). []{data-label="fig:fig2"}](Figure2.png) Two versions of the DIH system are employed to characterize the snow particle size, shape, and number density. 
The earlier version is employed for Jan2016 and is described in detail in @Nemes2017. The later version, which has larger sampling volume and improved spatial resolution and data acquisition capabilities, is used for Nov2018 and Jan2019. It uses a diode laser (Roithner 5 mW, wavelength of 635 nm), a beam expander (Edmund Optics 9 mm plano-concave lens), and a collimating lens (Thorlabs 100 mm biconvex lens with anti-reflective coating) to generate a 50 mm beam. A CMOS camera (PtGrey Blackfly 2048 $\times$ 1536 pixels, 3.45 µm/pixel), mounting a Fujinon 25 mm f/1.4 lens, captures the holograms resulting from the interference of the light scattered by the snow particles and the collimated beam. The camera is connected to a data acquisition system (Raspberry Pi 3 - Model B), which interfaces with a laptop running FLIR SpinView software to control the camera and collect the images. Both versions of the DIH system are mounted about 2 m above ground level and allow the snow particles to settle through the sampling volume with minimal disturbance. @Nemes2017 describe the processing steps through which detailed two-dimensional projections of the snow particle silhouette are obtained from the holograms.

Meteorological conditions and turbulence properties
---------------------------------------------------

Simultaneous measurements from the meteorological tower sensors provide a statistical description of the turbulence conditions during the PIV/PTV and DIH measurements. As shown in figure \[fig:fig3\], the time series of the wind velocity ($u$) and fluctuation ($u'$) from the 10 m sonic anemometer (at an elevation comparable to the PIV/PTV FOV) indicate good stationarity for all datasets. Table \[tab:tab2\] summarizes the key meteorological and turbulence parameters.
Atmospheric stability conditions are estimated based on both the bulk Richardson number $R_b$ and the Monin-Obukhov length $L_{OB}$: $$R_b = -|g|\Delta\overline{\theta_v} \Delta z / \left(\overline{\theta_v}\left[(\Delta U)^2 + (\Delta V)^2\right]\right)$$ $$L_{OB} = -U_\tau^3 \overline{\theta_v}/\kappa g \overline{w'\theta_v'}$$

![Time series of the sonic anemometer data at elevation $z=10$ m showing streamwise wind velocity for Jan2016 (blue), Nov2018 (red), and Jan2019 (green). []{data-label="fig:fig3"}](Figure3.png)

--------- ------- ----------- ------ -------------- ------- ---------- ------ ---------- ------------------ -------- ------------- -------------- --------------
Dataset   $U$     $u_{rms}$   $RH$   $T$            $R_b$   $L_{OB}$   $L$    $\tau_L$   $\varepsilon$      $\eta$   $\tau_\eta$   $Re_\lambda$   $G\tau_\eta$
          (m/s)   (m/s)       (%)    ($^\circ C$)   (-)     (m)        (m)    (s)        (cm$^2$s$^{-3}$)   (mm)     (ms)          (-)            (-)
Jan2016   1.98    0.16        98.3   -0.1           0.03    811        4.9    30.6       8                  1.29     126           938            0.003
Nov2018   1.55    0.38        94.3   -1.8           0.12    1007       3.4    8.9        19                 1.04     83            3545           0.003
Jan2019   5.95    1.18        80.0   -16.0          0.03    -2643      14.6   12.9       290                0.49     20            9180           0.008
--------- ------- ----------- ------ -------------- ------- ---------- ------ ---------- ------------------ -------- ------------- -------------- --------------

: The meteorological and turbulence parameters obtained using the sonic anemometer at $z=10$ m for all three datasets. See the text for the definition of the symbols.[]{data-label="tab:tab2"}

Here $g$ is the gravitational acceleration, $\theta_v$ is the virtual potential temperature (calculated using 1-Hz temperature, pressure, and relative humidity measurements), $U$ and $V$ are the average North and West wind velocity components, respectively, $U_\tau$ is the shear velocity, $\kappa$ is the von Kármán constant, prime indicates temporal fluctuations and the overbar indicates time-averaging. The mean velocity differences $\Delta U$ and $\Delta V$ are measured from the sonic anemometers at 10 m and 129 m ($\Delta z=119$ m).
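The two stability diagnostics can be sketched as follows, following the sign conventions of the expressions above; all input numbers are illustrative stand-ins, not deployment data:

```python
# Sketch of the bulk Richardson number and Obukhov length diagnostics.
# Input values are made up for illustration.

KAPPA = 0.4   # von Karman constant
GRAV = 9.81   # gravitational acceleration (m/s^2)

def bulk_richardson(theta_v_mean, d_theta_v, dz, dU, dV):
    """R_b from a virtual-potential-temperature and wind difference over dz."""
    return -abs(GRAV) * d_theta_v * dz / (theta_v_mean * (dU ** 2 + dV ** 2))

def obukhov_length(u_tau, theta_v_mean, w_theta_flux):
    """L_OB from the friction velocity and the turbulent heat flux."""
    return -u_tau ** 3 * theta_v_mean / (KAPPA * GRAV * w_theta_flux)

# e.g. a 0.5 K temperature difference over dz = 119 m with a 4 m/s shear:
Rb = bulk_richardson(270.0, -0.5, 119.0, 4.0, 1.0)  # small: near-neutral
L_ob = obukhov_length(0.2, 270.0, -0.003)           # O(100 m)
```

With these stand-in inputs, $R_b \approx 0.13$ and $L_{OB} \approx 180$ m, i.e. the same orders as in table \[tab:tab2\].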
The length $L_{OB}$ and all other turbulence quantities reported in this section are evaluated from the 20 Hz sonic sensor at $z=10$ m. We approximate the surface turbulent virtual potential heat flux with the measured $\overline{w'\theta_v'}$. The friction velocity $U_\tau$ is estimated based on the Reynolds stresses [@Stull1988]: $$U_\tau = \left(\overline{u'w'}^2 + \overline{v'w'}^2\right)^{1/4}$$ For all deployments, $R_b \ll 1$ and $z/L_{OB} \ll 0.1$, indicating that the boundary layer flow within the measuring domain can be approximated as neutrally stratified [@Hoegstroem2002]. The integral time scale $\tau_L$ and length scale $L$ are calculated from the temporal auto-correlation function $\rho$: $$\rho(\tau) = \overline{u'(t)u'(t+\tau)}/\overline{u'(t)^2}$$ $$\tau_L = \int_{0}^{T_0} \rho(\tau) \mathrm{d}\tau$$ $$L = u_{rms} \tau_L$$ where $t$ is time, $\tau$ is the temporal separation, and $T_0$ is the first zero-crossing point of the auto-correlation function. Here and in the following, rms indicates root mean square fluctuation. The turbulent dissipation rate $\varepsilon$ for Nov2018 and Jan2019 is estimated using the second-order temporal structure function of the streamwise velocity fluctuations: $$D_{11}(\tau) \equiv \overline{\left[u'(t+\tau)-u'(t)\right]^2}$$ To yield better convergence of $D_{11} (\tau)$, the velocity time series are divided into two-minute windows with 50% overlap. Invoking the Taylor hypothesis, the temporal separation is converted to a spatial separation $r=\tau U_{2\mathrm{min}}$, where $U_{2\mathrm{min}}$ is the mean velocity in each two-minute window. We then calculate $\varepsilon$ using the Kolmogorov prediction for the spatial second-order structure function in the inertial range: $$D_{11}(r) = C_2(\varepsilon r)^{2/3} \label{eq:eq2.8}$$ where $C_2$ is a constant close to 2 in high-Reynolds-number turbulence [@Saddoughi1994].
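The dissipation estimate and the derived small-scale quantities can be sketched as follows; a synthetic structure function stands in for the sonic data, and the cold-air viscosity is an assumed value, so the recovered numbers match table \[tab:tab2\] only approximately:

```python
# Recover eps by inverting D11(r) = C2*(eps*r)**(2/3) in the inertial range,
# then form the small-scale turbulence quantities (Jan2019 values).

C2 = 2.0
eps_true = 2.9e-2                       # m^2/s^3 (290 cm^2/s^3, Jan2019)
r = [0.5 + 0.1 * i for i in range(50)]  # inertial-range separations (m)
D11 = [C2 * (eps_true * ri) ** (2.0 / 3.0) for ri in r]

# each separation gives eps = (D11/C2)**(3/2) / r; average over the range
eps = sum((d / C2) ** 1.5 / ri for d, ri in zip(D11, r)) / len(r)

nu = 1.1e-5                             # air viscosity at ~ -16 C, assumed
u_rms = 1.18                            # m/s (Jan2019)
eta = (nu ** 3 / eps) ** 0.25           # Kolmogorov length scale (m)
tau_eta = (nu / eps) ** 0.5             # Kolmogorov time scale (s)
lam = u_rms * (15.0 * nu / eps) ** 0.5  # Taylor microscale (m)
Re_lambda = u_rms * lam / nu            # Taylor-scale Reynolds number
```

This sketch returns $\eta \approx 0.46$ mm, $\tau_\eta \approx 19$ ms, and $Re_\lambda \approx 9.5 \times 10^3$, close to the tabulated Jan2019 values; the residual reflects the viscosity assumption.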
The compensated structure functions in figure \[fig:fig4\] show good agreement with equation \[eq:eq2.8\] for a broad range of separation time scales in both Nov2018 and Jan2019 datasets. For Jan2016 the convergence of the structure function is less satisfactory and the dissipation is approximated from classic scaling arguments, i.e., $\varepsilon = u_{rms}^3/L$. We then obtain the Kolmogorov time and length scale, $\tau_\eta = \left(\nu/\varepsilon\right)^{1/2}$ and $\eta = (\nu^3/\varepsilon)^{1/4}$, respectively, where $\nu$ is the air kinematic viscosity. The Reynolds number $Re_\lambda = u_{rms}\lambda/\nu$ (where $\lambda = u_{rms}(15\nu/\varepsilon)^{1/2}$ is the Taylor microscale) spans a full decade across the three deployments, allowing us to investigate turbulence of vastly different intensity. The mean shear across the field of view $G=\partial U/\partial z$ is much smaller than the small scale velocity gradients, i.e. $G(\nu/\varepsilon)^{1/2} = G\tau_\eta \ll 1$, and we thus expect approximate small-scale isotropy [@Saddoughi1994]. This enables the comparison with previous laboratory experiments and simulations of particle-turbulence interactions performed in (nearly) homogeneous isotropic turbulence, consistent with the approach of @Nemes2017.

![Compensated second-order structure function of the streamwise velocity fluctuations calculated from the sonic anemometer for Nov2018 (red diamonds), and Jan2019 (green circles) at $z=10$ m. The dashed line indicates the inertial range prediction of equation \[eq:eq2.8\] with $C_2 = 2$. []{data-label="fig:fig4"}](Figure4.png)

Results {#sec:sec3}
=======

Snow particle size and settling velocity
----------------------------------------

Table \[tab:tab3\] provides a summary of the average size, aspect ratio, and concentration of the snow particles as measured by DIH.
The particle size $d_s$ is quantified using the projected-area diameter, corresponding to the diameter of the circle with the same projected area as the particle image. The aspect ratio $s_2/s_1$ is defined as the ratio between minor and major axis of the ellipse fit to each particle image. From the particle number concentration (average particle count per unit volume), the volume fraction $\phi_V$ is estimated by approximating each particle as a sphere of diameter $d_s$.

--------- --------------------- ------------------------ --------------- -------------------------
Dataset   Mean diameter $d_s$   Aspect ratio $s_2/s_1$   Number          $\phi_V \times 10^{-7}$
          (mm)                  (-)                      concentration   (-)
                                                         (m$^{-3}$)
Jan2016   $1.09\pm 0.45$        $0.73 \pm 0.11$          816             7.4
Nov2018   $0.65 \pm 0.41$       $0.65 \pm 0.16$          1644            6.3
Jan2019   $0.39 \pm 0.23$       $0.57 \pm 0.17$          56620           44
--------- --------------------- ------------------------ --------------- -------------------------

: Snow particle properties (mean and standard deviation) as measured by DIH for all three datasets.[]{data-label="tab:tab3"}

The particle sizes decrease significantly from Jan2016 to Nov2018 to Jan2019, as illustrated by the probability density functions (PDFs) of $d_s$ (figure \[fig:fig5\]a). The distributions of the aspect ratio are similar for the three deployments and indicate relatively compact objects (figure \[fig:fig5\]b). This is confirmed by visual inspection of the DIH realizations: most detected hydrometeors are ice particles and crystals exhibiting a moderate level of aggregation and relatively low shape complexity [@Garrett2012]. Given the limited elongation, the influence of the particle anisotropy on the motion dynamics is expected to be small [@Voth2017]. Consistent with the PIV/PTV images of figure \[fig:fig2\], the DIH data for Jan2016 and Nov2018 yields comparable particle concentration, while Jan2019 presents one order of magnitude higher values.
![PDFs of (a) size and (b) aspect ratio of the snow particles for Jan2016 (solid blue line), Nov2018 (dotted red line), and Jan2019 (dashed green line). (c) Samples of DIH realizations showing typical hydrometeors.[]{data-label="fig:fig5"}](Figure5.png)

Table \[tab:tab4\] reports key statistics for the snow particle vertical velocity $w_s$ and acceleration $a_s$ (where available) obtained from PIV and PTV, respectively. Angle brackets denote spatio-temporal averaging of the Eulerian fields or ensemble average over all Lagrangian trajectories. We focus first on the vertical velocities, of which figure \[fig:fig6\] shows the PDFs for the three datasets. The distributions are approximately normal, with only Nov2018 displaying sizeable skewness. Jan2016 has a mean fall speed similar to that of Jan2019, but the latter has almost double the $w_{s,rms}$, reflecting the increased spread of the velocity distribution, with some particles reaching upward velocities. Nov2018 shows a mean fall speed almost 60% higher than the other two cases, and an intermediate $w_{s,rms}$. A comparison between these velocity distributions and the size distributions in figure \[fig:fig5\]a is most interesting. The trends of both quantities are not reconcilable using classic velocity-diameter relationships; in particular, the fact that snow particles from Nov2018 fall much faster than in Jan2016, while being 40% smaller on average. Also, the much larger diameter variance in Jan2016 is at odds with its relatively narrow velocity distribution; vice versa, Jan2019 has the largest spread of velocities while having the narrowest size distribution.
--------- ------------------------ ------------- -------- ----------------- ------------------ -------------
Dataset   $W_s = \overline{w_s}$   $w_{s,rms}$   $Re_s$   $|W_s|/u_{rms}$   $\overline{a_s}$   $a_{s,rms}$
          (m/s)                    (m/s)         (-)      (-)               (m/s$^2$)          (m/s$^2$)
Jan2016   -0.68                    0.21          54.9     4.25              -0.012             0.505
Nov2018   -1.09                    0.37          54.5     2.87              -0.067             0.442
Jan2019   -0.71                    0.55          23.1     0.60              N/A                N/A
--------- ------------------------ ------------- -------- ----------------- ------------------ -------------

: Snow particle vertical velocity and acceleration statistics as measured by PIV and PTV for all three datasets.[]{data-label="tab:tab4"}

![PDFs of the snow particle vertical velocity ($w_s$) as measured by PIV for Jan2016 (blue triangles), Nov2018 (red diamonds), and Jan2019 (green circles). Vertical solid, dotted, and dashed lines mark the mean settling velocity ($W_s$) corresponding to the three datasets, respectively. []{data-label="fig:fig6"}](Figure6.png)

To explain these seemingly counterintuitive results, one may speculate that the snow particles in the datasets have significantly different densities, which could account for the incongruence between the distributions of sizes and fall speeds. However, widely accepted relations between hydrometeor mass $m$ and diameter, $m \sim d_s^{b_m}$ (obtained, e.g., by combined in situ imaging and weighing gauges; @Tiira2016 [@VonLerber2017]), do not support this view. For crystals and small aggregates in the present size range, the exponent $b_m$ is usually equal to or larger than 2 [@Heymsfield2010; @Tiira2016; @VonLerber2017], implying that density (${\sim}\:m/d_s^3$) decreases less than linearly with size [@Heymsfield2004]. Neither can non-linear drag explain the behaviour, because the snow particle Reynolds number $Re_s = |W_s|d_s/\nu$ is of the same order for the three cases (and almost the same for Jan2016 and Nov2018). We instead hypothesize that the present behaviour is the result of the influence of air turbulence on snow particle settling.
This is certainly consistent with the increasing variance of vertical velocities from Jan2016 ($Re_\lambda = 938$) to Nov2018 ($Re_\lambda = 3545$) to Jan2019 ($Re_\lambda = 9180$), as more intense turbulence is expected to result in larger spread of the snow particle velocities. Importantly, the differences in mean fall speed can also be explained by the ability of turbulence to enhance the settling velocity of inertial particles. Specifically, the larger fall speed in Nov2018 compared to Jan2016 may be the consequence of strong (or stronger) preferential sweeping in the former case. Likewise, stronger preferential sweeping in Jan2019 than in Jan2016 would explain why the former case shows the same mean fall speed as the latter, despite significantly smaller snow particle sizes. Support for this hypothesis is lent by the particle acceleration data and the correlation between concentration and vertical velocity, presented in the following.

Snow particle acceleration
--------------------------

Figure \[fig:fig7\] shows the PDFs of the fluctuations of the horizontal acceleration for Nov2018 and Jan2016 as obtained by PTV, normalized by their root mean square values. These are compared to previous numerical and experimental studies of homogeneous turbulence laden with tracers [@Mordant2004] and inertial particles of known $St$ [@Ayyalasomayajula2006; @Bec2006]. The long exponential tails of the PDFs for all cases highlight the significant intermittency due to intense turbulence events [@LaPorta2001; @Voth2002], modulated by the inertia of the particles [@Bec2006; @Toschi2009]. As discussed in @Nemes2017, we can leverage the high sensitivity of the acceleration PDFs to $St$ [@Salazar2012] and their low sensitivity to $Re_\lambda$ and the specific flow configuration [@Volk2008; @Gerashchenko2008], to estimate the Stokes number of the snow particles. Without aiming for a precise value, we estimate $St = \mathcal{O}(0.1)$ for Jan2016 and $St = \mathcal{O}(1)$ for Nov2018.
Particles with $St = \mathcal{O}(1)$ are known to display the strongest settling enhancement by preferential sweeping [@Wang1993; @Aliseda2002; @Petersen2019]. As such, these estimates of $St$ for the snow particles are consistent with the observation of increased fall speeds in Nov2018 compared to Jan2016.

![PDFs of horizontal component of snow particle accelerations for Jan2016 (blue triangles) and Nov2018 (red diamonds), compared to $St = 0$ from @Mordant2004 (dots), @Ayyalasomayajula2006 ($St = 0.09$, crosses; $St = 0.15$, plus signs) and @Bec2006 ($St = 0.16$, solid line; $St = 0.37$, dashed line; $St = 2.03$, dotted line). []{data-label="fig:fig7"}](Figure7.png)

The values of the acceleration r.m.s. (table \[tab:tab4\]) provide a further indication that the snow particles captured in Jan2016 are less sensitive to preferential sweeping compared to those in Nov2018. In the latter case, the acceleration variance normalized by Kolmogorov scaling is $a_0 = \overline{a_s'^2}/\left(\varepsilon^{3/2}\nu^{-1/2}\right) = 8.5$. This value is close to the expectation for tracers at such high $Re_\lambda$ [@Ishihara2007; @Ireland2016a], and it results from the action of two opposite effects: particle inertia (which reduces $a_0$ compared to tracers, @Bec2006, @Ireland2016a), and the effect of particle trajectories crossing the trajectories of fluid elements due to gravitational drift (which increases $a_0$, @Ireland2016b, @Mathai2016). The gravitational drift is measured by the ratio of the fall speed to the air velocity fluctuations, $|W_s|/u_{rms}$. The Jan2016 case has both lower particle inertia (according to our estimates of $St$) and larger gravitational drift through the turbulence (table \[tab:tab4\]), and indeed it shows a multifold increase of the non-dimensional variance, $a_0=37.9$.
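The Kolmogorov normalization used here reduces to a one-line helper; note that the dissipation rate $\varepsilon$ is not quoted in this section, so any value supplied to it is an assumption:

```python
# Kolmogorov-normalized acceleration variance,
# a0 = <a'^2> / (eps^(3/2) * nu^(-1/2)), the quantity quoted in the text
# (8.5 for Nov2018, 37.9 for Jan2016). The dissipation rate epsilon must be
# supplied by the caller; it is not given in this section, so any value
# used here is an assumption. nu defaults to air kinematic viscosity.

def a0(a_rms, epsilon, nu=1.5e-5):
    """Normalized acceleration variance from the r.m.s. acceleration."""
    return a_rms**2 / (epsilon**1.5 * nu**-0.5)
```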
A similarly dramatic rise of $a_0$ was recently reported for both heavy particles [@Ireland2016b] and bubbles [@Mathai2016] in homogeneous turbulence, and was explained by the gravitational drift causing the particles to quickly decorrelate from the local turbulence structures, thus experiencing fast-changing fluid motions. This process limits the ability of the particles to obey preferential sweeping, and therefore this mechanism is expected to be less influential for Jan2016 than Nov2018. Snow particle concentration {#sec:sec3.3} --------------------------- The snow particle concentration fields provide further support to the previous arguments. Because the Jan2019 dataset does not allow for locating individual particles, we characterized the concentration using the local and instantaneous image intensity $I(x,y,t)$. This approach is based on the observation that scattered light intensity varies linearly with the particle number density $N$ for monodisperse particles [@Bernard2002] and with $N\overline{d_s}^2$ for polydisperse particles [@Raffel2018], and it is often used to measure relative concentration in particle-laden flows (e.g., @Lai2016). We thus calculate the relative concentration as $C^* = I/\overline{I}_{1\mathrm{min}}$, where $\overline{I}_{1\mathrm{min}}$ is the 1-minute moving average of the image intensity at each location. This normalization helps attenuate temporal fluctuations of the lighting conditions (e.g., due to the power fluctuation of the search light) and does not affect the observed trends. We also note that using particle counting for Jan2016 and Nov2018 leads to the same conclusions for those datasets. Figure \[fig:fig8\]a presents the PDF of $C^*$ for the three datasets, indicating different levels of spatio-temporal variability. 
Jan2016 displays an approximately Gaussian distribution (kurtosis of 3.5, close to the value of 3 for a Gaussian), while Jan2019 exhibits stretched exponential tails (kurtosis of 4.9), pointing to the significant likelihood of exceptionally low- or high-concentration events. The trend across the cases parallels that of $Re_\lambda$: the more intense the turbulence, the higher the variance and intermittency of the concentration fields. Figure \[fig:fig8\]b illustrates a sample $C^*$ field from Jan2019, for which the standard deviation of the concentration exceeds 10% of the mean. This representative snapshot clearly shows relatively dense zones, vertically elongated and interleaved with more dilute ones. We will characterize such spatial clustering in the next section. Here we stress that the more turbulent cases display stronger clustering, and that clustering is typically concurrent with settling enhancement by preferential sweeping [@Aliseda2002; @Baker2017; @Petersen2019; @Momenifar2020].

![(a) PDF of snow particle relative concentration $C^*$. Black solid, dotted, and dashed lines correspond to Jan2016 ($Re_\lambda = 938$), Nov2018 ($Re_\lambda = 3545$), and Jan2019 ($Re_\lambda = 9180$), respectively. (b) Instantaneous $C^*$ field from Jan2019. []{data-label="fig:fig8"}](Figure8.png)

Direct evidence of preferential sweeping in the more turbulent datasets is provided by the correlation between the local concentration and the simultaneous vertical velocity. In figure \[fig:fig9\] we plot the PIV-based vertical velocity of the snow particles conditioned on the local concentration. Because the latter is available at every pixel, we use its spatial average in each PIV interrogation window. The level of correlation between concentration and fall speed is marginal for Jan2016, significant for Nov2018, and strongest for Jan2019.
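The conditional statistic of figure 9 amounts to binning the vertical velocity by the local concentration and averaging within each bin; a minimal sketch is given below (ours; the use of quantile bins is our assumption about the binning):

```python
import numpy as np

# Sketch of the conditional statistic of figure 9 (our illustration): snow
# vertical velocity averaged in quantile bins of the local relative
# concentration C*, then normalized by the unconditional mean settling
# velocity. Quantile binning is an assumption, chosen to keep bins populated.

def velocity_conditioned_on_concentration(c_star, w, n_bins=10):
    edges = np.quantile(c_star, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(c_star, edges) - 1, 0, n_bins - 1)
    w_cond = np.array([w[idx == i].mean() for i in range(n_bins)])
    return w_cond / w.mean()
```

With preferential sweeping, the normalized conditional velocity grows above 1 in the high-concentration bins, as in figure 9 for Nov2018 and Jan2019.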
Analogous trends have been reported since the first demonstration of preferential sweeping by @Wang1993, and recently confirmed in various simulations and laboratory experiments (among others, @Aliseda2002 [@Baker2017; @Huck2018; @Petersen2019]). This is a strong indication that the relatively large settling velocity of Nov2018 and Jan2019 (compared to diameter-based expectations) is due to the snow particles preferentially sampling downward air flow regions.

![Ensemble-averaged snow particle settling velocity conditioned on the value of the local relative concentration $C^*$ and normalized by the unconditional mean settling velocity. Symbols as in figure \[fig:fig6\]. []{data-label="fig:fig9"}](Figure9.png)

Snow particle clustering
------------------------

In the above we have shown evidence that snow particles respond to air velocity fluctuations similarly to small inertial particles in turbulence. It is then of interest to characterize the appearance of one of the most striking effects of particle-turbulence interaction: spatial clustering. In the following we describe the properties of clusters identified using approaches analogous to those of previous laboratory studies. While quantitative comparisons are thwarted by the uncertainty on the snow particle Stokes number, we show that the concentration fields display the hallmark features repeatedly reported in laboratory studies. We focus specifically on the Jan2019 case, which presents the most intense turbulence and the most inhomogeneous concentration. We follow @Aliseda2002 and identify clusters as connected regions where the concentration $C^*$ is above a prescribed threshold. This approach is standard in image object segmentation, and it has been applied to passive scalars, enstrophy, and velocity fluctuations to detect coherent flow structures in turbulence [@Catrakis1996; @Moisy2004; @Lozano-Duran2012; @Carter2018].
In order to select an objective threshold $C_\mathrm{thold}^*$, we analyse the percolation behaviour of the identified objects [@Moisy2004]. For higher values of the threshold only a few small clusters are detected, which grow in size and number up to a maximum as the threshold is lowered. They then start to merge, their number decreasing until a single macro-cluster occupies the entire domain. This process is illustrated in figure \[fig:fig10\], highlighting the chosen threshold which corresponds to the maximum number of identified clusters [@Lozano-Duran2012; @Carter2018]. We disregard those that touch the image border, as their full spatial extent could be underestimated. ![Average number of clusters per image as a function of relative concentration threshold $C^*_\mathrm{thold}$. The inset shows clusters in a binarized concentration field using the threshold that maximizes the number of detected clusters (vertical dashed line in the plot). []{data-label="fig:fig10"}](Figure10.png) Similar to previous imaging studies of particle-laden turbulence, we consider the PDF of the cluster area $A_c$ normalized by the Kolmogorov scale (figure \[fig:fig11\]a). The slope displays a marked change above a length scale corresponding to the light sheet thickness (below which the likelihood of imaging overlapping objects prevents their characterization). Larger clusters show a power-law decay of the area distribution over more than a decade. This suggests self-similarity between clusters of different sizes, pointing to their origin from turbulent eddies (which are also self-similar, @Moisy2004; @Baker2017). The power-law exponent is consistent with the previously reported value of -2 [@Monchaux2010; @Petersen2019]. The detected objects can reach linear dimensions of several meters. 
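The cluster detection and the percolation-based threshold selection described above can be sketched as follows (our illustration, using `scipy.ndimage.label` for connected-region labelling; its default 4-connectivity is an assumption):

```python
import numpy as np
from scipy import ndimage

# Sketch of the cluster detection described in the text (our illustration):
# clusters are connected regions where C* exceeds a threshold, and the
# threshold is selected by percolation analysis, i.e. where the number of
# detected clusters peaks. ndimage.label uses 4-connectivity by default.

def count_clusters(c_star, threshold):
    """Number of connected regions with c_star > threshold."""
    _, n = ndimage.label(c_star > threshold)
    return n

def percolation_threshold(c_star, candidates):
    """Candidate threshold that maximizes the number of detected clusters."""
    counts = [count_clusters(c_star, t) for t in candidates]
    return candidates[int(np.argmax(counts))]
```

Lowering the threshold from the top of the percolation curve merges clusters into fewer, larger objects, reproducing the behaviour of figure 10.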
While this challenges the classic estimates of inertial particle clusters being $\mathcal{O}(10\eta)$ in size [@Eaton1994], there is mounting experimental evidence that the cluster size increases with the flow Reynolds number [@Sumbekova2017]. ![(a) PDF of snow particle cluster area normalized by Kolmogorov scaling, showing a power-law decay with an exponent close to -2 for sizes larger than the light sheet thickness (vertical dashed line). (b) Scatter plot of cluster perimeter versus square root of the cluster area, both normalized by Kolmogorov scaling. []{data-label="fig:fig11"}](Figure11.png) To further describe the cluster topology, figure \[fig:fig11\]b shows a scatter plot of their perimeters versus the square root of their areas. For small clusters, the data points follow a power law with unit exponent, as expected for regular two-dimensional objects. Such a trend is inherently impacted by the light sheet thickness. For larger ones (especially for sizes far larger than the light sheet thickness) the exponent is significantly higher, indicating a convoluted structure of the cluster borders. This trend was observed in several previous imaging studies of particle-laden turbulence [@Aliseda2002; @Monchaux2010; @Petersen2019] and is consistent with the view of inertial particle clusters as fractal sets [@Calzavarini2008]. Clusters of heavy particles settling in turbulence are known to be elongated and preferentially aligned with the vertical direction [@Woittiez2009; @Dejoan2013; @Ireland2016b; @Baker2017; @Petersen2019]. Figure \[fig:fig12\]a reports the PDF of the cluster aspect ratio (obtained by ellipse-fitting as for the snow particle images from DIH). The peak slightly below 0.5 is consistent with the laboratory study of @Petersen2019, who found the distribution to be robust for a range of physical parameters. 
The PDF of the angle $\theta_c$ made by the ellipse major axis with the horizontal (figure \[fig:fig12\]b) confirms a strong prevalence of vertically oriented clusters. The slight preference for angles just below 90 degrees is likely a consequence of the significant wind speed and mean shear.

![PDFs of (a) aspect ratio and (b) orientation angle of snow particle clusters. []{data-label="fig:fig12"}](Figure12.png)

Finally, we consider the cluster fall speed $W_c$, obtained by averaging the vertical velocity at all locations belonging to a given cluster. This is then ensemble-averaged over all clusters of a certain size, normalized by the mean settling velocity $W_s$, and plotted against the cluster area (figure \[fig:fig13\]). Overall, clusters fall significantly faster than $W_s$, in keeping with the velocity-concentration correlation shown above. Also, there is an apparent trend of increasing fall speed with cluster size, as seen in laboratory experiments [@Huck2018; @Petersen2019]. While this could be merely due to the preferential sampling of downward flow regions, the sharp increase of $W_c$ above a certain cluster size suggests that a more complex interaction between the snow and the turbulent air may be taking place.

![Normalized cluster velocity as a function of cluster size. []{data-label="fig:fig13"}](Figure13.png)

The estimated particle volume fraction for Jan2019 is above $10^{-6}$ (table \[tab:tab3\]) and may approach $10^{-5}$ in the denser clusters. According to widespread criteria for gas-solid flows [@Elghobashi1994; @Balachandar2010], in this range of concentrations the dispersed phase is expected to exert a significant back-reaction on the carrier fluid (so-called two-way coupling). Therefore, the bigger clusters, by virtue of the simultaneous action of large numbers of particles, may be able to collectively drag air down with them, enhancing their fall speed beyond what is granted by one-way-coupling mechanisms like preferential sweeping.
Such an effect has been proposed to explain settling velocities observed at similar volume fractions in experiments [@Aliseda2002; @Huck2018] and numerical simulations [@Bosse2006; @Frankel2016]. In fact, the s-shape of the velocity versus concentration curve in figure \[fig:fig9\] resembles the recent results of @Huck2018, who explained their findings using collective particle effects. Conclusions {#sec:sec4} =========== The present results provide evidence that atmospheric turbulence affects not only the variance, but also the mean of the snow settling velocity. Specifically, we show that seemingly contradictory trends between snow particle diameters and fall speeds can be explained by the ability of turbulence to enhance settling. This effect is attributed to the preferential sweeping of particles into downward regions of the air flow, which is known to be most intense for particles with Stokes number $St = \mathcal{O}(1)$ based on Kolmogorov scaling. The effect depends therefore on the coupling between the hydrometeor properties and the atmospheric turbulence. This explanation is consistent with the observed acceleration distributions, from which we infer the range of $St$ for different datasets. We record large acceleration variance for the case with a fall speed substantially larger than the air velocity fluctuations. We deduce that, for these snow particles, the crossing trajectory effect (caused by gravitational drift) dominates over the preferential sampling effect (due to the particle inertia): these hydrometeors quickly drift away from the local turbulent structures and are not strongly clustered by them. On the other hand, preferential sweeping is strong when turbulence fluctuations are comparable to gravitational drift. The clearest demonstration of preferential sweeping is found in the vertical velocity conditioned by the local concentration: regions of high concentration display higher settling velocities. 
This might also imply a back-reaction of the particles on the flow through collective drag. Our estimates of the highest volume fractions are indeed above classic thresholds for two-way coupling, although one should exert caution in applying those to a flow laden with complex particles. We have also demonstrated that, in the cases where strong preferential sweeping is inferred, the concentration field is highly non-uniform. Clusters appear over a wide range of scales, displaying signature features identified in laboratory experiments and numerical simulations: power-law size distribution, fractal-like silhouette, vertical elongation, and large fall speed that increases with size. Taken together, these results confirm and extend the conclusion of @Nemes2017: the phenomenology of inertial particles in turbulence, built over decades of canonical flow studies, is largely applicable to the dynamics of snow settling in air. Presently, none of these well-known concepts are incorporated in weather forecasting models. Other recent studies, also imaging-based, demonstrated that those concepts are in fact directly applicable to atmospheric flows: within clouds, clustering of droplets was recently observed using airborne holographic instruments [@Beals2015; @Larsen2018], and elongated regions devoid of droplets were identified at a mountain-top station [@Karpinska2019]. Environmental flows give access to much larger ranges of scales than laboratory or numerical experiments, enabling the exploration of fundamental fluid mechanics questions. To our knowledge, the present field work represents the highest-Reynolds-number flow measurements in which inertial particle clustering is observed and quantified.
The fact that we observe very large clusters (up to the integral scales of the turbulence) lends support to recent claims that these clusters grow larger with $Re_\lambda$ [@Sumbekova2017], and that increasing $Re_\lambda$ extends the range of scales to which the particles respond [@Tom2019]. Of course, the non-canonical aspects of naturally occurring particle-laden flows have to be considered. In particular, the morphology of snow particles is expected to play an important role, especially for complex-shaped and elongated snow particles [@Westbrook2017]. Our understanding of the interaction of anisotropic particles with turbulence has seen tremendous progress in recent years [@Voth2017], and imaging studies capable of testing these dynamics in the field are warranted. This will require three-dimensional imaging at high spatial and temporal resolution, and will benefit from novel capabilities now available for Lagrangian tracking (see e.g., @Guala2008a and the recent review by @Discetti2018). An important aspect which the present study cannot directly address is represented by the hydrometeor collision rate, and the impact turbulence has on it. This is expected to be strongly related to the polydispersity of the snow particles. Polydispersity drives the classic (gravitational) mechanism by which larger particles fall faster than and collide with smaller ones [@Pruppacher1997]. Recent simulations, however, show that turbulence enhances the relative velocity of polydisperse particles also in the horizontal direction [@Dhariwal2018]. Fundamental studies in this area are needed to extend the applicability of particle-turbulence dynamics to environmental flows. Additionally, field studies are warranted to investigate the correlation between the level of collision-driven aggregation and atmospheric turbulence. 
Acknowledgements {#acknowledgements .unnumbered}
================

The authors gratefully acknowledge the support of the US National Science Foundation through grant NSF-AGS-1822192. We also thank engineers from St. Anthony Falls Laboratory, including James Tucker, James Mullin, Chris Ellis, Jeff Marr, Chris Milliren and Dick Christopher, for their assistance in the experiments.

Declaration of Interests {#declaration-of-interests .unnumbered}
========================

The authors report no conflict of interest.
**Adiabatic groupoid, crossed product by ${\mathbb{R}}_+^*$ and Pseudodifferential calculus**

[by Claire Debord and Georges Skandalis]{}

Laboratoire de Mathématiques, UMR 6620 - CNRS
Université Blaise Pascal, BP [**8002**]{}
F-63171 Aubière cedex, France
claire.debord@math.univ-bpclermont.fr

Institut de Math[é]{}matiques de Jussieu, (UMR 7586), Université Paris Diderot (Paris 7)
UFR de Mathématiques, [CP]{} [**7012**]{} - Bâtiment Sophie Germain
5 rue Thomas Mann, 75205 Paris CEDEX 13, France
skandalis@math.univ-paris-diderot.fr

**Abstract** We consider the crossed product $G_{ga}$ of the natural action of ${\mathbb{R}}_+^*$ on the adiabatic groupoid $G_{ad}$ associated with any Lie groupoid $G$. We construct an explicit Morita equivalence between the exact sequence of order $0$ pseudodifferential operators on $G$ and (a restriction of) the natural exact sequence associated with $G_{ga}$. As an important intermediate step, we express a pseudodifferential operator on $G$ as an integral associated to a smoothing operator on the adiabatic groupoid $G_{ad}$ of $G$.

Introduction
============

Smooth groupoids are intimately linked to pseudodifferential calculi. Indeed, to every smooth groupoid is naturally associated a pseudodifferential calculus and therefore an analytic index. Furthermore, many pseudodifferential calculi have been shown to be the ones associated to naturally defined groupoids. The groupoid approach may then give a natural geometric insight into these calculi and the corresponding index theorems. See [@DebordLescure] for an overview of the subject. The use of groupoids in relation with index theory may be traced back to [@ConnesLNM], where A. Connes introduced the longitudinal pseudodifferential calculus on a foliation $(M,F)$.
He thus constructs an extension $\Psi^*(M,F)$ of the foliation $C^*$-algebra $C^*(M,F)$, which gives rise to an exact sequence $$0\to C^*(M,F)\to \Psi^*(M,F)\to C(S^*F)\to 0.$$ It appeared quite naturally that Connes’ construction only used the (longitudinal) smooth structure of the holonomy groupoid and could therefore be extended to any (longitudinally) smooth groupoid ([[*cf.*]{} ]{}[@MonthPie; @NWX; @LMN]). Some previously defined pseudodifferential calculi were recognized as being the ones associated with natural groupoids: for instance, the groupoid defined by B. Monthubert in [@MonthT1; @MonthT2] was shown to be suitable for R.B. Melrose’s $b$-calculus [@MelroseLivre]. Moreover, it appeared in a work of J-M. Lescure ([@Lescure]) that this calculus is the natural calculus associated to conical pseudo-manifolds. In [@ConnesNCG], A. Connes showed that the analytic index on a compact manifold can in fact be described in a way not involving (pseudo)differential operators at all, just by using the construction of a deformation groupoid, called the “tangent groupoid”. This idea was used in [@HilsumSk], and extended in [@MonthPie] to the general case of a smooth groupoid, where the authors associated to every smooth groupoid $G$ an *adiabatic groupoid*, which is obtained by applying the “deformation to the normal cone” construction to the inclusion $G^{(0)}\to G$ of the unit space of $G$ into $G$. The groupoid constructed in this way is the union $G_{ad}=G\times {\mathbb{R}}^*\cup {\mathfrak{A}}G\times \{0\}$ endowed with a natural smooth structure, where ${\mathfrak{A}}G$ is the total space of the algebroid of $G$, [[i.e.]{} ]{}of the normal bundle to the inclusion $G^{(0)}\to G$.
They then showed that the connecting map of the corresponding exact sequence $$0\to C_0({\mathbb{R}}_ +^*)\otimes C^*(G)\to C^*(G_{ad}^+)\to C^*({\mathfrak{A}}G)\simeq C_0({\mathfrak{A}}^* G)\to 0\eqno (1)$$ is the analytic index, where $G_{ad}^+=G\times {\mathbb{R}}^*_+\cup {\mathfrak{A}}G\times \{0\}$ is the restriction of $G_{ad}$ over $G^{(0)}\times {\mathbb{R}}_+$. In the present paper, extending ideas of Aastrup, Melo, Monthubert and Schrohe [@AMMS], we go one step further in this direction, showing that the (order $0$) pseudodifferential operators on a smooth groupoid can also be described as convolution operators by smooth functions on a suitable groupoid. The groupoid that we use is the crossed product of the adiabatic groupoid by the natural action of the group ${\mathbb{R}}_+^*$. Since exact sequence (1) is equivariant with respect to this action, we find an exact sequence $$0\to C^*(G)\otimes {\mathcal{K}}\simeq\Big(C_0({\mathbb{R}}_+^*)\otimes C^*(G)\Big)\rtimes {\mathbb{R}}_+^*\to C^*(G_{ad}^+)\rtimes {\mathbb{R}}_+^*\to C_0({\mathfrak{A}}^* G)\rtimes {\mathbb{R}}_+^*\to 0.$$ The algebra $C_0({\mathfrak{A}}^* G)\rtimes {\mathbb{R}}_+^*$ naturally contains $C(S^*{\mathfrak{A}}G)\otimes {\mathcal{K}}$ as an ideal. The main result of this paper is that the corresponding exact sequence $$0\to C^*(G)\otimes {\mathcal{K}}\to J(G)\rtimes {\mathbb{R}}_+^*\to C(S^*{\mathfrak{A}}G)\otimes {\mathcal{K}}\to 0$$ is related to the exact sequence of pseudodifferential operators $$0\to C^*(G)\to \Psi^*(G)\to C(S^*{\mathfrak{A}}G)\to 0$$ via a Morita equivalence. Our groupoid is the generalization of the one constructed in [@AMMS] for the case of the ordinary pseudodifferential calculus on a compact manifold $M$. 
It was shown there that the algebra associated to this groupoid is isomorphic to the algebra of Green operators (of order $0$ and class $0$) in the Boutet de Monvel calculus ([[*cf.*]{} ]{}[@Grubb; @Schrohe]) which in turn is known to be Morita equivalent to $\Psi^*(M)$. The proof in [@AMMS] is somewhat indirect, using Voiculescu’s theorem to prove that two exact sequences coincide. Our proof here is much more direct and therefore extends immediately to the general groupoid case: we explicitly construct a bimodule ${\mathcal{E}}$ which is a Morita equivalence between the algebras $\Psi^*(G)$ and $J(G)\rtimes {\mathbb{R}}_+^*$. As an important intermediate step, we express a pseudodifferential operator on $G$ as an integral associated to a smoothing operator on the adiabatic groupoid $G_{ad}$ of $G$ (Theorem \[pseudosintegrales\]). We should point out that J-M Lescure had previously observed that pseudodifferential operators on ${\mathbb{R}}^n$ arise as integrals of some functions on the tangent groupoid of ${\mathbb{R}}^n$ (private communication). Next, we show that the $C^*$-algebra $J(G)\rtimes {\mathbb{R}}_+^*$ is stable and therefore isomorphic to $\Psi^*(G)\otimes {\mathcal{K}}$, and the Hilbert module ${\mathcal{E}}$ is isomorphic to Kasparov’s absorbing module ${\mathcal{H}}_{\Psi^*(G)}$. To that end, we use the fact that positive order pseudodifferential operators define regular operators which was established by S. Vassout in [@chief]. Actually, using pseudodifferential calculus of complex order and complex powers also from [@chief], we construct an isomorphism of $J(G)$ onto a crossed product $\Psi^*(G)\rtimes {\mathbb{R}}$ which intertwines the action $\alpha $ of ${\mathbb{R}}_+^*$ on $J(G)$ with the dual action on the crossed product $\Psi^*(G)\rtimes {\mathbb{R}}$. 
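As an illustration (our specialization, not stated in this introduction): for the pair groupoid $G=M\times M$ of a closed manifold $M$, one has $C^*(G)\simeq{\mathcal{K}}$, $\Psi^*(G)\simeq\Psi^*(M)$ and ${\mathfrak{A}}G\simeq TM$, so that $S^*{\mathfrak{A}}G = S^*M$ and the two Morita-equivalent exact sequences specialize to

```latex
% Pair-groupoid case G = M x M (illustrative specialization, our addition):
0\to {\mathcal{K}}\to \Psi^*(M)\to C(S^*M)\to 0
\qquad\text{and}\qquad
0\to {\mathcal{K}}\otimes{\mathcal{K}}\to J(M\times M)\rtimes{\mathbb{R}}_+^*\to C(S^*M)\otimes{\mathcal{K}}\to 0,
```

the first of which is the classical exact sequence of order $0$ pseudodifferential operators on $M$.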
In a forthcoming paper ([@DSBM]), we will actually show that the algebra $J(G)\rtimes {\mathbb{R}}_+^*$ is not only isomorphic, but in fact *equal* to the algebra of Green operators - generalized to any Lie groupoid. The paper is organized as follows: - In the first section, we recall the construction of the deformation to the normal cone and the space of Schwartz functions on it. We also characterize the space of those functions whose Fourier transform vanishes at infinite order at $0$. - In the second section we recall some definitions concerning Lie groupoids, and in particular the construction of the associated “adiabatic” groupoid and its crossed product by the canonical action of ${\mathbb{R}}_+^*$. - Finally, in the last two sections, we construct the bimodule which is used for the Morita equivalence: in the third section we construct the smooth module ${\mathcal{E}}^\infty$, whose $C^*$-completion the Hilbert $C^*$-module ${\mathcal{E}}$ is shown to indeed define the desired Morita equivalence in section 4. **Remark on the general setting.** *For the simplicity of our exposition, we will assume that the groupoids involved are smooth and Hausdorff. We will further in some places reduce to the case where the unit space is compact. Of course one easily extends our constructions and results to more general settings: longitudinally smooth groupoids on manifolds with corners, continuous family groupoids ([[*cf.*]{} ]{}[@Paterson]), non Hausdorff case ([[*cf.*]{} ]{}[@ConnesSurvey]), and even groupoids associated with singular foliations ([[*cf.*]{} ]{}[@AndrSk1])... To proceed, one only has to consider the appropriate spaces of functions on the groupoids - that appear in the above cited papers.\ Furthermore, in what follows, $C^*(G)$ may be either the reduced or full $C^*$-algebra of the groupoid $G$. 
The choice is left to the reader!*

**Acknowledgements.** The first author would like to thank Jean-Marie Lescure for several illuminating discussions and his valuable help in the understanding of pseudodifferential operators. She would also like to thank the CNRS and the IMJ for hosting her during the spring term, and especially the chief of the operator algebra team for the wonderful working conditions offered.

Schwartz spaces on deformations to normal cones
===============================================

The adiabatic groupoid which is the main ingredient in our construction is a special case of a geometric object called a *deformation to the normal cone*.

Deformation to a normal cone {#DNC}
----------------------------

In this section, we recall this construction and define some function spaces on it: the associated space of functions of Schwartz decay *alla* P. Carrillo-Rouse ([@CR]) and some subspaces which will be needed in the sequel of the paper. Let $M_0$ be a smooth compact submanifold of a smooth manifold $M$ with normal bundle ${\mathcal{N}}$. As a set, the deformation to the normal cone is $D(M_0,M)=M\times {\mathbb{R}}^*\cup {\mathcal{N}}\times \{0\}$. In order to recall its smooth structure, we fix an exponential map, which is a diffeomorphism $\theta$ from a neighborhood $V'$ of the zero section $M_0$ in ${\mathcal{N}}$ to a neighborhood $V$ of $M_0$ in $M$. We may cover $D(M_0,M)$ with two open sets $V\times {\mathbb{R}}^*$ and $W={\mathcal{N}}\times \{0\}\cup V\times {\mathbb{R}}^*$: we endow $D(M_0,M)$ with the smooth structure for which the map $\Theta:(x,X,t)\mapsto ( \theta(x,tX),t)$ (for $t\ne 0$) and $\Theta:(x,X,0)\mapsto (x,X,0)$ is a diffeomorphism from $W'=\{(x,X,t)\in {\mathcal{N}}\times {\mathbb{R}};\ (x,tX)\in V'\}$ to $W$.
The group ${\mathbb{R}}_+^*$ acts smoothly on $D(M_0,M)$: for $t\in {\mathbb{R}}_+^*$ put $\alpha_t(z,\lambda)=(z,t\lambda)$ for $z\in M$ and $\lambda \in {\mathbb{R}}^*$ and $\alpha _t(x,U,0)=(x,\frac{U}t,0)$ for $x\in M_0$ and $U\in {\mathcal{N}}_x$.

Schwartz decay on ${\mathbb{R}}_+^*\times V$ {#renee}
--------------------------------------------

If $V$ is a smooth, not necessarily compact, manifold, define ${\mathcal{S}}({\mathbb{R}}_+^*;C_c^\infty(V))$ to be the space of smooth functions $t\mapsto f_t$ from ${\mathbb{R}}_+^*$ to $C_c^\infty(V)$ such that all the $f_t$ have support in a given compact subset of $V$ and $t\mapsto f_t$ has rapid decay with respect to all natural norms of $C_c^\infty(V)$. Equivalently, $(f_t)$ is such that the function $g:V\times {\mathbb{R}}\to {\mathbb{C}}$ defined by $g(z,t)=f_{\frac{t}{1-t}}(z)$ for $0<t<1$ and $g(z,t)=0$ otherwise is smooth with compact support. We will also consider ${\mathcal{S}}({\mathbb{R}}^*;C_c^\infty(V))$, which is the space of smooth functions $t\mapsto f_t$ from ${\mathbb{R}}^*$ to $C_c^\infty(V)$ such that all the $f_t$ have support in a given compact subset of $V$ and $t\mapsto f_t$ has rapid decay with respect to all norms at $0$ and $\pm\infty$.

Schwartz functions on a vector bundle {#Schwartzfibre}
-------------------------------------

There is a description of the space of Schwartz functions on the total space of a vector bundle in [@CR]. Here are two equivalent ways of defining this algebra which make it obvious that it is independent of the choice of charts. Let $E$ be a smooth real vector bundle over a smooth compact manifold $M$.

1. Consider $E$ as an open subspace of the bundle of spheres $S_E$, where $M_\infty$ is the set of points at infinity. The algebra ${\mathcal{S}}(E)$ is the space of smooth functions on $S_E$ vanishing at any point of $M_\infty$ together with all their derivatives.

2. For $t\in {\mathbb{R}}_+^*$, let $\beta_t$ denote the map $(x,\xi)\mapsto (x,t\xi)$ from $E$ to $E$.
Then define ${\mathcal{S}}_\beta(E)$ to be the space of functions of the form $z\mapsto \int_0^{+\infty}f_t(\beta_t(z))\,dt$ where $f\in {\mathcal{S}}({\mathbb{R}}_+^*;C_c^\infty(E))$. The fact that ${\mathcal{S}}_\beta(E)={\mathcal{S}}(E)$ is quite obvious. Indeed, if $f\in {\mathcal{S}}({\mathbb{R}}_+^*;C_c^\infty(E))$, then one defines $g\in C_c^\infty(S_E\times {\mathbb{R}})$ by $g(z,t)=f_{\frac{t}{1-t}}(\beta_{\frac{t}{1-t}}(z))$ if $0<t<1$ and $z\in E$ and $g(z,t)=0$ elsewhere. Then one may integrate it and obtain that $z\mapsto \int_0^{+\infty}f_t(\beta_t(z))\,dt$ is smooth on $S_E$ and vanishes as well as all its derivatives on $M_\infty$. Conversely, we have to show that the map $\varphi _\beta:{\mathcal{S}}({\mathbb{R}}_+^*;C_c^\infty(E))\to {\mathcal{S}}(E)$ is onto, where $\varphi _\beta(f):z\mapsto \int_0^{+\infty}f_t(\beta_t(z))\,dt$. Choose a metric on $E$; if $g\in {\mathcal{S}}(E)$, we may put $f(z,t)=\frac{h(\|z\|^2+t^2)}tg(\beta_{t^{-1}}(z))$ where $h\in C_c^{\infty}({\mathbb{R}})$ vanishes near $0$ and $\int_0^{+\infty}h(s^2)\frac {ds}s=1$. Then $\varphi_\beta(f)(z)=\int_0^{+\infty}h(t^2(1+\|z\|^2))g(z)\,\frac{dt}t=g(z)$. The Fourier transform is an isomorphism of ${\mathcal{S}}_\beta(E)$ with ${\mathcal{S}}_{\beta^*}(E^*)$. We will use the following rather easy result: \[J(G)1\] Let $f=(f_t)_{t\in {\mathbb{R}}}\in {\mathcal{S}}(E\times {\mathbb{R}})$. 1. For all $g\in C_c^{\infty}(E)$ the function $F:M\times {\mathbb{R}}\to {\mathbb{C}}$ defined by $F(x,0)=\hat f_0(x,0) g(x,0)$ and $F(x,t)=t^{-p}\int_{E_x} g(x,U)f (x,\frac Ut,t)\,dU$ for $t\ne 0$ is smooth on $M\times {\mathbb{R}}$. 2. The following are equivalent: 1. For all $g\in {\mathcal{S}}(E)$ the function $t\mapsto \int_E g(x,U)f (x,\frac Ut,t)\,dx\,dU$ vanishes as well as all its derivatives at $0$. 2.
For all $g\in {\mathcal{S}}(E)$ the (smooth) function $(x,t)\mapsto \int_{E_x} g(x,U)f (x,\frac Ut,t)\,dU$ vanishes as well as all its derivatives on $M\times \{0\}\subset M\times {\mathbb{R}}$. 3. The function $(x,\xi,t)\mapsto \hat f_t(x,\xi)$ vanishes as well as all its derivatives on $M\times \{0\}$ sitting in $E^*\times {\mathbb{R}}$ as zero section. Parseval’s formula yields $t^{-p}\int_{E_x} g(x,U)f (x,\frac Ut,t)\,dU=c\int _{E_x^*}\hat g(x,-\xi)\hat f_t(x,t\xi)\,d\xi$ (where $c$ is a suitable constant and $p$ is the dimension of $E$). 1. The function $(x,\xi,t)\mapsto \hat g(x,-\xi)\hat f_t(x,t\xi)$ lies in ${\mathcal{S}}(E^*\times {\mathbb{R}})$, thus (a) follows. 2. (ii)$\Rightarrow$(i) is obvious. Conversely, if (i) is satisfied, writing $\int_{E_x} g(x,U)f (x,\frac Ut,t)\,dU=t^kh_k(x)+o(t^k)$ and applying (i) to $g_1(x,U)=g(x,U)\overline{h_k(x)}$, we find $h_k=0$; thus by induction, we get (ii). We may write the Taylor expansion $\hat f_t(x,t\xi)=\sum_{j=0}^kt^ja_j(x,\xi)+t^{k+1}R(x,\xi,t)$ where the $a_j$ are polynomials in $\xi$ (of degree $\le j$). It follows that $\int_{E_x} g(x,U)f (x,\frac Ut,t)\,dU=o(t^{k+p})$ if and only if $a_j(x,\xi)=0$ for $j\le k$, whence (ii)$\iff$(iii). It is natural in this proposition to consider $f_t(x,U)$ as densities on $E_x$ rather than functions and therefore introduce a factor $t^{-p}$ in the integrals (i) and (ii). This factor of course has no bearing on the proposition. Trivial deformation to the normal cone {#actionAlpha} -------------------------------------- We consider here $D(M,E)$ where the manifold $M$ is sitting as zero section in the total space $E$ of a vector bundle over $M$. Then $D(M,E)=E\times {\mathbb{R}}$. Here, ${\mathbb{R}}_+^*$ acts on $E\times {\mathbb{R}}$ by $\alpha_t(x,U,\lambda)=(x,\frac{U}{t},t\lambda)$. For $f\in {\mathcal{S}}({\mathbb{R}}_+^*;C_c^{\infty}(E\times {\mathbb{R}}))$, put $\varphi_\alpha(f)(z)=\int_0^{+\infty}f_t(\alpha_t(z))\,dt$.
The image ${\mathcal{S}}_\alpha (E\times {\mathbb{R}})$ of $\varphi_\alpha$ is the set of $g\in {\mathcal{S}}(E\times {\mathbb{R}})$ such that the function $(x,U,t)\mapsto \|tU\|$ is bounded on the support of $g$. Indeed, if $f\in {\mathcal{S}}({\mathbb{R}}_+^*;C_c^{\infty}(E\times {\mathbb{R}}))$, one checks the support requirement immediately, and it is quite easy to check locally that $\varphi_\alpha(f)\in {\mathcal{S}}(E\times {\mathbb{R}})$. Conversely, let $g\in {\mathcal{S}}(E\times {\mathbb{R}})$ satisfy the support requirement; take $\chi\in C_c^\infty({\mathbb{R}})$ equal to $1$ near $0$ and $h\in C_c^{\infty}({\mathbb{R}})$ a function which vanishes near $0$ and satisfies $\int_0^{+\infty}h(s^2)\frac {ds}s=1$ as previously. We may set $f_1(x,U,\lambda,t)=\frac{h(\lambda ^2+t^{2})}tg(x,tU,\frac{\lambda}{t})(1-\chi(\frac{\lambda^2}{t^2}))$ and $f_2(x,U,\lambda,t)=\frac{h(\|U\|^2+t^{-2})}tg(x,tU,\frac{\lambda}{t})\chi(\frac{\lambda^2}{t^2})$.\ Note that for any $a>0$, the function $t\mapsto \frac{1}tg(x,tU,\frac{\lambda}t)$ obviously belongs to ${\mathcal{S}}({\mathbb{R}}_+^*;C_c^{\infty}(E_{\geq a}\times {\mathbb{R}}_{\geq a}))$ where $E_{\geq a}=\{U\in E \ ; \|U\|\geq a\}$ and ${\mathbb{R}}_{\geq a}={\mathbb{R}}\setminus ]-a,a[$. For small enough $|\lambda|$, either $h(\lambda ^2+t^{2})=0$ or $\chi(\frac{\lambda^2}{t^2})=1$, thus $f_1$ vanishes. For large enough $|\lambda|$ or $t$, $h(\lambda ^2+t^{2})=0$. Moreover $f_1$ has rapid decay when $t\rightarrow 0$. Finally, $f_1$ belongs to ${\mathcal{S}}({\mathbb{R}}_+^*;C_c^{\infty}(E\times {\mathbb{R}}))$. Similarly, $f_2$ vanishes for $t$ near $0$ and, for small enough $\|U\|$, when $t$ is large. In any case $f_2$ has rapid decay when $t\rightarrow \infty$ and thus $f_2$ belongs to ${\mathcal{S}}({\mathbb{R}}_+^*;C_c^{\infty}(E\times {\mathbb{R}}))$. One can easily check that $\varphi _\alpha (f_1)+\varphi _\alpha (f_2)=g$.
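For the reader's convenience, here is the computation behind the last equality (a routine check, written with the notation above and the normalization $\int_0^{+\infty}h(s^2)\frac{ds}s=1$):

```latex
% Evaluating f_1 and f_2 along the orbit alpha_t(x,U,lambda) = (x, U/t, t*lambda):
\[
f_1(\alpha_t(x,U,\lambda),t)=\frac{h\big(t^2(\lambda^2+1)\big)}{t}\,
  g(x,U,\lambda)\,\big(1-\chi(\lambda^2)\big),\qquad
f_2(\alpha_t(x,U,\lambda),t)=\frac{h\big(t^{-2}(\|U\|^2+1)\big)}{t}\,
  g(x,U,\lambda)\,\chi(\lambda^2).
\]
% The substitutions s = t*sqrt(1+lambda^2) and s = sqrt(1+||U||^2)/t give
\[
\int_0^{+\infty}h\big(t^2(1+\lambda^2)\big)\,\frac{dt}{t}
  =\int_0^{+\infty}h\big(t^{-2}(1+\|U\|^2)\big)\,\frac{dt}{t}
  =\int_0^{+\infty}h(s^2)\,\frac{ds}{s}=1,
\]
% whence
\[
\varphi_\alpha(f_1)+\varphi_\alpha(f_2)
  =g\,\big(1-\chi(\lambda^2)\big)+g\,\chi(\lambda^2)=g.
\]
```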
Schwartz functions on a deformation to the normal cone ------------------------------------------------------ In the same way as for bundles, we define ${\mathcal{S}}_\alpha(D(M_0,M))$ to be the set of integrals $\varphi _\alpha(f):z\mapsto \int_0^{+\infty}f_t(\alpha_t(z))\,dt$ where $t\mapsto f_t$ is in ${\mathcal{S}}({\mathbb{R}}_+^*;C_c^\infty(D(M_0,M)))$. Now, let $\theta :V'\to V$ be an “exponential map” which is a diffeomorphism of a (relatively compact) neighborhood $V'$ of the $0$ section $M_0$ in ${\mathcal{N}}$ onto a tubular neighborhood $V$ of $M_0$ in $M$. We obtain a diffeomorphism $\Theta:W'\to W$ where $W'=\{(x,U,t)\in {\mathcal{N}}\times {\mathbb{R}};\ (x,tU)\in V'\}$ and $W=V\times {\mathbb{R}}^*\cup {\mathcal{N}}\times \{0\}$. Since $D(M_0,M)=M\times {\mathbb{R}}^*\cup W$, it follows that $C_c^\infty (D(M_0,M))=C_c^\infty (M\times {\mathbb{R}}^*)+C_c^\infty (W)$, and since both $M\times {\mathbb{R}}^*$ and $W$ are invariant by $\alpha$, it follows that ${\mathcal{S}}_\alpha(D(M_0,M))$ is the sum of ${\mathcal{S}}({\mathbb{R}}^*;C_c^\infty (M))$ obtained as $\varphi_\alpha({\mathcal{S}}({\mathbb{R}}_+^*;C_c^\infty(M\times {\mathbb{R}}^*)))$ and $\{f\in C^\infty(W);\ f\circ \Theta\in {\mathcal{S}}_\alpha({\mathcal{N}}\times {\mathbb{R}})\}$ (where $f\circ \Theta$ is extended by $0$ outside $W'$). We denote by ${\mathfrak{J}}_0( M_0,M)$ the subspace ${\mathcal{S}}({\mathbb{R}}^*;C_c^\infty (M))$ of ${\mathcal{S}}_\alpha(D(M_0,M))$. \[ectoire\] For $f\in {\mathcal{S}}_\alpha(D(M_0,M))$ and $t\in {\mathbb{R}}$, we denote by $f_t:z\mapsto f(z,t)$ (with $z\in M$ if $t\ne 0$ and $z\in {\mathcal{N}}$ if $t=0$). Note that if $f\in {\mathcal{S}}_\alpha(D(M_0,M))$ and $\psi \in C^\infty (M)$ vanishes in the neighborhood of $M_0$, then the map $(z,t)\mapsto \psi (z)f(z,t)$ for $t\ne 0$ extends to a smooth map on $M\times {\mathbb{R}}$ vanishing at infinite order on $M\times \{0\}$. 
As a direct consequence of Proposition \[J(G)1\] we have: \[margot\] Let $\theta :V'\to V$ be an exponential diffeomorphism as above and $\chi\in C_c^\infty(V)$ equal to $1$ near $M_0\subset M$. Let $f\in {\mathcal{S}}_\alpha(D(M_0,M))$. 1. For all $g\in C_c^{\infty}({\mathcal{N}})$ the function $F:M_0\times {\mathbb{R}}\to {\mathbb{C}}$ defined by $F(x,0)=\hat f_0(x,0) g(x,0)$ and $F(x,t)=t^{-p}\int_{{\mathcal{N}}_x} g(x,U)f_t\circ \theta (x,U)\,dU$ for $t\ne 0$ is smooth on $M_0\times {\mathbb{R}}$.\[margota\] 2. The following are equivalent:\[gique\] 1. For all $g\in C_c^{\infty}(M)$ the function $t\mapsto \int_M g(x)f (x,t)\,dx$ vanishes as well as all its derivatives at $0$. 2. For all $g\in C_c^{\infty}({\mathcal{N}})$ the function $(x,t)\mapsto \int_{{\mathcal{N}}_x} g(x,U)f_t\circ \theta (x,U)\,dU$ vanishes as well as all its derivatives on $M_0\times \{0\}\subset M_0\times {\mathbb{R}}$. 3. The function $(x,\xi,t)\mapsto \widehat {(\chi f_t)\circ \theta }(x,\xi)$ vanishes as well as all its derivatives on $M_0\times \{0\}$ sitting in ${\mathcal{N}}^*\times {\mathbb{R}}$ as zero section. In particular, it follows that conditions [(ii)]{} and [(iii)]{} do not depend on the choice of $\theta$.\ We denote by ${\mathfrak{J}}(M_0,M)\subset {\mathcal{S}}_\alpha(D(M_0,M))$ the set of functions satisfying the above equivalent conditions. For every $f\in {\mathcal{S}}_\alpha(D(M_0,M))$, the function $(1-\chi )f_t$ has rapid decay when $t\to 0$. Replacing $f$ by $\chi f$, we may assume that $f\in {\mathcal{S}}_\alpha(D(M_0,W))$. The result follows from Proposition \[J(G)1\] since ${\mathcal{S}}_\alpha(D(M_0,W)){\buildrel {\theta^*}\over \longrightarrow}{\mathcal{S}}_\alpha(D(M_0,W'))$ is an isomorphism.
A family of semi-norms {#seminormes} ---------------------- We define a family $N_{k,\ell,j,m}$ of semi-norms on smooth functions on ${\mathbb{R}}^n\times {\mathbb{R}}^p\times {\mathbb{R}}$, for $k\in {\mathbb{N}}^n$, $\ell\in {\mathbb{N}}^p$, $j\in {\mathbb{N}}$ and $m\in {\mathbb{Z}}$. Put $$N_{k,\ell,j,m}(f)=\sup_{(x,\xi,t)\in {\mathbb{R}}^n\times {\mathbb{R}}^p\times {\mathbb{R}}}(\|\xi\|^2+t^2)^{m/2}\left |\frac{\partial ^{|k|+|\ell|+j}f}{\partial x^k\partial \xi ^\ell \partial t^j}(x,\xi,t)\right|.$$ We now use the notation introduced in the previous subsection. Assume first that $M_0$ is compact. Fix a finite open cover $({\cal O}_i)_{i\in I}$ of $M_0$ by subsets diffeomorphic to ${\mathbb{R}}^n$ over which the normal bundle is trivialized. Using a partition of unity adapted to $({\cal O}_i)_{i\in I}$, we obtain a family of semi-norms $N_{k,\ell,j,m}^i$ on $C^\infty({\mathcal{N}}^* \times {\mathbb{R}})$ and thus, via the Fourier transform and the map $\Theta:W'\to W$ defined above, a family $\widetilde N_{k,\ell,j,m}^i$ of semi-norms on the space $C_c^\infty(W)$ defined by $$\widetilde N_{k,\ell,j,m}^i(f)=N_{k,\ell,j,m}^i(\widehat {f\circ \Theta})$$ (where $f\circ \Theta$ is extended by $0$ outside $W'$). Finally, if $M_0$ is not compact, we define similarly a family of semi-norms using a locally finite cover $({\cal O}_i)_{i\in I}$. *The action of ${\mathbb{R}}_+^*$*\[actionR\] The action of ${\mathbb{R}}_+^*$ on $D(M_0,M)$ leads to an action by automorphisms $u\mapsto \alpha_u$ on ${\mathcal{S}}_\alpha(D(M_0,M))$ and its subspaces ${\mathfrak{J}}(M_0,M)$ (see prop. \[margot\]) and ${\mathfrak{J}}_0(M_0,M)$ ($={\mathcal{S}}({\mathbb{R}}^*;C_c^\infty (M))$). Note that $\widehat {\alpha_u(f)\circ \Theta}(x,\xi,t)=u^p \widehat {f\circ \Theta}(x,u\xi,ut)$ and therefore, the semi-norms $\widetilde N_{k,\ell,j,m}^i$ are multiplied by a suitable power of $u$ by this action.
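The power in question can be computed exactly (a routine check: each $\xi$- or $t$-derivative of $u^p\widehat{f\circ\Theta}(x,u\xi,ut)$ produces a factor $u$, the change of variables $(\xi,t)\mapsto(u^{-1}\xi,u^{-1}t)$ in the supremum rescales the weight $(\|\xi\|^2+t^2)^{m/2}$ by $u^{-m}$, and the $x$-variable, hence the partition of unity, is untouched by $\alpha_u$):

```latex
\[
\widetilde N_{k,\ell,j,m}^i(\alpha_u(f))
  = u^{\,p+|\ell|+j-m}\;\widetilde N_{k,\ell,j,m}^i(f),
  \qquad u\in\mathbb{R}_+^*.
\]
```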
The groupoids ============= Main notation ------------- Let us recall some standard constructions and notation on groupoids. #### Densities. If $M$ is a smooth manifold, $E$ is a real vector bundle of dimension $p$ on $M$ and $q\in {\mathbb{R}}$, we denote by $\Omega ^q(E)=|\Lambda ^pE^*|^q$ the $q$-density bundle on $E$: for $x\in M$, $\Omega^q_xE$ is the set of maps $\psi:\Lambda^pE_x\setminus\{0\}\to {\mathbb{R}}$ such that $\psi(\lambda X)=\vert \lambda \vert^q \psi(X)$. A $q$-density is a section of $\Omega^q(E)$ and the product of a $q$-density with a $q'$-density leads to a $(q+q')$-density. There is a natural isomorphism $\Omega^q(E\oplus E')\to \Omega ^q(E)\otimes \Omega^q(E')$. The positivity of a density makes sense, and thus the bundle of densities is an oriented real line bundle which is therefore trivial(izable). It is however sometimes important to keep track of the natural normalizations they give rise to. We put $\Omega^q(M)=\Omega^q(TM)$. The main use of $1$-densities is that their integral over $M$ makes sense. The natural action of diffeomorphisms takes into account the Radon-Nikodym derivative and therefore, there is a unique linear form $\int_M : C_c^\infty(M;\Omega^1(M)) \rightarrow {\mathbb{R}}$ which agrees in local coordinates with the Lebesgue integral. In this way one associates to a submersion $p:M\to M_1$ and a vector bundle $E$ on $M_1$ a natural map $p_!:C_c^{\infty}(M;p^*E\otimes \Omega^{1} (\ker dp))\to C_c^{\infty}(M_1;E)$ obtained by integrating $1$-densities along the fibers of $p$. #### Source, range, algebroid. When ${\mathcal{G}}\rightrightarrows {\mathcal{G}}^{(0)}$ is a Lie groupoid with source $s$ and range $r$, we denote ${\mathcal{G}}_x:= s^{-1}(x)$ and ${\mathcal{G}}^x:=r^{-1}(x)$ for any $x\in {\mathcal{G}}^{(0)}$.
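A basic example may help fix these conventions (the pair groupoid; standard, and not needed in the sequel): for $G=M\times M$ with $s(x,y)=y$, $r(x,y)=x$ and product $(x,y)(y,z)=(x,z)$, the algebroid recalled below is $\mathfrak{A}G\cong TM$, and the half-density convolution recalled below becomes composition of kernels:

```latex
% Pair groupoid G = M x M:  ker ds_(x,y) = T_x M,  ker dr_(x,y) = T_y M,
% so f(x,y) is a half density in T_x M and in T_y M.  Convolution reads
\[
(f\ast g)(x,z)=\int_{y\in M} f(x,y)\,g(y,z),
\]
% the integrand being a 1-density in y (one half density from f, one from g),
% so the integral is well defined without choosing a measure on M.
```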
The set of composable elements is ${\mathcal{G}}^{(2)}=\{(\gamma,\gamma');\ s(\gamma)=r(\gamma')\}$; the product $(\gamma,\gamma')\mapsto \gamma \gamma' $ is a smooth submersion $p:{\mathcal{G}}^{(2)}\to {\mathcal{G}}$. The *$s$-vertical tangent bundle* is the tangent space to the $s$-fibers, that is $T_s {\mathcal{G}}:= \ker ds= \underset{x\in {\mathcal{G}}^{(0)}}{\cup} T{\mathcal{G}}_x$. The $r$-vertical tangent bundle $T_r {\mathcal{G}}:= \ker dr$ is defined similarly. Recall that the restriction of $T_s {\mathcal{G}}$ to the set ${\mathcal{G}}^{(0)}$ of units identifies with the total space ${\mathfrak{A}}{\mathcal{G}}$ of the Lie algebroid of ${\mathcal{G}}$, which can also be thought of as the normal bundle to the inclusion ${\mathcal{G}}^{(0)}\to {\mathcal{G}}$. As usual, we will denote by $T^*_s {\mathcal{G}}$ and ${\mathfrak{A}}^* {\mathcal{G}}$ the corresponding dual bundles. #### The $*$-algebra of a groupoid. Let ${\mathcal{G}}$ be a Lie groupoid. As explained in [@ConnesSurvey], the natural $*$-algebra is obtained using half densities, namely $C_c^{\infty}({\mathcal{G}};\Omega^{1/2}(\ker ds\oplus \ker dr))$ endowed with the following operations: Involution. : The map $\kappa:\gamma\mapsto \gamma^{-1}$ exchanges $r$ and $s$ and therefore it acts naturally on $\Omega^{1/2}(\ker ds\oplus \ker dr)$; also, the bundle $\Omega^{1/2}(\ker ds\oplus \ker dr)$ has a real structure, [[i.e.]{} ]{}there is a natural complex conjugation $\omega \mapsto \overline\omega $ of this bundle. The adjoint of $f\in C_c^{\infty}({\mathcal{G}};\Omega^{1/2}(\ker ds\oplus \ker dr))$ is defined by $f^*(\gamma)=\kappa_*\Big(\overline{f(\gamma^{-1})}\Big)$. Product. 
: If $f,g\in C_c^{\infty}({\mathcal{G}};\Omega^{1/2}(\ker ds\oplus \ker dr))$, then the restriction of $f\otimes g$ to ${\mathcal{G}}^{(2)}$ is a section of the bundle $\Omega ^{1/2}(p^*T_r {\mathcal{G}}\oplus \ker dp\oplus \ker dp\oplus p^*T_s {\mathcal{G}})=p^*\Omega ^{1/2}(\ker dr\oplus \ker ds)\otimes \Omega^1\ker dp$, and by integration along the fibers of $p$ we obtain $f\ast g\in C_c^{\infty}({\mathcal{G}};\Omega^{1/2}(\ker ds\oplus \ker dr))$. From now on, we just write $C_c^{\infty}({\mathcal{G}};\Omega^{1/2})$ instead of $C_c^{\infty}({\mathcal{G}};\Omega^{1/2}(\ker ds\oplus \ker dr))$. The $C^*$-algebra of the groupoid ${\mathcal{G}}$ is a completion of the $*$-algebra $C_c^{\infty}({\mathcal{G}};\Omega^{1/2})$. The reduced $C^*$-algebra : is obtained as the completion of $C_c^{\infty}({\mathcal{G}};\Omega^{1/2})$ by the family of representations $(\lambda_x)_{x\in {\mathcal{G}}^{(0)}}$, where $\lambda_x$ is the representation by left convolution on $L^2({\mathcal{G}}_x)$ (which is the completion of $C_c^\infty ({\mathcal{G}}_x;\Omega^{1/2}{\mathcal{G}}_x)$). The full $C^*$-algebra : is obtained as the completion of $C_c^{\infty}({\mathcal{G}};\Omega^{1/2})$ by the family of all continuous representations ([[*cf.*]{} ]{}[@Renault; @KhoshSkand]). #### Pseudodifferential operators on Lie groupoids Let ${\mathcal{G}}$ be a Lie groupoid and let $\theta :V'\to V$ be an “exponential map” which is a diffeomorphism of a (relatively compact) neighborhood $V'$ of the $0$ section ${\mathcal{G}}^{(0)}$ in ${\mathfrak{A}}{\mathcal{G}}$ (considered as the normal bundle to the inclusion ${\mathcal{G}}^{(0)}\subset {\mathcal{G}}$) onto a tubular neighborhood $V$ of ${\mathcal{G}}^{(0)}$ in ${\mathcal{G}}$. We assume that $r(\theta(x,U))=x$ for $x\in {\mathcal{G}}^{(0)}$ and $U\in {\mathfrak{A}}_x {\mathcal{G}}$. Let $m\in {\mathbb{Z}}$.
A classical pseudo-differential operator of order $m$ is a multiplier $P=P_0+K$ of the $*$-algebra $C_c^{\infty}({\mathcal{G}};\Omega^{1/2})$ where $K\in C_c^{\infty}({\mathcal{G}};\Omega^{1/2})$ and left multiplication by $P_0$ is given by an expression $$P_0\ast f(\gamma)=\int _{\xi \in {\mathfrak{A}}^*_{r(\gamma)} {\mathcal{G}}}\Big(\int_{\gamma_1\in {\mathcal{G}}^{r(\gamma)}} e^{i\langle \theta^{-1}(\gamma_1)|\xi\rangle} \varphi(r(\gamma),\xi)\chi(\gamma_1) f(\gamma_1^{-1}\gamma) \,d\gamma_1\Big)d\xi$$ where $\chi \in C_c^{\infty}(V)$ is a bump function satisfying $\chi=1$ on ${\mathcal{G}}^{(0)}$ and $\varphi\sim \sum_{k=0}^{+\infty}a_{m-k}$ is a polyhomogeneous symbol ($a_j(x,\xi)$ is homogeneous of order $j$ in $\xi$). We may then write $P_0(\gamma_1)=\int _{\xi \in {\mathfrak{A}}^*_{r(\gamma_1)} {\mathcal{G}}} e^{i\langle \theta^{-1}(\gamma_1)|\xi\rangle} \varphi(r(\gamma_1),\xi)\chi(\gamma_1) \,d\xi$ meaning [[*e.g.*]{} ]{}that, as a multiplier, $P_0$ is the limit when $R\to \infty$ of $P_0^R\in C_c^{\infty}({\mathcal{G}};\Omega^{1/2})$ where $$P_0^R(\gamma)=\int _{\xi \in {\mathfrak{A}}^*_{r(\gamma)} {\mathcal{G}};\ \|\xi\|\le R} e^{i\langle \theta^{-1}(\gamma )|\xi\rangle} \varphi(r(\gamma ),\xi)\chi(\gamma ) \,d\xi.$$ Recall that if the order $m$ of $P$ is strictly negative, then $P$ extends to an element of $C^*({\mathcal{G}})$ and if $m=0$, then $P$ extends to a multiplier of $C^*({\mathcal{G}})$. The map $P\mapsto a_0$ is well defined and extends to an onto morphism $\sigma_0:\Psi^*({\mathcal{G}})\to C(S^*{\mathfrak{A}}{\mathcal{G}})$ with kernel $C^*({\mathcal{G}})$ where $\Psi^*({\mathcal{G}})$ is the closure of the algebra of pseudodifferential operators of order $\le 0$ in the multiplier algebra of $C^*({\mathcal{G}})$ and $C(S^*{\mathfrak{A}}{\mathcal{G}})$ is the commutative $C^*$-algebra of continuous functions on the space $S^*{\mathfrak{A}}{\mathcal{G}}$ of half lines in ${\mathfrak{A}}^*{\mathcal{G}}$. 
In other words, we have an exact sequence of $C^*$-algebras $$0\to C^*({\mathcal{G}})\longrightarrow \Psi^*({\mathcal{G}}) {\buildrel{\sigma_0}\over\longrightarrow } C(S^*{\mathfrak{A}}{\mathcal{G}})\to0.$$ The adiabatic groupoid {#adiagrou} ---------------------- Let us start with a smooth groupoid $G \rightrightarrows G^{(0)}$ with source map $s$ and range map $r$ and denote by ${\mathfrak{A}}G$ its Lie algebroid: it is the normal bundle of the inclusion $G^{(0)}\to G$ as unit space. Its adiabatic groupoid is the deformation to the normal cone $G_{ad}=D(G^{(0)},G)$ as in section \[DNC\]. As a set, $G_{ad}=G\times {\mathbb{R}}^*\cup {\mathfrak{A}}G\times \{0\}$, and its set of objects $G_{ad}^{(0)}$ is $G^{(0)}\times {\mathbb{R}}$. The inclusions $q\mapsto (q,\lambda)$ are groupoid morphisms from $G$ to $G_{ad}$ for $\lambda \ne 0$, and from ${\mathfrak{A}}G$ to $G_{ad}$ for $\lambda=0$. We fix an everywhere positive smooth density $\omega$ on $G$. We then get a smooth density $\omega_{ad} $ on $G_{ad}$: for $t\ne 0$, $\omega_{ad,t} =|t|^{-p}\omega$. From now on we fix the density $\omega$ (and therefore $\omega_{ad}$) and consider all elements of the groupoid algebra $C_c^\infty (G_{ad})$ as functions. The action of ${\mathbb{R}}_+^*$ and the [gauge adiabatic groupoid]{} --------------------------------------------------------------------- The action of the group ${\mathbb{R}}^*_+$ on the deformation to a normal cone that we already used is compatible with the groupoid structure of $G_{ad}=D(G^{(0)},G)$: $$\begin{array}{ccl} G_{ad}\times {\mathbb{R}}^*_+ & \rightarrow & G_{ad} \\ (\gamma,t;\lambda) & \mapsto & (\gamma,\lambda t) \mbox{ when } t\not=0 \\ (x,U,0,\lambda) & \mapsto & (x,\frac{1}{\lambda} U,0) \end{array}$$ The *[gauge adiabatic groupoid]{}* is the (smooth) groupoid obtained as a crossed product of this action (see [[*e.g.*]{} ]{}[@LG]).
Namely ${G_{ga}}:= G_{ad} \rtimes {\mathbb{R}}^*_+ \rightrightarrows G^{(0)}\times {\mathbb{R}}_+$ with structural morphisms $$\begin{array}{ll} \mbox{source and target :} & s_{bi}(\gamma,t;\lambda)=(s(\gamma), t) \mbox{ and } r_{bi}(\gamma,t;\lambda)=(r(\gamma),t\lambda ) \mbox{ for } t\not= 0 \\ & s_{bi}(x,U,0;\lambda)=r_{bi}(x,U,0;\lambda)=(x,0) \\ \mbox{product :} & (\gamma,\lambda' t;\lambda)\cdot(\gamma ' , t; \lambda')=(\gamma \gamma',t;\lambda \lambda') \mbox{ for } t\not= 0 \\ & (x,U,0;\lambda)\cdot (x,U',0;\lambda')=(x,\lambda' U+U',0;\lambda \lambda')\ . \end{array}$$ 1. At the level of Lie algebroids, these constructions are very simple and natural. Let us denote $({\mathfrak{A}}G, \sharp,[\ ,\ ]_{{\mathfrak{A}}G})$ the Lie algebroid of $G$, with its corresponding anchor and bracket. The Lie algebroid of $G_{ad}$ is $({\mathfrak{A}}G \times {\mathbb{R}}, \sharp_{ad},[\ ,\ ]_{ad})$ where $\sharp_{ad}:{\mathfrak{A}}G \times {\mathbb{R}}\to TG^{(0)}\times T{\mathbb{R}}$ is defined by $\sharp_{ad}(x,U,t)=(\sharp(x,tU),(t,0))$ for $(x,U,t)$ in ${\mathfrak{A}}G \times {\mathbb{R}}$, and $[\ ,\ ]_{ad}$ is the Lie bracket which satisfies $[X,Y]_{ad}(x,t)=t[X,Y]_{{\mathfrak{A}}G}(x)$ where $X,\ Y$ are smooth (local) sections of ${\mathfrak{A}}G$. The Lie algebroid of ${G_{ga}}$ is $({\mathfrak{A}}G \times T{\mathbb{R}},\sharp_{bi},[\ ,\ ]_{bi})$ where $\sharp_{bi}(x,U,t,\lambda)=(\sharp(x,tU),(t,t\lambda))$ and $[\ ,\ ]_{bi}$ is the Lie bracket induced by $[(X,\tau),(Y,\sigma)]_{bi}(x,t)=t([X,Y]_{{\mathfrak{A}}G},[\tau,\sigma])$ where $X,\ Y$ are smooth (local) sections of ${\mathfrak{A}}G$ and $\tau,\ \sigma$ are smooth local vector fields on ${\mathbb{R}}$. 2. This construction immediately extends to the case where $G$ is only assumed to be longitudinally smooth ([[i.e.]{} ]{}a continuous family groupoid in the sense of A. Paterson [@Paterson]). 3. 
Also, if $(M,{\mathcal{F}})$ is a singular foliation in the sense of [@AndrSk1] generated by vector fields $(X_i)_{1\le i\le n}$, the *adiabatic foliation* was constructed in [@AndrSk3]. We may construct the *[gauge adiabatic foliation]{}* on $M\times {\mathbb{R}}$ to be the foliation generated by the vector fields $t(X_i\otimes 1)$ and $t (1\otimes \partial /\partial t)$ (where $t$ is the ${\mathbb{R}}$ coordinate in $M\times {\mathbb{R}}$). Schwartz algebra and module =========================== The Schwartz algebra of Carrillo-Rouse and the ideal ${\mathcal{J}}(G)$ ----------------------------------------------------------------------- Let $G$ be a Lie groupoid and $G_{ad}=D(G^{(0)},G)$ the corresponding adiabatic groupoid. We will use the Schwartz space described above for a general deformation to the normal cone; this leads to a slight modification of the Schwartz algebra of P. Carrillo-Rouse ([[*cf.*]{} ]{}[@CR]). \[helene\] 1. *The ideal ${\mathcal{J}}_0(G)$ of functions with rapid decay at $0$* This is the space ${\mathfrak{J}}_0(G^{(0)},G)={\mathcal{S}}({\mathbb{R}}^*;C_c^\infty (G))$ defined above (sections \[renee\] and \[ectoire\]). It consists of smooth half densities $f$ with compact support on the groupoid $G\times {\mathbb{R}}_+$ such that, for every $k\in {\mathbb{N}}$, the function $(\gamma,t)\mapsto t^{-k}f(\gamma,t)$ extends smoothly to $G\times {\mathbb{R}}_+$. 2. *The (modified) Schwartz algebra ${\mathcal{S}}_c(G_{ad})$ of Carrillo-Rouse* This is the algebra ${\mathcal{S}}_\alpha(D(G^{(0)},G))$ defined above (section \[ectoire\]). Its elements are sums $f+g$ where $f\in {\mathcal{J}}_0(G)$ and $g\in C^\infty(W;\Omega^{1/2})$ is such that $\widetilde N_{k,\ell,j,m}^i(g)<+\infty$ for all $i,k,\ell,j,m$ with $m\ge 0$.
Note that there is a canonical groupoid morphism $G_{ad}\to G$ (the image of ${\mathfrak{A}}G\times \{0\}$ is $G^{(0)}\subset G$) and, by definition, the image under this morphism of the support of $f\in {\mathcal{S}}_c(G_{ad})$ is compact in $G$. On the other hand, unlike the original definition in [@CR], we had to drop here the conical support requirement. 3. *The ideal ${\mathcal{J}}(G)$* This is the space ${\mathfrak{J}}(G^{(0)},G)$ defined in prop. \[margot\]. Its elements are sums $f+g$ where $f\in {\mathcal{J}}_0(G)$ and $g\in C^\infty(W;\Omega^{1/2})$ is such that $\widetilde N_{k,\ell,j,m}^i(g)<+\infty$ for all $i,k,\ell,j,m$ with any $m\in {\mathbb{Z}}$. We now check that ${\mathcal{S}}_c(G_{ad})$ is indeed an algebra and ${\mathcal{J}}(G)$ an ideal. We begin with a remark: An element $f\in {\mathcal{S}}_c(G_{ad})$ is a family $(f_t)_{t\in {\mathbb{R}}}$, where $f_t\in C^\infty_c(G)$ for $t\ne0$ and $f_0\in {\mathcal{S}}({\mathfrak{A}}G)$. Note that, since $G\times {\mathbb{R}}^*$ is dense in $G_{ad}$, $f$ is determined by $(f_t)_{t\ne0}$. Roughly speaking, the definition implies that the support of $f_t$ concentrates around $G^{(0)}$ when $t$ goes to $0$. The space ${\mathcal{S}}_c(G_{ad})$ is a $*$-algebra: for $f,g\in {\mathcal{S}}_c(G_{ad})$, the families $(f_t^*)_{t\in {\mathbb{R}}}$ and $(f_t\ast g_t)_{t\in {\mathbb{R}}}$ belong to ${\mathcal{S}}_c(G_{ad})$. Moreover ${\mathcal{J}}_0(G)$ is a $*$-ideal of ${\mathcal{S}}_c(G_{ad})$. The function $(\gamma_1,\gamma_2)\mapsto f(\gamma_1)g(\gamma_2)$ defined on $G_{ad}^{(2)}=D(G^{(0)},G^{(2)})$ is an element of ${\mathcal{S}}_\alpha(D(G^{(0)},G^{(2)}))$. Since the composition $G^{(2)}\to G$ is a submersion, equivariant with respect to $\alpha$ (since $\alpha$ is an action of ${\mathbb{R}}_+^*$ by groupoid automorphisms), we find that integration along the fibers yields a continuous map ${\mathcal{S}}_\alpha(D(G^{(0)},G^{(2)}))\to {\mathcal{S}}_\alpha(D(G^{(0)},G))$. 
By continuity of the product of $C_c^\infty(G)$, we find that if $f\in {\mathcal{J}}_0(G)$ or $g\in {\mathcal{J}}_0(G)$, then $f\ast g\in {\mathcal{J}}_0(G)$. The assertions about the $*$-operations are obvious. \[convolution1\] Let $f\in {\mathcal{S}}_c(G_{ad})$. 1. For every $g\in C_c^{\infty}(G)$ the function $F:G\times {\mathbb{R}}\to {\mathbb{C}}$ defined by $F(\gamma,t)=f_t\ast g(\gamma)$ for $t\ne 0$ and $F(\gamma,0)=\hat f_0(r(\gamma),0) g(\gamma)$ is smooth. 2. \[convolution2\]We have $f\in {\mathcal{J}}(G)$ if and only if, for any $g\in C_c^\infty(G)$, the family $(f_t\ast g)_{t\in {\mathbb{R}}^*}$ is an element of ${\mathcal{J}}_0(G)$. Let $\theta :V'\to V$ be an “exponential map” which is a diffeomorphism of a (relatively compact) neighborhood $V'$ of the $0$ section $G^{(0)}$ in ${\mathfrak{A}}G$ onto a tubular neighborhood $V$ of $G^{(0)}$ in $G$. We assume that $r(\theta(x,U))=x$ for $x\in G^{(0)}$ and $U\in {\mathfrak{A}}_x G$.\ Let $\chi \in C_c^\infty (G)$ with support in $V$, such that $\chi(\gamma)=1$ for $\gamma$ near $G^{(0)}$. Then $((1-\chi )f_t)_{t\in {\mathbb{R}}}\in {\mathcal{J}}_0(G)$, whence $((1-\chi )f_t\ast g)_{t\in {\mathbb{R}}}\in {\mathcal{J}}_0(G)$ for all $g\in C_c^\infty(G)$.\ Furthermore, $$(\chi f_t\ast g)(\gamma)=\int_{{\mathfrak{A}}_{r(\gamma)} G} f_t\circ \theta (r(\gamma),U)h(\gamma,U)\,dU,$$ where $h(\gamma,U)=\chi(\theta (r(\gamma),U))g(\theta (r(\gamma),U)^{-1}\gamma)\delta (r(\gamma),U)$ (here $\delta$ is a suitable Radon-Nikodym derivative).\ The lemma now follows from Prop. \[margot\]. The space ${\mathcal{J}}(G)$ is a $*$-ideal of the algebra ${\mathcal{S}}_c(G_{ad})$. This follows immediately from Lemma \[convolution1\].\[convolution2\]). The smooth module ${\mathcal{E}}^\infty$ ---------------------------------------- The following rather technical Lemma uses the action $\alpha$ of ${\mathbb{R}}_+^*$ and the semi-norms defined in section \[seminormes\] and used in the Definitions \[helene\].
\[teknik\] Let $f,g\in {\mathcal{J}}(G)$. We assume that their support is small enough (in $G$) so that, for every $t,u\in {\mathbb{R}}^*$, the function $f_t\ast g_u$ has support in $V$. The function $u\mapsto (f_t\ast g_{tu})_{t\in {\mathbb{R}}}$ is a smooth map from ${\mathbb{R}}_+^*$ to ${\mathcal{J}}(G)$ with rapid decay as $u\to 0$ and as $u\to\infty$. More precisely, for every $i\in I$, $k\in {\mathbb{N}}^n$, $\ell\in {\mathbb{N}}^p$, $j\in {\mathbb{N}}$, $m\in {\mathbb{Z}}$ and $q\in {\mathbb{Z}}$, the function $u\mapsto u^q\widetilde N_{k,\ell,j,m}^i(f\ast\alpha _u(g))$ is bounded on ${\mathbb{R}}_+^*$. We may perform the construction of the adiabatic groupoid starting from the adiabatic groupoid $G_{ad}$! Since $g=(g_t)$ is an element of ${\mathcal{J}}(G)$, the function $(t,u)\mapsto \chi (t)\chi(u) g_{tu}$ is an element of ${\mathcal{J}}(G_{ad})$ where $\chi\in C_c^\infty({\mathbb{R}})$ is equal to $1$ near $0$. By Lemma \[convolution1\] applied to the groupoid $G_{ad}$, it follows that $(\chi(t)\chi(u)f_{t}\ast g_{tu})_{u\in {\mathbb{R}}_+^*}$ leads to an element of ${\mathcal{J}}_0(G_{ad})$ and thus has rapid decay when $u\to 0$, uniformly in $t$. In other words, the functions $u\mapsto u^{-q}\widetilde N_{k,\ell,j,m}^i(f\ast \alpha _u(g))$ are bounded for every $i\in I$, $k\in {\mathbb{N}}^n$, $\ell\in {\mathbb{N}}^p$, $j\in {\mathbb{N}}$, $m\in {\mathbb{N}}$ and $q\in {\mathbb{N}}$. In the same way, $u\mapsto \alpha _u(f)\ast g$ has rapid decay when $u\to 0$, whence, using the compatibility of the semi-norms with the action of ${\mathbb{R}}_+^*$, $u\mapsto f\ast \alpha ^{-1}_u(g)$ has rapid decay when $u\to 0$. From this, we deduce that the functions $u\mapsto u^{q}\widetilde N_{k,\ell,j,m}^i(f\ast \alpha _u(g))$ are bounded for every $i\in I$, $k\in {\mathbb{N}}^n$, $\ell\in {\mathbb{N}}^p$, $j\in {\mathbb{N}}$, $m\in {\mathbb{N}}$ and $q\in {\mathbb{N}}$; combining this with the previous bounds, the boundedness holds for every $q\in {\mathbb{Z}}$.
In other words, $u\mapsto f\ast \alpha _u(g)$ has rapid decay from ${\mathbb{R}}_+^*$ to ${\mathcal{S}}_c(G_{ad})$. To see that it also has rapid decay as a function from ${\mathbb{R}}_+^*$ to ${\mathcal{J}}(G)$, it is enough to check that, for $h\in C_c^\infty(G)$, the map $u\mapsto f\ast \alpha _u(g)\ast h$ has rapid decay from ${\mathbb{R}}_+^*$ to ${\mathcal{J}}_0(G)$. But by Lemma \[convolution1\].\[convolution2\]), $(t,u)\mapsto f_t\ast g_{tu}\ast h$ is an element of ${\mathcal{S}}({\mathbb{R}}^2;C_c^\infty(G))$ and vanishes as well as all its derivatives when $t=0$ or $u=0$. \[pseudosintegrales\] For $f\in {\mathcal{J}}(G)$ and $m\in {\mathbb{N}}$, the operator $\displaystyle\int_0^{+\infty}\! t^m f_t\, \displaystyle\frac {dt}{t}$ is an order $-m$ pseudodifferential operator of the groupoid $G$ [[i.e.]{} ]{}an element of ${\mathcal{P}}_{-m}(G)$; its principal symbol $\sigma$ is given by $\sigma(x,\xi)=\displaystyle\int_0^{+\infty} t^m\hat f(x,t\xi,0)\displaystyle\frac {dt}{t}\cdot$ More precisely, there is a (classical) pseudodifferential operator $P$ with principal symbol $\sigma$ such that for every $g\in C_c^\infty (G)$, we have $P\ast g=\displaystyle\int_0^{+\infty}\! t^m f_t\ast g\, \displaystyle\frac {dt}{t}$ and $g\ast P=\displaystyle\int_0^{+\infty}\! t^mg\ast f_t\, \displaystyle\frac {dt}{t}\cdot$ Of course, if $f\in {\mathcal{J}}_0(G)$ then $\int_0^{+\infty} t^m f_t\,\frac{dt}t \in C_c^\infty (G)$. In particular, this gives the meaning of $\displaystyle\int_0^{+\infty}\! t^m f_t\ast g\, \displaystyle\frac {dt}{t}$ and $\displaystyle\int_0^{+\infty}\! t^m g\ast f_t\, \displaystyle\frac {dt}{t}$ (thanks to Lemma \[convolution1\]). We thus need only to treat the case of $f\in C_c^\infty(W;\Omega^{1/2})$ satisfying the above condition. 
We may trivialize the half densities using a positive half density $\omega $ on $G$; we then may write $f_t=t^{-p}h_t \omega$ where $p$ is the dimension of the fibers of $G$ and $h$ is the restriction to $G\times {\mathbb{R}}_+^*$ of a smooth function with compact support in $G_{ad}$. Such a function can be written as $h(\gamma,t)=\chi(\gamma)\chi'(t)\varphi\Big(\frac{\theta^{-1} (\gamma)}t,t\Big)$, where $\chi$ and $\chi'$ are bump functions: $\chi\in C_c^\infty(G)$ has support contained in $V$ and is equal to $1$ in a neighborhood of $G^{(0)}$, $\chi'\in C_c^\infty({\mathbb{R}}_+)$ is equal to $1$ in a neighborhood of $0$, and $\varphi\in C_c^\infty({\mathfrak{A}}G\times {\mathbb{R}}_+)$ is such that $\hat \varphi$ vanishes as well as all its derivatives at points of the form $(x,0,0)$ with $x\in G^{(0)}$. Writing $\varphi(x,X,t)=(2\pi)^{-p}\int e^{i\langle X|\xi\rangle}\hat \varphi(x,\xi ,t)\,d\xi $, we find $$\begin{aligned} f_t(\gamma)&=&(2\pi t)^{-p}\chi(\gamma)\chi'(t)\omega \int e^{i\langle \frac{\theta^{-1} (\gamma)}t|\xi\rangle}\hat \varphi(x,\xi,t)\,d\xi\\ &=&(2\pi )^{-p}\chi(\gamma)\chi'(t)\omega \int e^{i\langle \theta^{-1} (\gamma)|\xi\rangle}\hat \varphi(x,t\xi,t)\,d\xi.\end{aligned}$$ Therefore, we have an equality (as multipliers of $C_c^\infty (G;\Omega^{1/2})$), $$\int_0^{+\infty} t^mf_t(\gamma)\,\frac {dt}t=(2\pi)^{-p}\chi(\gamma)\omega\int e^{i\langle \theta^{-1} (\gamma)|\xi\rangle}a(x,\xi)\,d\xi,$$ where $$\begin{aligned} a(x,\xi)&=&\int_0^{+\infty} t^m\chi'(t)\hat \varphi(x,t\xi ,t)\,\frac {dt}t\cdot\end{aligned}$$ Taking derivatives of $a$ in $x$ gives the same type of expression; taking derivatives in $\xi$ increases $m$. An expression $c(x,\xi)=\int_0^{+\infty} (t\|\xi\|)^m\chi'(t)b(x,t\xi ,t)\,\frac {dt}t$, with $b$ having rapid decay at infinity and at the points $(x,0,0)$, is bounded.
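The boundedness of such an expression can be checked directly (a routine estimate, using the rapid decay of $b$ at infinity and its infinite-order vanishing at the points $(x,0,0)$): substituting $s=t\|\xi\|$,

```latex
\[
c(x,\xi)=\int_0^{+\infty} s^m\,\chi'\!\Big(\frac{s}{\|\xi\|}\Big)\,
  b\Big(x,\;\frac{s\,\xi}{\|\xi\|},\;\frac{s}{\|\xi\|}\Big)\,\frac{ds}{s};
\]
% for s >= 1 the integrand is O(s^{m-1-N}) for every N by rapid decay of b,
% while for s <= 1 and ||xi|| >= 1 the vanishing of b at (x,0,0) gives an
% O(s^N) bound for every N; both parts are bounded uniformly in (x,xi).
```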
In other words, $a$ is a symbol of order $-m$ and type $(1,0)$. Now, given an expansion $\hat \varphi(x,\zeta,u)\sim \sum_{k=0}^\infty b_k(x,\zeta)u^k$ (for $u$ small), we find an expansion (for $\xi$ large) $a(x,\xi )\sim \sum_{k=0}^\infty a_{k+m}(x,\xi)$, where $a_{k+m}(x,\xi)=\int_0^\infty b_k(x,t\xi)t^{k+m}\,\frac {dt}t$ is homogeneous of degree $-k-m$ in $\xi$ (as is seen by substituting $t\mapsto t/\lambda$ in the integral). Of course, the same computation works for $m\in {\mathbb{Z}}$ and even $m\in {\mathbb{C}}$ ([[*cf.*]{} ]{}[@chief] for pseudodifferential operators on groupoids with complex order). Note that Theorem \[pseudosintegrales\] shows that every element $P\in {\mathcal{P}}_0(G)$ can be written as an integral $P_f=\displaystyle\int_0^{+\infty}\! f_t\, \displaystyle\frac {dt}{t}$ with $f\in {\mathcal{J}}(G)$ (using the standard Borel’s Theorem-like techniques to construct $f$ such that $P-P_f\in C_c^\infty(G)$; of course, if $P\in C_c^\infty(G)$, then $P=P_f$ where $f_t=\chi(t)P$ with an obvious choice of $\chi\in {\mathcal{S}}({\mathbb{R}}_+^*)$). \[ActionADroite\]Let $f=(f_t)_{t\in {\mathbb{R}}}\in {\mathcal{J}}(G)$ and $P\in {\mathcal{P}}_0(G)$. There is a (unique) element $h=(h_t)_{t\in {\mathbb{R}}}$ of ${\mathcal{J}}(G)$ such that $h_t=f_t\ast P$ for $t\in {\mathbb{R}}^*$. Moreover $\widehat {h_0}=\widehat {f_0}\sigma_0(P)$. Uniqueness follows from the density of $G\times {\mathbb{R}}^*$ in $G_{ad}$. The family $(f_t\ast P)$ is smooth in $G\times {\mathbb{R}}^*$ and the image in $G$ of its support is compact. Of course, if $f\in {\mathcal{J}}_0(G)$ then $(f_t\ast P)\in {\mathcal{J}}_0(G)$; we may thus assume that $f$ has support in a neighborhood of $G^{(0)}$ in $G$ as small as we wish. In the same way, since $P$ is quasi-local, we may assume, thanks to Lemma \[convolution1\], that the support of $P$ is also contained in a suitable neighborhood of $G^{(0)}$ in $G$. Now, using Theorem \[pseudosintegrales\], we may write $P=\int g_u \frac{du}{u}$ with $g\in {\mathcal{J}}(G)$.
By locality, we may assume that, for all $t,u\in {\mathbb{R}}^*$, the support of $f_t\ast g_u$ is contained in $V$. We find $h_t=f_t\ast P=\int_0^{+\infty}f_t\ast g_u\frac{du}{u}=\int_0^{+\infty}f_t\ast g_{tu}\frac{du}{u}$. The conclusion follows immediately from Lemma \[teknik\]. We found $h=\int_0^{+\infty} f\ast \alpha_u(g)\frac{du}{u}$. Evaluating at $t=0$, we find $h_0=\int_0^{+\infty}f_0\ast \alpha_u(g_{0})\frac{du}{u}$, and thus $\widehat{h_0}=\widehat {f_0}\int_0^{+\infty} \alpha_u(\widehat{g_{0}})\frac{du}{u}=\widehat {f_0}\sigma_0(P)$ (by Theorem \[pseudosintegrales\]). Lemma \[ActionADroite\] asserts that ${\mathcal{J}}(G)$ is endowed with a (right) ${\mathcal{P}}_0(G)$-module structure; we will denote this smooth module by ${\mathcal{E}}^{\infty}$. The Hilbert module ${\mathcal{E}}$ ================================== From now on, we will be interested in the restriction $G_{ad}^+$ of $G_{ad}$ to $G^{(0)}\times {\mathbb{R}}_+$. Let ${\rm ev_+}:C^*(G_{ad}) \rightarrow C^*(G_{ad}^+)$ be the morphism induced by the restriction map. Since $G^{(0)}\times {\mathbb{R}}_+$ is a closed saturated subspace of $G_{ad}$ and $G_{ad}^+$ is invariant under the action of ${\mathbb{R}}^*_+$, all the previous results obviously remain true when one replaces ${\mathcal{S}}_c(G_{ad})$, ${\mathcal{J}}(G)$ and ${\mathcal{J}}_0(G)$ by their images under ${\rm ev_+}$. For simplicity of notation, we keep the same symbols ${\mathcal{J}}(G)$ and ${\mathcal{J}}_0(G)$: they now denote the images under ${\rm ev_+}$ of the previously defined ${\mathcal{J}}(G)$ and ${\mathcal{J}}_0(G)$. Completion of ${\mathcal{E}}^\infty$ ------------------------------------ Let $\Psi^*(G)$ denote the $C^*$-algebra of pseudodifferential operators, [[i.e.]{} ]{}the norm closure of ${\mathcal{P}}_0(G)$ in the multiplier algebra of $C^*(G)$. Let also $\sigma_0:\Psi^*(G)\to C(S^*{\mathfrak{A}}G)$ be the principal symbol map.
We have the exact sequence of $C^*$-algebras $$0\to C^*(G)\longrightarrow \Psi^*(G){\buildrel{\sigma_0}\over\longrightarrow } C(S^*{\mathfrak{A}}G)\to 0\eqno(2)$$ For $P\in \Psi^*(G)$, the function $\sigma_0(P)$ is thought of as a homogeneous function defined outside the zero section in ${\mathfrak{A}}^*G$. The elements of $C^*(G_{ad}^+)$ are families $(f_t)_{t\in {\mathbb{R}}_+}$ with $f_t\in C^*(G)$ for $t\ne 0$ and $f_0\in C^*({\mathfrak{A}}G)\simeq C_0({\mathfrak{A}}^* G)$. Put $J_0(G)=\{f\in C^*(G_{ad}^+);\ f_0 =0\}\simeq C_0({\mathbb{R}}_+^*;C^*(G))$. We have an exact sequence $$0\to J_0(G)\longrightarrow C^*(G_{ad}^+){\buildrel{{\rm ev}_0}\over\longrightarrow } C_0({\mathfrak{A}}^* G) \to 0.$$ Finally, put $J(G)=\{f\in C^*(G_{ad}^+);\ \forall x\in G^{(0)},\ \widehat {f_0}(x,0)=0\}\subset C^*(G_{ad}^+)$. \[essentialideal\] An element $f\in C^*(G_{ad}^+)$ is determined by the family $(f_t)_{t\in {\mathbb{R}}_+^*}$: the ideal $J_0(G)$ is essential in $C^*(G_{ad}^+)$. Let $x\in G^{(0)}$. Put $G_{ad,x}^+=\{(\gamma,t)\in G_{ad}^+;\ s(\gamma)=x\}=G_x\times {\mathbb{R}}_+^*\cup {\mathfrak{A}}_xG$. The map $(\gamma,t)\mapsto t$ being a submersion, we obtain a continuous family $(H_{x,t})_{t\ge 0}$ of Hilbert spaces with $H_{x,0}=L^2({\mathfrak{A}}_xG)$ and $H_{x,t}=L^2(G_x)$ for $t>0$ as a completion of smooth half densities with compact support on $G^+_{ad,x}$. Let $f\in C^\infty_c(G_{ad}^+;\Omega^{1/2})$ and $g\in C^\infty_c(G^+_{ad,x};\Omega^{1/2})$; we have $f* g\in C^\infty_c(G^+_{ad,x};\Omega^{1/2})$. It follows that $t\mapsto \|f_t* g_t\|$ is continuous; by density, this remains true for $f\in C^*(G^+_{ad})$ and any continuous section $g=(g_t)$ of $(H_{x,t})$. But $\|f_0\|=\sup\{\|f_0* g_0\|;\ x\in G^{(0)},\ g\in (H_{x,t}),\ \sup_{t} \|g_t\|\le 1\}$. It follows that $\|f_0\|\le \sup_{t>0} \|f_t\|$. In the following, we consider $C^*(G^+_{ad})$ in the multiplier algebra ${\mathcal{M}}(J_0(G))$ of $J_0(G)$.
Since $J_0(G)=C_0({\mathbb{R}}_+^*)\otimes C^*(G)$, the algebras $C^*(G)$ and $\Psi^*(G)$ sit also in ${\mathcal{M}}(J_0(G))$. From Lemmas \[convolution1\] and \[ActionADroite\], we immediately get: \[ActionADroite2\] 1. For $f\in C^*(G)$ and $g\in J(G)$ we have $f\ast g\in J_0(G)$. \[convolution2\] 2. For $P\in \Psi^*(G)$ and $f\in J(G)$ we have $f\ast P \in J(G)$ and $\widehat{(f\ast P)_0}=\widehat{f_0}\sigma_0(P)$. \[ActionADroite2b\] $\square$ \[strictConv\] For $f\in {\mathcal{J}}(G)$, the integral $\displaystyle\int_0^{+\infty}\! f_t\, \displaystyle\frac {dt}{t}$ of Theorem \[pseudosintegrales\] converges strictly ([[i.e.]{} ]{}in the topology of multipliers of $C^*(G)$). Let $g\in C_c^\infty(G)$. It follows from Lemma \[convolution1\] that $\int_0^{+\infty} f_t\ast g\,\frac{dt}t$ converges in norm. Taking adjoints, it follows that $\int_0^{+\infty} g\ast f_t\,\frac{dt}t$ also converges in norm. From Theorem \[pseudosintegrales\], it follows that $\int_0^{+\infty} f_t\ast g\,\frac{dt}t=P\ast g$ where $P$ is a pseudodifferential operator of order $0$ and therefore extends to a multiplier of $C^*(G)$. Now, if $f$ is a positive element in ${\mathcal{J}}(G)$, it follows that $\langle g|\Big(\displaystyle\int_s^{+\infty}\! f_t\, \displaystyle\frac {dt}{t}\Big)\ast g\rangle \le \langle g|P\ast g\rangle $, therefore the family $\Big (\displaystyle\int_s^{+\infty}\! f_t\, \displaystyle\frac {dt}{t}\Big )_{s>0}$ is bounded. It follows that $\Big(\int_s^{+\infty} f_t\,\frac{dt}t\Big)\ast g$ converges to $P\ast g$ and $g\ast \Big(\int_s^{+\infty} f_t\,\frac{dt}t\Big)$ converges to $g\ast P$ for all $g\in C^*(G)$ (when $s\to 0$). Therefore $\displaystyle\int_0^{+\infty}\! f_t\, \displaystyle\frac {dt}{t}$ converges strictly. By the polarization identity, we find that $\int_0^{+\infty} g_t^*\ast h_t\,\frac{dt}t$ converges in the multiplier algebra of $C^*(G)$ for $g,h\in {\mathcal{J}}(G)$.
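The polarization identity invoked here can be spelled out: for $g,h\in {\mathcal{J}}(G)$ one has, pointwise in $t$, $$g_t^*\ast h_t=\frac14\sum_{k=0}^3 i^k\,(h_t+i^kg_t)^*\ast(h_t+i^kg_t),$$ so that the convergence of $\displaystyle\int_0^{+\infty} g_t^*\ast h_t\,\frac{dt}t$ in the multiplier algebra follows from the positive case treated above, applied to each of the four summands $(h+i^kg)^*\ast(h+i^kg)$ with $h+i^kg\in {\mathcal{J}}(G)$.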
Now, one can find $g,h\in {\mathcal{J}}(G)$ such that $f_0=g_0^*\ast h_0$ (take for instance $\hat g_0(x,\xi)=|\hat f_0(x,\xi)|^2+\exp(-\|\xi\|^2-\|\xi\|^{-2})^{1/4}$ and $\hat h_0(x,\xi)=\hat f_0(x,\xi)\hat g_0(x,\xi)^{-1}$). It follows that there exists $f^1\in {\mathcal{J}}(G)$ with $f_t=tf^1_t+g_t^*\ast h_t$. Now, since $f^1\in C^*(G_{ad}^+)$, it follows that $\|f^1_t\|$ is bounded. Therefore (using rapid decay at $\infty$), the integral $\int_0^{+\infty} f^1_t\,dt$ is norm convergent in $C^*(G)$. \[fullmodule\] There exists $f\in {\mathcal{J}}(G)$ such that $\int _0^{+\infty} f_t^*\ast f_t\frac {dt}{t}$ is an invertible element of $\Psi^*(G)$ and $\Big(1-\int _0^{+\infty} f_t^*\ast f_t\frac {dt}{t}\Big)\in C_c^\infty(G)$. Fix a smooth function $\psi:{\mathbb{R}}_+\to {\mathbb{R}}_+$ with support in $]1,2[$ such that $\int_0^{+\infty}\psi^2(t)\frac {dt}{t}=1$. Let $g\in {\mathcal{J}}(G)$ be such that $\hat g_0(x,\xi)=\psi(\|\xi\|)$ and $g_t=0$ for $t\ge 1$. The positive pseudodifferential operator $P=\int _0^{+\infty} g_t^*\ast g_t\frac {dt}{t}$ has principal symbol equal to $1$ by Theorem \[pseudosintegrales\]. By [@chief], there exists $Q\in {\mathcal{P}}_0(G)$ such that $1-Q^*PQ\in C_c^{\infty}(G)$. Using the exact sequence (2), it follows that there exists $b\in C^*(G)$ such that $b^*b+Q^*PQ\ge 1$. By density of $C_c^{\infty}(G)$ in $C^*(G)$, there exists $h\in C_c^{\infty}(G)$ such that $h^*\ast h+Q^*PQ$ is invertible. Taking $f_t=g_t\ast Q$ for $t\le 1$ and $f_t=\psi(t) h$ for $t \ge 1$, we find $\int _0^{+\infty} f_t^*\ast f_t\frac {dt}{t}=Q^*PQ+h^*\ast h$. We may now construct the main object of this section. \[auboisdormant\] There is a Hilbert $\Psi^*(G)$-module ${\mathcal{E}}$ containing ${\mathcal{J}}(G)$ as a dense subset, with the following operations: - For $f,g\in {\mathcal{J}}(G)\subset {\mathcal{E}}$, we have $\langle f|g\rangle =\int _0^{+\infty}f_t^*\ast g_t\frac{dt}t$.
- For $f\in {\mathcal{J}}(G)\subset {\mathcal{E}}$ and $P\in {\mathcal{P}}_0(G)\subset \Psi^*(G)$, we have $f\ast P\in {\mathcal{J}}(G)\subset {\mathcal{E}}$ and $(f\ast P)_t=f_t\ast P$ for $t\not=0$. The module ${\mathcal{E}}$ is a full $\Psi^*(G)$ module. Clearly, for fixed $f$, the map $g\mapsto\langle f|g\rangle= \int _0^{+\infty}f_t^*\ast g_t\frac{dt}t$ is linear and ${\mathcal{P}}_0(G)$-linear; also $\langle g|f\rangle=\langle f|g\rangle^*$. Furthermore $\langle f|f\rangle$ is the strict limit of elements of $C^*(G)_+$; therefore $\langle f|f\rangle\in \Psi^*(G)_+$. For $f\in {\mathcal{J}}(G)$, we then may put $\|f\|_{\mathcal{E}}=\|\langle f|f\rangle\|_{C^*(G)}^{1/2}$. By the Cauchy-Schwarz inequality for $C^*$-modules, this defines a norm on ${\mathcal{E}}^\infty$. Now, using again the Cauchy-Schwarz inequality, for $f,g\in {\mathcal{J}}(G)$ we have $\|\langle f|g\rangle\|_{C^*(G)}\le \|f\|_{\mathcal{E}}\|g\|_{\mathcal{E}}$ and, for $P\in {\mathcal{P}}_0(G)$, since $\langle f\ast P|f\ast P\rangle =P^*\langle f|f\rangle P\le \|\langle f|f\rangle\|_{C^*(G)}P^*\ast P$, we find $\|f\ast P\|_{\mathcal{E}}\le \|f\|_{\mathcal{E}}\|P\|_{\Psi^*(G)}$. It follows that the scalar product and the right action of ${\mathcal{P}}_0(G)$ extend and endow ${\mathcal{E}}$ with the desired Hilbert $\Psi ^*(G)$-module structure. It follows from Lemma \[fullmodule\] that ${\mathcal{E}}$ is full - and in fact, that there exists $\zeta\in {\mathcal{E}}$ ($\zeta=f\langle f|f\rangle^{-1/2}$) such that $\langle \zeta|\zeta\rangle=1$. Computation of ${\mathcal{K}}({\mathcal{E}})$ --------------------------------------------- We now construct the desired natural isomorphism $J(G)\rtimes {\mathbb{R}}_+^*\to {\mathcal{K}}({\mathcal{E}})$. 
Note first that if $f\in {\mathcal{S}}_c(G^+_{ad})$ and $g\in {\mathcal{J}}(G)$, then $f\ast g\in {\mathcal{J}}(G)$; furthermore, this left action is ${\mathcal{P}}_0(G)$-linear and $\langle f\ast g|f\ast g\rangle=\int g_t^*\ast f_t^*\ast f_t\ast g_t\frac {dt}t\le \|f\|^2\langle g|g\rangle$ (where $\|f\|=\sup \|f_t\|$ is the norm of $f$ in $C^*(G^+_{ad})$ - this holds as well for the reduced and the full $C^*$-norm on $G$ and $G_{ad}$). Extending by continuity, we obtain a natural morphism $\pi_0:C^*(G^+_{ad})\to {\mathcal{L}}({\mathcal{E}})$. The action of ${\mathbb{R}}_+^*$ on $G^+_{ad}$ gives rise to a unitary action on ${\mathcal{E}}$ given by $(U_s(f))_t=f_{st}$ for $f\in {\mathcal{J}}(G)$ and $s,t\in {\mathbb{R}}_+^*$. The pair $(\pi_0,U)$ is an equivariant representation of $(C^*(G^+_{ad}),{\mathbb{R}}_+^*)$, and therefore gives rise to a representation of $C^*(G^+_{ad})\rtimes {\mathbb{R}}_+^*\simeq C^*({G_{ga}})$, [[i.e.]{} ]{}a morphism $$\pi:C^*(G^+_{ad})\rtimes {\mathbb{R}}_+^*\to {\mathcal{L}}({\mathcal{E}}) .$$ Put ${\mathcal{E}}_0={\mathcal{E}}C^*(G)$. It is a closed submodule of ${\mathcal{E}}$. Recall that if $J$ is a closed two sided ideal in a $C^*$-algebra $A$ and $E$ is a Hilbert $A$-module, then $EJ=\{x\in E;\ \langle x|x\rangle \in J\}$ is a closed submodule. The quotient $E/EJ$ is the Hilbert $A/J$-module $E\otimes _AA/J$. We clearly[^1] have a short exact sequence $$0\to {\mathcal{K}}(EJ)\to {\mathcal{K}}(E)\to {\mathcal{K}}(E/EJ)\to0.$$ Note that if $J$ is an essential ideal of $A$, it follows that $EJ^\perp =\{0\}$. Indeed, for nonzero $x\in E$, there exists $b\in J$ such that $\langle x|x\rangle b\ne 0$, whence $\langle x|xb\rangle =\langle x|x\rangle b\ne 0$. \[bleue\] The morphism $\pi :J(G)\rtimes {\mathbb{R}}_+^*\to {\mathcal{L}}({\mathcal{E}})$ is an isomorphism from $J(G)\rtimes {\mathbb{R}}_+^*$ onto ${\mathcal{K}}({\mathcal{E}})$.
Consider the exact sequences: $$\begin{array}{ccccccccc} 0&\to &J_0(G)\rtimes {\mathbb{R}}_+^*&\to &J(G)\rtimes {\mathbb{R}}_+^*&\to&(J(G)/J_0(G))\rtimes {\mathbb{R}}_+^*&\to &0\\ &&\pi \downarrow&&\pi \downarrow&&\overline\pi \downarrow&&\\ 0&\to &{\mathcal{K}}({\mathcal{E}}_0)&\to &{\mathcal{K}}({\mathcal{E}})&\to&{\mathcal{K}}({\mathcal{E}}/{\mathcal{E}}_0)&\to &0 \end{array}$$ We will show that: 1. $\pi$ induces an isomorphism from $J_0(G)\rtimes {\mathbb{R}}_+^*$ onto ${\mathcal{K}}({\mathcal{E}}_0)$; 2. $\pi(J(G)\rtimes {\mathbb{R}}_+^*)\supset {\mathcal{K}}({\mathcal{E}})$; 3. The map $\overline \pi$ induced by $\pi$ gives rise to an isomorphism from $(J(G)/J_0(G))\rtimes {\mathbb{R}}_+^* $ onto ${\mathcal{K}}({\mathcal{E}}/{\mathcal{E}}_0)$. The Theorem then follows by diagram chasing. We proceed with the proof of these facts: 1. According to Lemma \[convolution1\], the module ${\mathcal{E}}_0$ is the closure of ${\mathcal{J}}_0(G)$. It is therefore canonically isomorphic to $C_0({\mathbb{R}}_+^*)\otimes C^*(G)$ and, since ${\mathbb{R}}_+^*$ acts by translation in $C_0({\mathbb{R}}_+^*)$, the statement follows. 2. Let $f,g\in {\mathcal{J}}(G)$; by Lemma \[teknik\], the map $s\mapsto f\ast \alpha_s(g^*)$ has rapid decay and thus defines an element $\int f\ast \alpha_s(g^*) \lambda _s ds/s$ in $J(G)\rtimes {\mathbb{R}}_+^*$. A direct computation then shows that $\vartheta _{f,g}=\pi\Big(\int f\ast\alpha_s(g^*) \lambda _s ds/s\Big)$ where $\vartheta _{f,g}$ is the usual “rank one” operator $h\mapsto f\langle g|h\rangle$ on ${\mathcal{E}}$. The result follows from the density of ${\mathcal{J}}(G)$ in ${\mathcal{E}}$. 3. The quotient $\Psi^*(G)/C^*(G)$ is isomorphic via the principal symbol map $\sigma$ to $C(S^* {\mathfrak{A}}G)$. Theorem \[pseudosintegrales\] and Lemma \[ActionADroite\] give the computations of $\sigma (\langle f|g\rangle)$ and $\widehat{(f\ast P)_0}$ for $f,g\in {\mathcal{J}}(G)$ and $P\in {\mathcal{P}}_0(G)$.
It follows that ${\mathcal{E}}/{\mathcal{E}}_0\simeq {\mathcal{E}}\otimes_\sigma C(S^* {\mathfrak{A}}G)$ is the Hilbert $C(S^* {\mathfrak{A}}G)$-module $C(S^*{\mathfrak{A}}G)\otimes L^2({\mathbb{R}}_+^*)$ obtained as completion of $C_c^{\infty}({\mathfrak{A}}^*G\setminus G^{(0)})$ with respect to the $C(S^* {\mathfrak{A}}G)$-valued scalar product given by $\langle f|g\rangle(x,\xi)=\int_0^{+\infty}\overline{f(x,t\xi)}g(x,t\xi)\frac {dt}{t}$ and right action $(fh)(x,\xi)=f(x,\xi)h(x,\frac{\xi}{\|\xi\|})$ (for $f,g\in C_c^{\infty}({\mathfrak{A}}^*G\setminus G^{(0)})$ and $h\in C^\infty(S^* {\mathfrak{A}}G)$). Moreover, the left action of $J(G)/J_0(G)\simeq C_0({\mathfrak{A}}^*G\setminus G^{(0)})$ is given by pointwise multiplication and the action of ${\mathbb{R}}_+^*$ is by scaling. The result follows. It is worth noting that, thanks to Lemma \[essentialideal\] and the amenability of ${\mathbb{R}}_+^*$, the ideal $J_0(G)\rtimes {\mathbb{R}}_+^*$ of $C^*({G_{ga}})=C^*(G^+_{ad})\rtimes {\mathbb{R}}_+^*$ is essential. In particular this remark gives an alternative proof of the injectivity of $\pi : J(G)\rtimes {\mathbb{R}}_+^*\to {\mathcal{L}}({\mathcal{E}})$. Pseudodifferential operators as convolution kernels --------------------------------------------------- Using Theorem \[bleue\] together with Lemma \[fullmodule\], we can see $\Psi^*(G)$ as sitting as a corner in $C^*(G_{ga})$: Let $\zeta\in {\mathcal{E}}$ satisfying $\langle \zeta|\zeta\rangle=1$ (as in Theorem \[auboisdormant\]). Then $\vartheta_{\zeta,\zeta}\in {\mathcal{K}}({\mathcal{E}})\subset C^*(G_{ga})$ is a projection and $\Psi^*(G)\simeq \vartheta_{\zeta,\zeta}C^*(G_{ga})\vartheta_{\zeta,\zeta}$.
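That $\vartheta_{\zeta,\zeta}$ is indeed a self-adjoint idempotent is a direct consequence of the normalization $\langle \zeta|\zeta\rangle=1$: for $h\in {\mathcal{E}}$, $$\vartheta_{\zeta,\zeta}\big(\vartheta_{\zeta,\zeta}(h)\big)=\zeta\langle \zeta|\zeta\rangle\langle \zeta|h\rangle=\zeta\langle \zeta|h\rangle=\vartheta_{\zeta,\zeta}(h),\qquad \vartheta_{\zeta,\zeta}^*=\vartheta_{\zeta,\zeta}.$$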
Actually, taking $f\in {\mathcal{E}}^\infty$ such that $\langle f|f\rangle$ is invertible and $1-\langle f|f\rangle\in C_c^\infty(G)$ (Lemma \[fullmodule\]), we can really see the elements of ${\mathcal{P}}_0(G)$ as convolution operators on $G_{ga}$: the map $\vartheta_{T(f),f}\mapsto \langle f|T(f)\rangle$ is an isomorphism from a corner of the convolution algebra of smooth functions with Schwartz decay on $G_{ga}$ onto a subalgebra of ${\mathcal{P}}_0(G)$ containing $\langle f|f\rangle {\mathcal{P}}_0(G)\langle f|f\rangle$. Note that for every $P\in {\mathcal{P}}_0(G)$, $P-\langle f|f\rangle\ast P\ast \langle f|f\rangle\in C_c^\infty (G)$. Finally $P=Q+R$ where $Q$ is (the image of) a smooth function on $G_{ga}$ and $R\in C_c^\infty(G)$. Stability of $\mathcal E$ ------------------------- We now prove that the module ${\mathcal{E}}$ is stable, [[i.e.]{} ]{}isomorphic to Kasparov’s universal module ${\mathcal{H}}_{\Psi^*(G)}$. We begin by recalling a few facts: 1. Recall first a few easy facts about Hilbert modules: 1. A Hilbert module $E$ is countably generated if and only if ${\mathcal{K}}(E)$ is $\sigma$-unital ([[i.e.]{} ]{}has a countable approximate unit). 2. For a countably generated Hilbert module $E$ over a $C^*$-algebra $B$, the following are equivalent: 1. the Hilbert $B$-module $E$ is stable, [[i.e.]{} ]{}isomorphic to $\ell^2({\mathbb{N}})\otimes E$; 2. the $C^*$-algebra ${\mathcal{K}}(E)$ is stable, [[i.e.]{} ]{}isomorphic to ${\mathcal{K}}\otimes {\mathcal{K}}(E)$; 3. there is a morphism $\psi:{\mathcal{K}}\to {\mathcal{L}}(E)$ such that $\psi({\mathcal{K}})E=E$, or equivalently $\psi({\mathcal{K}}){\mathcal{K}}(E)={\mathcal{K}}(E)$. Note that by Cohen’s theorem ([@Cohen; @HeRo] - see [[*e.g.*]{} ]{}[@Ped] for the case of $C^*$-algebras) no linear span or closure is needed here. Indeed, the implications (i) $\Rightarrow $ (ii) $\Rightarrow $ (iii) are straightforward.
If (iii) is satisfied, then $E=\bigoplus _{k} \psi(e_{kk})E$ and the various $\psi(e_{kk})E$ are isomorphic *via* $\psi(e_{jk})$ where $e_{jk}$ are matrix units for ${\mathcal{K}}$, and (i) follows. 3. For a countably generated Hilbert module $E$ over a $\sigma $-unital $C^*$-algebra $B$, the following are equivalent: 1. $E$ is stable and full; 2. $E\simeq {\mathcal{H}}_B$. Indeed (ii) $\Rightarrow $ (i) is obvious. Conversely, let $E^*$ be the Hilbert ${\mathcal{K}}(E)$-module ${\mathcal{K}}(E,B)$; if $E$ is stable, then ${\mathcal{K}}(E)$ is isomorphic as a ${\mathcal{K}}(E)$-module to $\ell^2\otimes {\mathcal{K}}(E)={\mathcal{H}}_{{\mathcal{K}}(E)}$ and therefore $E'\oplus {\mathcal{K}}(E)\simeq {\mathcal{K}}(E)$ for every countably generated Hilbert ${\mathcal{K}}(E)$-module $E'$; if $E$ is full, then $B\simeq E^*\otimes_{{\mathcal{K}}(E)} E$; finally, if (ii) is satisfied, then $${\mathcal{H}}_B\simeq {\mathcal{H}}_B\oplus E\simeq (({\mathcal{H}}_B\otimes _BE^*)\oplus {\mathcal{K}}(E))\otimes _{{\mathcal{K}}(E)}E\simeq {\mathcal{K}}(E)\otimes _{{\mathcal{K}}(E)}E\simeq E.$$ 2. Let ${\mathcal{G}}$ be a longitudinally smooth groupoid with compact space of objects ${\mathcal{G}}^{(0)}$. Every element of the algebroid of ${\mathcal{G}}$, [[i.e.]{} ]{}a section of ${\mathfrak{A}}{\mathcal{G}}$, defines a differential operator affiliated to ${\mathcal{G}}$, [[i.e.]{} ]{}a multiplier of ${\mathcal{G}}$ (once chosen a trivialization of the longitunal half densities on ${\mathcal{G}}$). Recall ([[*cf.*]{} ]{}[@chief]) that the closure of an elliptic pseudodifferential operator $D$ on ${\mathcal{G}}$ is a *regular* unbounded multiplier of $C^*({\mathcal{G}})$ and $(1+D^*\overline D)^{-1}$ is a strictly positive element of $C^*({\mathcal{G}})$; moreover, if $D$ is formally self-adjoint, then $\overline D$ is self-adjoint. 
In particular, if $(X_1,\ldots ,X_m)$ are elements spanning the algebroid of ${\mathcal{G}}$ as a $C^\infty({\mathcal{G}}^{(0)})$ module, then the closure of $\sum_i X_i^*X_i$ is such a self-adjoint elliptic operator whose spectrum is in ${\mathbb{R}}_+$ - [[i.e.]{} ]{}it is positive. Furthermore, if $f$ is a smooth everywhere positive function on ${\mathcal{G}}^{(0)}$, the operator $\sum_i X_i^*X_i+f=f^{1/2}\Big(\sum \tilde X_i^*\tilde X_i+1\Big)f^{1/2}$ is invertible where $\tilde X_i=f^{-1/2}X_i$ and its inverse $\Big(\sum_i X_i^*X_i+f\Big)^{-1}$ is a strictly positive element in $C^*({\mathcal{G}})$. 3. Recall that a regular self-adjoint positive multiplier $D$ of a $C^*$-algebra ${\mathcal{A}}$ with resolvent in ${\mathcal{A}}$ defines a morphism $\pi_D:f\mapsto f(D)$ from $C_0({\mathbb{R}}_+^*)$ to ${\mathcal{A}}$. Note that, for $t\in {\mathbb{R}}_+^*$, we have $\pi_{tD}=\pi_D\circ \lambda_t$ where $\lambda_t$ is the automorphism of $C_0({\mathbb{R}}_+^*)$ induced by the regular representation. Since $t\mapsto \frac{t}{t^2+1}$ is a strictly positive element of $C_0({\mathbb{R}}_+^*)$, it follows that $\pi_D(C_0({\mathbb{R}}_+^*)){\mathcal{A}}$ is the closure of $D(D^2+1)^{-1}{\mathcal{A}}$. \[afer\] Let $G$ be a Lie groupoid with compact $G^{(0)}$ and $G_{ad}$ its adiabatic groupoid; let $(Y_1,\ldots ,Y_m)$ span ${\mathfrak{A}}G$ as a module over $C^\infty(G^{(0)})$. Let $D_1=\Big(\sum_i Y_i^*Y_i+1\Big )^{1/2}$. There is a unique morphism $\psi:C_0({\mathbb{R}}_+^*)\to C^*(G_{ad}^+)$ such that 1. $ev_1\circ \psi=\pi_{D_1}$ where $ev_1:C^*(G_{ad})\to C^*(G)$ is evaluation at $1$. 2. $\alpha_u\circ \psi=\psi\circ \lambda_u$ for all $u\in {\mathbb{R}}_+^*$.\[propaferii\] Moreover, 1. $ev_0\circ \psi(f)=f\circ q^{1/2}\in C_0({\mathfrak{A}}^*G)$ where $q=\sum Y_i^2$ the $Y_i$’s being considered as functions on ${\mathfrak{A}}^* G$ (linear on each fiber).\[propafera\] 2. 
$\psi(C_0({\mathbb{R}}_+^*))\subset J(G)$ and $\psi(C_0({\mathbb{R}}_+^*))J(G)=J(G)$.\[propaferb\] If $\psi $ satisfies (i) and (ii), then $(\psi (f))_t=f(tD_1)$ for $t\ne 0$. This shows immediately the uniqueness of $\psi$. Choose $u_0\ge 1$ and let ${\mathcal{G}}$ be the restriction of $G_{ad}$ to $[0,u_0]$. Let $(X_1,\ldots, X_m)$ be the canonical extension of $(Y_1,\ldots ,Y_m)$ to ${\mathfrak{A}}{\mathcal{G}}$. In particular for $u\ne 0$ we have $(X_i)_u=uY_i$. Put $D=\Big(\sum_i X_i^*X_i+h^2\Big )^{1/2}$, where $h$ is the function $(x,u)\mapsto u$ defined on ${\mathcal{G}}^{(0)}=G^{(0)}\times [0,u_0]$. Denote by $D_1=\Big(\sum_i Y_i^*Y_i+1\Big )^{1/2}$ its evaluation at $1$. For $u\in ]0,u_0]$, since $D_u=uD_1$, we have $ev_u\circ \pi_D=\pi_{D_1}\circ \lambda_{u}$. Since the spectrum of $D_1$ is contained in $[1,+\infty[$, it follows that for $f\in C_0({\mathbb{R}}_+^*)$ and $u\in {\mathbb{R}}_+^*$, $\|\pi_{D_1}\circ \lambda_{u}(f)\|\le \sup \{|f(v)|;\ v\ge u\}$. In particular $\lim_{u\to+\infty} \pi_{D_1}\circ \lambda_{u}(f)=0$. It follows that there is an element $\psi_f\in C^*(G_{ad}^+)$ such that, for $u\ne 0$, we have $(\psi_f)_u=\pi_{D_1}\circ \lambda_{u}(f)$ and $\psi_f$ restricted to $[0,u_0]$ is equal to $f(D)$. We thus get a homomorphism $\psi:f\mapsto \psi_f$ from $C_0({\mathbb{R}}_+^*)$ to $C^*(G_{ad}^+)$. It satisfies (i), and since $\alpha_u\circ \psi(f)$ and $\psi\circ \lambda_u(f)$ coincide on ${\mathbb{R}}_+^*$, they are equal. Property (ii) follows. Now, $D_0^2=q$ so we get property (\[propafera\]). By uniqueness, the morphism $\psi $ does not depend on the choice of $u_0$. It follows from property (\[propafera\]) that, for all $f\in C_0({\mathbb{R}}_+^*)$, since $f(0)=0$, $ev_0\circ \psi(f)$ vanishes on $M\subset {\mathfrak{A}}^*G$, whence $\psi(f)\in J(G)$. Put $J'=\psi(C_0({\mathbb{R}}_+^*))J(G)$.
As the restriction of $h$ to $[\varepsilon,u_0]$ is invertible for every $u_0$ and $\varepsilon$, the restriction of $D(1+D^2)^{-1}$ to $[\varepsilon,u_0]$ is a strictly positive element; it follows that $J'$ contains the functions ${\mathbb{R}}_+^*\to C^*(G)$ with compact support. It therefore contains the ideal $C_0({\mathbb{R}}_+^*)\otimes C^*(G)$ of $C^*(G^+_{ad})$. As the quotient $C^*(G^+_{ad})/C_0({\mathbb{R}}_+^*)\otimes C^*(G)\simeq C_0({\mathfrak{A}}^*G)$ is abelian, it follows that the right ideal $J'$ is two sided. Finally $ev_0(J')=C_0({\mathfrak{A}}^*G\setminus M) $ from property (\[propafera\]) since $\frac {q^{1/2}}{1+q}$ is a strictly positive element of $C_0({\mathfrak{A}}^*G\setminus M) $. The $C^*$-algebra $J(G)\rtimes {\mathbb{R}}_+^*\simeq {\mathcal{K}}({\mathcal{E}})$ is stable and therefore the Hilbert $\Psi^*(G)$-module ${\mathcal{E}}$ is stable, [[i.e.]{} ]{}isomorphic to $\ell^2\otimes \Psi^*(G)$. Indeed, by condition (\[propaferii\]) in prop. \[afer\], $\psi$ induces a morphism $\hat\psi$ from ${\mathcal{K}}=C_0({\mathbb{R}}_+^*)\rtimes _\lambda {\mathbb{R}}_+^*$ to $J(G)\rtimes _\alpha {\mathbb{R}}_+^*$ and by (\[propaferb\]), $\hat\psi({\mathcal{K}})\big(J(G)\rtimes _\alpha {\mathbb{R}}_+^*\big)=J(G)\rtimes _\alpha {\mathbb{R}}_+^*$. One can in fact show that $J(G)$ is isomorphic to a crossed product $\Psi^*(G)\rtimes_\beta {\mathbb{R}}$ in such a way that the ${\mathbb{R}}_+^*$ action on $J(G)$ is intertwined with the dual action $\hat \beta$. It follows that $J(G)\rtimes {\mathbb{R}}_+^*=\Psi^*(G)\otimes {\mathcal{K}}$ - we use the duality of ${\mathbb{R}}$ with ${\mathbb{R}}_+^*$ given by $(t,u)\mapsto u^{it}$. The action $\beta$ is given by $\beta_t(P)=D_1^{it}PD_1^{-it}$. The operator $D_1^{it}$ is pseudodifferential of complex order $it$ ([[*cf.*]{} ]{}[@chief]), therefore $\beta_t(P)$ is pseudodifferential of order $0$; it follows also from [@chief] that it has the same principal symbol as $P$.
We may embed $\Psi^*(G)$ in the multiplier algebra of $J(G)$ setting $P.(f_u)_{u\in {\mathbb{R}}_+^*}=(P*f_u)_{u\in {\mathbb{R}}_+^*}$ and $(f_u)_{u\in {\mathbb{R}}_+^*}.P=(f_u*P)_{u\in {\mathbb{R}}_+^*}$ thanks to prop. \[ActionADroite2\].\[ActionADroite2b\]); furthermore, we have a one parameter group $(D^{it})_{t\in {\mathbb{R}}}$ in the multipliers of $J(G)$. As $D_u$ and $D_1$ are scalar multiples of each other, we find in this way a covariant representation of the pair $(\Psi^*(G),{\mathbb{R}})$. Associated to this covariant representation of $(\Psi^*(G),{\mathbb{R}})$ is a morphism from $\Psi^*(G)\rtimes {\mathbb{R}}$ into the multiplier algebra of $J(G)$, but since the image of $C^*({\mathbb{R}})\subset \Psi^*(G)\rtimes_\beta {\mathbb{R}}$ is contained in $J(G)$, we get a homomorphism $\varphi :\Psi^*(G)\rtimes_\beta {\mathbb{R}}\to J(G)$. Note that the image of $\Psi^*(G)$ is translation invariant, [[i.e.]{} ]{}invariant by the extension $\overline\alpha_u$ of $\alpha_u$ to the multiplier algebra, and that $\overline\alpha_u(D^{it})=u^{it}D^{it}$. This shows that $\varphi$ is an equivariant morphism from $(\Psi^*(G)\rtimes_\beta {\mathbb{R}},\hat\beta)$ to $(J(G),\alpha)$. Now $\beta_t$ restricts to an action of ${\mathbb{R}}$ on $C^*(G)$, and according to prop. \[ActionADroite2\].\[convolution2\]) it follows that $\varphi(C^*(G)\rtimes_\beta {\mathbb{R}})$ is contained in the ideal $C_0({\mathbb{R}}_+^*)\otimes C^*(G)$ of $J(G)$. As $D_1^{it}$ is a multiplier of $C^*(G)$, this crossed product is trivial. More precisely: if $t\mapsto w_t$ is a continuous homomorphism from a group $\Gamma$ to the multiplier algebra of a $C^*$-algebra $A$, the (full or reduced) crossed product $A\rtimes_{ad\, w}\Gamma$ is isomorphic to the (max or min) tensor product $A\otimes C^*(\Gamma)$ where $A$ and $\Gamma $ map to the multiplier algebra respectively by $a\mapsto a\otimes 1$ and $t\mapsto w_t\otimes \lambda_t$. 
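Covariance of this pair of maps is checked directly: for $a\in A$ and $t\in \Gamma$, $$(w_t\otimes \lambda_t)(a\otimes 1)(w_t\otimes \lambda_t)^*=w_taw_t^*\otimes \lambda_t\lambda_t^*=({\rm ad}\,w_t)(a)\otimes 1,$$ which is exactly the action ${\rm ad}\,w$ implemented in the crossed product $A\rtimes_{ad\, w}\Gamma$.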
It follows that $\varphi(C^*(G)\rtimes {\mathbb{R}})=C_0({\mathbb{R}}_+^*)\otimes C^*(G)$. Now, at the quotient level, the action $\beta$ becomes trivial on symbols: we thus obtain the equality $\varphi(\Psi^*(G)\rtimes_\beta {\mathbb{R}})=J(G)$. [AA]{} , Boutet de Monvel’s Calculus and Groupoids I. J. Noncommut. Geom. **4 no. 3**, (2010), 313–329. , The holonomy groupoid of a singular foliation. J. Reine Angew. Math. **626**, (2009), 1–37. , The analytic index of elliptic pseudodifferential operators on a singular foliation. J. K-Theory **8 no. 3**, (2011), 363–385. , A Schwartz type algebra for the tangent groupoid. [*In*]{} $K$-theory and noncommutative geometry. EMS Ser. Congr. Rep., Eur. Math. Soc., Zürich, (2008), 181–199. , Factorization in group algebras. Duke Math. J. **26**, (1959), 199–205. , Sur la théorie non commutative de l’intégration. [*In*]{} Algèbres d’opérateurs, Springer Lect. Notes in Math. [**725**]{}, (1979), 19–143. , A survey of foliations and operator algebras. [*In*]{} Operator algebras and applications, Part I (Kingston, Ont., 1980), Proc. Sympos. Pure Math., **38**, Amer. Math. Soc., Providence, R.I., (1982), 521–628. , *Non commutative geometry.* Academic Press, Inc., San Diego, CA, (1994). , Index theory and groupoids. [*In*]{} Geometric and topological methods for quantum field theory. Cambridge Univ. Press, Cambridge, (2010), 86–158. , Boutet de Monvel calculus using groupoids. In preparation. , *Functional calculus of pseudodifferential boundary problems.* Second edition. Progress in Mathematics, [**65**]{}, Birkhäuser Boston, Inc., Boston, MA, (1996). , *Abstract Harmonic Analysis, Volume II, Structure and Analysis on Compact Groups, Analysis on Locally Compact Abelian Groups*, 3rd printing, Springer, 1997. , Morphismes $K$-orientés d’espaces de feuilles et fonctorialité en théorie de Kasparov (d’après une conjecture d’A. Connes). Ann. Sci. École Norm. Sup. **20**, (1987), 325–390.
, Regular representation of groupoid $C^*$-algebras and applications to inverse semigroups. J. Reine Angew. Math. [**546**]{}, (2002), 47–72. , Elliptic symbols, elliptic operators and Poincaré duality on conical pseudomanifolds. J. K-Theory [**4 no. 2**]{}, (2009), 263–297. , Pseudodifferential analysis on groupoids. Documenta Mathematica (electronic) **5**, (2000), 625–655. , Théorie de Kasparov équivariante et groupoïdes I. K-Theory, **16 no. 4**, (1999), 361–390. , *The Atiyah-Patodi-Singer index theorem.* Research Notes in Math. [**4**]{}, A K Peters, Ltd., Wellesley, MA, (1993). , Pseudodifferential calculus on manifolds with corners and groupoids. Proceedings of the AMS, [**127 no. 10** ]{}, (1999), 2871–2881. , Groupoids and pseudodifferential calculus on manifolds with corners. J. Funct. Anal. [**199 no. 1**]{}, (2003), 243–286. , Indice analytique et Groupoïdes de Lie. C.R. Acad. Sci. Paris, Sér. I, **325 no. 2**, (1997), 193–198. , Pseudodifferential operators on differential groupoids. Pacific J. Math. **189**, (1999), 117–152. , Continuous family groupoids. Homology Homotopy Appl. **2**, (2000), 89–104. , Factorization in $C^*$-algebras. Exposition. Math. **16** (1998), no. 2, 145–156. , [*A groupoid approach to $C^*$-algebras.*]{} Lecture Notes in Math. [**793**]{} (1980). , A short introduction to Boutet de Monvel’s calculus. [*In*]{} Approaches to singular analysis (Berlin, 1999), Oper. Theory Adv. Appl. [**125**]{}, Birkhäuser, Basel, (2001), 85–116. , Unbounded pseudodifferential Calculus on Lie groupoids, [J. Funct. Anal.]{} [**236**]{}, (2006), 161–200. [^1]: Considering for instance the case $E=H_A$.
--- abstract: 'Flow nonnormality induced linear transient phenomena in thin self-gravitating astrophysical discs are studied in the shearing sheet approximation. The considered system includes two modes of perturbations: vortex and (spiral density) wave. It is shown that self-gravity considerably alters the vortex mode dynamics – its transient (swing) growth may be several orders of magnitude stronger than in the non-self-gravitating case and 2-3 times larger than the transient growth of the wave mode. Based on this finding, we comment on the role of vortex mode perturbations in a gravitoturbulent state. Also described is the linear coupling of the perturbation modes, caused by the differential character of disc rotation. The coupling is asymmetric – vortex mode perturbations are able to excite wave mode ones, but not vice versa. This asymmetric coupling lends additional significance to the vortex mode as a participant in spiral density waves and shocks manifestations in astrophysical discs.' author: - | G. R. Mamatsashvili $^{1}$[^1] and G. D. Chagelishvili $^{1}$\ $^{1}$ E. Kharadze Georgian National Astrophysical Observatory, 2a Kazbegi Ave., Tbilisi 0160, Georgia date: 'Accepted 2007 July 26. Received 2007 July 18; in original form 2007 January 28' title: 'Transient growth and coupling of vortex and wave modes in self-gravitating gaseous discs' --- \[firstpage\] accretion, accretion discs – gravitation – hydrodynamics – instabilities – planetary systems: protoplanetary discs Introduction ============ The comprehension of the regular spiral structure of galaxies provided a powerful incentive for the investigation of dynamical processes and phenomena in astrophysical discs. Afterwards the study in this direction intensified and somewhat changed (emphases shifted) – in the 70s of the last century a powerful branch of astrophysical disc research was developed along with the X-ray astronomy. 
The latter revealed a strong energy release in discs – accretion of the disc matter onto its centre – which is ascribed to anomalous viscosity due to some sort of turbulence [@SS73; @Pr81]. In self-gravitating discs, turbulence and the angular momentum transport process are commonly attributed to spiral density (SD) waves [@G01; @LR04; @LR05; @M05]. In other words, in the study of self-gravitating discs, attention is focused on the dynamical activity (possibility of amplification) of SD wave perturbations. On the other hand, in investigating turbulence and angular momentum transport in non-self-gravitating discs, the main emphasis is put on the dynamical activity of vortical perturbations [@LCC88; @GL99; @Lo99; @DSC00; @IK01; @Ta01; @Da02; @Ch03; @KB03; @Te03; @UR04; @Ye04; @AMN05; @Bo05; @JG05]. Interest in this type of perturbation is also connected with the idea that long-lived vortices in protoplanetary discs play an important role in planet formation [@BS95; @Br99; @GL99; @GL00; @KB06]. Regular vortex structures are also observed in several spiral galaxies [@FK99]. In general, one can say that the astrophysical disc community has been advancing along a crooked path in solving dynamical problems. Matters are complicated by one of the main sources supplying energy to dynamical processes in discs – the differential character of disc rotation. Moreover, this source is universal, as astrophysical discs actually always rotate differentially. The complication is due to the nonnormal character of flows with inhomogeneous kinematics (such as differential rotation). The nonnormality and its consequences were well understood and precisely described by the hydrodynamic community in the 1990s.
The imperfection of the traditional/modal analysis (spectral expansion of perturbations in time and subsequent examination of eigenfunctions) in regard to smooth (without inflection point) inhomogeneous/shear flows was revealed: operators existing in the mathematical formalism of the modal analysis of shear flows (e.g. plane Couette and Poiseuille) are nonnormal and, hence, corresponding eigenfunctions are nonorthogonal and strongly interfere [@RSH93; @Tr93]. Consequently, a correct approach should fully analyse the interference of eigenfunctions, which is actually calculable/manageable only for asymptotically large times. In fact, in the modal analysis the main focus is on the asymptotic stability of flows, while no attention is directed to any particular initial value or finite time period of the dynamics – this period of the evolution is thought to be of no significance and is left to speculation. This fact prompted the above mentioned revision of the generally accepted spectral/modal approach, with the special emphasis being shifted from the analysis of long time asymptotic flow stability to the study of finite time (transient) behaviour. It was demonstrated that just because of this nonnormality of the operators, a strong linear transient growth of vortex and/or wave mode perturbations occurs in asymptotically stable (in accordance with Rayleigh’s stability criterion; Rayleigh 1880) hydrodynamic shear flows [@Gu91; @BF92; @RH93; @FI93; @FI00], which determines the fate of a flow. Differential rotation (i.e., the disc flow nonnormality) is determinant in a number of cases. Even when other basic factors (e.g. self-gravity, stratification) are involved, it interplays with them and strongly modifies the dynamical picture. This circumstance attaches great importance to disc flow shear (nonnormality) induced phenomena.
[*The main goal of this paper is to study the dynamical manifestations of the nonnormality in a simple model of self-gravitating astrophysical discs.*]{} Specifically, we consider the linear dynamics of two-dimensional perturbations in thin self-gravitating gaseous discs in the shearing sheet approximation [@GLB65]. This system includes two modes of perturbations – vortex and (spiral density) wave. The distinguishing features of these modes are the following: the vortex mode is aperiodic and has nonzero potential vorticity; the SD wave mode is oscillatory and has zero potential vorticity. We use the nonmodal approach instead of the modal/spectral one in describing the linear dynamics of perturbations. This approach was applied in previous well-known papers [@GLB65; @JT66; @GT78; @To81], but they concentrate on perturbations with zero potential vorticity, i.e., on SD wave mode perturbations, whereas vortex mode ones can be important as well, for example, in the angular momentum transport. This is evidenced by the references above concerning the dynamical activity of vortical perturbations in astrophysical, though non-self-gravitating, discs. We would like to emphasize that by associating turbulence and angular momentum transport in self-gravitating discs only with SD waves, one does not rigorously identify the perturbation modes (according to the value of potential vorticity) and does not investigate the relative contributions/fractions of vortical and SD wave perturbations to the shear stresses. In other words, the role of vortical perturbations is left out of self-gravitating disc analysis (see also Sec. 5). The linear coupling of SD wave and vortex modes considered in the present paper (see below) makes the latter a more obvious participant in the angular momentum transport process, at least as an additional generator of SD waves. We outline some possible ways of injection of nonzero vorticity perturbations discussed in the literature.
Spatial inhomogeneities of entropy (temperature) distribution, that is, baroclinic instability [@KB03; @K04] and Rossby wave instability [@Lo99], are able to generate nonzero vorticity. Random perturbations of the vorticity field may be present in a disc that forms as a result of collapse of a turbulent protostellar cloud. Accretion of clumps of gas onto a disc and convection are other possibilities for vorticity generation (see Godon & Livio 2000 for the details about the last three mechanisms). The first dynamical manifestation of the disc flow nonnormality is a transient character of growth of both vortex and wave mode perturbations irrespective of the value of Toomre’s stability parameter $Q$. In dynamically important regions of wavenumber plane, *vortex mode perturbations always exhibit larger growth factors than wave mode ones.* Consequently, the underestimation of the vortex mode and its transient (swing) growth may result in an incomplete dynamical picture of discs. First of all, we would like to mention the following fact from our investigation: the transient growth of vortex mode perturbations in self-gravitating discs is much stronger than in non-self-gravitating ones. Due to this fact, the presence of self-gravity (gravitational instability) might, in principle, allow for the onset of hydrodynamic turbulence in astrophysical discs. The so-called *bypass* mechanism of the onset of hydrodynamic turbulence, which was elaborated by the hydrodynamic community in the 1990s [@GG94; @HR94; @BDT95; @Gr00], may play a part in the process of triggering turbulence in self-gravitating discs as well, because linear transient amplification of perturbations due to flow nonnormality, which can supply turbulence with mean flow energy, is a basis for this concept. The details of the bypass concept as applied to astrophysical discs are given in Chagelishvili et al. 2003 and in Tevzadze et al. 2003.
The second implication of flow nonnormality – the linear coupling of vortex and wave modes, which is caused by the strong shear of disc flow – is also important for the proper comprehension of disc flow dynamics. One should note that the coupling is asymmetric: vortex mode perturbations (i.e. perturbations with nonzero potential vorticity) are able to excite SD waves (i.e. perturbations with zero potential vorticity), but not vice versa. So, this asymmetric coupling lends additional significance to the vortex mode as a participant in SD waves and shocks manifestations in astrophysical discs [@Bo05; @Bo07]. The paper is organized as follows: physical approximations and the mathematical formalism of the problem are introduced in Sec. 2, classification of perturbations is described in Sec. 3, the numerical analysis of the linear dynamical equations, including transient growth and coupling of vortex and wave modes, is presented in Sec. 4, discussions and summary are given in Sec. 5. Physical Model and Equations ============================ Let us study the linear dynamics of vortex and wave mode perturbations in a simple analogue to a differentially rotating disc – in a thin self-gravitating gas sheet, where the unperturbed velocity field is a parallel flow with a constant shear (the shearing sheet approximation). A Coriolis force is included to take into account the effects of disc rotation. The equation of state is assumed to be polytropic.
In this case, the linearized dynamical equations read as [@GT78]: $$\frac{\partial \sigma}{\partial t} + 2Ax \frac{\partial \sigma}{\partial y} + {\Sigma_0} \left(\frac {\partial u}{ \partial x} + \frac{\partial v}{\partial y} \right) = 0,$$ $$\frac{\partial u}{\partial t} + 2Ax \frac{\partial u}{\partial y} - 2\Omega_0 v =- \frac{\partial}{\partial x} \left( c_{s}^2\frac{\sigma}{\Sigma_0}+\psi \right)$$ $$\frac{\partial v}{\partial t} + 2Ax \frac{\partial v}{\partial y} + 2Bu=-\frac{\partial}{\partial y} \left( c_{s}^2\frac{\sigma}{\Sigma_0}+\psi \right).$$ This set of linear perturbation equations is supplemented by Poisson’s equation $$\Delta \psi=4\pi G \sigma\delta(z).$$ Here $u,v,\sigma$ and $\psi$ are, respectively, the perturbed radial and azimuthal velocities, surface density and gravitational potential of the gas sheet with spatially constant unperturbed surface density $\Sigma_0$. $\Omega_0$ is the angular velocity of the shearing sheet, $c_{\rm s}$ is the sound speed in the gas, $x$ and $y$ are, respectively, the radial and azimuthal coordinates of the shearing sheet, $A$ is the Oort constant (shear parameter), which is $A/\Omega_0 \simeq -0.75 < 0$ for quasi-Keplerian rotation considered in this paper, and $B\equiv A+\Omega_0$. As usual, we define the epicyclic frequency $\kappa$ by $\kappa^2\equiv 4B\Omega_0.$ Following the standard procedure of nonmodal analysis [@GLB65; @GT78; @NS92; @Ch97], we introduce the spatial Fourier harmonics (SFHs) of perturbations with time-dependent phases: $$F({\bf r},t)\sim F(k_x,k_y,t) \exp [ {\rm i}k_x(t)x + {\rm i}k_yy],$$ $$k_x(t) = k_x - 2A k_y t,$$ where $F\equiv(u,v,\sigma,\psi)$. The streamwise/azimuthal wavenumber $k_{y}$ remains unchanged. The streamcross/radial wavenumber $k_{x}(t)$ changes with time at a constant rate due to the effect of the shearing background on wave crests. One can say that in the linear approximation SFHs “drift” along the $k_{x}$-axis in ${\bf k}$-plane (wavenumber plane). 
In other words, lines of constant phase of each SFH are sheared over by the basic flow in physical plane. So, if any kind (vortex and/or wave mode) of leading SFH ($k_{x}(0)/k_{y}<0$) is initially imposed on the flow, in the linear regime it eventually becomes a trailing one ($k_{x}(t)/k_{y}>0$) as time passes. It should also be noted that SFHs represent the simplest/basic “elements” of dynamical processes at constant shear rate and greatly help to grasp the phenomena of transient growth and coupling of perturbation modes. Substituting equation (5) into equations (1-4) and introducing $\hat{\sigma} \equiv i\sigma/\Sigma_0,~ \phi \equiv i\psi$, we get the system of ordinary differential equations that govern the linear dynamics of SFHs of perturbations: $${\frac {d {\hat {\sigma}}(t)} {d t}} = k_x(t) u(t) + k_y v(t). \label{eqsigma2}$$ $${\frac{d u(t)} {d t}} - 2\Omega_0 v(t)= - k_x(t) \left[c_{\rm s}^2 {\hat {\sigma}}(t) + \phi(t)\right], \label{equ2}$$ $${\frac {d v(t)}{d t}} +2Bu(t) =- k_y \left[c_{\rm s}^2 {\hat {\sigma}}(t) + \phi(t)\right], \label{eqv2}$$ $$\phi(t)=-\frac{2\pi G\Sigma_0}{k(t)} {\hat {\sigma}}(t), \label{eqvphi}$$ where $k(t)=(k_x^2(t)+k_y^2)^{1/2}$. The last equation follows from Poisson’s equation and is straightforward to derive [@GT78; @NS92]. One can easily show that this system possesses an important time invariant: $$k_x(t) v(t) - k_y u(t) + 2B{\hat {\sigma}}(t) \equiv {\cal I,}$$ which follows (for SFHs of perturbations) from the conservation of potential vorticity. This time invariant ${\cal I}$, in turn, indicates the existence of the vortex/aperiodic mode in the perturbation spectrum. Clarification of the role of this mode in the disc flow dynamics represents the primary purpose of our study.
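As a quick consistency check on this system, the sketch below (our own illustration, not from the paper; the values of $Q$, $k_y$, $k_x(0)$ and the initial amplitudes are assumptions) integrates equations (6-9) with a classical RK4 scheme in units where $c_{\rm s}=\kappa=\Sigma_0=1$ (so that $\Omega_0=1$, $A=-0.75$, $B=0.25$ and $2\pi G\Sigma_0=2/Q$) and verifies that the potential-vorticity invariant ${\cal I}$ of equation (10) is conserved along the drift:

```python
import numpy as np

# Illustrative parameters (assumed): units with c_s = kappa = Sigma_0 = 1,
# quasi-Keplerian shear A/Omega_0 = -0.75.
Omega0, A = 1.0, -0.75
B = A + Omega0                        # B = 0.25, so kappa^2 = 4*B*Omega0 = 1
Q = 1.5                               # Toomre parameter; 2*pi*G*Sigma_0 = 2/Q
ky, kx0 = 0.2, -5.0                   # leading SFH: k_x(0)/k_y < 0

def kx(t):
    return kx0 - 2.0 * A * ky * t     # drifting radial wavenumber, eq. (6)

def rhs(t, y):
    u, v, sig = y
    k = np.hypot(kx(t), ky)
    phi = -(2.0 / Q) * sig / k        # Poisson's equation, eq. (9)
    psi = sig + phi                   # c_s^2*sigma_hat + phi with c_s = 1
    return np.array([2.0 * Omega0 * v - kx(t) * psi,    # eq. (7)
                     -2.0 * B * u - ky * psi,           # eq. (8)
                     kx(t) * u + ky * v])               # eq. (6)

def invariant(t, y):                  # potential-vorticity invariant, eq. (10)
    u, v, sig = y
    return kx(t) * v - ky * u + 2.0 * B * sig

y = np.array([0.1, 0.05, 0.02])       # assumed initial amplitudes
t, dt = 0.0, 1e-3
I0 = invariant(t, y)
for _ in range(int(20.0 / dt)):       # classical fixed-step RK4
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

drift = abs(invariant(t, y) - I0)
print(f"I at t=0: {I0:.4f}; numerical drift of I after t={t:.0f}: {drift:.2e}")
```

The invariant is conserved to integrator accuracy even while the amplitudes themselves grow transiently, which is exactly what makes ${\cal I}$ a clean marker of the vortex mode.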
In the calculations below, we use the quadratic form (spectral energy density) for a separate SFH as a measure of its intensity: $$E(t)\equiv \frac{\Sigma_0}{2} \left (|u|^2+|v|^2 \right)+\frac{\Sigma_0}{2} c_s^2 {|\hat {\sigma}|}^2,$$ where the two terms correspond to the kinetic and potential energies of SFH, respectively. Strictly speaking, this is not an exact expression for perturbation energy, since it does not contain terms corresponding to gravitational energy. Nevertheless, we find this quadratic form convenient for a comparative analysis of transient growth of perturbation modes at different values of Toomre’s parameter $Q$ presented below. The numerical study of SFH’s dynamics is based on equations (6-11). However, for the comprehension of the dynamical processes – transient growth and coupling of vortex and SD wave modes – it is advisable to rewrite them in the form of a single second order inhomogeneous differential equation for $\phi$. Introducing dimensionless parameters and variables: $ \tau \equiv t\kappa,~~ K_y \equiv c_{\rm s}k_{y}/\kappa,~~K_x(\tau) \equiv c_{\rm s}k_x(t)/\kappa,~~K(\tau) \equiv c_{\rm s}k(t)/\kappa,~~\hat{\Omega}_{0}\equiv \Omega_{0}/\kappa\simeq 1,~~\hat{A} \equiv A/\kappa\simeq -0.75,~~\hat{B}\equiv B/\kappa\simeq 0.25,~~\hat{\cal I} \equiv {\cal I}/\kappa,~~Q\equiv c_{\rm s}\kappa/\pi G \Sigma_{0},~~\hat{u} \equiv u/c_{\rm s},~~\hat{v} \equiv v/c_{\rm s},~~\hat{\phi} \equiv \phi/c^2_{\rm s},~~\hat{E}(\tau) \equiv E(t)/\Sigma_{0}c^2_{\rm s},$ one finally gets: $$K_x(\tau) {\hat v}(\tau) - K_y {\hat u}(\tau) + 2\hat{B}{\hat {\sigma}}(\tau) \equiv {\hat {\cal I}},$$ $${\hat E}(\tau) \equiv \frac{1}{2} \left (|{\hat u}|^2+|{\hat v}|^2 +|\hat {\sigma}|^2 \right),$$ $${\frac{d^2 {\hat{\phi}}(\tau)}{d {\tau}^2}} + \hat{\omega}^{2}(\tau)\hat{\phi}(\tau)=-\frac{4}{QK(\tau)} \left(\hat{\Omega}_{0}+\frac{2\hat{A}K_y^2}{K^2(\tau)}\right){\hat{\cal I}},$$ where $$\hat {\omega}^{2}(\tau) =1+K^2(\tau)-\frac{2}{Q}K(\tau)+\frac{12\hat{A}^{2}K_y^4}{K^4(\tau)}+\frac{8\hat{\Omega}_{0}\hat{A}K_y^2}{K^2(\tau)}.$$ We retain $\hat{A}$ (equal to $-0.75$) in equations (14) and (15) to make obvious the role of the flow shear in the perturbation dynamics. The rhs of equation (14) can be viewed as a source term for SD waves (see below) and it resembles the bar term in Goldreich & Tremaine (1978). All other perturbed quantities are easily expressed in terms of $\hat{\phi}(t)$ and its time derivative. We do not give those expressions here. Notice that due to the varying radial wavenumber $K_x(\tau)$, the frequency $\hat{\omega}$ is also time-dependent and, as a result (we will show that below), the modes of perturbations appear to be coupled in the linear theory. For further reference, in Fig. 1 we show unstable ($\hat{\omega}^2<0$) domains in [**K**]{}-plane for various $Q$. At $Q<1$, the unstable domains exist even in the shearless limit, while at $Q \geq 1$, their occurrence is brought about just by the combined action of shear and self-gravity. Classification of perturbations =============================== One can classify perturbation modes involved in equation (14) (or in equations (6-9)) from the mathematical and physical standpoints separately. Mathematically, a general solution of equation (14) can be written as a sum of two parts: a *general* solution of the corresponding homogeneous equation (oscillatory SD wave mode) and a *particular* solution of this inhomogeneous equation. It should be emphasized that the particular solution is not uniquely determined: the sum of a particular solution of the inhomogeneous equation and any particular solution of the corresponding homogeneous equation (i.e. wave mode solution) is also a particular solution of the inhomogeneous equation, i.e., a particular solution may comprise any amount of the wave mode.
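The location of the unstable ($\hat\omega^2<0$) domains mentioned above can be reproduced directly from equation (15). The sketch below (our own illustration; the grid and parameter values are assumptions) scans $\hat\omega^2$ along the $K_x$-axis for fixed $K_y$ and $Q$, and also evaluates the shearless, self-gravitating limit, where $\hat\omega^2 = 1+K^2-2K/Q$ is minimised at $K=1/Q$ with minimum $1-1/Q^2$:

```python
import numpy as np

# Assumed parameter values for illustration.
A_hat, Omega_hat = -0.75, 1.0

def omega2(Kx, Ky, Q):
    """Dimensionless omega_hat^2 of eq. (15) as a function of K_x."""
    K = np.hypot(Kx, Ky)
    return (1.0 + K**2 - 2.0 * K / Q
            + 12.0 * A_hat**2 * Ky**4 / K**4
            + 8.0 * Omega_hat * A_hat * Ky**2 / K**2)

Ky, Q = 0.2, 1.5
Kx = np.linspace(-5.0, 5.0, 20001)
unstable = omega2(Kx, Ky, Q) < 0.0
# count contiguous unstable K_x-intervals (rising edges of the boolean mask)
n_domains = int(np.sum(np.diff(unstable.astype(int)) == 1)
                + (1 if unstable[0] else 0))
print(f"Q={Q}, K_y={Ky}: {n_domains} unstable K_x-interval(s), "
      f"min omega^2 = {omega2(Kx, Ky, Q).min():.3f}")

# Shearless limit (A_hat = 0): omega^2 = 1 + K^2 - 2K/Q, minimum 1 - 1/Q^2.
min_shearless = 1.0 - 1.0 / Q**2
print(f"shearless minimum of omega^2: {min_shearless:.3f}")
```

For $Q=1.5$, $K_y=0.2$ this yields two unstable $K_x$-intervals (symmetric about $K_x=0$, since $\hat\omega^2$ depends only on $K$) with a stable gap between them, while the shearless minimum stays positive for $Q\geq 1$ – consistent with the statement that at $Q\geq 1$ the instability requires the combined action of shear and self-gravity.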
Physically, equation (14) describes two different modes/types of perturbations:\ [**(a)**]{} SD wave mode ($\hat{\phi}^{\rm (w)}$), which is determined by the general solution of the corresponding homogeneous equation and has zero potential vorticity ($\hat{\cal I}=0$);\ [**(b)**]{} Vortex mode ($\hat{\phi}^{\rm (v)}$), which originates from the equation inhomogeneity (the rhs of equation (14)) and is associated with the nonoscillatory part of a particular solution of the inhomogeneous equation. In the shearless limit, the vortex mode is independent of time and has zero velocity divergence, but nonzero potential vorticity. However, in the presence of a shear it acquires divergence as well (this question is addressed in detail below). From the above argument, it follows that the correspondence between the aperiodic vortex mode and the particular solution of the inhomogeneous equation is quite unambiguous – the vortex mode is associated only with that part of a particular solution not containing any oscillations. The amplitude of the vortex mode is proportional to $\hat{\cal I}$ and goes to zero when $\hat{\cal I}=0$. ![Panel (a) is the evolution of $\eta$ in the case where a leading pure SD wave mode SFH with only one sign (positive) of frequency is inserted initially into equations (6-9). This wave mode SFH acquires curl at about the time when it starts to enter the unstable (nonadiabatic) domain in Figs. 1,4. Panel (b) shows the evolution of $1/\eta$ for an initially inserted leading pure vortex mode SFH. This vortex mode SFH acquires divergent nature at about the same time. In both figures, $Q=1.5$ and $K_y=0.2$.](Fig.2.eps){width="\columnwidth"} In the following, we will keep to the physical standpoint of separation of perturbation modes.
Thus, any solution of equations (6-9) can be expressed as a superposition of oscillatory/SD wave and aperiodic/vortex modes: $$~~~~~~~~~~~~~~~~~~~~~\hat{u}={\hat u}^{\rm (w)}+{\hat u}^{\rm (v)},~ {\hat v}={\hat v}^{\rm (w)}+{\hat v}^{\rm (v)},~$$ $$~~~~~~~~~~~~~~~~~~~~~{\hat \sigma}={\hat \sigma}^{\rm (w)}+{\hat \sigma}^{\rm (v)},~{\hat \phi}={\hat \phi}^{\rm (w)}+{\hat \phi}^{\rm (v)},$$ where ${\hat u}^{\rm (w)},~{\hat v}^{\rm (w)},~{\hat \sigma}^{\rm (w)}$ and ${\hat u}^{\rm (v)},~{\hat v}^{\rm (v)},~{\hat \sigma}^{\rm (v)}$ are found from ${\hat \phi}^{\rm (w)}$ and ${\hat \phi}^{\rm (v)}$ respectively. ![image](Fig.3a.eps){width="95.00000%" height="40.00000%"} ![image](Fig.3b.eps){width="95.00000%" height="40.00000%"} ![image](Fig.3c.eps){width="95.00000%" height="40.00000%"} In fact, the (modified) initial value problem is solved by equations (6-9) (or, equivalently, by equations (12-14)). The character of the dynamics depends on the mode of perturbation inserted initially into equations (6-9): pure SD wave mode (without admixes of aperiodic vortices) or pure aperiodic vortex mode (without admixes of SD waves). A widespread classification of perturbation modes divides them into vortical and divergent types. Such a classification coincides with the physical classification described above in the case of nonvortical (rigid) mean rotation, when the wave mode has zero potential vorticity, but nonzero divergence and the vortex mode has zero divergence, but nonzero potential vorticity. In the considered quasi-Keplerian (i.e. strongly sheared) flow, the situation is fundamentally different: the vortex mode may acquire divergent nature and an initially predominantly divergent wave mode may acquire curl in the course of evolution. In Fig.
2 we present the time-development of the parameter: $$~~~~~~~~~~~~~~~~~~~~~~~~\eta=\left| \frac{K_x(\tau) {\hat v}(\tau) - K_y {\hat u}(\tau)}{K_x(\tau) {\hat u}(\tau) + K_y {\hat v}(\tau)}\right|,$$ which represents the ratio of the $z$-component of curl to divergence, and its inverse value $1/\eta$ for initially imposed SD wave and vortex mode SFHs respectively. In the case where initially a predominantly divergent leading pure SD wave mode SFH with positive frequency is inserted into equations (6-9) (the procedure for selecting this type of initial condition is described in detail in Nakagawa & Sekiya 1992), it acquires curl at about the time of entering the unstable (nonadiabatic) domain in Figs. 1,4, as seen in Fig. 2(a). Fig. 2(b) shows that an initially inserted leading pure vortex mode SFH acquires divergent nature at about the same time. Thus, divergent (or vortical) perturbations in the quasi-Keplerian flow (or, in a shear flow in general) represent some mix of vortex and wave modes and classification of perturbations as vortical and divergent may be misleading. So, we prefer the classification of perturbations into wave and vortex modes and investigate dynamical processes in terms of dynamics of these two modes. Transient growth and coupling of vortex and SD wave mode SFH[s]{} – numerical analysis ====================================================================================== We begin the numerical integration of equations (6-9), choosing as an initial condition a leading ($K_x(0)/K_y<0$) pure vortex mode SFH without any admixes of SD wave mode SFHs. Such a selection of the vortex mode is possible only far from the unstable domains, where $|K_x(\tau)/K_y|\gg 1$ and the adiabatic condition $|d{\hat \omega}(\tau)/d\tau|\ll {\hat \omega}^{2}(\tau)$ is met. The procedure for selecting it is described in detail in the Appendix of Bodo et al. 2005. In Fig.
3, we present the subsequent evolution of ${\hat u}$, ${\hat v}$, ${\hat \sigma}$ and ${\hat E}/{\hat E}(0)$ for this kind of initial condition at different values of $Q$ and $K_y$ (in these and other figures below, hats are omitted). The sketch of the SFH’s evolution/drift in wavenumber plane is given in Fig. 4. We single out a leading SFH with $K_y < 2Q^{-1}$ that is located initially at point 1 far from the unstable domains and coincides here, as mentioned above, with a pure vortex mode SFH (henceforth, we take the azimuthal wavenumber $K_{y}$ to be positive without loss of generality). As seen in this figure, this SFH drifts along the $K_x$-axis in the direction denoted by the arrows ($1 \to 2 \to 3 \to 4 \to 5 \to 6$). The drift velocity ($= 2K_y |\hat{A}|$) depends linearly on $K_y$. Initially, being in the adiabatic region, the SFH gains energy from the mean flow due to the nonnormality and amplifies algebraically, but retains its aperiodic nature. Then, the dynamics becomes nonadiabatic – the SFH reaches the unstable domain where $\hat{\omega}^{2}(\tau)<0$ (point 2). From this point, a temporal exponential growth and simultaneous excitation of the corresponding SFH of SD wave mode take place – at this stage of the evolution, the linear coupling of vortex and wave mode SFHs is at work (this phenomenon was found and thoroughly described for the simplest shear flow in Chagelishvili et al. 1997 and for non-self-gravitating Keplerian discs in Bodo et al. 2005). Then, the vortex and the generated SD wave mode SFHs reach the intermediate stable region (point 3) where $\hat{\omega}^{2}(\tau)>0$, pass it and get again into the domain where $\hat{\omega}^{2}(\tau)<0$ (point 4). Further exponential growth of both vortex and SD wave mode SFHs and, in addition, excitation of another SD wave mode SFH by the vortex mode one occur until they cross this second unstable domain (point 5).
After that, the linear dynamics of the vortex and SD wave mode SFHs become decoupled and adiabatic: the energy of the vortex mode SFH dies down and the energy of the wave mode SFHs increases. No further SD wave excitation is observed beyond point 5. Here we have described the SD wave generation for $K_y<2Q^{-1},$ although it similarly occurs for $K_{y}\sim 2Q^{-1}$ (see Fig. 3), except that the transient amplification of an initially imposed vortex mode SFH is mainly due to the nonnormality, since the unstable domains do not extend to such $K_y$. As a consequence, the amplification amount and the amplitudes of generated SD wave mode SFHs are several orders of magnitude lower than those for $K_y<2Q^{-1}.$ ![Sketch of SFH’s evolution in [**K**]{}-plane. A leading vortex mode SFH, located initially at point 1, drifts with time along the $K_x$-axis in the direction denoted by the arrows $1 \to 2 \to 3 \to 4 \to 5 \to 6$. After crossing point 2, a SD wave mode SFH appears. The first stage of the wave excitation takes place from point 2 to point 3, the second one – from point 4 to point 5.](Fig.4.eps){width="\columnwidth"} ![Influence of self-gravity on the transient (swing) amplification of a vortex mode SFH (dotted lines correspond to the non-self-gravitating case). This influence is, as expected, largest for $K_{y}Q<2$ (panel (a), all four curves are characterized by the same $K_{x}(0)=-5,~K_{y}=0.2$ and identical initial values of perturbed quantities) and negligible for $K_{y}Q\sim 2$ (panel (b), $K_{x}(0)=-10,~K_{y}=2$).](Fig.5.eps){width="\columnwidth"} Thus, the linear dynamics of a vortex mode SFH is followed by the generation of the corresponding SD wave mode SFHs. These generated SFHs eventually acquire a trailing orientation, since $K_x(\tau)/K_y>0$ after leaving the nonadiabatic region (which stretches roughly from point 2 to point 5 in Fig. 4).
In the nonadiabatic region, the characteristic timescales of the vortex and SD wave mode SFHs are comparable and the perturbation modes cannot be separated/distinguished. But as they move away from the nonadiabatic region, the modes become cleanly separated: the timescale of the SD wave mode SFHs becomes much shorter than that of the vortex mode SFH (the frequency of waves increases with time). One can formally divide the energy evolution into two stages: the first stage represents the transient amplification (both due to the nonnormality and to the unstable domains) of the originally imposed pure vortex mode SFH and excitation (and also subsequent exponential amplification) of the corresponding SFHs of SD wave mode, and the second one represents the algebraic growth of the generated SD wave mode SFHs. The latter exhibit linear amplification at asymptotically large times. In the absence of the wave excitation (e.g. for $K_y \gg 2Q^{-1}$), this second stage describes the decreasing energy of a vortex mode SFH. Thus, newly created trailing SD wave mode SFHs in the linear regime extract energy from the mean quasi-Keplerian flow, in contrast to a trailing vortex mode SFH, which, after leaving the nonadiabatic region, gradually returns all its energy to the mean flow. One can say that vortex mode perturbations act as a mediator between the mean flow and waves. The energy needed for the wave excitation is extracted from the shear and self-gravity with the help of the vortex mode. The following two subsections are devoted to the behaviour of vortex mode perturbations for various values of Toomre’s parameter $Q$. Effect of self-gravity on the transient growth of vortex mode perturbations --------------------------------------------------------------------------- ![image](Fig.6.eps){width="\textwidth" height="60.00000%"} In Fig.
5, we compare the time-development of ${\hat E}/{\hat E}(0)$ for initially imposed vortex mode SFHs in the presence and absence of self-gravity for different values of $Q$ and $K_y$. It is clear that the growth of vortex mode SFHs continues for a longer time and may be several orders of magnitude stronger than in the non-self-gravitating case ($Q \rightarrow \infty$, dotted lines in the panels). In this case, the growth of vortex mode SFHs occurs just at the leading stage ($K_x(\tau)/K_y<0$); on becoming trailing ($K_x(\tau)/K_y>0$), SFHs give energy back to the mean flow and weaken [@Ch03]. In the self-gravitating case, instead, the amplification of SFHs continues into the trailing stage as well due to the existence of the unstable domain at $K_x(\tau)/K_y>0$ (see Figs. 1,4). From Fig. 5, one can see that at $Q=1$ and $K_y=0.2$, a vortex mode SFH grows about $10^6$ times stronger than in the non-self-gravitating case; at $Q=1.5$ and $K_y=0.2$ – about $10^4$ times stronger; at $Q=3$ and $K_y=0.2$ – about $10^2$ times stronger; at $Q=1$ and $K_y=2$, i.e., at $K_{y}Q \sim 2$, the growth is the same as in the non-self-gravitating case. In any case, one can conclude that self-gravity provides a substantial enhancement of the transient growth of vortex mode perturbations, thereby making them active participants in dynamical processes. This, in turn, shows that the bypass mechanism may play a part in the onset of turbulence in self-gravitating discs. In order to better understand the role of the vortex mode, it is interesting to carry out a comparative analysis of the transient (swing) amplification of the vortex and SD wave modes. First we define a density growth factor $f$ for an SFH initially located at point 1, as a ratio of the absolute values of $\hat {\sigma}(\tau)$ after (at point 5) and before (at point 2) crossing the unstable domains in Fig.
4, $f\equiv|\hat {\sigma}(\tau'')|/|\hat {\sigma}(\tau')|$, where $\tau''$ and $\tau'$ are the moments corresponding to points 5 and 2 respectively. A similar growth factor for coherent wavelet solutions was used by Kim & Ostriker (2001), but they took its logarithm. In Fig. 6(a), we present this parameter computed separately for the initially imposed vortex and SD wave mode SFHs as a function of the dimensionless azimuthal wavenumber $K_y$ at $Q=1.5$. The initially imposed SD wave mode SFH has a certain (positive) sign of frequency. As seen in this panel, in the dynamically important regions (i.e., for such values of $K_y$, at which both modes experience maximum transient growth), the growth factor for the vortex mode SFH is almost two times larger than that for the SD wave mode one. An analogous comparison is made in Fig. 6(b). Here we display the ratio of the maximum value achieved by the energy $\hat {E}(\tau)$ during transient amplification in the unstable domains to its initial value on entering these domains (i.e., at point 2), computed separately for the vortex and SD wave mode SFHs imposed at point 2, as a function of $K_y$, similar to what is done in Fig. 6(a). But now, as distinct from the first case, for the wave mode SFH we choose initial conditions at point 2 in such a way as to obtain the largest possible amplification of the wave energy in the transient growth (swing) phase for its fixed initial value, i.e., we take the transiently most unstable wave mode SFH. The situation is similar to the above one: the energy amplification factor for the vortex mode SFH is more than two times greater than the largest possible energy amplification factor for the SD wave mode SFH. In Fig. 6(c), we present the parallel evolution of the energies of the initially (at point 2) imposed vortex and maximally amplified wave mode SFHs for $K_y=0.3$, at which the energy growth factors of both modes during the swing phase are the largest (see Fig. 6(b)).
Both SFHs start with the same energy. This panel shows that the energy corresponding to the initially imposed vortex mode SFH remains about two times larger than that corresponding to the SD wave mode SFH at all times. From Fig. 6, it is evident that the vortex mode prevails over the SD wave mode in two respects: in the transient amplification and wave generation (by the wave generation for the initially imposed SD wave mode SFH we mean wave amplification due to the over-reflection mechanism; see Nakagawa & Sekiya 1992 for details). The latter follows from the asymptotic stage at large times, when both energy curves become linear (see Figs. 3,6(c)) with slopes proportional to the square of the amplitudes of generated SD wave mode SFHs after crossing the unstable domains. We see that the energy of SD wave mode SFHs generated by the initially imposed vortex mode one at this asymptotic stage is about two times larger than that of those generated by the initially imposed SD wave mode SFH. Ways of SD wave generation -------------------------- There are two ways of SD wave generation in the disc flow considered here. The first is a direct and well-known over-reflection mechanism: inserting a leading SD wave mode SFH into equations (6-9), one can get the energy dynamics represented by the dotted curve in Fig. 6(c). The curve describes the energy growth in the unstable domains, which is followed by a linear growth of the total energy of the resulting over-reflected and (over)-transmitted trailing SD wave mode SFHs at large times. Another way of the generation of trailing SD wave mode SFHs is by means of vortex mode SFHs: leading pure vortex mode SFHs can effectively excite trailing SD wave mode ones due to the mode coupling phenomenon. Figure 6 shows that this second way of SD wave generation is about two times more effective than the first one.
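A rough numerical estimate of the density growth factor $f$ can be sketched as follows (our own reconstruction, not the authors' code, with several stated assumptions: units $c_{\rm s}=\kappa=\Sigma_0=1$; the vortex-mode initial condition is built from the nonoscillatory adiabatic particular solution of equation (14); $\tau'$ and $\tau''$ are taken as the moments of first entering and finally leaving the unstable domains along the drift):

```python
import numpy as np

# Assumed parameters: quasi-Keplerian shear, units with c_s = kappa = Sigma_0 = 1.
Omega0, A = 1.0, -0.75
B = A + Omega0
Q, Ky, Kx0, I = 1.5, 0.2, -5.0, 1.0   # potential-vorticity invariant I set to unity

def Kx(tau):
    return Kx0 - 2.0 * A * Ky * tau   # drifting radial wavenumber

def omega2(kx):                        # omega_hat^2 of eq. (15)
    K = np.hypot(kx, Ky)
    return (1 + K**2 - 2 * K / Q + 12 * A**2 * Ky**4 / K**4
            + 8 * Omega0 * A * Ky**2 / K**2)

def sigma_vortex(tau):                 # adiabatic (nonoscillatory) particular solution
    K = np.hypot(Kx(tau), Ky)
    return 2.0 / omega2(Kx(tau)) * (Omega0 + 2 * A * Ky**2 / K**2) * I

# Vortex-mode initial condition at tau = 0 (far leading, adiabatic region):
# solve  -Ky*u + Kx*v = I - 2B*sigma  and  Kx*u + Ky*v = d(sigma)/d(tau)  for u, v.
sig0 = sigma_vortex(0.0)
dsig0 = (sigma_vortex(1e-5) - sigma_vortex(-1e-5)) / 2e-5
M = np.array([[-Ky, Kx(0.0)], [Kx(0.0), Ky]])
u0, v0 = np.linalg.solve(M, [I - 2 * B * sig0, dsig0])

def rhs(tau, y):
    u, v, sig = y
    kx = Kx(tau)
    psi = sig - (2.0 / Q) * sig / np.hypot(kx, Ky)   # sigma + phi, phi from Poisson
    return np.array([2 * Omega0 * v - kx * psi,
                     -2 * B * u - Ky * psi,
                     kx * u + Ky * v])

# Boundaries of the unstable domains along the drift (points 2 and 5).
grid = np.linspace(-5, 5, 20001)
bad = grid[omega2(grid) < 0]
tau_in = (bad.min() - Kx0) / (2 * abs(A) * Ky)    # first entry (point 2)
tau_out = (bad.max() - Kx0) / (2 * abs(A) * Ky)   # final exit (point 5)

tau, dt, y = 0.0, 1e-3, np.array([u0, v0, sig0])
sig_in = None
while tau < tau_out:                   # classical RK4 up to point 5
    if sig_in is None and tau >= tau_in:
        sig_in = y[2]
    k1 = rhs(tau, y); k2 = rhs(tau + dt / 2, y + dt / 2 * k1)
    k3 = rhs(tau + dt / 2, y + dt / 2 * k2); k4 = rhs(tau + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4); tau += dt

f = abs(y[2]) / abs(sig_in)
print(f"estimated density growth factor f ~ {f:.1f}")
```

Because the initial condition is only an adiabatic approximation to the pure vortex mode, the resulting $f$ should be read as an order-of-magnitude illustration of the swing amplification, not a reproduction of Fig. 6(a).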
We go on to calculate the amplitudes of SD waves generated due to mode coupling (more precisely, the amplitudes of the gravitational potential perturbations; the amplitudes of the other quantities can then be found easily). Insert an adiabatic leading vortex mode SFH into equation (14). Then, after passing the nonadiabatic region (i.e., in the other adiabatic region at $\tau \rightarrow \infty$), this solution goes over into the superposition of a trailing vortex mode SFH and generated SD wave mode SFHs: $$\begin{aligned} {\hat \phi}(\tau)&={\hat \phi}^{\rm v}(\tau)+{\hat \phi}^{\rm w}(\tau)\nonumber\\ &= -\frac{4}{QK(\tau){\hat \omega}^2(\tau)}\left(\hat{\Omega}_{0}+\frac{2{\hat A}K_y^2}{K^2(\tau)}\right){\hat {\cal I}} +\frac{a}{Q\sqrt{{\hat \omega}(\tau)}}\,e^{-i\int^{\tau}{\hat \omega}(\tau')d\tau'}+\frac{a^{\ast}}{Q\sqrt{{\hat \omega}(\tau)}}\, e^{i\int^{\tau}{\hat \omega}(\tau')d\tau'},\end{aligned}$$ where $a$ and $a^{\ast}$ are the amplitudes of the generated SD wave mode SFHs. The latter come in complex conjugate pairs with opposite signs of frequency and, hence, propagate in opposite directions. In Fig. 7, we plot the numerically obtained $|a|$ as a function of $K_y$ at $Q=1.5$, and in Fig. 8 the same for different values of $Q$. In both figures $\hat{\cal I}$ is set to unity. The procedure for the calculations is analogous to that employed by Nakagawa & Sekiya (1992) to study the over-reflection of SD waves. ![The amplitude $|a|$ of the trailing SD wave mode SFH generated by a leading vortex mode SFH vs $K_y$ at $Q=1.5$. ](Fig.7.eps){width="\columnwidth"} ![Same as in Fig. 7, but for different values of $Q$, including the non-self-gravitating case ($Q \rightarrow \infty $).](Fig.8.eps){width="\columnwidth"} Let us analyse the curves in Figs. 7,8. The maximum value of $|a|$ is achieved for $K_{y}\sim O(0.1)$, as at such $K_y$ an SFH drifts slowly in the **K**-plane and, consequently, crosses the unstable domains slowly, having more time for transient growth.
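One possible numerical recipe for reading off $|a|$ from the late-time solution (a hypothetical helper sketched here, not necessarily the authors' actual procedure): multiplying the wave part by $Q\sqrt{\hat{\omega}(\tau)}\,e^{+i\int^{\tau}\hat{\omega}\,d\tau'}$ turns it into $a+a^{\ast}e^{2i\int^{\tau}\hat{\omega}\,d\tau'}$, and averaging over many oscillation periods isolates $a$:

```python
import numpy as np

def wave_amplitude(tau, phi_w, omega, Q):
    """Demodulate the WKB form phi_w = a e^{-i theta}/(Q sqrt(omega)) + c.c.,
    theta = cumulative integral of omega; phase averaging isolates a."""
    # trapezoidal cumulative phase integral theta(tau)
    theta = np.concatenate(
        ([0.0], np.cumsum(0.5 * (omega[1:] + omega[:-1]) * np.diff(tau))))
    demod = Q * np.sqrt(omega) * phi_w * np.exp(1j * theta)
    return np.abs(np.mean(demod))

# check on a synthetic wave with constant omega and known amplitude
tau = np.linspace(0.0, 20.0 * np.pi, 20001)
omega = np.ones_like(tau)
a_true = 0.3 + 0.4j                        # |a_true| = 0.5
Q = 1.5
phi_w = (a_true * np.exp(-1j * tau)
         + np.conj(a_true) * np.exp(1j * tau)) / (Q * np.sqrt(omega))
amp = wave_amplitude(tau, phi_w, omega, Q)
```

The averaging window must span many periods of the oscillatory factor $e^{2i\int\hat{\omega}\,d\tau'}$ for the cross term to cancel.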
The cavities in these figures are due to the crossing of two wave excitation domains by the SFH (see Fig. 4) and, therefore, due to the existence of two, more or less independent, stages of the wave excitation/generation: the resulting (after leaving both unstable domains) wave mode SFH is a superposition of the SFHs generated at these stages. At $Q=1.5$, this interference is destructive close to $K_{y}=1$ and results in the cavity (Fig. 7). As one can see from Fig. 8, the cavity point occurs at different $K_y$ for different $Q$. At small values of $Q$, the number of cavity points increases (see the curve for $Q=1$ in Fig. 8), as destructive interference occurs at several values of $K_y$. In the non-self-gravitating case ($Q \rightarrow \infty$), there is no cavity point, as in this case there is just one stage of the wave excitation and, therefore, the interference phenomenon is absent. Summary and Discussions ======================= Studying transient phenomena induced by flow nonnormality in thin self-gravitating astrophysical discs, we have concentrated on the dynamics of vortex mode perturbations. The linear dynamics of perturbations has been investigated by means of the so-called nonmodal approach, which consists of tracing the dynamics of SFHs (see equation (5)). SFHs represent the simplest/basic “elements” of the dynamical processes at constant shear rate and greatly help to grasp the transient growth and coupling of perturbation modes. It has been shown that self-gravity considerably alters the dynamics of vortex mode SFHs – their transient growth may be several orders of magnitude stronger than in the non-self-gravitating case and 2-3 times larger than the transient growth of the wave mode (see Figs. 5,6). The evolution of vortical and wave type perturbations has recently been studied by Wada, Meurer & Norman (2002) in high-resolution numerical simulations of two-dimensional hydrodynamic turbulence in certain galactic disc flows.
These simulations clearly demonstrate that vortical/solenoidal perturbations are as important as spiral density wave/compressible perturbations in determining the properties (spectra) of the resulting gravitoturbulent state. They also suggest, based on their simulation results, that self-sustained turbulence can occur in the case of self-gravitating Keplerian rotation. Their work supports the conclusion that the gravity-enhanced transient growth of vortex mode perturbations described here is a key factor in the simulated self-sustained turbulence and, hence, the vortex mode perturbation itself a key participant. Consequently, self-gravity, or gravitational instability, might allow for the onset of two-dimensional hydrodynamic turbulence in astrophysical disc flows, and the bypass mechanism of the onset of turbulence (elaborated by the hydrodynamic community in the 1990s) may play a part in this process. Another work relevant to the present paper is that of Gammie (2001). In this paper, turbulence and angular momentum transport in self-gravitating discs are studied numerically in the shearing sheet approximation. One has to note that the initial white noise distribution adopted by Gammie (2001) in fact also includes vortex mode perturbations, as the potential vorticity of the initially imposed white noise is nonzero, i.e., the initial perturbation is a mixture of vortex and wave modes. However, as mentioned in the Introduction, the identification/separation of the modes and the separate study of their properties have been left out of the analysis. In the case of such a mixture, vortex mode perturbations grow transiently and at the same time generate zero vorticity perturbations (i.e., SD waves) via the linear mechanism described here.
The resulting turbulence can be more “violent” than the zero vorticity one, i.e., turbulence in which the basic elements are SD waves, and therefore the angular momentum transport can be more intense because of the larger transient growth factors of vortex mode perturbations. There should also be differences in the statistical properties (energy spectra) of these two kinds of turbulence. (This question is addressed, in part, in Wada, Meurer & Norman 2002; however, their classification of perturbation modes differs somewhat from ours.) Gammie’s analysis actually concentrates only on the question of the locality of angular momentum transport in a gravitoturbulent state and not on the relative contributions/fractions of vortex and SD wave mode perturbations to the shear stresses governing angular momentum transport in discs. The described linear coupling of vortex and wave modes, which is caused by the differential character of the disc flow, is asymmetric: vortex mode perturbations are able to generate wave mode ones, but not vice versa. The considered system conserves potential vorticity, and it is obvious that vortex mode perturbations, having nonzero vorticity, are able to excite SD wave mode perturbations having zero potential vorticity. This asymmetric coupling lends additional significance to the vortex mode as a participant in the manifestations of SD waves and shocks in astrophysical discs. Acknowledgments {#acknowledgments .unnumbered} =============== This work is supported by the International Science and Technology Center (ISTC) grant G-1217. We would like to thank the anonymous referee for helpful comments. Afshordi N., Mukhopadhyay B., Narayan R., 2005, ApJ, 629, 373 Baggett J. S., Driscoll T. A., Trefethen L. N., 1995, Phys.
Fluids, 7, 833
Barge P., Sommeria J., 1995, A&A, 295, L1
Bodo G., Chagelishvili G., Murante G., Tevzadze A., Rossi P., Ferrari A., 2005, A&A, 437, 9
Bodo G., Tevzadze A., Chagelishvili G., Mignone A., Rossi P., Ferrari A., 2007, submitted to A&A
Bracco A., Chavanis P. H., Provenzale A., Spiegel E., 1999, Phys. Fluids, 11, 2280
Butler K. M., Farrell B. F., 1992, Phys. Fluids A, 4, 1637
Chagelishvili G. D., Tevzadze A. G., Bodo G., Moiseev S. S., 1997, Phys. Rev. Lett., 79, 3178
Chagelishvili G., Zahn J.-P., Tevzadze A., Lominadze J., 2003, A&A, 402, 401
Davis S. S., Sheehan D. P., Cuzzi J. N., 2000, ApJ, 545, 494
Davis S. S., 2002, ApJ, 576, 450
Farrell B. F., Ioannou P. J., 1993, Phys. Fluids A, 5, 1390
Farrell B. F., Ioannou P. J., 2000, Phys. Fluids, 12, 3021
Fridman A. M., Khoruzhii O. V., 1999, in Sellwood J. A. and Goodman J., eds, ASP Conf. Ser. Vol. 160, Astrophysical Discs. Astron. Soc. Pac., San Francisco, p. 341
Gammie C., 2001, ApJ, 553, 174
Gebhardt T., Grossmann S., 1994, Phys. Rev. E, 50, 3705
Godon P., Livio M., 1999, ApJ, 523, 350
Godon P., Livio M., 2000, ApJ, 537, 396
Goldreich P., Lynden-Bell D., 1965, MNRAS, 130, 125
Goldreich P., Tremaine S., 1978, ApJ, 222, 850
Grossmann S., 2000, Rev. Mod. Phys., 72, 603
Gustavsson L. H., 1991, J. Fluid Mech., 224, 241
Henningson D. S., Reddy S. C., 1994, Phys. Fluids, 6, 1396
Ioannou P. J., Kakouris A., 2001, ApJ, 550, 931
Johnson B. M., Gammie C. F., 2005, ApJ, 635, 149
Julian W. H., Toomre A., 1966, ApJ, 146, 810
Kim W., Ostriker E., 2001, ApJ, 559, 70
Klahr H. H., Bodenheimer P., 2003, ApJ, 582, 869
Klahr H. H., 2004, ApJ, 606, 1070
Klahr H. H., Bodenheimer P., 2006, ApJ, 639, 432
Lodato G., Rice W. K. M., 2004, MNRAS, 351, 630
Lodato G., Rice W. K. M., 2005, MNRAS, 358, 1489
Lominadze J. G., Chagelishvili G. D., Chanishvili R. G., 1988, SvA Lett., 14, 364
Lovelace R. V. E., Li H., Colgate S. A., Nelson A. F., 1999, ApJ, 513, 805
Mejia A. C., Durisen R. H., Pickett M. K., Cai K., 2005, ApJ, 619, 1098
Nakagawa Y., Sekiya M., 1992, MNRAS, 256, 685
Pringle J. E., 1981, ARA&A, 19, 137
Rayleigh, Lord, 1880, Scientific Papers, Vol. 1, Cambridge Univ. Press, Cambridge, p. 474
Reddy S. C., Henningson D. S., 1993, J. Fluid Mech., 252, 209
Reddy S. C., Schmid P. J., Henningson D. S., 1993, SIAM J. Appl. Math., 53, 15
Shakura N. I., Sunyaev R. A., 1973, A&A, 24, 337
Tagger M., 2001, A&A, 380, 750
Tevzadze A., Chagelishvili G., Zahn J.-P., Chanishvili R., Lominadze J., 2003, A&A, 407, 779
Toomre A., 1981, in Fall S. M. and Lynden-Bell D., eds, The Structure and Evolution of Normal Galaxies. Cambridge Univ. Press, Cambridge, p. 111
Trefethen L. N., Trefethen A. E., Reddy S. C., Driscoll T. A., 1993, Science, 261, 578
Umurhan O. M., Regev O., 2004, A&A, 427, 855
Wada K., Meurer G., Norman C., 2002, ApJ, 577, 197
Yecko P. A., 2004, A&A, 425, 385

[^1]: E-mail: g.mamatsashvili@astro-ge.org
--- abstract: 'Infrared quadrupole modes (IRQM) of the valence electrons in light deformed sodium clusters are studied by means of the time-dependent local-density approximation (TDLDA). IRQM are classified by angular momentum components $\lambda\mu =$20, 21 and 22, whose $\mu$ branches are separated by cluster deformation. In light clusters with a low spectral density, IRQM are unambiguously related to specific electron-hole excitations, thus giving access to the single-electron spectrum near the Fermi surface (HOMO-LUMO region). Most of the IRQM are determined by cluster deformation and so can serve as a sensitive probe of the deformation effects in the mean field. The IRQM branch $\lambda\mu =$21 is coupled with the magnetic scissors mode, which gives a chance to detect the latter. We discuss two-photon processes, Raman scattering (RS), stimulated emission pumping (SEP), and stimulated Raman adiabatic passage (STIRAP), as the relevant tools to observe IRQM. A new method to detect the IRQM population in clusters is proposed.' author: - 'V.O. Nesterenko$^{1,2,5}$, P.-G. Reinhard$^{3}$, W. Kleinig$^{1,4}$, and D.S. Dolci$^{1}$' title: Infrared electron modes in light deformed clusters --- Introduction ============ The most prominent electron mode in metal clusters is the Mie surface plasmon [@Mie]. Being the doorway to many structural and dynamical properties, it has been much investigated in the past; for extensive summaries see e.g. [@KreVol; @deH93; @Bra93; @Hab]. Besides the dominant Mie plasmon, many other electron modes, both electric and magnetic, have been predicted but not yet observed in clusters; for a brief overview see [@Ne_SN]. Similar modes exist in other many-fermion systems. In particular, almost all of them were identified experimentally in atomic nuclei; for a review see [@Ber]. The analogy with deformed nuclei suggests that deformed clusters should exhibit a family of low-energy (infrared) electron modes which are absent in spherical systems.
Deformed clusters have a partly filled valence electron shell and so sustain low-energy one-electron-one-hole ($1eh$) excitations inside this shell. Being mainly determined by the deformation splitting, the excitation energies are rather small and typically lie in the infrared. Since major shells in light clusters cover electron levels with the same space parity, the infrared modes should have positive parity. Their softness and deformation dependence suggest that these should be infrared quadrupole modes (IRQM). In axially deformed clusters, IRQM are represented by separate $\lambda\mu =$20, 21 and 22 branches (where $\lambda=2$ stands for the total angular momentum and $\mu$ for its azimuthal component). Our present knowledge of infrared electronic excitations in clusters is very poor. Even basic IRQM properties (spectrum, collectivity, evolution with cluster size and deformation, responses to external fields) are unknown. It is the aim of this paper to investigate the main features of IRQM from a theoretical perspective, hopefully delivering useful hints for an experimental search. IRQM in light and in heavy clusters have different origins [@na118]. In the present paper, we will consider light clusters or, specifically, free deformed singly-charged light clusters. Such clusters have essential advantages for the study of IRQM. First, they have an extremely low spectral density at low excitation energies, a feature which facilitates the discrimination of IRQM in two-photon processes. The low spectral density also reduces the unwanted level broadening (Landau fragmentation, electron-electron collisions). Beams of size-selected singly-charged light clusters are readily available. So, these clusters are well suited for measurements. Second, as is shown below, IRQM in light clusters are dominated by one $1eh$ component and can be directly and unambiguously interpreted in terms of the electron spectrum in the HOMO-LUMO region.
So, IRQM can deliver important information on the underlying mean-field level structure. Since most of the IRQM are induced by cluster deformation, they allow one to study the evolution of the electronic levels with deformation. And third, there is a close connection between the $\lambda\mu =$21 branch of the IRQM and the electronic scissors mode (SM) [@LS_M1; @prl_M1; @pra_M1; @epjd_M1]. This gives a chance to observe the SM, at least indirectly, through IRQM. The SM is a universal dipole magnetic orbital mode peculiar to deformed systems. It has already been observed in a variety of quantum systems but not yet in clusters. Our calculations were performed within a linearized time-dependent local density approximation (TDLDA). The ionic background is approximated by a soft deformed jellium density; for reviews see [@Cal00; @Reibook]. The method goes back to early studies on cluster spectra [@Eka84]. It has been successfully applied to study the dipole plasmon in spherical [@Ne_EPJD_98] and deformed [@Ne_AP; @Ne_EPJD_02] clusters. We will consider here Na clusters as the simplest test case. However, the IRQM properties worked out below are of a general character and so should appear in all other metal clusters with a pronounced shell structure of the valence electrons. IRQM are not accessible by one-photon transitions, which excite exclusively dipole modes. Thus one has to use two-photon processes (TPP) where the target state is populated via an intermediate state by two (absorption and emission) dipole transitions. The dipole plasmon can serve as an intermediate state. Experimental data on IRQM are very scarce and limited to heavy clusters (e.g. RS measurements of IRQM in embedded silver clusters were reported [@Duval]). Moreover, typical two-photon techniques of atomic and molecular spectroscopy are rarely used for clusters.
This can be partly explained by the fact that clusters have some specific properties (see discussion in Sec.\[sec:exp\]), which make a straightforward application of these techniques questionable. In particular, the common methods of detection of the level population, based on radiative decay, seem not to be appropriate for clusters. In this paper, we will examine the applicability to clusters of some widespread TPP: Raman scattering (RS), stimulated emission pumping (SEP), and stimulated Raman adiabatic passage (STIRAP). An alternative method for detection of the level population in clusters will be proposed. Altogether, the present study has two goals: i) to provide a first guide on quadrupole electronic excitations in light deformed clusters; ii) to explore the access to non-dipole electron modes in clusters by modern two-photon techniques. The paper is organized as follows. Section \[sec:model\] describes the theoretical framework and the choice of the most suitable clusters. In section \[sec:origin\], a general hierarchy of quadrupole electron excitations in clusters is presented and the origin of IRQM is clarified. In section \[sec:res\], the IRQM responses in one- and two-photon processes are compared and the $1eh$ nature of IRQM is demonstrated. In section \[sec:exp\], the possibility to observe IRQM in two-photon processes (RS, SEP and STIRAP) is discussed. Conclusions are drawn in section \[sec:conc\]. In Appendix A, the dipole matrix elements responsible for the coupling between the states involved in the two-photon process are analyzed. In Appendix B, the scissors mode and its connection with the $\lambda\mu=21$ branch of the IRQM are briefly discussed.
Calculation scheme and cluster choice {#sec:model} ===================================== Basic spectral properties ------------------------- The electron cloud is described by density-functional theory at the level of the local-density approximation (LDA) for the ground state and time-dependent LDA (TDLDA) for the excitations, actually using the functional of [@GL]. The ionic background of the cluster is approximated by the soft jellium model allowing for quadrupole and hexadecapole deformation [@Mon95b]. IRQM stay in the regime of small amplitudes. We thus employ linearized TDLDA, often called the random-phase approximation (RPA). The actual implementation in axial symmetry is explained in [@Ne_AP]. The reliability of the method has been checked in diverse studies of the Mie plasmon in spherical [@Ne_EPJD_98] and deformed [@Ne_AP; @Ne_EPJD_02] clusters. The choice of the clusters was dictated by the following reasons: i) The clusters should be small enough to possess a dilute and non-collective IRQM spectrum. Only then can the spectrum be resolved and unambiguously related to the single-particle levels. ii) Since IRQM are mainly induced by cluster deformation, clusters with a strong deformation (both prolate and oblate) are desirable. The simplest case of axial shape is most suitable for the analysis. iii) Shape isomers exhibit different single-electron spectra [@Ne_AP; @Ne_EPJD_02], which can result in a smearing out of the low-energy spectral lines. The heavier the clusters, the more isomers they have [@Ne_AP; @Ne_EPJD_02]. So, light clusters with a strictly dominant equilibrium shape are preferable. Among them, we should choose clusters whose ground state and isomers have a similar (prolate or oblate) shape. Thus we minimize the blurring of the IRQM spectra. iv) The jellium approximation is not correct for very small clusters. This establishes a lower limit for the cluster size.
A preliminary cluster choice can be made by reviewing the properties of the dipole plasmon, which will serve as the intermediate state in a two-photon process. Besides, the description of the dipole plasmon allows us to estimate the accuracy of the model. Fig. 1 shows RPA results for the dipole plasmon in light axially deformed Na clusters: prolate Na$^+_{11}$, Na$^+_{15}$, Na$^+_{27}$ and oblate Na$^+_{7}$, Na$^+_{19}$. The shape of each cluster is characterized by equilibrium quadrupole and hexadecapole deformations, $\delta_2$ and $\delta_4$, obtained by minimization of the total cluster energy [@Mon95b; @Ne_AP; @Ne_EPJD_02]. To simulate temperature and other sources of line broadening, the photoabsorption cross section is smoothed by a Lorentzian [@Ne_AP]. As is seen from Fig. 1, the emerging dipole spectra consist basically of two peaks: the weaker one ($\lambda\mu =$10) corresponds to oscillations along the symmetry axis and the stronger one to the orthogonal mode $\lambda\mu =$11 (in fact, two identical modes $\mu=\pm 1$). In most of the clusters, these peaks are well separated by the deformation splitting. The heaviest sample Na$_{27}^+$ shows some Landau fragmentation of the resonance peaks (distribution of the collective strength among several RPA levels, indicated by vertical bars). This is the beginning of a trend which persists towards heavier clusters. The Landau fragmentation overrules the deformation splitting already at $N\approx 50$ [@Ne_EPJD_02]. Thus one should stay in the region $N\leq 20$ where the spectra are still dilute and the Landau fragmentation is weak. Within that region, one should pick samples with a large deformation. The clusters Na$_{7}^+$, Na$_{11}^+$ and Na$_{15}^+$ seem to be the best candidates. Besides, their isomeric and ground states have similar shapes [@Kuemmel], oblate in Na$_{7}^+$ and prolate in Na$_{11}^+$ and Na$_{15}^+$. The analysis of IRQM in the next section confirms this cluster choice. Fig.
1 shows that in general the folded spectra (solid lines) agree nicely with the experiment (triangles), which signifies the reliability of the method. However, the jellium plus RPA description, working well for the heavier clusters, worsens for lighter ones. In Na$^+_{7}$ and Na$^+_{11}$, the calculated spectra are redshifted and do not reproduce the structure details. The discrepancy is due to the fact that light clusters tend to be more compressed due to larger surface tension [@Mon94b], while we use here a constant Wigner-Seitz radius for reasons of simplicity. Besides, the jellium approximation is certainly too rough for the description of details in such light clusters. The results for the smallest samples may not reach a quantitative level, but they are still useful for the qualitative considerations pursued here. Fig. 1 shows that even in the lightest clusters the jellium TDLDA reproduces sufficiently well the average energy, principal gross structure, and magnitude of the deformation splitting of the resonance. Such accuracy suffices for our present survey. Two-photon process ------------------ We consider a two-photon process running via the $\lambda\mu =$10 or 11 branches of the dipole plasmon as the intermediate states, see Fig. 2. The branches are assumed to be well separated by deformation splitting, which is indeed the case for strongly deformed light clusters (see Fig. 1). Then the two reaction paths can be disentangled by tuning the photon frequency. This allows one to specify and monitor the process. For example, if the reaction runs only via the $\lambda\mu =$10 plasmon branch, then the population of the low-energy $\lambda\mu =$22 mode is forbidden and only the 20 and 21 modes remain to be considered. If one of the paths is suppressed (see the discussion for Na$^+_7$ in the next section), then the reaction can be tuned to the other path. The comparison of the path rates delivers important information about the structure of the dipole plasmon and IRQM.
The population of the IRQM is approximately calculated as an incoherent sum of independent two-step processes, each one being a product of dipole photoabsorption and photoemission: $$\begin{aligned} \label{eq:TPP} \sigma_{\uparrow\downarrow}(2\mu_2 i_2) &=& \sum_{i_1} \sigma^{ab}_{E1\uparrow} (0 \rightarrow 1\mu_1 i_1) \sigma^{em}_{E1\downarrow} (1\mu_1 i_1 \rightarrow 2\mu_2 i_2) \nonumber\\ &\propto& \sum_{i_1} \omega_{1\mu_1 i_1} |\langle1\mu_1 i_1|e{\bf r}|0\rangle|^2 \\ &&\quad \cdot \, (\omega_{1\mu_1 i_1}\!-\!\omega_{2\mu_2 i_2})^3 |\langle2\mu_2 i_2|e{\bf r}|1\mu_1 i_1\rangle|^2 \,. \nonumber\end{aligned}$$ Here $|1\mu_1 i_1\rangle$ and $|2\mu_2 i_2\rangle$ are RPA states of the dipole and quadrupole spectra, respectively. The index $i_1$ runs over all RPA states with dipole content in the accessible energy interval. The photoabsorption (photoemission) dipole matrix elements in (\[eq:TPP\]) define the coupling between the ground and dipole (dipole and quadrupole) states in two-photon processes. This coupling determines the Rabi frequency, the decisive quantity in TPP. Eq. (\[eq:TPP\]) follows from the general expression for the RS rate given elsewhere (see, e.g. [@Weissbluth]) if one keeps only the Stokes term and neglects the interference between the neighboring intermediate states. The Stokes term alone suffices to illustrate the population of IRQM in two-photon processes. Moreover, already the dipole matrix elements in (\[eq:TPP\]) provide a solid ground for the analysis. As for the interference, it should be weak in light clusters. Indeed, in Fig. 1 the dipole plasmon in clusters with $N\leq 20$ is represented by a few states separated by large energy intervals. In any case, the interference mainly leads to smoothing of the response, which can be taken into account by a reasonable averaging of the results.
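A minimal numerical sketch of Eq. (\[eq:TPP\]), with toy energies and matrix elements standing in for the actual RPA inputs, together with the kind of Lorentzian smoothing applied to the spectra here (the area normalization of the Lorentzian and the reading of $\Delta$ as a full width are assumptions of this sketch):

```python
import numpy as np

def tpp_population(omega_d, d_abs, omega_q, d_em):
    """Relative two-photon population of a quadrupole state, Eq. (1):
    sum over intermediate RPA dipole states i1 of
    omega_1 |<1|er|0>|^2 (omega_1 - omega_2)^3 |<2|er|1>|^2."""
    omega_d = np.asarray(omega_d, dtype=float)
    d_abs = np.asarray(d_abs, dtype=float)
    d_em = np.asarray(d_em, dtype=float)
    return float(np.sum(omega_d * d_abs**2
                        * (omega_d - omega_q)**3 * d_em**2))

def lorentzian_fold(grid, energies, strengths, delta):
    """Fold a discrete (stick) spectrum with area-normalized
    Lorentzians of full width delta."""
    g = delta / 2.0                           # half width at half maximum
    w = np.asarray(grid, float)[:, None]
    e = np.asarray(energies, float)[None, :]
    s = np.asarray(strengths, float)[None, :]
    return np.sum(s * (g / np.pi) / ((w - e)**2 + g**2), axis=1)

# toy example: two intermediate dipole states feeding one IRQM state
sigma = tpp_population(omega_d=[2.5, 2.8],   # eV, dipole RPA energies
                       d_abs=[1.0, 0.5],     # absorption matrix elements
                       omega_q=0.8,          # eV, IRQM energy
                       d_em=[0.1, 0.2])      # emission matrix elements
grid = np.linspace(0.0, 2.0, 2001)
folded = lorentzian_fold(grid, [0.8], [sigma], delta=0.1)
```

The $(\omega_1-\omega_2)^3$ emission factor makes the population sensitive to how far the intermediate dipole states lie above the target IRQM.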
In the present paper, we simulate the interference, temperature and other smearing factors by folding the results with a Lorentzian profile with an averaging width $\Delta$. The spectral width is known to increase with the excitation energy. So, we use $\Delta=$0.25 eV for the high-energy dipole and quadrupole plasmons (as in our previous RPA calculations for the photoabsorption spectra [@Ne_EPJD_98; @Ne_AP; @Ne_EPJD_02]) and $\Delta=$0.1 eV for the low-energy IRQM. It is worth noting that the photoabsorption and photoemission dipole matrix elements in (1) have essentially different structures (see Appendix A for more detail). The former is determined by the $1eh$ part of the dipole operator $e{\bf r}$ and is generally large. The latter is given by the $1ee$ (electron-electron) and $1hh$ (hole-hole) parts of the dipole operator, and its value can vary broadly, depending on the structure of the dipole and quadrupole states. IRQM in the hierarchy of quadrupole excitations {#sec:origin} =============================================== The spectrum of the three-dimensional harmonic oscillator provides a useful sorting scheme for the valence electron levels in metal clusters [@Cle; @deH93; @Bra93]. The levels are sorted in perfectly degenerate bunches, the major quantum shells characterized by the principal quantum number ${\cal N}=0,1,2,...$. The shells are separated by appreciable energy gaps, and every shell involves only states of the same space parity $\pi=(-1)^{\cal N}$. This oscillator picture is well fulfilled in light clusters and still provides a good approximation in medium-sized and heavy ones. In axially deformed clusters, the levels are characterized by Nilsson–Clemenger quantum numbers [@Cle] $\nu=[{\cal N}n_z \Lambda ]$ where ${\cal N}$ is the major shell as before, $n_z$ is the number of nodes along the symmetry axis $z$, and $\Lambda$ is the projection of the orbital angular momentum onto the same axis.
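This shell bookkeeping can be made concrete with a minimal enumeration (a sketch; the splitting of the quanta perpendicular to the symmetry axis as ${\cal N}-n_z=2n_\rho+\Lambda$ is the standard axial-oscillator relation, and only $\Lambda\geq 0$ labels are listed, with the $\pm\Lambda$ and spin degeneracies counted separately):

```python
def shell_levels(N):
    """Nilsson-Clemenger labels [N n_z Lambda] (Lambda >= 0) of the
    axial-oscillator major shell N: the quanta outside the symmetry
    axis split as N - n_z = 2*n_rho + Lambda."""
    return [(N, n_z, lam)
            for n_z in range(N, -1, -1)
            for lam in range(N - n_z, -1, -2)]

valence = shell_levels(2)
# -> [(2, 2, 0), (2, 1, 1), (2, 0, 2), (2, 0, 0)], i.e. the labels
#    [220], [211], [202], [200] of the N=2 valence shell
spatial_degeneracy = sum(2 if lam > 0 else 1 for (_, _, lam) in valence)
```

For shell ${\cal N}$ the spatial degeneracy comes out as $({\cal N}+1)({\cal N}+2)/2$, which is 6 for ${\cal N}=2$, consistent with the perfectly degenerate bunches described above.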
Following this scheme, excitations of valence electrons are characterized by the $\Delta {\cal N}$ value, the difference in shell number for the dominant $1eh$ jumps. Quadrupole excitations have even parity and are collected into the branches $\Delta{\cal N} =0$ and 2. The excitations with larger $\Delta{\cal N}$ are weak and can be neglected. Figure 3 illustrates the hierarchy of quadrupole excitations. The lower panels show the unperturbed photoabsorption calculated for the pure $1eh$ states, ignoring the residual interaction. The upper panels give the RPA photoabsorption. Let us first look at the strong quadrupole resonance appearing at high frequencies in the range 2-4 eV. It is blue-shifted by the residual interaction, and the larger the cluster, the stronger the shift [@Rei96b]. In heavy clusters this resonance is associated with the quadrupole plasmon. The resonance is mainly formed by E2 transitions over two major shells ($\Delta {\cal N} =2$). It exists in clusters of any shape and exhausts most of the quadrupole strength. The resonance is energetically very close to the dipole Mie plasmon (in a simple estimate $\omega_\lambda =\omega_{\rm pl}\sqrt{\lambda/(2\lambda\!+\!1)}$, where $\omega_{\rm pl}$ is the frequency of the volume plasmon [@Bra93]). In spite of the energy overlap with the dominant E1 plasmon, the E2 plasmon may, in principle, be discriminated by means of angular-resolved electron energy-loss spectroscopy (AR-EELS) at electron scattering angles $\sim 6^{\circ}$ [@Ger]. In the present study, we are interested not in the high-energy E2 resonance but in the IRQM which, being dilute and non-collective, can deliver important information about the electron single-particle spectrum near the Fermi (=HOMO) level. IRQM are associated with the low-energy $\Delta {\cal N} =0$ branch created by E2 transitions inside the valence shell. Being of $\Delta {\cal N} =0$ origin, IRQM can exist only in clusters with a partly occupied valence shell, i.e.
in deformed clusters. In Fig. 3, IRQM reside at 0.5-1.5 eV. As compared with the E2 resonance, the IRQM spectrum is very dilute. It is represented only by a few well separated levels. This prevents mixing of $1eh$ configurations by the residual interaction and creation of collective states. The IRQM thus retain their $1eh$ nature and so can be easily identified with particular $1eh$ configurations. As is seen from Fig. 3, the IRQM have very weak quadrupole strength in the photoabsorption. But they may be accessible in two-photon reactions. Fig. 4 shows single-particle levels and $1eh$ quadrupole transitions inside the valence shells in Na$^+_7$, Na$^+_{11}$, and Na$^+_{15}$. Following this scheme, the IRQM excitations in Fig. 3 can be unambiguously identified as particular $1eh$ states. They include $\{ [101]-[110]\}_{21}$ in Na$^+_7$, $\{ [220]-[200]\}_{20}$, $\{ [220]-[211]\}_{21}$, $\{ [220]-[202]\}_{22}$ in Na$^+_{11}$, and $\{ [220]-[200]\}_{20}$, $\{ [211]-[202]\}_{21}$, $\{ [211]-[200]\}_{21}$, $\{ [220]-[202]\}_{22}$ in Na$^+_{15}$. Electric and magnetic modes with the same $\Lambda^{\pi}$ are known to mix in deformed systems (see, e.g., [@Iud]). The mixture of electric E21 and magnetic orbital M11 excitations is especially interesting since it provides access to the orbital M1 scissors mode (SM) [@LS_M1; @prl_M1; @pra_M1; @epjd_M1]. The properties of this mode are sketched in Appendix B. Results and discussion {#sec:res} ====================== A variety of IRQM spectral distributions is shown in Fig. 5. The first line of the figure contains the quadrupole photoabsorption for IRQM states with $\lambda\mu =$20, 21 and 22. Though IRQM cannot be observed in photoabsorption, the latter is useful in any analysis of electron modes and so is worth considering. The second line exhibits magnetic dipole strengths for the magnetic dipole scissors mode (SM).
The next two lines show the two-photon populations (\[eq:TPP\]) when the TPP runs separately through the $\lambda\mu =$10 and 11 branches of the dipole plasmon. The intermediate states in the two-photon process involve all the RPA dipole states from the plasmon region. The quadrupole spectra in Fig. 5 can be easily identified with the $1eh$ transitions exhibited in Fig. 4. Most of the transitions connect the levels arising due to deformation splitting. The corresponding IRQM are determined by the deformation and vanish at the spherical shape. The first panel of Fig. 5 shows that IRQM are selectively active in the quadrupole photoabsorption. Some modes ($\lambda\mu =$22 in Na$^+_{11}$ and $\lambda\mu =$20 and 22 in Na$^+_{15}$) have small E2-transition matrix elements and so are suppressed in this one-photon process. However, as is shown below, these modes can be detected in TPP. The calculations show that all the IRQM are almost pure $1eh$ states, i.e. are dominated by one $1eh$ component (with the related transition shown in Fig. 4). The contributions of the dominant components typically attain $99-100\%$. Even in the $\lambda\mu =$21 states in Na$^+_{15}$, which deviate most from the pure $1eh$ nature, the contributions of the leading $1eh$ components are 92-93$\%$. This feature is additionally illustrated in Fig. 6, where the unperturbed (the dominant $1eh$ configuration alone) and RPA IRQM strengths are compared. One sees that in both cases the IRQM spectra and strengths are very similar. Only the $\lambda\mu =$21 states in Na$^+_{15}$ show a considerable redistribution of the strength (nevertheless keeping a predominantly $1eh$ structure). This case anticipates the involved picture for heavier clusters where the collective redistribution of strength is much stronger. The second panel in Fig. 5 exhibits the photoabsorption cross section for the dipole magnetic SM. The one-to-one correspondence between E21 and M11 peaks demonstrates the intimate connection between these two modes.
As is discussed in Appendix B, the scissors mode is driven by cluster deformation and vanishes in the spherical case. Two $\lambda\mu =$21 states in Na$_{15}^+$ represent an instructive example. The lower $\lambda\mu =$21 state, determined by the $[211] \to [202]$ transition between the members of the deformation multiplet, exhibits an appreciable magnetic dipole strength and thus should carry a large SM fraction. In contrast, the higher $\lambda\mu =$21 state is determined by a $[211] \to [200]$ transition which takes place even in the spherical case. The deformation is not crucial here. This state favors the E21 field and so can be treated as an ordinary quadrupole mode. Altogether, it is seen that deformation-induced IRQM provide access to the SM. The third and fourth panels in Fig. 5 show the TPP cross-section for the reaction paths via the $\lambda\mu =10$ and 11 branches of the dipole plasmon. It is seen that the TPP response considerably deviates from the photoabsorption. Besides, the population of IRQM crucially depends on the reaction path. For example, it is negligible for the $\lambda\mu =$21 states in Na$_{7}^+$ (path 10) and Na$_{15}^+$ (path 11). Suppression of the TPP transfer in these cases is explained by destructive effects in the photoemission dipole matrix element (see the discussion in Appendix A). This matrix element can vary to a large extent, depending on the particular structure of the IRQM and dipole intermediate states. So, for the efficient population of a particular IRQM, one has to choose the optimal TPP path. The microscopic calculations can be used here as a guide. Because of the $1eh$ nature of IRQM and their strong dependence on cluster deformation, they can deliver valuable spectroscopic information on the electron levels in the HOMO-LUMO region as well as on the deformation effects in the cluster mean field.
For example, the measurement for the $\lambda\mu =$21 state in Na$_{7}^+$ can provide the energy difference between the Fermi \[101\] and particle \[110\] levels (see Fig. 4). If the Fermi level energy is deduced from the ionization potential data, one finally obtains the energy of the \[110\] level. Simultaneously, we obtain an estimate of the deformation splitting of the active subshell. The analysis will be even more effective if the TPP data are combined with the photoemission and inverse photoemission results for the electron spectra. Experimental access to IRQM {#sec:exp} =========================== General view ------------ Two-photon processes are widely used in atomic and molecular physics (for a comprehensive discussion see [@Scoles]). Since atomic clusters are similar to molecules, it would be natural to use the same reactions in clusters. However, applications of TPP to clusters are very scarce. This can be partly explained by our poor knowledge of non-dipole low-energy electron modes in clusters, partly by peculiarities of clusters. In this section we will discuss the applicability of traditional TPP methods to clusters with particular emphasis on IRQM. As compared with atoms and molecules, atomic clusters have at least three features essential for TPP. i) The dominant decay channel in clusters is usually not radiative. Instead, the levels mainly decay through Landau fragmentation (dissipation of the collective motion into surrounding $1eh$ excitations), electron-electron collisions, and electron-ion coupling. This property spoils the radiation-based methods (typical of atomic and molecular measurements) for detecting the population of levels in clusters. In this connection, we will propose an alternative detection method based on the photoabsorption depletion of the cluster beam, which is more suitable for clusters. ii) Cluster excitations have extremely short lifetimes (10-20 fs for the dipole plasmon in medium clusters) and usually are rather broad (see e.g. Fig.
1 for the dipole plasmon). The short lifetimes inhibit the application of adiabatic transfer methods while large widths impede maintaining the resonance conditions. iii) Clusters demonstrate a strong shape isomerism. Cluster beams are usually a statistical mix of clusters of different shapes. This blurs the measured low-energy electron spectra. Light clusters with their dilute spectrum allow one to circumvent some of these troubles. Indeed, Landau fragmentation in such clusters is very weak, the levels are not so broad, and the lifetimes are much longer. Light clusters are strictly dominated by one equilibrium shape and their isomeric shapes are close to the ground state one [@Kuemmel]. So, we do not expect a strong blurring of the low-energy spectra. Beams of size-selected singly-charged light clusters are readily available. The light deformed singly-charged clusters considered in this paper seem to be most suitable for TPP measurements. The cluster temperature $\sim$ 100 K could be optimal. Then the thermal broadening of electron levels is small and, at the same time, the effects of the electron-ion coupling are still smoothed out. In the next subsections we will consider some typical two-photon processes (RS, SEP, STIRAP) and estimate their ability to observe IRQM. Raman scattering ---------------- RS is one of the simplest TPP. In this reaction, a dipole laser-induced transition to an intermediate electronic level (real or virtual) is followed by the dipole fluorescence to low-energy levels. RS measurements of electron infrared modes in clusters are very rare. We know only one experiment where IRQM were observed in heavy silver clusters embedded into amorphous silica films [@Duval]. The dipole plasmon was used as a resonant intermediate state. Further measurements revealed that IRQM are affected by cluster deformation [@Duval_privat].
These observations show that the radiative decay of the dipole plasmon is detectable in spite of the strong competition with other decay channels. This message is very encouraging for the application of TPP to clusters. RS generally assumes that the coupling between the intermediate dipole and final quadrupole states is of the same order of magnitude as the coupling between the intermediate and ground states. Only then is the final state successfully populated. This means that the absorption and emission dipole matrix elements defined in (\[eq:d01\]) and (\[eq:d12\]) should be of the same scale. The calculations show that this is indeed the case: in the clusters considered in the present paper most of the absorption and emission dipole matrix elements lie inside the interval 2-10 $ea_0$ (in atomic units), i.e. are basically of the same order of magnitude (with the exception of some emission matrix elements strongly suppressed due to destructive contributions; see the discussion in Appendix \[sec:A\]). This means that IRQM in light deformed clusters have a chance to be observed in the resonant RS (running via the dipole plasmon). RS is effective only if the coupling between the intermediate and desired target levels is strong enough. As was shown for Na$^+_7$, this is not always the case. To overcome this trouble, we should consider methods with stimulated emission into a specific target level (SEP, STIRAP). Stimulated emission pumping --------------------------- Unlike RS, this method exploits two lasers, pump and dump [@SEP]. The pump laser is responsible for the first photoabsorption step. The dump pulse follows the pump one with some delay and couples the intermediate and target states. If the difference between the pump and dump frequencies is in resonance with the frequency of the target state, then the dump radiation stimulates the emission to this state.
SEP enjoys widespread application for atoms and molecules but can encounter troubles for clusters: the SEP methods of detecting the population (based on measuring the spontaneous photoemission or the photoelectron yield from certain levels excited by probe lasers) can fail for short-lived cluster levels with their small radiative widths. In this connection, we propose a new detection method which does not need any probe radiative procedure. The idea is to use the depletion of the cluster beam caused by the photoabsorption. It is known that photoabsorption heats the clusters, leading to the subsequent evaporation of atoms. The recoil effect from the evaporated atom forces the cluster to leave the beam. Thus the photoabsorption can be measured in terms of depletion. In the scheme we propose, the frequency of the pump laser is fixed in resonance with the dipole plasmon while the frequency of the dump laser is scanned to reach the resonance with the IRQM at $\omega_P-\omega_D=\omega_{IRQM}$. Then one can detect the IRQM population as a dark resonance in the beam depletion. Indeed, if the dump pulse is not in resonance with the IRQM, then the usual photoabsorption induced by the pump laser and additionally supported by the dump laser takes place. In the case of resonance with the IRQM, part of the energy will be used not for cluster heating but for stimulated radiative decay from the dipole plasmon to the given IRQM. So, the photoabsorption (depletion) should demonstrate a sharp decrease, i.e. a kind of dark resonance. The deeper the dark resonance, the more populated is the IRQM. The photoionization channel plays no role because the ionization potential in light clusters is higher than $\omega_P+ \omega_D$. If the laser intensities are not too high, then multi-photon ionization can also be neglected. The scheme is simple and suitable for clusters. However, even in the best case, SEP can transfer to the target level only 25$\%$ of the population [@SEP].
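The scan logic of the proposed dark-resonance scheme can be sketched numerically. All numbers below (plasmon at 2.7 eV, IRQM at 0.9 eV, a 0.02 eV resonance width, a 40$\%$ dip depth, and the Lorentzian line shape itself) are hypothetical values chosen only to illustrate the detection principle; they are not results of this paper.

```python
# Illustrative sketch of the dark-resonance detection scheme:
# the pump is fixed on the plasmon, the dump frequency is scanned,
# and the beam-depletion signal shows a dip at omega_P - omega_D = omega_IRQM.
# All frequencies/widths are hypothetical, for illustration only.

def depletion(omega_p, omega_d, omega_irqm=0.9, gamma=0.02, dip=0.4):
    """Normalized beam-depletion signal (1 = off resonance).

    A Lorentzian dip of relative depth `dip` appears when the
    two-photon resonance omega_p - omega_d = omega_irqm is met.
    """
    detune = (omega_p - omega_d) - omega_irqm
    lorentz = gamma**2 / (detune**2 + gamma**2)
    return 1.0 - dip * lorentz

omega_p = 2.7                                        # pump fixed on the plasmon (eV)
scan = [omega_p - 0.5 - 0.002 * i for i in range(500)]   # dump scan: 2.2 ... ~1.2 eV
signals = [depletion(omega_p, w) for w in scan]
w_min = scan[signals.index(min(signals))]
# The dark resonance is found where omega_P - omega_D = omega_IRQM:
print(round(omega_p - w_min, 3))  # → 0.9
```

The depth of the dip (here 40$\%$) would encode how strongly the given IRQM is populated, in line with the statement "the deeper the dark resonance, the more populated is the IRQM".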
Spontaneous emission from the intermediate level results in additional leaking. So it is worth continuing our discussion and considering more efficient methods. STIRAP ------ Much better results can be obtained with STIRAP [@Berg_Shore; @Berg], which promises up to 100$\%$ population transfer from the initial to the target level. Such high efficiency is provided by the coherent character of the process and the principal possibility of avoiding the leaking from the intermediate levels. Like the previous method, STIRAP also implements two lasers: a pump laser to couple the initial state $|0>$ and the intermediate state $|1>$, and a Stokes laser to stimulate the emission from $|1>$ to the target state $|2>$. However, STIRAP is more involved because it implies a coherent adiabatic transfer. STIRAP has three principal requirements: i) the two-resonance condition $\omega_P-\omega_S=\omega_2-\omega_0$, which allows a detuning $\Delta=(\omega_1-\omega_0)-\omega_P=(\omega_1-\omega_2)-\omega_S$ from the intermediate state frequency; ii) a counterintuitive sequence of the pulses, in which the Stokes pulse precedes the pump one; iii) adiabatic evolution. Under these conditions, one of the time-dependent eigenfunctions of the system is a superposition of the initial and target bare states only: $|b_0(t)>=c_0(t)|0>+c_2(t)|2>$. It is reduced to $|0>$ at the beginning of the adiabatic evolution and to $|2>$ at the end. Hence, the system finally finds itself in the target state. The intermediate state $|1>$ is not involved in $|b_0(t)>$ and so is not populated at any time. Thus any leak in the population is avoided and the transfer is complete. The main point is to evolve the system adiabatically, keeping it all the time in the state $|b_0(t)>$. STIRAP is widely used in atomic and molecular spectroscopy. It is rather insensitive to precise pulse characteristics. Both continuous and pulsed lasers can be implemented.
Let us examine the STIRAP requirements for atomic clusters, in particular for the IRQM population via the dipole plasmon. [*Two-resonance condition*]{}. This condition can be obviously maintained even for a broad dipole plasmon. However, the role of the detuning $\Delta$ (which is used as an additional tool to prevent the $|1>$ admixture to $|b_0(t)>$) becomes vague. The dipole plasmon vanishes slowly at its flanks and so a rather large detuning is necessary. This can be done even under the restriction $\Delta \ll \omega_P \pm \omega_S$ imposed by the rotating-wave approximation. At the same time, this point has not yet been properly investigated and, probably, some fraction of $|1>$ will admix to $|b_0(t)>$. Then the population will not be complete, although it still can remain large. [*Counterintuitive order of pump and Stokes pulses*]{}. The order in which the Stokes pulse precedes the pump one is crucial for the maximal population [@Berg_Shore; @Berg]. Then the Stokes pulse prepares the coherent superposition of the intermediate and final states just before the arrival of the pump pulse, which stimulates the subsequent desired transfer. The Stokes and pump pulses must overlap, and the overlapping time $\Delta \tau$ determines the duration of the adiabatic evolution. This time should not be longer than the lifetime of the dipole plasmon, i.e. should not exceed 10-100 fs. Though such $\Delta \tau$ is extremely short, it is quite accessible in modern experiments [@SEP]. [*Adiabatic passage*]{}. This condition is the most demanding. Following [@Berg_Shore; @Berg], it can be formulated as $$\label{eq:adia} \Omega \Delta\tau > 10$$ where $\Omega=\sqrt{\Omega_P^2+\Omega_S^2}$ is the average of the pump and Stokes Rabi frequencies.
The Rabi frequency is [@Berg_Shore] $$\Omega = \frac{|d|}{\hbar}\sqrt{\frac{2I}{c\epsilon_0}} \simeq 2.20 \cdot 10^8 \; |d[ea_0]| \; \sqrt{I[\frac{W}{cm^2}]} \; s^{-1} \; ,$$ where $d$ is the dipole coupling matrix element in atomic units, $I$ is the laser intensity in $W/cm^2$, $c$ is the speed of light, and $\epsilon_0$ is the vacuum dielectric constant. It is easy to estimate (for $d\sim 5 ea_0$ and $\Delta\tau =$10 fs) that the condition (\[eq:adia\]) is maintained for intensities $I > 10^{12}$ $W/cm^2$. At such high intensities, multiphoton ionization and fragmentation of clusters take place [@Cal00], which can spoil STIRAP. However, there is a way to circumvent the trouble. The condition (\[eq:adia\]) was obtained for [*one*]{} intermediate level. But the realistic spectrum of the dipole plasmon consists of a sequence of dipole levels. In this case, the STIRAP condition should be revised. The realistic case is more complicated but, at the same time, opens new possibilities for STIRAP. In particular, one may loosen the adiabatic STIRAP condition and thus relax the requirements on the laser intensity. The general case of $N$ intermediate states, each with its own coupling and detuning, was studied in [@Vitanov]. It was shown that the trapped adiabatic state $|b_0(t)>$ can be created only when the ratio between each pump coupling and the respective Stokes coupling is the same for all intermediate states. According to our calculations, this condition is unrealistic for atomic clusters. However, softer alternative adiabatic requirements can be formulated. In particular, in the general case of arbitrary couplings, one may tune the pump and Stokes lasers just below all intermediate states and thus form a so-called adiabatic-transfer state, which also results in a high, if not complete, population of the target level.
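The single-level intensity estimate above can be checked directly. This is a minimal numerical sketch using the quoted Rabi-frequency formula with the same assumed values, $d\sim 5\,ea_0$ and $\Delta\tau = 10$ fs:

```python
# Numerical check of the single-level adiabaticity estimate:
# Omega ~ 2.20e8 * |d[e*a0]| * sqrt(I[W/cm^2]) s^-1, with Omega * dtau > 10.

d = 5.0          # dipole matrix element, in units of e*a0 (value from the text)
dtau = 10e-15    # pulse-overlap time, 10 fs (value from the text)

def rabi(intensity):
    """Rabi frequency in s^-1 for laser intensity in W/cm^2."""
    return 2.20e8 * d * intensity**0.5

# Minimal intensity satisfying Omega * dtau > 10:
i_min = (10.0 / (dtau * 2.20e8 * d))**2
print(f"I_min = {i_min:.2e} W/cm^2")
```

The result is about $8\cdot 10^{11}$ W/cm$^2$, i.e. of order $10^{12}$ W/cm$^2$, in agreement with the threshold quoted in the text for the single-intermediate-level case.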
Unlike $|b_0(t)>$, the adiabatic-transfer state can have admixtures from the intermediate states during the evolution period $\Delta \tau$, and so some population leaking is unavoidable. Nevertheless, we have here a solid adiabatic transfer with a high population of the target state. Finally, one may conclude that STIRAP can be used for the efficient population of IRQM in clusters. Conclusions {#sec:conc} =========== We have presented a first exploration of infrared quadrupole electron modes (IRQM) in light deformed clusters. Most of the IRQM are induced by cluster deformation and thus can deliver useful information on deformation effects, e.g. on the deformation splitting of electron levels. Besides, IRQM are almost pure one-electron-one-hole excitations and so give access to the single-electron spectrum. We explained the origin of IRQM and showed that they can be easily identified in the dilute spectrum of light clusters. In the second part, we examined some typical two-photon processes (Raman scattering, stimulated emission pumping, and stimulated Raman adiabatic passage) which are widely used in atomic and molecular spectroscopy but not yet for atomic clusters. It was shown that, in spite of some peculiarities of clusters (broad resonances, short level lifetimes, domination of non-radiative decay channels), these TPP can be applied to populate IRQM. Besides, a new method to detect the population of the target cluster states was proposed. TPP measurements of IRQM can be supplemented by photoemission and inverse photoemission experiments delivering similar information. We hope that our analysis will stimulate the application of the experimental TPP methods of atomic and molecular spectroscopy to atomic clusters. Non-dipole electron excitations in clusters represent a new promising research field.
They deliver interesting physics and can serve as a robust test for the theory, which has so far paid attention mainly to integral characteristics of clusters rather than to such fragile patterns as the single-electron spectra. It is worth noting that the single-electron spectra are very sensitive to many cluster features and so can be used as an effective tool for the investigation of these features. Though our calculations have been done for sodium clusters, the qualitative results we obtained are of a general nature and should be valid for other metal clusters as well. In particular, similar deformation-induced IRQM are expected for supported clusters where they can serve as sensitive indicators of the interface interaction and cluster deformation. Besides, IRQM can be important for the quantum transport in clusters where they can lead to a resonant transmission. The work was supported by the Visitors Program of the Max Planck Institute for the Physics of Complex Systems (Dresden, Germany). We thank Professors E. Duval and J.-M. Rost for valuable discussions. Dipole matrix elements {#sec:A} ====================== The dipole and quadrupole electron states are described as RPA modes $$Q_{\lambda\mu i}^{\dag}=\frac{1}{2}\sum_{eh} (\psi^{\lambda\mu i}_{eh}a^{\dag}_e a_{h} -\phi^{\lambda\mu i}_{eh}a_e a^{\dag}_{h}) \label{eq:Q}$$ where $$\psi^{\lambda\mu i}_{eh} \sim N_{\lambda\mu i} \frac{f^{\lambda\mu}_{eh}}{\epsilon_{eh}-\omega_{\lambda\mu i}} \: , \quad \phi^{\lambda\mu i}_{eh} \sim N_{\lambda\mu i} \frac{f^{\lambda\mu}_{eh}}{\epsilon_{eh}+\omega_{\lambda\mu i}}$$ are forward and backward amplitudes characterizing contributions of electron-hole ($1eh$) configurations to the mode. Furthermore, $N_{\lambda\mu i}$ is the normalization coefficient, $f^{\lambda\mu}_{eh}$ is the single-particle matrix element of the residual two-body interaction, $\epsilon_{eh}$ and $\omega_{\lambda\mu i}$ are energies of the $1eh$ and RPA excitations, respectively.
RPA describes equally well both collective and non-collective modes. In the latter case (i.e. in the limit $|\lambda\mu i> \to |1eh>$), only the amplitude $|\psi^{\lambda\mu i}_{eh}| \to 1$ survives while all others vanish. The photoabsorption matrix element responsible for the coupling between the ground and dipole states is determined by the $1eh$ part of the dipole operator and reads $$\label{eq:d01} \langle 1\mu_1 i_1|erY_{1\mu}|0> = \sum_{eh} f^{E1\mu_1}_{eh} (\psi^{1\mu_1i_1}_{eh}+\phi^{1\mu_1i_1}_{eh})$$ where $f^{E1\mu_1}_{eh}$ is the matrix element of the dipole transition. If we put $f^{1\mu_1}_{eh} \approx f^{E1\mu_1}_{eh}$ and take into account that all the relevant dipole $1eh$-levels are blue-shifted by the dipole interaction, then $$\langle 1\mu_1 i_1|erY_{1\mu}|0> \approx 2 N_{1\mu_1 i_1} \sum_{eh} \frac{(f^{E1\mu_1}_{eh})^2 \epsilon_{eh}} {\epsilon_{eh}^2-\omega_{1\mu_1 i_1}^2}$$ and it is easy to see that all $1eh$ levels contribute constructively to the transition. In contrast, the photoemission dipole matrix element connecting the dipole and quadrupole RPA modes is determined by the $1ee$ and $1hh$ parts of the dipole operator and has a more complicated structure: $$\begin{aligned} \label{eq:d12} \langle 2\mu_2 i_2 | erY_{1\mu}| 1\mu_1 i_1 \rangle \nonumber \\ =\frac{1}{2} \lbrack \sum_{ee'} f^{E1\mu}_{ee'} \sum_{h} \langle \psi^{2\mu_2i_2}_{eh}\psi^{1\mu_1i_1}_{he'}+ \psi^{2\mu_2i_2}_{he'}\psi^{1\mu_1i_1}_{he} \rangle \nonumber \\ + \sum_{hh'} f^{E1\mu}_{hh'} \sum_{e} \langle \psi^{2\mu_2i_2}_{he}\psi^{1\mu_1i_1}_{eh'}+ \psi^{2\mu_2i_2}_{h'e}\psi^{1\mu_1i_1}_{eh} \rangle \rbrack \; .\end{aligned}$$ The terms with backward amplitudes, $\sim \phi\phi $, are omitted in (\[eq:d12\]) since they are usually small. The matrix element (\[eq:d12\]) becomes simpler when the quadrupole state is dominated by a single $1eh$ configuration, say $\{ \bar{e}_2 \bar{h}_2 \}$. As is shown in Sec. \[sec:res\], this is indeed a common case.
Then all the quadrupole amplitudes vanish except $|\psi^{2\mu_2 i_2}_{\bar{e}_2\bar{h}_2}| \to 1$ and Eq. (\[eq:d12\]) reduces to $$\begin{aligned} \langle \{ \bar{e}_2 \bar{h}_2 \}|erY_{1\mu}|1\mu_1 i_1\rangle \nonumber \\ =\pm\frac{1}{2} \lbrack \sum_e f^{E1\mu}_{\bar{e}_2 e} \psi^{1\mu_1i_1}_{\bar{h}_2e} + \sum_h f^{E1\mu}_{h\bar{h}_2} \psi^{1\mu_1i_1}_{\bar{e}_2h} \rbrack \; . \label{eq:emiss_simpl}\end{aligned}$$ Now only the dipole amplitudes including the hole (first term) or particle (second term) from the pair $\{ \bar{e}_2 \bar{h}_2 \}$ contribute to the transition. A contribution is considerable only for a large dipole amplitude and a strong matrix element. (The latter takes place if the matrix element fulfills the asymptotic Nilsson selection rules [@Nilsson; @BM]: an E1$\mu$ transition between $[{\mathcal N}_i n_{iz} \Lambda_i]$ and $[{\mathcal N}_j n_{jz} \Lambda_j]$ states is favored if ${\mathcal N}_i={\mathcal N}_j\pm 1, \quad n_{iz}=n_{jz}\pm 1$ for $\mu=0$ and $\quad n_{iz}=n_{jz}$ for $\mu=1$.) Under these tough requirements, only a few terms yield large contributions to (\[eq:emiss\_simpl\]). Depending on the structure of the dipole and quadrupole states, the contributions can have different signs and thus lead to constructive or destructive results. So, the magnitude of the photoemission matrix element (and thus the coupling between the dipole and quadrupole states) can be large (as for the photoabsorption) or very small. Scissors mode {#sec:B} ============= The SM is a general dynamical phenomenon already found or predicted in different finite quantum systems (atomic nuclei, metal clusters, quantum dots, dilute ultra-cold gases of Bose and Fermi atoms); see the review [@epjd_M1]. All these different systems have two features in common: broken spherical symmetry (deformation) and a two-component structure.
Like the quadrupole modes, the SM separates into low-energy ($\Delta {\cal N} =0$) and high-energy ($\Delta {\cal N} =2$) branches. In this paper we consider only the low-energy branch, which is mixed with IRQM. Macroscopically, this branch is treated as small-amplitude rotational oscillations of a spheroid of valence electrons against a spheroid of the ionic background (hence the name scissors mode). Like IRQM, the SM is driven by cluster deformation and its energy scale is naturally determined by the deformation splitting of the electron levels. In axial clusters with a quadrupole deformation $\delta_2$, the energy and magnetic strength of the mode are estimated as [@LS_M1; @prl_M1]: $$\omega = \frac{20.7}{r_{s}^{2}}N_{e}^{-1/3}\delta_{2} \ eV, \qquad B(M1) \simeq N_{e}^{4/3}\delta_{2} \ \mu _b^{2} \label{eq:om}$$ where $r_s$ is the Wigner-Seitz radius (in $\AA$), $N_{e}$ is the number of valence electrons, and $\mu_b$ is the Bohr magneton. It is seen that both the energy and strength are proportional to the deformation parameter $\delta_2$ and vanish for the spherical shape. In axially symmetric systems, the SM is generated by the orbital momentum fields, $L_x$ and $L_y$, perpendicular to the symmetry axis $z$ and, like the quadrupole mode $\lambda\mu=21$, forms the states $|\Lambda^{\pi}=1^{+}>$. The SM strongly responds to an external magnetic dipole field. Besides, it determines the van Vleck paramagnetism in deformed clusters [@LS_M1; @prl_M1; @pra_M1; @epjd_M1]. The experimental search for the SM in clusters is still at its very beginning [@Duval]. There is an intimate connection between the scissors and quadrupole E21 modes in deformed clusters. In fact, both the SM and the E21 mode are parts of a general motion of multipolarity $\lambda\mu=21$.
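Before turning to the microscopic illustration, the estimates (\[eq:om\]) can be evaluated numerically. Taking the Wigner-Seitz radius of sodium, $r_s \approx 2.1$ Å, and an assumed, purely illustrative deformation $\delta_2 = 0.3$ for Na$^+_{15}$ (the value is a placeholder, not one of the equilibrium deformations computed in this paper):

```python
# Illustrative evaluation of the scissors-mode estimates (Eq. om):
#   omega  ~ (20.7 / r_s^2) * N_e^(-1/3) * delta_2   [eV]
#   B(M1)  ~ N_e^(4/3) * delta_2                     [mu_b^2]
# delta2 below is an assumed placeholder value, not a computed deformation.

r_s = 2.1      # Wigner-Seitz radius of Na, in Angstrom
N_e = 14       # valence electrons in Na_15^+
delta2 = 0.3   # illustrative quadrupole deformation

omega = 20.7 / r_s**2 * N_e**(-1.0 / 3.0) * delta2   # energy, eV
b_m1 = N_e**(4.0 / 3.0) * delta2                     # strength, mu_b^2

print(f"omega ~ {omega:.2f} eV, B(M1) ~ {b_m1:.1f} mu_b^2")
```

The resulting energy (about 0.6 eV) falls inside the 0.5-1.5 eV window where the IRQM reside, consistent with the E21/M11 correspondence discussed above; both outputs scale linearly with $\delta_2$ and vanish in the spherical limit.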
To illustrate this point, we expand a single-particle electron state $\nu$ in terms of a spherical basis $(n L\Lambda )$ $$\Psi_{\nu =[{\cal N}n_z\Lambda ]} = \sum_{nL} a^{\nu}_{nL}R_{nL}(r) Y_{L\Lambda}(\Omega)$$ and estimate the SM $1eh$ matrix element: $$\begin{aligned} \label{eq:me} \langle\Psi_{p}|{\hat L}_{x}|\Psi_{h}\rangle &\propto& \delta^{\mbox{}}_{{\pi}_{p},{\pi}_{h}} \delta^{\mbox{}}_{\Lambda_{p},\Lambda_{h}\!\pm\!1} \\ &\cdot & \sum_{nL} a^{p}_{nL}a^{h}_{nL}\sqrt{L(L\!+\!1)\!-\!\Lambda_h(\Lambda_h\!\pm\!1)} \,. \nonumber\end{aligned}$$ It is seen that the SM operator connects only components from one and the same basis level $nL$. Indeed, the operators ${\hat L}_{x}$ and ${\hat L}_{y}$ have no $r$-dependent part and so, due to orthogonality of the basis functions $R_{nL}(r)$, cannot connect the components with different $nL$. But the latter can be done by the quadrupole operator $r^2 Y_{21}$. In this sense the SM operator is more selective than E21, though both operators generate transitions of the same multipolarity. $\Lambda^{\pi}=1^+$ states involve both SM and E21 modes and respond to both M11 and E21 external fields. The states are treated as magnetic dipole SM or electric quadrupole E21, depending on which of the two responses dominates. [99]{} G. Mie, Ann. Phys. (Leipzig) [**25**]{}, 377 (1908). U. Kreibig and M. Vollmer, [*Optical properties of metal clusters*]{} (Springer, Berlin, 1993). W.A. de Heer, Rev. Mod. Phys. [**65**]{}, 611 (1993). M. Brack, Rev. Mod. Phys. [**65**]{}, 677 (1993). [*Clusters of atoms and molecules*]{}, ed. H. Haberland, (Springer series in chemical physics, [**52**]{}, Springer, Berlin, 1994). V.O. Nesterenko, W. Kleinig, and F.F. de Souza Cruz, in Proc. of Intern. Workshop “Collective excitations in Fermi and Bose Systems”, Serra Negra, San Paulo, Brazil, 1998 edited by C.A. Bertulani, L.F. Canto and M.S. Hussein (World Scientific, Singapore, 1999), p.
205-224. G.F. Bertsch and R. Broglia, [*Oscillations in finite quantum systems*]{}, (Cambridge Univ. Press, 1994). V.O. Nesterenko, P.-G. Reinhard, W. Kleinig, under preparation for publication. E. Lipparini and S. Stringari, [Phys. Rev. Lett.]{} [**63**]{}, 570 (1989); [Z. Phys.]{} D [**18**]{}, 193 (1991). V.O. Nesterenko, W. Kleinig, F.F. de Souza Cruz and N. Lo Iudice, [Phys. Rev. Lett.]{} [**83**]{}, 57 (1999). P.-G. Reinhard, V.O. Nesterenko, E. Suraud, S. El Gammal, and W. Kleinig, [Phys. Rev.]{} A [**66**]{}, 013206 (2002). V.O. Nesterenko, W. Kleinig, P.-G. Reinhard, N. Lo Iudice, F.F. de Souza Cruz, and J.R. Marinelli, Eur. Phys. J. D[**27**]{}, 43 (2003). F. Calvayrac, P.-G. Reinhard, E. Suraud, and C.A. Ullrich, Phys. Rep. [**337**]{}, 493 (2000). P.-G. Reinhard and E. Suraud, [*Introduction to Cluster Dynamics*]{}, (Wiley-VCH, Berlin, 2003). W. Ekardt, Phys. Rev. Lett. [**52**]{}, 1925 (1984). W. Kleinig, V.O. Nesterenko, P.-G. Reinhard, and Ll. Serra, [Eur. Phys. J.]{} D [**4**]{}, 343 (1998). W. Kleinig, V.O. Nesterenko, and P.-G. Reinhard, [Ann. Phys. (NY)]{} [**297**]{}, 1 (2002). V.O. Nesterenko, W. Kleinig, and P.-G. Reinhard, [Eur. Phys. J.]{} D [**19**]{}, 57 (2002). H. Portales, E. Duval, L. Saviot, M. Fujii, M. Sumitoto, and S. Hayashi, [Phys. Rev.]{} B [**63**]{}, 233402 (2001). O. Gunnarson and B.I. Lundqvist, [Phys. Rev.]{} B [**13**]{}, 4274 (1976). B. Montag, Th. Hirschmann, J. Meyer, P.-G. Reinhard, and M Brack, Phys. Rev. B [**52**]{}, 4775 (1995). S. K${\ddot u}$mmel, M. Brack, and P.-G. Reinhard, [Phys. Rev.]{} B[**62**]{}, 7602 (2000). M. Schmidt and H. Haberland, Eur. Phys. J. D[**6**]{}, 109 (1999). B. Montag, P.-G. Reinhard, J. Meyer, Z. Phys. D [**32**]{}, 125 (1994). M. Weissbluth, [*Atoms and Molecules*]{} (Academic Press, New York, 1978). K. Clemenger, Phys. Rev. B [**32**]{}, 1359 (1985). S.G. Nilsson, K. Dan. Vidensk. Selsk. Mat. Fys. Medd., [**29**]{}, n. 16 (1955). A. Bohr and B. Mottelson, [*Nuclear Structure*]{}, v. 
2, (Benjamin, Reading, MA, 1975). P.-G. Reinhard, O. Genzken, and M. Brack, Ann. Phys. (Leipzig) [**5**]{}, 1 (1996). L. G. Gerchikov, A. N. Ipatov, A. V. Solov’ev, and W. Greiner, [J. Phys.]{} B [**31**]{}, 3065 (1998). N. Lo Iudice, Prog. Part. Nucl. Phys. [**28**]{}, 556 (1997). , ed. G. Scoles, (Oxford Univ. Press, Oxford, 1988). , ed. H.-L. Dai and R.W. Field (Advanced series in physical chemistry, [**4**]{}, World Scientific, Singapore, 1999). K. Bergmann and B.W. Shore, in [*Molecular dynamics and spectroscopy by stimulated emission pumping*]{}, ed. H.-L. Dai and R.W. Field (Advanced series in physical chemistry, [**4**]{}, World Scientific, Singapore, 1999), Chapter 9, p. 319. K. Bergmann, H. Theuer, and B.W. Shore, [Rev. Mod. Phys.]{} [**70**]{}, 1003 (1998). E. Duval, private communication. N.V. Vitanov and S. Stenholm, Phys. Rev. A[**60**]{}, 3820 (1999). [**FIGURE CAPTIONS**]{} : Photoabsorption dipole cross section in light axially deformed Na clusters. Quadrupole and hexadecapole deformation parameters are indicated in boxes. RPA results are given as vertical bars (in $eV \AA^2$) and as a strength function smoothed by the Lorentz weight with the averaging parameter 0.25 eV. Separate contributions to the strength function from the $\lambda\mu =$10 and 11 dipole branches (the latter is twice as strong) are shown by dashed curves. The experimental data (triangles) from [@SH] are given for comparison. : Two-photon process: scheme of population of IRQM states $\lambda\mu =$20, 21 and 22 via the $\lambda\mu =10$ (left) and 11 (right) branches of the dipole plasmon. : Quadrupole strength distribution in light Na clusters. Lower panels: the unperturbed $1eh$ strength (without residual interaction). Upper panels: the RPA strength (with residual interaction). The results are given as bars (for every discrete RPA or $1eh$ state) and as smooth strength functions obtained by folding with a Lorentzian of width $\Delta =$0.25 eV.
The IRQM strength (enclosed by the circles at 0.5-1.5 eV) is very weak and so is rescaled by a factor of $10^2$. : The electron level scheme for Na$_7^+$, Na$^+_{11}$ and Na$^+_{15}$ in the spherical limit (left) and at the equilibrium deformation (right). Occupied and unoccupied states are drawn by solid and dashed lines, respectively. The Fermi (HOMO) level is marked by index F. Arrows depict the possible low-energy hole-electron $E2\mu$ transitions. : IRQM in light clusters. The plots exhibit quadrupole photoabsorption (uppermost panels, marked E2), scissors M1 photoabsorption (second line, marked M1), and two-photon population of IRQM via the $\lambda\mu =10$ (third line, marked TPP E10) and 11 (lowest panels, marked TPP E11) dipole branches. IRQM are depicted by solid ($\lambda\mu =20$), dashed ($\lambda\mu =21$), and dotted ($\lambda\mu =22$) curves. The strengths are smoothed by a Lorentzian with $\Delta =0.1$ eV. : Quadrupole strengths for particular IRQM. The unperturbed $1eh$ (dashed curve) and RPA (solid curve) strengths are compared. The results are smoothed by a Lorentzian with the width $\Delta =0.1$ eV.
--- abstract: 'We prove a multiplication theorem for quantum cluster algebras of acyclic quivers. The theorem generalizes the multiplication formula for quantum cluster variables in [@fanqin]. Moreover, some $\mathbb{ZP}$-bases in quantum cluster algebras of finite and affine types are constructed. Under the specialization of $q$ and coefficients to $1$, these bases become the integral bases of cluster algebras of finite and affine types (see [@CK1] and [@DXX]).' address: - | Institute for advanced study\ Tsinghua University\ Beijing 100084, P. R. China - | Department of Mathematical Sciences\ Tsinghua University\ Beijing 100084, P. R. China author: - Ming Ding and Fan Xu title: The multiplication theorem and bases in finite and affine quantum cluster algebras --- [^1] Introduction ============ Quantum cluster algebras were introduced by A. Berenstein and A. Zelevinsky [@berzel] as a noncommutative analogue of cluster algebras [@ca1][@ca2] to study canonical bases. A quantum cluster algebra is generated by a set of generators called the *quantum cluster variables* inside an ambient skew-field $\mathcal{F}$. Under the specialization $q=1,$ the quantum cluster algebras are exactly the cluster algebras introduced by S. Fomin and A. Zelevinsky [@ca1][@ca2]. Cluster algebras have a close link to quiver representations via cluster categories invented in [@BMRRT]. The link is explicitly characterized by the Caldero-Chapoton map ([@caldchap]) and the Caldero-Keller multiplication theorems ([@CK1],[@CK2]). The Caldero-Chapoton map associates to objects in the cluster categories certain Laurent polynomials and, in particular, sends rigid objects to cluster variables. The Caldero-Keller multiplication theorems show the multiplication rules between images of objects under the Caldero-Chapoton map. The theorem is remarkable.
On the one hand, they are similar to the multiplication in a dual Hall algebra and unify homological and geometric properties of cluster categories with combinatorial properties of cluster algebras. On the other hand, since cluster algebras were introduced to study canonical bases, it is important to construct integral bases of cluster algebras, and the Caldero-Keller multiplication theorems are essential for doing so. Following this link, some good bases have been constructed for finite and affine cluster algebras ([@CK1], [@calzel], [@Dup] and [@DXX]). Naturally, one can study the quantum analogue of the link. Recently, Rupel ([@rupel]) defined a quantum analogue of the Caldero-Chapoton map (called the quantum Caldero-Chapoton map) and conjectured that quantum cluster variables can be expressed as images of indecomposable rigid objects under the quantum Caldero-Chapoton formula. A key ingredient of the conjecture is to confirm the multiplication rules between quantum cluster variables given by [@berzel]. Most recently, the conjecture has been proved by Qin ([@fanqin]) for acyclic equally valued quivers; there, Qin constructed a quantum cluster multiplication formula and then confirmed the multiplication rules between quantum cluster variables. The present paper is devoted to proving a multiplication theorem (the combination of Theorems \[multi-formula\] and \[exchange2\]) for acyclic quantum cluster algebras in Section 3. The theorem generalizes the quantum cluster multiplication formula in [@fanqin] and can be viewed as a quantum analogue of the $1$-dimensional Caldero-Keller multiplication theorem in [@CK2]. In view of the role the Caldero-Keller multiplication theorems play for cluster algebras, our multiplication theorem is worth highlighting; it also reflects the difficulty of proving the more general quantum analogue of the Caldero-Keller multiplication theorems.
The main idea in the proof of the multiplication theorem is taken from [@Hubery]. Moreover, we construct some good $\mathbb{ZP}$-bases in quantum cluster algebras of finite and affine types. By specializing $q$ and the coefficients to $1$, these bases induce the good bases for cluster algebras of finite [@CK1] and affine [@DXX] types, respectively. The quantum Caldero-Chapoton map ================================ Quantum cluster algebras ------------------------ The main reference for quantum cluster algebras is [@berzel]. Here, we also recommend [@fanqin Section 2] as a nice reference. Let $L$ be a lattice of rank $m$ and $\Lambda:L\times L\to \ZZ$ a skew-symmetric bilinear form. Let $q$ be a formal variable and consider the ring $\ZZ[q^{\pm1/2}]$ of Laurent polynomials with integer coefficients. Define the *based quantum torus* associated to the pair $(L,\Lambda)$ to be the $\ZZ[q^{\pm1/2}]$-algebra $\mathcal{T}$ with a distinguished $\ZZ[q^{\pm1/2}]$-basis $\{X^e: e\in L\}$ and the multiplication given by $$X^eX^f=q^{\Lambda(e,f)/2}X^{e+f}.$$ It is easy to see that $\Tcal$ is associative and that the basis elements satisfy the following relations: $$X^eX^f=q^{\Lambda(e,f)}X^fX^e,\ X^0=1,\ (X^e)^{-1}=X^{-e}.$$ It is known that $\Tcal$ is an Ore domain, i.e., it is contained in its skew-field of fractions $\Fcal$. The quantum cluster algebra will be defined as a $\ZZ[q^{\pm1/2}]$-subalgebra of $\Fcal$. A *toric frame* in $\Fcal$ is a map $M: \ZZ^m\to \Fcal \setminus \{0\}$ of the form $$M({\bf c})=\varphi(X^{\eta({\bf c})})$$ where $\varphi$ is an automorphism of $\Fcal$ and $\eta: \ZZ^m\to L$ is an isomorphism of lattices.
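The multiplication rule of the based quantum torus can be modelled concretely: encode $q^{a/2}X^e$ as a pair $(a,e)$ and multiply by adding exponents and the twist $\Lambda(e,f)$. The following sketch checks the displayed relations; the sample form `Lam` and all helper names are our own illustrative assumptions, not notation from the paper.

```python
import numpy as np

# A minimal model of the based quantum torus: q^{a/2} X^e is encoded as
# (a, e), with a in Z the power of q^{1/2} and e in Z^3.
Lam = np.array([[0, 1, -2],
                [-1, 0, 3],
                [2, -3, 0]])            # a skew-symmetric form Lambda on Z^3

def Lambda(e, f):
    return int(e @ Lam @ f)

def mul(x, y):
    """(q^{a/2} X^e)(q^{b/2} X^f) = q^{(a+b+Lambda(e,f))/2} X^{e+f}."""
    (a, e), (b, f) = x, y
    return (a + b + Lambda(e, f), e + f)

e = np.array([1, 0, 2])
f = np.array([0, 3, 1])

# X^e X^f = q^{Lambda(e,f)} X^f X^e: the q^{1/2}-powers differ by 2*Lambda(e,f)
a1, g1 = mul((0, e), (0, f))
a2, g2 = mul((0, f), (0, e))
assert (g1 == g2).all() and a1 - a2 == 2 * Lambda(e, f)

# (X^e)^{-1} = X^{-e}: the product is q^0 X^0, since Lambda(e,e) = 0
a3, g3 = mul((0, e), (0, -e))
assert a3 == 0 and (g3 == 0).all()
```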
By definition, the elements $M({\bf c})$ form a $\ZZ[q^{\pm1/2}]$-basis of the based quantum torus $\Tcal_M:=\varphi(\Tcal)$ and satisfy the following relations: $$M({\bf c})M({\bf d})=q^{\Lambda_M({\bf c},{\bf d})/2}M({\bf c}+{\bf d}),\ M({\bf c})M({\bf d})=q^{\Lambda_M({\bf c},{\bf d})}M({\bf d})M({\bf c}),$$ $$M({\bf 0})=1,\ M({\bf c})^{-1}=M(-{\bf c}),$$ where $\Lambda_M$ is the skew-symmetric bilinear form on $\ZZ^m$ obtained from the lattice isomorphism $\eta$. Let $\Lambda_M$ also denote the skew-symmetric $m\times m$ matrix defined by $\lambda_{ij}=\Lambda_M(e_i,e_j)$, where $\{e_1, \ldots, e_m\}$ is the standard basis of $\ZZ^m$. Given a toric frame $M$, let $X_i=M(e_i)$. Then we have $$\Tcal_M=\ZZ[q^{\pm1/2}]\langle X_1^{\pm 1}, \ldots, X_m^{\pm1}:X_iX_j=q^{\lambda_{ij}}X_jX_i\rangle.$$ An easy computation shows that $$M({\bf c})=q^{\frac{1}{2}\sum_{i<j} c_ic_j\lambda_{ji}}X_1^{c_1}X_2^{c_2}\cdots X_m^{c_m}=:X^{{\bf c}} \ \ \ ({\bf c}\in\ZZ^m).$$ Let $\Lambda$ be an $m\times m$ skew-symmetric matrix and let $\widetilde{B}$ be an $m\times n$ matrix for some positive integer $n\leq m$. We call the pair $(\Lambda, \widetilde{B})$ *compatible* if $\widetilde{B}^T\Lambda=(D|0)$ is an $n\times m$ matrix with $D=\mathrm{diag}(d_1,\cdots,d_n)$, where $d_i\in \mathbb{N}$ for $1\leq i\leq n$. The pair $(M,\widetilde{B})$ is called a *quantum seed* if the pair $(\Lambda_M, \widetilde{B})$ is compatible. Fix an index $k\in\{1,\ldots,n\}$ and define the $m\times m$ matrix $E=(e_{ij})$ by $$e_{ij}=\begin{cases} \delta_{ij} & \text{if $j\ne k$;}\\ -1 & \text{if $i=j=k$;}\\ \max(0,-b_{ik}) & \text{if $i\ne j = k$.} \end{cases}$$ For $n,k\in\ZZ$, $k\ge0$, denote ${n\brack k}_q=\frac{(q^n-q^{-n})\cdots(q^{n-k+1}-q^{-n+k-1})}{(q^k-q^{-k})\cdots(q-q^{-1})}$. Let ${\bf c}=(c_1,\ldots,c_m)\in\ZZ^m$ with $c_{k}\geq 0$.
Define the toric frame $M': \ZZ^m\to \Fcal \setminus \{0\}$ as follows: $$\label{eq:cl_exp}M'({\bf c})=\sum^{c_k}_{p=0} {c_k \brack p}_{q^{d_k/2}} M(E{\bf c}+p{\bf b}^k),\ \ M'(-{\bf c})=M'({\bf c})^{-1},$$ where the vector ${\bf b}^k\in\ZZ^m$ is the $k$-th column of $\widetilde{B}$. Then the quantum seed $(M',\widetilde{B}')$ is defined to be the mutation of $(M,\widetilde{B})$ in direction $k$. In general, two quantum seeds $(M, \widetilde{B})$ and $(M', \widetilde{B}')$ are mutation-equivalent if they can be obtained from each other by a sequence of mutations, denoted by $(M, \widetilde{B})\sim (M', \widetilde{B}')$. Let $\mathcal{C}=\{M'(e_i) \mid (M, \widetilde{B})\sim (M', \widetilde{B}'), i=1, \cdots, n\}$. The elements of $\mathcal{C}$ are called *quantum cluster variables*. Let $\mathcal{P}=\{M(e_i): i=n+1, \cdots, m\}$; the elements of $\mathcal{P}$ are called *coefficients*. Given $(M', \widetilde{B}')\sim (M, \widetilde{B})$ and ${\bf c}=(c_i)\in \mathbb{Z}^m$, an element $M'({\bf c})$ is called a *quantum cluster monomial* if $c_i\geq 0$ for $i=1, \cdots, n$ and $c_i=0$ for $i=n+1, \cdots, m.$ We denote by $\mathbb{P}$ the multiplicative group generated by $q^{\frac{1}{2}}$ and $\mathcal{P}$. We write $\mathbb{ZP}$ for the ring of Laurent polynomials in the elements of $\mathcal{P}$ with coefficients in $\mathbb{Z}[q^{\pm 1/2}]$. The *quantum cluster algebra* $\Acal_q(\Lambda_M,\widetilde{B})$ is the $\mathbb{ZP}$-subalgebra of $\Fcal$ generated by $\mathcal{C}$. We associate to $(M,\widetilde{B})$ a $\ZZ$-linear *bar-involution* on $\Tcal_M$ defined by $$\overline{q^{r/2}M({\bf c})}=q^{-r/2}M({\bf c}), \ \ (r\in\ZZ,\ {\bf c}\in\ZZ^m).$$ It is easy to show that $\overline{XY}=\overline{Y}~\overline{X}$ for all $X,Y\in \Acal_q(\Lambda_M, \widetilde{B})$ and that each element of $\mathcal{C}\cup \mathcal{P}$ is *bar-invariant*. Now assume that there exists a finite field $k$ satisfying $|k|=q$.
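The quantum binomial ${c_k \brack p}$ entering the mutation rule above is a bar-invariant Laurent polynomial in $q$ that degenerates to the ordinary binomial coefficient at $q=1$. A small sketch (the helper `qbinom` is our own, not the paper's notation):

```python
from sympy import symbols, prod, simplify, limit, binomial, Rational

q = symbols('q')

def qbinom(n, k):
    """[n k]_q = prod_{i=0}^{k-1} (q^{n-i} - q^{-(n-i)}) / (q^{k-i} - q^{-(k-i)})."""
    num = prod(q**(n - i) - q**(-(n - i)) for i in range(k))
    den = prod(q**(k - i) - q**(-(k - i)) for i in range(k))
    return simplify(num / den)

b = qbinom(4, 2)          # equals q^4 + q^2 + 2 + q^{-2} + q^{-4}

# at q = 1 the quantum binomial degenerates to the ordinary one; q = 1 is
# a 0/0 point of the defining fraction, so we take a limit
assert limit(b, q, 1) == binomial(4, 2)

# bar-invariance: [n k]_q is fixed under q -> q^{-1}
assert simplify(b - b.subs(q, 1/q)) == 0

# exact evaluation away from q = 1
assert b.subs(q, 2) == Rational(357, 16)
```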
In the same way, we can define the based quantum torus $\mathcal{T}_{|k|}$ and the *specialized quantum cluster algebra* $\Acal_{|k|}(\Lambda_M,\widetilde{B})$ by substituting $\mathbb{Z}[|k|^{\pm\frac{1}{2}}]$ for $\mathbb{Z}[q^{\pm\frac{1}{2}}]$ in the above definition. By [@berzel Corollary 5.2], $\Acal_q(\Lambda_M, \widetilde{B})$ and $\Acal_{|k|}(\Lambda_M,\widetilde{B})$ are subalgebras of $\mathcal{T}$ and $\mathcal{T}_{|k|}$, respectively. There is a specialization map $ev: \mathcal{T}\rightarrow \mathcal{T}_{|k|}$ defined by mapping $q^{\frac{1}{2}}$ to $|k|^{\frac{1}{2}}$, which induces a bijection between the quantum monomials of $\Acal_{q}(\Lambda_M,\widetilde{B})$ and those of $\Acal_{|k|}(\Lambda_M,\widetilde{B})$ ([@fanqin Section 2.2]). The quantum Caldero-Chapoton map -------------------------------- Let $k$ be a finite field with cardinality $|k|=q$, let $m\geq n$ be two positive integers, and let $\widetilde{Q}$ be an acyclic quiver with vertex set $\{1,\ldots,m\}$ [@fanqin]. Denote the subset $\{n+1,\dots,m\}$ by $C$. The elements in $C$ are called the *frozen vertices*, and $\widetilde{Q}$ is called an *ice quiver*. The full subquiver $Q$ on the vertices $1,\ldots,n$ is called the *principal part* of $\widetilde{Q}$. Let $\widetilde{B}$ be the $m\times n$ matrix associated to the ice quiver $\widetilde{Q}$, i.e., its entry in position $(i,j)$ is $$b_{ij}=|\{\mathrm{arrows}\, i\longrightarrow j\}|-|\{\mathrm{arrows}\, j\longrightarrow i\}|$$ for $1\leq i\leq m$, $1\leq j\leq n$. Let $\widetilde{I}$ be the left $m\times n$ submatrix of the identity matrix of size $m\times m$. Further assume that there exists some antisymmetric $m\times m$ integer matrix $\Lambda$ such that $$\begin{aligned} \label{eq:simply_laced_compatible} \Lambda(-\widetilde{B})=\widetilde{I}:=\begin{bmatrix}I_n\\0 \end{bmatrix},\end{aligned}$$ where $I_n$ is the identity matrix of size $n\times n$. Thus, the matrix $\widetilde{B}$ is of full rank.
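For concreteness, take the Kronecker quiver $1\Rightarrow 2$ with principal coefficients: frozen vertices $3,4$ and extra arrows $3\to 1$, $4\to 2$, so $m=4$ and $n=2$. This worked example (including the particular matrix `Lam`) is our own assumption, not one from the paper; the sketch builds $\widetilde{B}$ from the arrow list and exhibits one antisymmetric $\Lambda$ with $\Lambda(-\widetilde{B})=\widetilde{I}$.

```python
import numpy as np

# Arrows of the ice quiver: 1 => 2 twice, plus 3 -> 1 and 4 -> 2.
arrows = [(1, 2), (1, 2), (3, 1), (4, 2)]
m, n = 4, 2

# b_ij = #{arrows i -> j} - #{arrows j -> i} for 1 <= i <= m, 1 <= j <= n
B = np.array([[arrows.count((i, j)) - arrows.count((j, i))
               for j in range(1, n + 1)]
              for i in range(1, m + 1)])
assert (B == np.array([[0, 2], [-2, 0], [1, 0], [0, 1]])).all()

# One antisymmetric Lambda satisfying Lambda(-B~) = I~ = [I_n; 0]
Lam = np.array([[0, 0, -1, 0],
                [0, 0, 0, -1],
                [1, 0, 0, -2],
                [0, 1, 2, 0]])
I_tilde = np.vstack([np.eye(n, dtype=int), np.zeros((m - n, n), dtype=int)])
assert (Lam == -Lam.T).all()
assert (Lam @ (-B) == I_tilde).all()
assert np.linalg.matrix_rank(B) == n      # hence B~ is of full rank
```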
Let $\widetilde{R}$ and $\widetilde{R}^{tr}$ be the $m\times n$ matrices whose entries in position $(i,j)$ are $$\widetilde{r}_{ij}=\mathrm{dim}_{k}\mathrm{Ext}^{1}_{k\widetilde{Q}}(S_i,S_j)$$ and $$\widetilde{r}^{*}_{ij}=\mathrm{dim}_{k}\mathrm{Ext}^{1}_{k\widetilde{Q}}(S_j,S_i)$$ for $1\leq i\leq m$, $1\leq j\leq n$, respectively. Note that $$\mathrm{dim}_{k}\mathrm{Ext}^{1}_{k\widetilde{Q}}(S_i,S_j)=|\{\mathrm{arrows}\, j\longrightarrow i\}|.$$ Denote the principal parts of the matrices $\widetilde{B}$ and $\widetilde{R}$ by $B$ and $R$, respectively. Note that $\widetilde{B}=\widetilde{R}^{tr}-\widetilde{R}$ and $B=R^{tr}-R$, where $R^{tr}$ denotes the transpose of the matrix $R.$ In general, the matrix $B$ is not of full rank, so there may exist no matrix $\Lambda$ compatible with $B$. Hence, one needs to add some frozen vertices to $Q$ and thereby obtain an acyclic quiver $\widetilde{Q}$ with a compatible pair $(\widetilde{B}, \Lambda).$ Let $\mathcal C_{\widetilde{Q}}$ be the cluster category of $k \widetilde{Q}$, i.e., the orbit category of the derived category $\mathcal{D}^b(\widetilde{Q})$ by the functor $F=\tau\circ[-1]$, where $\tau$ is the Auslander-Reiten translation and $[1]$ is the translation functor. We note that the indecomposable objects of the cluster category $\mathcal C_{\widetilde{Q}}$ are either the indecomposable $k \widetilde{Q}$-modules or $P_i[1]$ for indecomposable projective modules $P_i$ ($1\leq i \leq m$). Each object $M$ in $\mathcal C_{\widetilde{Q}}$ can be uniquely decomposed in the following way: $$M\cong M_0\oplus P_M[1]$$ where $M_0$ is a $k\widetilde{Q}$-module and $P_M$ is a projective $k\widetilde{Q}$-module.
Let $P_M=\bigoplus_{1\leq i \leq m}m_iP_i.$ We extend the definition of the dimension vector $\mathrm{\underline{dim}}$ on modules in $\mathrm{mod}k \widetilde{Q}$ to objects in $\mathcal C_{\widetilde{Q}}$ by setting $$\mathrm{\underline{dim}}M=\mathrm{\underline{dim}}M_0-(m_i)_{1\leq i \leq m}.$$ The Euler form on $k\widetilde{Q}$-modules $M$ and $N$ is given by $$\langle M,N\rangle=\mathrm{dim}_{k}\mathrm{Hom}_{k\widetilde{Q}}(M,N)-\mathrm{dim}_{k}\mathrm{Ext}^{1}_{k\widetilde{Q}}(M,N).$$ Note that the Euler form depends only on the dimension vectors of $M$ and $N$. As in [@Hubery], we define $$[M, N]=\mathrm{dim}_{k}\mathrm{Hom}_{k\widetilde{Q}}(M,N)\mbox{ and }[M, N]^1=\mathrm{dim}_{k}\mathrm{Ext}^{1}_{k\widetilde{Q}}(M,N).$$ The quantum Caldero-Chapoton map of an acyclic quiver $\widetilde{Q}$ has been studied in [@rupel] and [@fanqin]. Here, we reformulate their definitions as the following map $$X^{\widetilde{Q}}_?: \mathrm{obj}\mathcal C_{\widetilde{Q}}\longrightarrow \Tcal$$ defined by the following rule: if $M$ is a $k Q$-module and $P$ is a projective $k \widetilde{Q}$-module, then $$X^{\widetilde{Q}}_{M\oplus P[1]}=\sum_{\underline{e}} |\mathrm{Gr}_{\underline{e}} M|q^{-\frac{1}{2} \langle \underline{e},\underline{m}-\underline{e}\rangle}X^{\widetilde{B}\underline{e}-(\widetilde{I}-\widetilde{R})\underline{m}+\underline{\mathrm{dim}} P/\mathrm{rad}P},$$ where $\underline{\mathrm{dim}} M= \underline{m}$ and $\mathrm{Gr}_{\underline{e}}M$ denotes the set of all submodules $V$ of $M$ with $\underline{\mathrm{dim}} V= \underline{e}$. Usually, we omit the upper index $\widetilde{Q}$ in the notation $X^{\widetilde{Q}}_?$ (except in Sections 4 and 5) if there is no confusion.
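As an illustration of the formula, consider again a Kronecker ice quiver with principal coefficients ($m=4$, $n=2$); the matrices below are our own worked assumption for that concrete quiver, not data from the paper. For the simple module $S_1$ the submodule Grassmannians are single points, so the sum has just two terms:

```python
import numpy as np

# Quantum Caldero-Chapoton image of the simple module S_1.
B  = np.array([[0, 2], [-2, 0], [1, 0], [0, 1]])     # B~
Rt = np.array([[0, 0], [2, 0], [0, 0], [0, 0]])      # R~: r_ij = dim Ext^1(S_i, S_j)
I_tilde = np.vstack([np.eye(2, dtype=int), np.zeros((2, 2), dtype=int)])
R = Rt[:2, :]                                        # principal part of R~

def euler(e, f):
    """Euler form on dimension vectors, computed as e^T (I_n - R) f
    (an assumption consistent with the conventions used here)."""
    return int(e @ (np.eye(2, dtype=int) - R) @ f)

m_vec = np.array([1, 0])                             # dim S_1
# S_1 is simple, so Gr_e(S_1) is a single point for e = 0 and e = dim S_1
terms = []
for e in (np.array([0, 0]), m_vec):
    half_power = -euler(e, m_vec - e)                # exponent of q^{1/2}
    exponent = B @ e - (I_tilde - Rt) @ m_vec        # exponent of X
    terms.append((half_power, tuple(int(v) for v in exponent)))

# X_{S_1} = X^{(-1,2,0,0)} + X^{(-1,0,1,0)}, with trivial powers of q
assert terms == [(0, (-1, 2, 0, 0)), (0, (-1, 0, 1, 0))]
```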
We note that $$X_{P[1]}=X_{\tau P}=X^{\underline{\mathrm{dim}} P/rad P}=X^{\underline{\mathrm{dim}}\mathrm{soc}I}=X_{I[-1]}=X_{\tau^{-1}I}$$ for any projective $k\widetilde{Q}$-module $P$ and injective $k\widetilde{Q}$-module $I$ with $\mathrm{soc}I=P/\mathrm{rad}P.$ Hereinafter, we denote by the corresponding underlined small letter $\underline{x}$ the dimension vector of a $kQ$-module $X$ and view $\underline{x}$ as a column vector in $\mathbb{Z}^n.$ Multiplication theorems for acyclic quantum cluster algebras ============================================================ Throughout this section, assume that $\widetilde{Q}$ is an acyclic quiver and $Q$ is its principal part. In this section, we will prove a multiplication theorem for any acyclic quantum cluster algebra. First, we improve Lemma 5.2.1 and Corollary 5.2.2 in [@fanqin]: here we handle the dimension vector of an arbitrary $kQ$-module, while in [@fanqin] the author only deals with dimension vectors of rigid modules. \[1\] For any dimension vectors $\underline{m}, \underline{e}, \underline{f}\in \mathbb{Z}^{n}_{\geq 0},$ we have $$(1)\ \Lambda((\widetilde{I}-\widetilde{R})\underline{m}, \widetilde{B}\underline{e})=-\langle \underline{e}, \underline{m}\rangle;$$ $$(2)\ \Lambda(\widetilde{B}\underline{e}, \widetilde{B}\underline{f})=\langle \underline{e}, \underline{f}\rangle-\langle \underline{f}, \underline{e}\rangle.$$ By definition, we have $$\begin{aligned} && \Lambda((\widetilde{I}-\widetilde{R})\underline{m}, \widetilde{B}\underline{e}) \nonumber\\ &=& \underline{m}^{tr}(\widetilde{I}-\widetilde{R})^{tr}\Lambda \widetilde{B}\underline{e}=-\underline{m}^{tr}(\widetilde{I}-\widetilde{R})^{tr}\begin{bmatrix}I_n\\0 \end{bmatrix}\underline{e}\nonumber\\ &=& -\underline{m}^{tr}(I_{n}-R)^{tr}\underline{e}=-\underline{e}^{tr}(I_{n}-R)\underline{m}\nonumber\\ &=& -\langle \underline{e}, \underline{m}\rangle.\nonumber\end{aligned}$$ As for (2), the left side of the desired equation is equal to
$$\underline{e}^{tr}\widetilde{B}^{tr}\Lambda \widetilde{B}\underline{f}=-\underline{e}^{tr}\widetilde{B}^{tr}\begin{bmatrix}I_n\\0 \end{bmatrix}\underline{f}=-\underline{e}^{tr}B^{tr}\underline{f}.$$ The right side is $$\begin{aligned} && \langle \underline{e}, \underline{f}\rangle-\langle \underline{f}, \underline{e}\rangle \nonumber\\ &=& \underline{e}^{tr}(I_{n}-R)\underline{f}-\underline{f}^{tr}(I_{n}-R)\underline{e}\nonumber\\ &=& \underline{e}^{tr}(I_{n}-R)\underline{f}-\underline{e}^{tr}(I_{n}-R)^{tr}\underline{f}\nonumber\\ &=& \underline{e}^{tr}(R^{tr}-R)\underline{f}=-\underline{e}^{tr}(R-R^{tr})\underline{f}=-\underline{e}^{tr}B^{tr}\underline{f}.\nonumber\end{aligned}$$ This proves the lemma. \[2\] For any dimension vectors $\underline{m}, \underline{l}, \underline{e}, \underline{f}\in \mathbb{Z}^{n}_{\geq 0},$ we have $$\begin{aligned} && \Lambda(\widetilde{B}\underline{e}-(\widetilde{I}-\widetilde{R})\underline{m},\widetilde{B}\underline{f}-(\widetilde{I}-\widetilde{R})\underline{l}) \nonumber\\ &=&\Lambda((\widetilde{I}-\widetilde{R})\underline{m},(\widetilde{I}-\widetilde{R})\underline{l})+\langle \underline{e}, \underline{f}\rangle-\langle \underline{f}, \underline{e}\rangle-\langle \underline{e}, \underline{l}\rangle+\langle \underline{f}, \underline{m}\rangle.\nonumber\end{aligned}$$ For any $kQ$-modules $M,N,E$, denote by $\varepsilon_{MN}^{E}$ the cardinality of the set $\mathrm{Ext}^{1}_{kQ}(M,N)_{E}$, which is the subset of $\mathrm{Ext}^{1}_{kQ}(M,N)$ consisting of those equivalence classes of short exact sequences with middle term isomorphic to $E$ ([@Hubery Section 4]). For $kQ$-modules $M$, $A$ and $B$, we denote by $F^M_{AB}$ the number of submodules $U$ of $M$ such that $U$ is isomorphic to $B$ and $M/U$ is isomorphic to $A$.
Then by definition, we have $$|\mathrm{Gr}_{\underline{e}}(M)|=\sum_{A, B; \underline{\mathrm{dim}}B=\underline{e}}F_{AB}^M.$$ Different from the case in cluster categories, for $kQ$-modules, it does not generally hold that $X_{N}X_{M}=X_{N\oplus M}.$ We have the following explicit characterization, which is a generalization of [@fanqin Proposition 5.3.2]. \[hall multi\] Let $M$ and $N$ be $kQ$-modules. Then $$q^{[M,N]^{1}}X_{N}X_{M}=q^{-\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{m}, (\widetilde{I}-\widetilde{R})\underline{n})} \sum_{E}\varepsilon_{MN}^{E}X_E.$$ We apply Green’s formula in [@Green] $$\sum_{E}\varepsilon_{MN}^{E}F^{E}_{XY}=\sum_{A,B,C,D}q^{[M,N]-[A,C]-[B,D]-\langle A,D\rangle}F^{M}_{AB}F^{N}_{CD}\varepsilon_{AC}^{X}\varepsilon_{BD}^{Y}.$$ Then $$\begin{aligned} && \sum_{E}\varepsilon_{MN}^{E}X_E \nonumber\\ &=& \sum_{E,X,Y}\varepsilon_{MN}^{E}q^{-\frac{1}{2}\langle Y,X\rangle}F^{E}_{XY}X^{\widetilde{B}\underline{y}-(\widetilde{I}-\widetilde{R})\underline{e}}\nonumber\\ &=&\sum_{A,B,C,D,X,Y}q^{[M,N]-[A,C]-[B,D]-\langle A,D\rangle-\frac{1}{2}\langle B+D,A+C\rangle}F^{M}_{AB}F^{N}_{CD}\varepsilon_{AC}^{X}\varepsilon_{BD}^{Y}X^{\widetilde{B}\underline{y}-(\widetilde{I}-\widetilde{R})\underline{e}}.\nonumber\end{aligned}$$ Since $$\begin{aligned} && X^{\widetilde{B}\underline{y}-(\widetilde{I}-\widetilde{R})\underline{e}} \nonumber\\ &=& X^{\widetilde{B}(\underline{b}+\underline{d})-(\widetilde{I}-\widetilde{R})(\underline{m}+\underline{n})}\nonumber\\ &=& q^{-\frac{1}{2}\Lambda(\widetilde{B}\underline{d}-(\widetilde{I}-\widetilde{R})\underline{n}, \widetilde{B}\underline{b}-(\widetilde{I}-\widetilde{R})\underline{m})}X^{\widetilde{B}\underline{d}-(\widetilde{I}-\widetilde{R})\underline{n}} X^{\widetilde{B}\underline{b}-(\widetilde{I}-\widetilde{R})\underline{m}}\nonumber\\ &=& q^{-\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})-\frac{1}{2}[\langle D,B\rangle-\langle 
B,D\rangle-\langle D,M\rangle+\langle B,N\rangle]}X^{\widetilde{B}\underline{d}-(\widetilde{I}-\widetilde{R})\underline{n}} X^{\widetilde{B}\underline{b}-(\widetilde{I}-\widetilde{R})\underline{m}}\nonumber\\ &=&q^{-\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})}q^{\frac{1}{2}\langle D,A\rangle-\frac{1}{2}\langle B,C\rangle}X^{\widetilde{B}\underline{d}-(\widetilde{I}-\widetilde{R})\underline{n}} X^{\widetilde{B}\underline{b}-(\widetilde{I}-\widetilde{R})\underline{m}}.\nonumber\end{aligned}$$ Thus $$\begin{aligned} && \sum_{E}\varepsilon_{MN}^{E}X_E \nonumber\\ &=& q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{m}, (\widetilde{I}-\widetilde{R})\underline{n})}\sum_{A,B,C,D}q^{[M,N]-[A,C]-[B,D]-\langle A,D\rangle-\frac{1}{2}\langle B+D,A+C\rangle+[A,C]^{1}+[B,D]^{1}}\cdot\nonumber\\ &&q^{\frac{1}{2}\langle D,A\rangle-\frac{1}{2}\langle B,C\rangle}F^{M}_{AB}F^{N}_{CD}X^{\widetilde{B}\underline{d}-(\widetilde{I}-\widetilde{R})\underline{n}} X^{\widetilde{B}\underline{b}-(\widetilde{I}-\widetilde{R})\underline{m}}.\nonumber\end{aligned}$$ Here we use the following fact $$\sum_{X}\varepsilon_{AC}^{X}=q^{[A,C]^{1}},\sum_{Y}\varepsilon_{BD}^{Y}=q^{[B,D]^{1}}$$ Note that $$[M,N]-[A,C]-[B,D]-\langle A,D\rangle+[A,C]^{1}+[B,D]^{1}=[M,N]^{1}+\langle B, C\rangle.$$ Hence $$\begin{aligned} && \sum_{E}\varepsilon_{MN}^{E}X_E \nonumber\\ &=& q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{m}, (\widetilde{I}-\widetilde{R})\underline{n})}q^{[M,N]^{1}}\sum_{A,B,C,D}q^{\langle B,C\rangle-\frac{1}{2}\langle B,C\rangle-\frac{1}{2}\langle D,A\rangle+\frac{1}{2}\langle D,A\rangle-\frac{1}{2}\langle B,C\rangle}\cdot\nonumber\\ && F^{N}_{CD}q^{-\frac{1}{2}\langle D,C\rangle}X^{\widetilde{B}\underline{d}-(\widetilde{I}-\widetilde{R})\underline{n}} F^{M}_{AB}q^{-\frac{1}{2}\langle B,A\rangle}X^{\widetilde{B}\underline{b}-(\widetilde{I}-\widetilde{R})\underline{m}} \nonumber\\ 
&=&q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{m}, (\widetilde{I}-\widetilde{R})\underline{n})}q^{[M,N]^{1}}X_{N}X_{M}.\nonumber\end{aligned}$$ This completes the proof. Theorem \[hall multi\] is similar to the multiplication formula in dual Hall algebras. It is reasonable to conjecture that it provides some PBW-type basis ([@GP]) in the corresponding quantum cluster algebra. Let $M,N$ be $kQ$-modules and assume that $$\mathrm{dim}_{k}\mathrm{Ext}^{1}_{k\widetilde{Q}}(M,N)=\mathrm{dim}_{k}\mathrm{Hom}_{k\widetilde{Q}}(N,\tau M)=1.$$ Then there are two “canonical” exact sequences $$\varepsilon:\quad 0\longrightarrow N\longrightarrow E\longrightarrow M\longrightarrow 0$$ $$\varepsilon': \quad 0\longrightarrow D_{0}\longrightarrow N\longrightarrow \tau M\longrightarrow \tau A\oplus I\longrightarrow 0$$ which induce $k$-bases of $\mathrm{Ext}^{1}_{k\widetilde{Q}}(M,N)$ and $\mathrm{Hom}_{k\widetilde{Q}}(N,\tau M)$, respectively. We fix them. Set $M=M'\oplus P_0$ and $A_0=A\oplus P_0$, where $P_0$ is a projective $k\widetilde{Q}$-module and $A$ and $M'$ have no projective summands. The exact sequences also provide the two non-split triangles in $\mathcal{C}_{\widetilde{Q}}$: $$N\longrightarrow E\longrightarrow M\longrightarrow N[1]=\tau N$$ and $$M\longrightarrow D_{0}\oplus A_0\oplus I[-1]\longrightarrow N\longrightarrow \tau M.$$ Now we state the first part of our multiplication theorem for acyclic quantum cluster algebras, which can be viewed as a quantum analogue of the one-dimensional Caldero-Keller multiplication theorem in [@CK2]. The main idea in the proof comes from [@Hubery].
\[multi-formula\] With the above notation, assume that $\mathrm{Hom}_{k\widetilde{Q}}(D_0,\tau A_0\oplus I)=\mathrm{Hom}_{k\widetilde{Q}}(A_0,I)=0.$ Then the following formula holds $$X_{N}X_M=q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})}X_E+q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})+\frac{1}{2}\langle M,N\rangle-\frac{1}{2}\langle A_0, D_0\rangle}X_{D_0\oplus A_0\oplus I[-1]}.$$ Here, we note that, since $[M,N]^{1}=1$, $$\frac{q^{{[M,N]^1}}-1}{q-1}X_{N}X_M=X_{N}X_M.$$ By definition, we have $$\begin{aligned} && X_{N}X_M \nonumber\\ &=& \sum_{C,D}q^{-\frac{1}{2}\langle D,C\rangle}F^{N}_{CD}X^{\widetilde{B}\underline{d}-(\widetilde{I}-\widetilde{R})\underline{n}} \sum_{A,B}q^{-\frac{1}{2}\langle B,A\rangle}F^{M}_{AB}X^{\widetilde{B}\underline{b}-(\widetilde{I}-\widetilde{R})\underline{m}}\nonumber\\ &=& \sum_{A,B,C,D}F^{M}_{AB}F^{N}_{CD}q^{-\frac{1}{2}\langle D,C\rangle-\frac{1}{2}\langle B,A\rangle+\frac{1}{2}\Lambda(\widetilde{B}\underline{d}- (\widetilde{I}-\widetilde{R})\underline{n}, \widetilde{B}\underline{b}-(\widetilde{I}-\widetilde{R})\underline{m})}X^{\widetilde{B}(\underline{b}+\underline{d})-(\widetilde{I}-\widetilde{R})(\underline{m}+\underline{n})} \nonumber\\ &=&\sum_{A,B,C,D}F^{M}_{AB}F^{N}_{CD}q^{-\frac{1}{2}\langle B+D,A+C\rangle}q^{\langle B,C\rangle}q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})}X^{\widetilde{B}(\underline{b}+\underline{d})-(\widetilde{I}-\widetilde{R})(\underline{m}+\underline{n})}.\nonumber\end{aligned}$$ We set $$s_1:=\sum_{E\ncong M\oplus N}\frac{\varepsilon_{MN}^{E}}{q-1}X_E=\sum_{X,Y,E\ncong M\oplus N} \frac{\varepsilon_{MN}^{E}}{q-1}F^{E}_{XY}q^{-\frac{1}{2}\langle Y,X\rangle} X^{\widetilde{B}\underline{y}-(\widetilde{I}-\widetilde{R})\underline{e}}.$$ As in the proof of Theorem \[hall multi\], we have $$\begin{aligned} &&
\sum_{X,Y,E}\varepsilon_{MN}^{E}X_E \nonumber\\ &=& \sum_{A,B,C,D,X,Y}q^{[M,N]-[A,C]-[B,D]-\langle A,D\rangle-\frac{1}{2}\langle B+D,A+C\rangle}F^{M}_{AB}F^{N}_{CD}\varepsilon_{AC}^{X}\varepsilon_{BD}^{Y}X^{\widetilde{B}\underline{y}-(\widetilde{I}-\widetilde{R})\underline{e}}\nonumber\\ &=& \sum_{A,B,C,D}q^{[M,N]^{1}+\langle B,C\rangle-\frac{1}{2}\langle B+D,A+C\rangle}F^{M}_{AB}F^{N}_{CD}X^{\widetilde{B}\underline{y}-(\widetilde{I}-\widetilde{R})\underline{e}}.\nonumber\end{aligned}$$ On the other hand $$X_{M\oplus N}=\sum_{A,B,C,D}q^{[B,C]-\frac{1}{2}\langle B+D,A+C\rangle}F^{M}_{AB}F^{N}_{CD}X^{\widetilde{B}(\underline{b}+\underline{d})-(\widetilde{I}-\widetilde{R})(\underline{m}+\underline{n})}.$$ Thus $$s_1=\sum_{A,B,C,D}\frac{q^{[M,N]^{1}}-q^{[B,C]^{1}}}{q-1}q^{\langle B,C\rangle-\frac{1}{2}\langle B+D,A+C\rangle}F^{M}_{AB}F^{N}_{CD}X^{\widetilde{B}(\underline{b}+\underline{d})-(\widetilde{I}-\widetilde{R})(\underline{m}+\underline{n})}.$$ Thirdly we compute the term $$s_2:=\sum_{A,D_0,I,D_0\ncong N}\frac{|\mathrm{Hom}_{k\widetilde{Q}}(N,\tau M)_{D_{0}AI}|}{q-1}X_{A_{0}\oplus D_{0}\oplus I[-1]}.$$ Here, we use the following notation as in [@Hubery] $$\mathrm{Hom}_{k\widetilde{Q}}(N,\tau M)_{D_0AI}:=\{f\neq 0: N\longrightarrow \tau M|\mathrm{ker}f\cong D_0, \mathrm{coker}f\cong \tau A\oplus I\}.$$ Note that $\mathrm{dim}_{k}\mathrm{Hom}_{k\widetilde{Q}}(N,\tau M)=1,$ we have the following exact sequences $$0\longrightarrow B_0\longrightarrow M\longrightarrow A_0\longrightarrow 0$$ $$0\longrightarrow C\longrightarrow \tau B_0\longrightarrow I\longrightarrow 0$$ where $C=\mathrm{im}f, \mathrm{ker}f=D_0.$ $$\begin{aligned} s_2&=& \frac{|\mathrm{Hom}_{k\widetilde{Q}}(N,\tau M)|-1}{q-1}X_{A_0\oplus D_0\oplus I[-1]} \nonumber\\ &=& \sum_{X,Y,K,L}\frac{|\mathrm{Hom}_{k\widetilde{Q}}(N,\tau M)|-1}{q-1}F^{D_0}_{XY}F^{A_0}_{KL}q^{[L,X]-\frac{1}{2}\langle 
Y+L,K+X\rangle}X^{\widetilde{B}(\underline{y}+\underline{l}+\underline{b_0})-(\widetilde{I}-\widetilde{R})(\underline{m}+\underline{n})}\nonumber\\ &=&\sum_{A,B,C,D} \frac{q^{[C,\tau B]}-1}{q-1}F^{M}_{AB}F^{N}_{CD}q^{[L,X]-\frac{1}{2}\langle Y+L,K+X\rangle}X^{\widetilde{B}(\underline{y}+\underline{l}+\underline{b_0})-(\widetilde{I}-\widetilde{R})(\underline{m}+\underline{n})}.\nonumber\end{aligned}$$ Here, $Y=D,K=A,B=B_0+L$ in the above expression and the equality can be illustrated by the following diagram: $$\xymatrix{&Y\ar@{=}[r]\ar[d]&Y\ar[d]&\tau A\ar@{=}[r]&\tau A&\\ 0\ar[r]&D_0\ar[r]\ar[d]&N\ar[r]\ar[d]&\tau M\ar[r]\ar[u]&\tau A_0\oplus I\ar[r]\ar[u]&0\\ 0\ar[r]&X\ar[r]&C\ar[r]&\tau B\ar[r]\ar[u]&\tau L\oplus I\ar[r]\ar[u]&0}$$ We must check the relation between $$-\frac{1}{2}\langle Y+L,K+X\rangle+[L,X]$$ and $$-\frac{1}{2}\langle B+D,A+C\rangle+\langle B,C\rangle.$$ In this case, note that $ D=Y, L=A_0-A,K=A,[L,X]^{1}=[X,\tau L]=0.$ We have $$\begin{aligned} -\frac{1}{2}\langle Y+L,K+X\rangle+[L,X] &=& -\frac{1}{2}\langle Y+L,K+X\rangle+\langle L,X\rangle \nonumber\\ &=& -\frac{1}{2}\langle D+A_0-A,A+D_0-D\rangle+\langle A_0-A,D_0-D\rangle\nonumber\\ &=& -\frac{1}{2}\langle D,A\rangle-\frac{1}{2}\langle D,D_0\rangle+\frac{1}{2}\langle D,D\rangle-\frac{1}{2}\langle A_0,A\rangle +\frac{1}{2}\langle A_0,D_0\rangle\nonumber\\ && -\frac{1}{2}\langle A_0,D\rangle+\frac{1}{2}\langle A,A\rangle -\frac{1}{2}\langle A,D_0\rangle+\frac{1}{2}\langle A,D\rangle. \nonumber\end{aligned}$$ And $$\begin{aligned} && -\frac{1}{2}\langle B+D,A+C\rangle+\langle B,C\rangle \nonumber\\ &=& -\frac{1}{2}\langle M-A+D,A+N-D\rangle+\langle M-A,N-D\rangle\nonumber\\ &=& -\frac{1}{2}\langle M,A\rangle-\frac{1}{2}\langle M,D\rangle+\frac{1}{2}\langle A,A\rangle-\frac{1}{2}\langle A,N\rangle+\frac{1}{2}\langle A,D\rangle\nonumber\\ && -\frac{1}{2}\langle D,A\rangle-\frac{1}{2}\langle D,N\rangle+\frac{1}{2}\langle D,D\rangle+\frac{1}{2}\langle M,N\rangle.
\nonumber\end{aligned}$$ Hence it is equivalent to compare $$-\frac{1}{2}\langle D,D_0\rangle-\frac{1}{2}\langle A_0,A\rangle+\frac{1}{2}\langle A_0,D_0\rangle-\frac{1}{2}\langle A_0,D\rangle-\frac{1}{2}\langle A,D_0\rangle$$ and $$-\frac{1}{2}\langle D,N\rangle-\frac{1}{2}\langle M,A\rangle+\frac{1}{2}\langle M,N\rangle-\frac{1}{2}\langle M,D\rangle-\frac{1}{2}\langle A,N\rangle.$$ We claim that $$\langle D,N\rangle+\langle M,D\rangle=\langle D,D_0\rangle+\langle A_0,D\rangle$$ and $$\langle A_0,A\rangle+\langle A,D_0\rangle=\langle M,A\rangle+\langle A,N\rangle.$$ Indeed, we have $$\begin{aligned} \langle D,N-D_0\rangle &=& \langle D,\tau M-\tau A_0-I\rangle \nonumber\\ &=& \langle D,\tau M-\tau A_0\rangle=\langle A_0-M,D\rangle. \nonumber\end{aligned}$$ In the same way, we also have $$\langle A, N-D_0\rangle=\langle A_0-M, A\rangle.$$ Thus $$s_2=q^{\frac{1}{2}\langle A_0,D_0\rangle-\frac{1}{2}\langle M,N\rangle}\sum_{A,B,C,D}\frac{q^{[B,C]^{1}}-1}{q-1}q^{\langle B,C\rangle-\frac{1}{2}\langle B+D,A+C\rangle}F^{M}_{AB}F^{N}_{CD}X^{\widetilde{B}(\underline{b}+\underline{d})-(\widetilde{I}-\widetilde{R})(\underline{m}+\underline{n})}.$$ Therefore, we have the following multiplication formula $$X_{N}X_M=q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})}X_E+q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})+\frac{1}{2}\langle M,N\rangle-\frac{1}{2}\langle A_0,D_0\rangle}X_{D_0\oplus A_0\oplus I[-1]}.$$ There are three canonical special cases satisfying the assumption $\mathrm{Hom}_{k\widetilde{Q}}(D_0,\tau A\oplus I)=\mathrm{Hom}_{k\widetilde{Q}}(A_0,I)=0$ in Theorem \[multi-formula\]. **Special case I**. Assume that $A_0=0=I.$ Then $L=K=0=A.$ If $B\neq M,$ i.e., $B\subsetneqq M,$ then there exists $f_1: N\longrightarrow \tau M$ induced by the above diagram which is not surjective.
This contradicts the assumption $\mathrm{dim}_{k}\mathrm{Hom}_{k\widetilde{Q}}(N,\tau M)=1.$ In this case, the multiplication formula is $$X_{N}X_M=q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})}X_E+q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})+\frac{1}{2}\langle M,N\rangle}X_{D_0}.$$ **Special case II**. Assume that $D_0=0$ and $\mathrm{Hom}_{k\widetilde{Q}}(A_0,I)=0.$ Then $Y=X=0, C=N.$ In this case, the multiplication formula is $$X_{N}X_M=q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})}X_E+q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})+\frac{1}{2}\langle M,N\rangle}X_{A_0\oplus I[-1]}.$$ **Special case III**. Assume that $M,N$ are indecomposable rigid $kQ$-modules and $$\mathrm{dim}_k\mathrm{Ext}^1_{\mathcal{C}_{\widetilde{Q}}}(M, N)=1.$$ Since $D_0\oplus A_0\oplus I[-1]$ is rigid, the assumption $\mathrm{Hom}_{k\widetilde{Q}}(D_0,\tau A\oplus I)=\mathrm{Hom}_{k\widetilde{Q}}(A_0,I)=0$ in Theorem \[multi-formula\] holds. \[special\] With the assumption in Special case III, we have $\frac{1}{2}\langle A_0,D_0\rangle-\frac{1}{2}\langle M,N\rangle=\frac{1}{2}.$ Note that we have $$\frac{1}{2}\langle A_0,D_0\rangle-\frac{1}{2}\langle M,N\rangle=\frac{1}{2}\langle A_0,N-N/D_0\rangle-\frac{1}{2}\langle M,N\rangle.$$ We need to confirm the two equations 1. $\langle M,N\rangle=\langle A_0,N\rangle-1$ and 2.
$\langle A_0,N/D_0\rangle=0.$ Note that $A_0\oplus N$ is rigid; thus $[A_0,N]^{1}=0.$ We have the following exact sequences $$0\longrightarrow N/D_0\longrightarrow \tau M\longrightarrow \tau A_0 \oplus I\longrightarrow 0$$ $$0\longrightarrow D_0\longrightarrow N\longrightarrow N/D_0\longrightarrow 0.$$ Applying the functor $\mathrm{Hom}_{k\widetilde{Q}}(N,-)$, we obtain the exact sequences $$[N,N/D_0]^{1}\longrightarrow [N,\tau M]^{1}\longrightarrow [N,\tau A_0\oplus I]^{1}\longrightarrow 0$$ $$[N,N]^{1}\longrightarrow [N,N/D_0]^{1}\longrightarrow 0.$$ Hence $$\langle M,N\rangle=[M,N]-1=[A_0,N]-1=\langle A_0,N\rangle-1.$$ As for the second equation, applying the functor $\mathrm{Hom}_{k\widetilde{Q}}(A_0,-)$ to the exact sequence $$0\longrightarrow D_0\longrightarrow N\longrightarrow N/D_0\longrightarrow 0,$$ we obtain the exact sequence $$[A_0,N]^{1}\longrightarrow [A_0,N/D_0]^{1}\longrightarrow 0.$$ Thus $[A_0,N/D_0]^{1}=0.$ Applying the functor $\mathrm{Hom}_{k\widetilde{Q}}(\tau M,-)$ to the exact sequence $$0\longrightarrow N/D_0\longrightarrow \tau M\longrightarrow \tau A_0 \oplus I\longrightarrow 0,$$ we obtain the exact sequence $$[\tau M,\tau M]^{1}\longrightarrow [\tau M,\tau A_0\oplus I]^{1}\longrightarrow 0.$$ Thus we have $$[M,A_0]^{1}=[A_0,\tau M]=0.$$ Again applying the functor $\mathrm{Hom}_{k\widetilde{Q}}(A_0,-)$, we obtain the exact sequence $$0\longrightarrow [A_0,N/D_0]\longrightarrow [A_0,\tau M]=0.$$ Hence $[A_0,N/D_0]=0.$ By Lemma \[special\], we obtain the following multiplication theorem between quantum cluster variables in [@fanqin]. Let $M$ and $N$ be indecomposable rigid $kQ$-modules and $\mathrm{dim}_k\mathrm{Ext}^1_{\mathcal{C}_{\widetilde{Q}}}(M, N)=1.$ Let $$N\longrightarrow E\longrightarrow M\longrightarrow N[1]=\tau N$$ and $$M\longrightarrow D_{0}\oplus A_{0}\oplus I[-1]\longrightarrow N\longrightarrow \tau M$$ be two non-split triangles in $\mathcal{C}_{\widetilde{Q}}$ as above.
Then we have $$X_{N}X_M=q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})}X_E+q^{\frac{1}{2}\Lambda((\widetilde{I}-\widetilde{R})\underline{n}, (\widetilde{I}-\widetilde{R})\underline{m})-\frac{1}{2}}X_{D_0\oplus A_0\oplus I[-1]}.$$ Now let $M$ be a $kQ$-module and $P$ a projective $k\widetilde{Q}$-module with $[P,M]=[M,I]=1,$ where $I=\nu(P)$ and $\nu=\mathrm{DHom}_{k\widetilde{Q}}(-,k\widetilde{Q})$ is the Nakayama functor. It is well known that $I$ is an injective module with $\mathrm{soc}I=P/\mathrm{rad}P.$ Fix two nonzero morphisms $f\in \mathrm{Hom}_{k\widetilde{Q}}(P, M)$ and $g\in \mathrm{Hom}_{k\widetilde{Q}}(M, I).$ These two morphisms induce the following exact sequences $$\xymatrix{0\ar[r]& P'\ar[r]& P\ar[r]^{f}& M\ar[r]& A\ar[r]& 0}$$ and $$\xymatrix{0\ar[r]& B\ar[r]& M\ar[r]^{g}& I\ar[r]& I'\ar[r]& 0}.$$ They correspond to two non-split triangles in $\mathcal{C}_{\widetilde{Q}}$ $$M\longrightarrow E'\longrightarrow P[1]\longrightarrow M[1]$$ and $$I[-1]\longrightarrow E\longrightarrow M\longrightarrow I,$$ respectively, where $E\simeq B\oplus I'[-1]$ and $E'\simeq A\oplus P'[1].$ Now we state the second part of our multiplication theorem for acyclic quantum cluster algebras.
\[exchange2\] With the above notations, assume that $[B,I']=[P',A]=0.$ Then we have $$X_{\tau P}X_{M}=q^{\frac{1}{2}\Lambda(\underline{\mathrm{dim}}P/\mathrm{rad}P, -(\widetilde{I}-\widetilde{R})\underline{m})}X_E+q^{\frac{1}{2}\Lambda(\underline{\mathrm{dim}}P/\mathrm{rad}P, -(\widetilde{I}-\widetilde{R})\underline{m})-\frac{1}{2}}X_{E'}.$$ $$\begin{aligned} && X_{\tau P}X_{M} \nonumber\\ &=& X^{\underline{\mathrm{dim}} P/\mathrm{rad}P}\sum_{G,H}q^{-\frac{1}{2}\langle H,G\rangle}F^{M}_{GH}X^{\widetilde{B}\underline{h}-(\widetilde{I}-\widetilde{R})\underline{m}} \nonumber\\ &=& \sum_{G,H}q^{-\frac{1}{2}\langle H,G\rangle}F^{M}_{GH}q^{\frac{1}{2}\Lambda(\underline{\mathrm{dim}} P/\mathrm{rad}P,\widetilde{B}\underline{h}-(\widetilde{I}-\widetilde{R}) \underline{m})}X^{\widetilde{B}\underline{h}-(\widetilde{I}-\widetilde{R})\underline{m}+\underline{\mathrm{dim}} P/\mathrm{rad}P} \nonumber\\ &=&q^{\frac{1}{2}\Lambda(\underline{\mathrm{dim}} P/\mathrm{rad}P,-(\widetilde{I}-\widetilde{R})\underline{m})}\sum_{G,H}q^{-\frac{1}{2}\langle H,G\rangle}q^{-\frac{1}{2}[P,H]}F^{M}_{GH}X^{\widetilde{B}\underline{h}-(\widetilde{I}-\widetilde{R})\underline{m}+\underline{\mathrm{dim}} P/\mathrm{rad}P}.\nonumber\end{aligned}$$ Here we use the following fact: $$\Lambda(\underline{\mathrm{dim}} P/\mathrm{rad}P,\widetilde{B}\underline{h})=(\underline{\mathrm{dim}} P/\mathrm{rad}P)^{tr}\Lambda\widetilde{B}\underline{h}=- (\underline{\mathrm{dim}} P/\mathrm{rad}P)^{tr}\begin{bmatrix}I_n\\0 \end{bmatrix}\underline{h}=-[P,H].$$ Since $[P,M]=1$ by assumption, we have $[P,H]=0$ or $1.$ We first compute the term $$X_E=X_{B\oplus I'[-1]}=\sum_{X,Y}q^{-\frac{1}{2}\langle Y,X\rangle}F^{B}_{XY}X^{\widetilde{B}\underline{y}-(\widetilde{I}-\widetilde{R})\underline{b}+\underline{\mathrm{dim}} \mathrm{soc}I'}.$$ We have the following diagram $$\xymatrix{&0\ar[d]&0\ar[d]\\ &Y\ar@{=}[r]\ar[d]&Y\ar[d]\\ 0\ar[r]&B\ar[r]\ar[d]&M\ar[r]^{\theta}\ar[d]&I\ar[r]&I'\ar[r]&0\\ 0\ar[r]&X\ar[r]\ar[d]&G\ar[d]\\ &0&0}$$ and a
short exact sequence $$0\longrightarrow \mathrm{im}\theta \longrightarrow I\longrightarrow I'\longrightarrow 0.$$ Since we assume that $[B,I']=0,$ we have $[H,I']=0.$ Then $$\begin{aligned} && \langle Y,X\rangle-\langle H,G\rangle = \langle H,X\rangle-\langle H,G\rangle =\langle H,X-G\rangle =\langle H,B-M\rangle=-\langle H, \mathrm{im}\theta\rangle.\nonumber\end{aligned}$$ Applying the functor $[H,-]$ to the above short exact sequence, we have $$0\longrightarrow [H, \mathrm{im}\theta]\longrightarrow [H, I]\longrightarrow [H, I']\longrightarrow [H, \mathrm{im}\theta]^{1}\longrightarrow 0.$$ Note that $[H, I]=[H, I']=0;$ thus $\langle H, \mathrm{im}\theta\rangle=0.$ Hence $$X_E=\sum_{G,H,[P,H]=0}q^{-\frac{1}{2}\langle H,G\rangle}F^{M}_{GH}X^{\widetilde{B}\underline{h}-(\widetilde{I}-\widetilde{R})\underline{m}+\underline{\mathrm{dim}} P/\mathrm{rad}P}.$$ We now compute the term $$X_{E'}=X_{A\oplus P'[1]}=\sum_{X,Y}q^{-\frac{1}{2}\langle Y,X\rangle}F^{A}_{XY}X^{\widetilde{B}\underline{y}-(\widetilde{I}-\widetilde{R})\underline{a}+\underline{\mathrm{dim}} P'/\mathrm{rad}P'}.$$ We have the following diagram $$\xymatrix{&&&0\ar[d]&0\ar[d]\\ &&P\ar[r]\ar@{=}[d]&H\ar[r]\ar[d]&Y\ar[r]\ar[d]&0\\ 0\ar[r]&P'\ar[r]&P\ar[r]^{\theta'}&M\ar[r]\ar[d]&A\ar[r]\ar[d]&0\\ &&&G\ar@{=}[r]\ar[d]&X\ar[d]\\ &&&0&0}$$ Applying the functor $[P',-]$ to the short exact sequence $$0\longrightarrow Y \longrightarrow A\longrightarrow G\longrightarrow 0,$$ we have $$0\longrightarrow [P',Y]\longrightarrow [P', A]\longrightarrow [P', G]\longrightarrow 0.$$ Since we assume that $[P',A]=0,$ we have $[P',G]=0.$ Then $\langle P',G\rangle=0.$ Hence we have $$\begin{aligned} && \langle Y,X\rangle-\langle H,G\rangle = \langle Y,G\rangle-\langle H,G\rangle =\langle Y-H,G\rangle =\langle A-M,G\rangle\nonumber\\ &=& \langle P'-P,G\rangle=\langle P',G\rangle=0.\nonumber\end{aligned}$$ Therefore $$X_{E'}=\sum_{G,H,[P,H]=1}q^{-\frac{1}{2}\langle
H,G\rangle}F^{M}_{GH}X^{\widetilde{B}\underline{h}-(\widetilde{I}-\widetilde{R})\underline{m}+\underline{\mathrm{dim}} P/\mathrm{rad}P}.$$ This completes the proof. Note that if $M$ is an indecomposable rigid object in $\mathcal{C}_{\widetilde{Q}}$ and $[P,M]=[M,I]=1$, then both $E=B\oplus I'[-1]$ and $E'=A\oplus P'[1]$ are rigid. Thus the assumptions $\mathrm{Hom}_{k\widetilde{Q}}(B,I')=\mathrm{Hom}_{k\widetilde{Q}}(P',A)=0$ in Theorem \[exchange2\] hold automatically. The quantum cluster multiplication theorem in [@fanqin] deals with this special case. $\mathbb{ZP}$-bases in specialized quantum cluster algebras of finite type ========================================================================== Let $k$ be a finite field with cardinality $|k|=q$, let $m\geq n$ be two positive integers, and let $\widetilde{Q}$ be an acyclic quiver with vertex set $\{1,\ldots,m\}$. The full subquiver $Q$ on the vertices $\{1,\ldots,n\}$ is the principal part of $\widetilde{Q}$. Let $\mathcal{A}_{|k|}(\widetilde{Q})$ be the corresponding specialized quantum cluster algebra of $Q$ with coefficients. The main theorem in [@fanqin] then shows that $\mathcal{A}_{|k|}(\widetilde{Q})$ is the $\mathbb{ZP}$-subalgebra of $\Fcal$ generated by $$\{X_M\mid M\ \text{is an indecomposable rigid $kQ$-module}\}\cup$$$$\{X_{\tau P_i},1\leq i\leq n\mid P_i\ \text{is an indecomposable projective $k\widetilde{Q}$-module}\}.$$ Let $i$ be a sink or a source in $\widetilde{Q}$. We define the reflected quiver $\sigma_i(\widetilde{Q})$ by reversing all the arrows incident to $i$. An *admissible sequence of sinks (resp. sources)* is a sequence $(i_1, \ldots, i_l)$ such that $i_1$ is a sink (resp. source) in $\widetilde{Q}$ and $i_k$ is a sink (resp. source) in $\sigma_{i_{k-1}}\cdots \sigma_{i_1}(\widetilde{Q})$ for any $k=2, \ldots, l$.
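The reflections $\sigma_i$ and admissible sequences of sinks or sources just defined are simple combinatorial operations. The following minimal Python sketch (our own illustrative encoding of a quiver as a list of arrows, not code from the paper) implements $\sigma_i$ and applies a sequence of reflections while checking admissibility:

```python
# A quiver is encoded as a list of arrows (source, target); illustrative only.

def is_sink(quiver, i):
    """i is a sink if no arrow starts at i."""
    return all(s != i for (s, t) in quiver)

def is_source(quiver, i):
    """i is a source if no arrow ends at i."""
    return all(t != i for (s, t) in quiver)

def sigma(quiver, i):
    """Reflection at a sink or source i: reverse all arrows incident to i."""
    assert is_sink(quiver, i) or is_source(quiver, i)
    return [(t, s) if i in (s, t) else (s, t) for (s, t) in quiver]

def reflect_along(quiver, seq, kind="sink"):
    """Apply an admissible sequence of sinks (or sources), checking admissibility."""
    check = is_sink if kind == "sink" else is_source
    for i in seq:
        assert check(quiver, i), f"{i} is not a {kind}"
        quiver = sigma(quiver, i)
    return quiver
```

For instance, for the linear quiver $1\to 2\to 3$, reflecting at the sink $3$ and then at the sink $2$ yields the quiver with arrows $2\to 1$ and $2\to 3$.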
A quiver $\widetilde{Q}'$ is called *reflection-equivalent* to $\widetilde{Q}$ if there exists an admissible sequence of sinks or sources $(i_1, \ldots, i_l)$ such that $\widetilde{Q}'=\sigma_{i_{l}}\cdots \sigma_{i_1}(\widetilde{Q})$. Note that mutations can be viewed as generalizations of reflections, i.e., if $i$ is a sink or a source in a quiver $\widetilde{Q}$, then $\mu_i(\widetilde{Q})=\sigma_i(\widetilde{Q})$, where $\mu_i$ denotes the mutation in the direction $i$. Thus if $i$ is a sink or a source and $\widetilde{Q}'=\sigma_i(\widetilde{Q})$, there is a canonical isomorphism between $\mathcal{A}_{|k|}(\widetilde{Q})$ and $\mathcal{A}_{|k|}(\widetilde{Q}'),$ denoted by $$\Phi_{i}: \mathcal{A}_{|k|}(\widetilde{Q})\rightarrow \mathcal{A}_{|k|}(\widetilde{Q}').$$ Let $\Sigma_i^+:\ \mathrm{mod}(\widetilde{Q}) \longrightarrow \ \mathrm{mod}(\widetilde{Q}')$ be the standard BGP-reflection functor and $R_i^+:\mathcal C_{\widetilde{Q}} \longrightarrow \mathcal C_{\widetilde{Q}'}$ be the extended BGP-reflection functor defined in [@Zhu]: $$R_i^+:\left\{\begin{array}{rcll} X & \mapsto & \Sigma_i^+(X), & \textrm{ if }X \not \simeq S_i \textrm{ is a $kQ$-module,}\\ S_i & \mapsto & P_i[1], \\ P_j[1] & \mapsto & P_j[1], & \textrm{ if }j \neq i,\\ P_i[1] & \mapsto & S_i. \end{array}\right.$$ By Rupel [@rupel], the following holds: [@rupel Theorem 2.4, Lemma 5.6]\[ref\] For any $ X_M^{\widetilde{Q}}\in\mathcal{A}_{|k|}(\widetilde{Q})$, we have $\Phi_{i}(X_M^{\widetilde{Q}})=X_{R_i^+M}^{\widetilde{Q}'}.$ Let $Q$ be an acyclic quiver with associated matrix $B$. We call $Q$ *graded* if there exists a linear form $\epsilon$ on $\mathbb{Z}^{n}$ such that $\epsilon(B \alpha_i)<0$ for any $1\leq i \leq n$, where $\alpha_i$ still denotes the $i$-th vector of the canonical basis of $\mathbb{Z}^{n}$. If $Q$ is a graded quiver, then it is proved in [@CK1] that we can endow the cluster algebra of $Q$ with a grading.
Namely, the results are the following. For any Laurent polynomial $P$ in the variables $X_i$, the *support* $supp(P)$ of $P$ is defined as the set of points $\lambda=(\lambda_i,1\leq i \leq n)$ of $\mathbb{Z}^{n}$ such that the $\lambda$-component, that is, the coefficient of $\prod_{1\leq i \leq n} X_i^{\lambda_i}$ in $P$, is nonzero. For any $\lambda$ in $\mathbb{Z}^{n}$, let $C_\lambda$ be the convex cone with vertex $\lambda$ and edge vectors generated by the $B\alpha_i$ for any $1\leq i \leq n$. Then we have the following two propositions, which are the quantum versions of Proposition 5 and Proposition 7 in [@CK1], respectively. \[prop:supportconeCK1\] Let $Q$ be a graded acyclic quiver with no multiple arrows and $M=M_0 \oplus P_M[1]$, where $M_0$ is a $kQ$-module and $P_M$ is a projective $k\widetilde{Q}$-module. Then $supp(X_{M_0\oplus P_M[1]})$ is contained in $C_{\lambda_M}$ with $\lambda_M:=(-\langle\alpha_i,\underline{dim} M_0\rangle+\langle\underline{dim} P_M, \alpha_i\rangle) _{1\leq i \leq n}$. Moreover, the $\lambda_M$-component of $X_{M_0\oplus P_M[1]}$ is a nonzero monomial in $\{|k|^{\pm\frac{1}{2}},X^{\pm 1 }_{n+1},\cdots,X^{\pm 1 }_{m}\}$. \[prop:graduationCK1\] Let $Q$ be a graded acyclic quiver with no multiple arrows. For any $m \in \mathbb{Z}$, set $$F_m=\left( \bigoplus_{\epsilon(\nu) \leq m} \mathbb{ZP}\prod_{1\leq i \leq n}u_i^{\nu_i}\right) \cap \Acal_{|k|}(\widetilde{Q});$$ then the set $(F_m)_{m \in \mathbb{Z}}$ defines a $\mathbb{Z}$-filtration of $\Acal_{|k|}(\widetilde{Q})$. For any $\underline{d}\in \mathbb{Z}^{n},$ define $\underline{d}^{+}=(d^+_i)_{1\leq i \leq n}$ by $d^+_i=d_i$ if $d_i>0$ and $d_i^+=0$ if $d_i\leq 0$, for any $1\leq i \leq n.$ Dually, we set $\underline{d}^-=\underline{d}^+-\underline{d}.$ The following Proposition \[10\] can be viewed as a categorification of [@berzel Theorem 7.3]. \[10\] Let $\widetilde{Q}$ be an acyclic quiver.
Then the set $\{\prod_{i=1}^{n}X^{d^+_i}_{S_i}X^{d^-_i}_{P_i[1]}\mid (d_1,\cdots,d_n)\in \mathbb{Z}^n\}$ is a $\mathbb{ZP}$-basis of $\Acal_{|k|}(\widetilde{Q})$. For any $1\leq i\leq n,$ it is easy to check that the set $$\{X_{\tau P_{1}},\cdots,X_{\tau P_{i-1}}, X_{S_{i}},X_{\tau P_{i+1}},\cdots, X_{\tau P_{n}}\}$$ is a cluster, obtained by the mutation in direction $i$ of the cluster $$\{X_{\tau P_{1}},\cdots,X_{\tau P_{i-1}}, X_{\tau P_{i}},X_{\tau P_{i+1}},\cdots, X_{\tau P_{n}}\}.$$ The proposition then follows immediately from [@berzel Theorem 7.3] and [@fanqin Theorem 5.4.3]. Our main result is the following theorem, which exhibits a $\mathbb{ZP}$-basis of a quantum cluster algebra of finite type. Specializing $q$ and the coefficients to $1$, it recovers the good bases of cluster algebras of finite type in [@CK1]; the proof also relies on the existence of Hall polynomials for representation-directed algebras [@ringel:2]. \[12\] Let $Q$ be a simply-laced Dynkin quiver with $Q_0=\{1,2,\cdots, n\}.$ Then the set $\mathcal{B}(Q):=\{X_{M}\mid M=M_0 \oplus P_M[1]\ \text{where}\ M_0\ \text{is a}\ kQ\text{-module},\ P_M\ \text{is a projective}\ k\widetilde{Q}\text{-module, and}\ M\ \text{is a rigid}$ $\text{object in}\ \mathcal C_{\widetilde{Q}}\}$ is a $\mathbb{ZP}$-basis of $\Acal_{|k|}(\widetilde{Q})$. It is easy to see that there exists a graded quiver $Q'$ which is reflection-equivalent to $Q$. Assume that $\sigma_{i_{l}}\cdots \sigma_{i_1}(Q')=Q$. For any $X_{M}\in \mathcal{B}(Q)$ with dimension vector $\mathrm{\underline{dim}}M=\underline{m}=(m_1,\cdots,m_n)\in \mathbb{Z}^n$, we know that $X_{M}\in \Acal_{|k|}(\widetilde{Q}')$.
Then by Proposition \[10\] we have $$X^{\widetilde{Q}'}_{M}=b_{\underline{m}}\prod_{i=1}^{n}(X_{S_i}^{\widetilde{Q}'})^{m^+_i}(X^{\widetilde{Q}'}_{P_i[1]})^{m^-_i} +\sum_{\epsilon(\underline{l})< \epsilon(\underline{m})}b_{\underline{l}}\prod_{i=1}^{n}(X_{S_i}^{\widetilde{Q}'})^{l^+_i}(X^{\widetilde{Q}'}_{P_i[1]})^{l^-_i},$$ where $\underline{l}=(l^+_i-l^-_i)_{i\in Q_0}$ and $b_{\underline{m}}, b_{\underline{l}}\in \mathbb{ZP}$. Since $Q'$ is a graded quiver, by Proposition \[prop:supportconeCK1\] and Proposition \[prop:graduationCK1\] it follows that $b_{\underline{m}}$ must be a nonzero monomial in $\{q^{\pm\frac{1}{2}},X^{\pm 1 }_{n+1},\cdots,X^{\pm 1 }_{m}\}$. Therefore, $\mathcal{B}(Q)$ is a $\mathbb{ZP}$-basis of $\Acal_{|k|}(\widetilde{Q}')$. There is a natural isomorphism $\Phi_{i_{l}}\cdots \Phi_{i_1}: \mathcal{A}_{|k|}(\widetilde{Q}')\rightarrow \mathcal{A}_{|k|}(\widetilde{Q})$. By Theorem \[ref\], we obtain that $$\Phi_{i_{l}}\cdots \Phi_{i_1}(X^{\widetilde{Q}'}_M)=X^{\widetilde{Q}}_{R^+_{i_{l}}\cdots R^+_{i_1}(M)}.$$ Hence, $\mathcal{B}(Q)$ is a $\mathbb{ZP}$-basis of $\Acal_{|k|}(\widetilde{Q})$. $\mathbb{ZP}$-bases in quantum cluster algebras of affine type ============================================================== The case of the Kronecker quiver -------------------------------- Let $Q$ be the tame quiver of type $\widetilde{A}^{(1)}_{1}$: $$\xymatrix{1 \ar@<2pt>[r] \ar@<-2pt>[r]& 2}$$ It is well known that the indecomposable regular modules lie in homogeneous tubes indexed by the projective line $\mathbb{P}^1$. We denote the regular indecomposable modules in the homogeneous tube for $p\in \mathbb{P}^1$ of degree $1$ by $R_p(n)$, where $n\in\mathbb{N}$ and $\underline{\mathrm{dim}}R_p(n)=(n, n)$.
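Since $kQ$ is hereditary, the form $\langle M,N\rangle=[M,N]-[M,N]^{1}$ used throughout depends only on dimension vectors: $\langle \underline{d},\underline{e}\rangle=\sum_{i}d_ie_i-\sum_{a:i\to j}d_ie_j$. As a quick sanity check (a standard fact about path algebras, with our own helper function, not code from the paper), one can verify on the Kronecker quiver that $\underline{\mathrm{dim}}\,R_p(1)=(1,1)$ is isotropic:

```python
# Euler form of an acyclic quiver on dimension vectors; for a hereditary path
# algebra, <d, e> = sum_i d_i e_i - sum_{arrows i->j} d_i e_j (vertices from 1).
# The encoding below is illustrative, not taken from the paper.

def euler_form(arrows, d, e):
    dot = sum(di * ei for di, ei in zip(d, e))
    return dot - sum(d[s - 1] * e[t - 1] for (s, t) in arrows)

kronecker = [(1, 2), (1, 2)]   # two arrows 1 -> 2
delta = (1, 1)                 # dimension vector of R_p(1)

assert euler_form(kronecker, (1, 0), (0, 1)) == -2   # dim Ext^1(S_1, S_2) = 2
assert euler_form(kronecker, delta, delta) == 0      # delta is isotropic
```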
We consider the following ice quiver $\widetilde{Q}$ with frozen vertices $3$ and $4$: $$\xymatrix{1\bullet\ar[d]\ar @<2pt>[r] \ar@<-2pt>[r]& \bullet 2\ar[d]\\ 3& 4}$$ Thus we have $$\widetilde{R}=\left(\begin{array}{cc} 0 & 0\\ 2& 0\\ 1& 0\\ 0 & 1\end{array}\right),\ \widetilde{I}=\left(\begin{array}{cc} 1 & 0\\ 0& 1\\ 0& 0\\ 0 & 0\end{array}\right),\ \widetilde{B}=\left(\begin{array}{cc} 0 & 2\\ -2& 0\\ -1& 0\\ 0 & -1\end{array}\right).$$ An easy calculation shows that the following antisymmetric $4\times 4$ integer matrix $$\Lambda=\left(\begin{array}{cccc} 0 & 0& 1& 0\\ 0 & 0& 0& 1\\ -1 & 0& 0& -2\\ 0 & -1& 2& 0\end{array}\right)$$ satisfies $$\begin{aligned} \label{eq:simply_laced_compatible} \Lambda(-\widetilde{B})=\widetilde{I}:=\begin{bmatrix}I_2\\0 \end{bmatrix},\end{aligned}$$ where $I_2$ is the identity matrix of size $2\times 2$. Then we have the following result. \[kro\] Let $R_{p}(1)$ be the indecomposable regular module of degree $1$ as above. Then $$X_{R_p(1)}=X_{S_1}X_{S_2}-q^{-\frac{3}{2}}X_1X_2X_3.$$ By definition, we have $$X_{S_1}=X^{(-1,2,1,0)}+X^{(-1,0,0,0)};$$ $$X_{S_2}=X^{(0,-1,0,1)}+X^{(2,-1,0,0)};$$ $$X_{R_p(1)}=X^{(-1,1,1,1)}+X^{(1,-1,0,0)}+X^{(-1,-1,0,1)}.$$ Hence the lemma follows by a direct calculation. By Lemma \[1\], the expression of $X_{R_p(1)}$ is independent of the choice of $p\in \mathbb{P}^1_k$ of degree 1. Hence, we set $$X_{\delta}:= X_{R_p(1)}.$$ \[kro2\] 1. By Lemma \[kro\], we know that $X_{\delta}$ belongs to $\Acal_{|k|}(\widetilde{Q}).$ 2. By Theorem \[affine\] below, $\mathcal{B}(Q)$ is a $\mathbb{ZP}$-basis of the quantum cluster algebra $\Acal_{|k|}(\widetilde{Q}).$ Moreover, after specializing $q$ and the coefficients to $1$, $\mathcal{B}(Q)$ is exactly the generic basis in the sense of [@Dup].
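Both the compatibility condition $\Lambda(-\widetilde{B})=\widetilde{I}$ and Lemma \[kro\] are finite computations in the quantum torus, where $X^{\underline{a}}X^{\underline{b}}=q^{\frac{1}{2}\Lambda(\underline{a},\underline{b})}X^{\underline{a}+\underline{b}}$. The following Python sketch (a toy quantum-torus implementation of our own, not part of the paper) verifies both for the data above:

```python
from fractions import Fraction

# Data from the text: Lambda and B~ for the ice quiver of the Kronecker quiver.
LAM = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, -2], [0, -1, 2, 0]]
BT = [[0, 2], [-2, 0], [-1, 0], [0, -1]]

def lam(a, b):
    """Bilinear form Lambda(a, b) = a^T Lambda b."""
    return sum(a[i] * LAM[i][j] * b[j] for i in range(4) for j in range(4))

# Compatibility condition: Lambda * (-B~) = [I_2; 0].
assert [[sum(LAM[i][k] * -BT[k][j] for k in range(4)) for j in range(2)]
        for i in range(4)] == [[1, 0], [0, 1], [0, 0], [0, 0]]

# A quantum-torus element is a dict {(exponent vector, power of q): coefficient},
# with multiplication rule X^a X^b = q^{Lambda(a,b)/2} X^{a+b}.
def mono(exp, qpow=Fraction(0)):
    return {(tuple(exp), qpow): 1}

def add(p, r, sign=1):
    out = dict(p)
    for key, c in r.items():
        out[key] = out.get(key, 0) + sign * c
    return {k: c for k, c in out.items() if c}

def mul(p, r):
    out = {}
    for (ea, qa), ca in p.items():
        for (eb, qb), cb in r.items():
            key = (tuple(x + y for x, y in zip(ea, eb)),
                   qa + qb + Fraction(lam(ea, eb), 2))
            out[key] = out.get(key, 0) + ca * cb
    return {k: c for k, c in out.items() if c}

# The three cluster characters from the proof of Lemma [kro].
XS1 = add(mono((-1, 2, 1, 0)), mono((-1, 0, 0, 0)))
XS2 = add(mono((0, -1, 0, 1)), mono((2, -1, 0, 0)))
XR = add(add(mono((-1, 1, 1, 1)), mono((1, -1, 0, 0))), mono((-1, -1, 0, 1)))

# q^{-3/2} X_1 X_2 X_3, with X_i = X^{e_i}.
X123 = mul(mul(mono((1, 0, 0, 0)), mono((0, 1, 0, 0))), mono((0, 0, 1, 0)))
shifted = {(e, qp - Fraction(3, 2)): c for (e, qp), c in X123.items()}

# Lemma [kro]: X_{S_1} X_{S_2} - q^{-3/2} X_1 X_2 X_3 = X_{R_p(1)}.
assert add(mul(XS1, XS2), shifted, sign=-1) == XR
```

Running the script raises no assertion error: the cross term $q^{-1}X^{(1,1,1,0)}$ of $X_{S_1}X_{S_2}$ is exactly cancelled by $q^{-\frac{3}{2}}X_1X_2X_3$.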
Note that there is an alternative choice of $(\Lambda, \widetilde{B})$, i.e., $\widetilde{Q}=Q$ with $\Lambda=\left(\begin{array}{cc} 0 & 1\\ -1 & 0\end{array}\right)$ and $\widetilde{B}=\left(\begin{array}{cc} 0 & 2\\ -2 & 0\end{array}\right)$. Then we have $ \Lambda(-\widetilde{B})=\left( \begin{array}{cc} 2 & 0 \\ 0 & 2 \\ \end{array} \right). $ Hence, one should consider the category of $KQ$-representations for a field $K$ with $|K|=q^2.$ In this way, we obtain a quantum cluster algebra of Kronecker type without coefficients. The multiplication and the bar-invariant bases in this algebra have been thoroughly studied in [@DX]. Moreover, under the specialization $q=1$, the bases in [@DX] induce the canonical basis, semicanonical basis and generic basis of the cluster algebra of the Kronecker quiver in the sense of [@sherzel], [@calzel] and [@Dup], respectively. The case of affine types ------------------------ An *affine quiver* is an acyclic quiver whose underlying diagram is an extended Dynkin diagram. One can refer to [@dlab], [@CB:lectures], [@ringel:1099] for the theory of representations of affine quivers. We recall some useful background concerning the representation theory of affine quivers. In this section we always assume that $Q$ is an affine quiver with trivial valuation. The category $rep(kQ)$ of finite-dimensional representations can be identified with the category mod-$kQ$ of finite-dimensional modules over the path algebra $kQ.$ It is well known that the indecomposable $kQ$-modules fall (up to isomorphism) into three families: the component of indecomposable regular modules $\mathcal R(Q)$, the component of the preprojective modules $\mathcal P(Q)$ and the component of the preinjective modules $\mathcal I(Q)$.
If $P \in \mathcal P(Q)$, $I \in \mathcal I(Q)$ and $R \in \mathcal R(Q)$, then $$\Hom_{kQ}(R,P) \simeq \Hom_{kQ}(I,R) \simeq \Hom_{kQ}(I,P)=0,$$ and $$\Ext^1_{kQ}(P,R)\simeq \Ext^1_{kQ}(R,I)\simeq \Ext^1_{kQ}(P,I)=0.$$ If $M$ and $N$ are two regular indecomposable modules in different tubes, then $$\Hom_{kQ}(M,N)=0 \textrm{ and } \Ext^1_{kQ}(M,N)=0.$$ There are at most $3$ non-homogeneous tubes for $Q.$ We denote these tubes by $\mathcal{T}_1, \cdots, \mathcal{T}_t$. Let $r_i$ be the rank of $\mathcal{T}_i$ and let the regular simple modules in $\mathcal{T}_i$ be $E^{(i)}_{1}, \cdots, E^{(i)}_{r_i}$, ordered such that $\tau E^{(i)}_2=E^{(i)}_1, \cdots, \tau E^{(i)}_1=E^{(i)}_{r_i}$ for $i=1, \cdots, t$. When we restrict the discussion to one tube, we will omit the index $i$ for convenience. Given a regular simple module $E$ in a tube, $E[i]$ is the indecomposable regular module with quasi-socle $E$ and quasi-length $i$ for any $i\in \mathbb{N}$. Define the set $$\textbf{D}(Q)=\{\underline{d}\in \mathbb{N}^ {Q_0}\mid \exists\ n\in\mathbb{N},\ \mbox{a regular rigid module}\ R\ \mbox{and regular simple modules}\ E_1,\cdots,E_r$$$$\mbox{in a non-homogeneous tube of rank}\ r \mbox{ such that } \mathrm{\underline{dim}}((E_1\oplus\cdots\oplus E_r)^{n}\oplus R)=\underline{d}\}.$$ Set $\textbf{E}(Q)=\{\underline{d}\in \mathbb{Z}^{Q_0}\mid \exists\ M=M_0 \oplus P_M[1]\ \text{where}\ M_0\ \text{is a}\ kQ\text{-module},\ P_M\ \text{is a projective}\ k\widetilde{Q}\text{-module},$ $\text{and}\ M\ \text{is a rigid object in}\ \mathcal C_{\widetilde{Q}}\ \text{with}\ \mathrm{\underline{dim}} M=\underline{d}\}$. By the main theorem in [@DXX], we have that $\mathbb{Z}^{Q_0}$ is the disjoint union of $\textbf{D}(Q)$ and $\textbf{E}(Q)$.
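The componentwise decomposition $\underline{d}=\underline{d}^{+}-\underline{d}^{-}$ defined before Proposition \[10\] is used again in the proofs below; as a one-line sketch (the helper name is ours):

```python
def pos_neg_parts(d):
    """Split an integer vector as d = d_plus - d_minus, with disjoint supports."""
    d_plus = tuple(max(x, 0) for x in d)
    d_minus = tuple(p - x for p, x in zip(d_plus, d))
    return d_plus, d_minus
```

For example, `pos_neg_parts((2, -3, 0))` returns `((2, 0, 0), (0, 3, 0))`.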
We make an assignment, i.e., a map $$\phi: \mathbb{Z}^{Q_0}\rightarrow \mathrm{obj}(\mathcal{C}_{\widetilde{Q}}),$$ and set $$X_{\phi(\underline{d})}:=(X_{E_1}\cdots X_{E_r})^{n}X_{R}$$ if $\underline{d}\in\textbf{D}(Q)$ and $|Q_0|>2;$ $$X_{\phi(\underline{d})}:=X^{n}_{\delta}$$ with $X_{\delta}$ as above, for $\delta$ in a homogeneous tube of degree $1$, if $\underline{d}\in\textbf{D}(Q)$ and $Q$ is the Kronecker quiver; and $$X_{\phi(\underline{d})}:=X_{M}$$ if $\underline{d}\in\textbf{E}(Q)$. It is clear that the above assignment is not unique. For simplicity and without confusion, we omit $\phi$ in the notation $X_{\phi(\underline{d})}$. \[affine\] Let $Q$ be an affine quiver with $Q_0=\{1,2,\cdots, n\}$ and fix an assignment as above. Then the set $$\mathcal{B}(Q):=\{X_{\underline{d}}\mid\underline{d}\in \mathbb{Z}^{Q_0}\}$$ is a $\mathbb{ZP}$-basis of $\Acal_{|k|}(\widetilde{Q})$. By [@Dup], there exists a graded quiver $Q'$ which is reflection-equivalent to $Q$. When $Q'$ is a Kronecker quiver, by Remark \[kro2\] we know that $X^{n}_{\delta}$ $(n\in\mathbb{N})$ is in $\Acal_{|k|}(\widetilde{Q}')$. If $Q'$ is not a Kronecker quiver, we consider the non-homogeneous tubes. By Theorem \[hall multi\], $X_{R}$ is in $\Acal_{|k|}(\widetilde{Q}')$. Thus $(X_{E_1}\cdots X_{E_r})^{n}X_{R}$ is in $\Acal_{|k|}(\widetilde{Q}')$. Note that for any $\underline{m}=(m_1,\cdots,m_n)\in \mathbb{Z}^n$, $X_{\underline{m}}\in\mathcal{B}(Q')$. Then by Proposition \[10\] we have $$X^{\widetilde{Q}'}_{\underline{m}}=b_{\underline{m}}\prod_{i=1}^{n}(X_{S_i}^{\widetilde{Q}'})^{m^+_i}(X^{\widetilde{Q}'}_{P_i[1]})^{m^-_i} +\sum_{\epsilon(\underline{l})< \epsilon(\underline{m})}b_{\underline{l}}\prod_{i=1}^{n}(X_{S_i}^{\widetilde{Q}'})^{l^+_i}(X^{\widetilde{Q}'}_{P_i[1]})^{l^-_i},$$ where $b_{\underline{m}},b_{\underline{l}}\in \mathbb{ZP}$.
Since $Q'$ is a graded quiver, by Proposition \[prop:supportconeCK1\] and Proposition \[prop:graduationCK1\] it follows that $b_{\underline{m}}$ must be a nonzero monomial in $\{q^{\pm\frac{1}{2}},X^{\pm 1 }_{n+1},\cdots,X^{\pm 1 }_{m}\}$. Therefore, $\mathcal{B}(Q')$ is a $\mathbb{ZP}$-basis of $\Acal_{|k|}(\widetilde{Q}')$. By Theorem \[ref\], we obtain that $\mathcal{B}(Q)$ is a $\mathbb{ZP}$-basis of $\Acal_{|k|}(\widetilde{Q})$. By [@CR Proposition 5], the cardinalities of the quiver Grassmannians $\mathrm{Gr}_{\underline{e}}(M)$ of a $kQ$-module $M$ are given by polynomials in $\mathbb{Z}[q].$ Then, by specializing $q$ and the coefficients to $1$, the basis in Theorem \[affine\] induces the integral bases in affine cluster algebras ([@DXX]). Theorem \[affine\] does not provide the quantum version of the generic bases of affine type in [@Dup]. In order to achieve it, one needs to prove a quantum analogue of the difference property [@Dup Definition 3.24]. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank Professor Jie Xiao and Dr. Fan Qin for many helpful discussions and suggestions. [99]{} A. Buan, R. Marsh, M. Reineke, I. Reiten, G. Todorov, *Tilting theory and cluster combinatorics,* Adv. Math. **204** (2006), 572–618. A. Berenstein and A. Zelevinsky, [*Quantum cluster algebras,*]{} Adv. Math. **195** (2005), 405–455. P. Caldero and F. Chapoton, *Cluster algebras as Hall algebras of quiver representations*, Comment. Math. Helv. **81** (2006), 595–616. P. Caldero, B. Keller, *From triangulated categories to cluster algebras*, Invent. Math. **172** (2008), 169–211. P. Caldero, B. Keller, *From triangulated categories to cluster algebras II*, Ann. Sci. [É]{}cole Norm. Sup. (4) **39** (2006), no. 6, 983–1009. P. Caldero and M. Reineke, *On the quiver Grassmannian in the acyclic case*, J. Pure Appl. Algebra **212** (2008), no. 11, 2369–2380. P. Caldero and A.
Zelevinsky, *Laurent expansions in cluster algebras via quiver representations,* Moscow Math. J. **6** (2006), no. 3, 411–429. W. Crawley-Boevey, *Lectures on representations of quivers*, 1992. M. Ding and F. Xu, *Bases of the quantum cluster algebra of the Kronecker quiver*, arXiv:1004.2349. G. Dupont, *Generic variables in acyclic cluster algebras and bases in affine cluster algebras*, arXiv:0811.2909. M. Ding, J. Xiao and F. Xu, *Integral bases of cluster algebras and representations of tame quivers*, arXiv:0901.1937. V. Dlab and C. M. Ringel, *Indecomposable representations of graphs and algebras*, Mem. Amer. Math. Soc. **173** (1976). S. Fomin and A. Zelevinsky, *Cluster algebras. I. Foundations,* J. Amer. Math. Soc. **15** (2002), no. 2, 497–529. S. Fomin and A. Zelevinsky, *Cluster algebras. II. Finite type classification*, Invent. Math. **154** (2003), no. 1, 63–121. J. A. Green, *Hall algebras, hereditary algebras and quantum groups*, Invent. Math. **120** (1995), 361–377. J. Guo and L. Peng, *Universal PBW-basis of Hall-Ringel algebras and Hall polynomials*, J. Algebra **198** (1997), 339–351. A. Hubery, *Acyclic cluster algebras via Ringel-Hall algebras*, preprint (2005). A. Hubery, *Hall polynomials for affine quivers*, Represent. Theory **14** (2010), 355–378. F. Qin, *Quantum cluster variables via Serre polynomials,* arXiv:1004.4171. C. M. Ringel, *Tame algebras and integral quadratic forms*, Lecture Notes in Mathematics **1099** (1984). C. M. Ringel, *Hall polynomials for the representation-finite hereditary algebras*, Adv. Math. **84** (1990), 137–178. D. Rupel, *On a quantum analogue of the Caldero-Chapoton formula,* arXiv:1003.2652. P. Sherman, A. Zelevinsky, *Positivity and canonical bases in rank 2 cluster algebras of finite and affine types*, Moscow Math. J. **4** (2004), no. 4, 947–974. B. Zhu, *Equivalence between cluster categories*, J. Algebra **304** (2006), 832–850. [^1]: The research was supported by NSF of China (No. 11071133)
--- abstract: 'Artificial Neural Networks (ANNs) are currently being used as function approximators in many state-of-the-art Reinforcement Learning (RL) algorithms. Spiking Neural Networks (SNNs) have been shown to drastically reduce the energy consumption of ANNs by encoding information in sparse temporal binary spike streams, hence emulating the communication mechanism of biological neurons. Due to their low energy consumption, SNNs are considered to be important candidates as co-processors to be implemented in mobile devices. In this work, the use of SNNs as stochastic policies is explored under an energy-efficient first-to-spike action rule, whereby the action taken by the RL agent is determined by the occurrence of the first spike among the output neurons. A policy gradient-based algorithm is derived considering a Generalized Linear Model (GLM) for spiking neurons. Experimental results demonstrate the capability of online trained SNNs as stochastic policies to gracefully trade energy consumption, as measured by the number of spikes, and control performance. Significant gains are shown as compared to the standard approach of converting an offline trained ANN into an SNN.' author: - | \  \ \ [^1][^2] bibliography: - 'references.bib' title: 'Learning First-to-Spike Policies for Neuromorphic Control Using Policy Gradients' --- Spiking Neural Network, Reinforcement Learning, Policy Gradient, Neuromorphic Computing. Introduction {#sec:intro} ============ Artificial neural networks (ANNs) are used as parameterized non-linear models that serve as inductive bias for a large number of machine learning tasks, including notable applications of Reinforcement Learning (RL) to control problems [@jaderberg2018human]. While ANNs rely on clocked floating- or fixed-point operations on real numbers, Spiking Neural Networks (SNNs) operate in an event-driven fashion on spiking synaptic signals (see Fig. 1). 
Due to their lower energy consumption when implemented on specialized hardware, SNNs are emerging as an important alternative to ANNs that is backed by major technology companies, including IBM and Intel [@truenorth; @davies2018loihi]. Specifically, SNNs are considered to be important candidates as co-processors to be implemented in battery-limited mobile devices (see, e.g., [@chen2017machine]). Applications of SNNs, and of associated neuromorphic hardware, to supervised, unsupervised, and RL problems have been reported in a number of works, first in the computational neuroscience literature and more recently in the context of machine learning [@rezende2011variational; @kappel2018dynamic; @jin2018hybrid]. SNN models can be broadly classified as *deterministic*, such as the leaky integrate-and-fire (LIF) model [@gerstner2002spiking], or *probabilistic*, such as the generalized linear model (GLM) [@pillow2005prediction]. Prior work on RL using SNNs has by and large adopted deterministic SNN models to define action-value function approximators. This is typically done by leveraging *rate decoding* and either *rate encoding* [@zheng2017hardware; @nakano2015spiking] or *time encoding* [@bing2018end]. Under rate encoding and decoding, the spiking rates of input and output neurons represent the information being processed and produced, respectively, by the SNN. A standard approach, considered here as a baseline, is to train an ANN offline and to then convert the resulting policy into a deterministic SNN with the aim of ensuring that the output spiking rates are close to the numerical output values of the trained ANN [@diehl2015fast; @nakano2015spiking]. There is also significant work in the theoretical neuroscience literature concerning the definition of biologically plausible online learning rules [@florian2007reinforcement; @shim2017biologically; @vasilaki2009spike].
In all of the reviewed studies, exploration is made possible by a range of mechanisms such as $\epsilon$-greedy in [@shim2017biologically] and stochasticity introduced at the synaptic level [@nakano2015spiking; @vasilaki2009spike], requiring the addition of some external source of randomness. As a related work, reference [@zheng2017hardware] discusses the addition of noise to a deterministic SNN model to induce exploration of the state space from a hardware perspective. In contrast, in this paper, we investigate the use of probabilistic SNN policies that naturally enable exploration thanks to the inherent randomness of their decisions, hence making it possible to learn while acting in an on-policy fashion. Due to an SNN’s event-driven activity, its energy consumption depends mostly on the number of spikes that are output by its neurons. This is because the idle energy consumption of neuromorphic chips is generally extremely low (see, e.g., [@truenorth; @davies2018loihi]). With this observation in mind, this work proposes the use of a probabilistic SNN, based on GLM spiking neurons, as a stochastic policy that operates according to a first-to-spike decoding rule. The rule outputs a decision as soon as one of the output neurons generates a spike, as illustrated in Fig. 1(a), hence potentially reducing energy consumption. A gradient-based updating rule is derived that leverages the analytical tractability of the first-to-spike decision criterion under the GLM model. We refer to [@bagheri2017training] for an application of the first-to-spike rule to a supervised learning classification algorithm. The rest of the paper is organized as follows. Sec. \[sec:prob def\] describes the problem formulation and the GLM-based SNN model. Sec. \[sec:polgrad ftssnn\] introduces the proposed policy gradient on-policy learning rule. Sec. \[sec:baselineSNN\] reviews the baseline approach of converting an offline trained ANN into an SNN. Experiments and discussions are reported in Sec. 
\[sec:results\]. Problem Definition and Models {#sec:prob def} ============================= **Problem definition.** We consider a standard RL single-agent problem formulated on the basis of discrete-time Markov Decision Processes (MDPs). Accordingly, at every time-step $t=0,1,2,...$, the RL agent takes an action $A_t$ from a finite set of options based on its knowledge of the current environment state $S_t$ with the aim of maximizing a long-term performance criterion. The agent’s policy $\pi(A|S,\theta)$ is a parameterized probabilistic mapping from the state space to the action space, where $\theta$ is the vector of trainable parameters. After the agent takes an action $A_t$, the environment transitions to a new state $S_{t+1}$, which is observed by the agent along with an associated numeric reward signal $R_{t+1}$, where both $S_{t+1}$ and $R_{t+1}$ are generally random functions of the state $S_t$ and action $A_t$ with unknown conditional probability distributions. An episode, starting at time $t=0$ in some state $S_0$, ends at time $t^{\text{G}}$, when the agent reaches a goal state $S^{\text{G}}$. The performance of the agent’s policy $\pi$ is measured by the long-term discounted average reward $$\begin{aligned} \label{eq:value} V_{\pi}(S_0)=\sum_{t=0}^{\infty}\gamma^{t}\text{E}_\pi [R_{t}], \end{aligned}$$where $0<\gamma<1$ is a discount factor. The reward $R_t$ is assumed to be zero for all times $t>t^{\text{G}}$. With a proper definition of the reward signal $R_t$, this formulation ensures that the agent is incentivized to reach the terminal state in as few time-steps as possible. While the approach proposed in this work can apply to arbitrary RL problems with a discrete finite action space, we will focus on a standard windy grid-world environment [@sutton1998reinforcement]. Accordingly, as seen in Fig.
1(b), the state space is an $M \times N$ grid of positions and the action space is the set of allowed horizontal and vertical single-position moves, i.e., Up, Down, Left, or Right. The start state $S_0$ and the terminal state $S^{\text{G}}$ are fixed but unknown to the agent. Each column $n=1,\ldots,N$ in the grid is subject to some unknown degree of ‘wind’, which pushes the agent upward by $\omega_n$ spaces when it moves from a location in that column. The reward signal is defined as $R_{t+1}>0\text{ if }S_{t+1}=S^{\text{G}}$ and, otherwise, we have $R_{t+1}=0$. **Probabilistic SNN model.** In order to model the policy $\pi(A|S,\theta)$, as we will detail in the next section, we adopt a probabilistic SNN model. Here we briefly review the operation of GLM spiking neurons [@pillow2005prediction]. Spiking neurons operate over discrete time $\tau=1,...,T$ and output either a “0” or a “1” value at each time, where the latter represents a spike. We emphasize that, as will be further discussed in the next section, the notion of time $\tau$ for a spiking neuron is distinct from the time axis $t$ over which the agent operates. Consider a GLM neuron $j$ connected to $N_s$ pre-synaptic (input) neurons. At each time instant $\tau=1,...,T$ of the neuron’s operation, the probability of an output spike at neuron $j$ is given as $\sigma(u_{j,\tau})$, where $\sigma(x)=1/(1+\text{exp}(-x))$ is the sigmoid function and $u_{j,\tau}$ is the neuron’s membrane potential$$\begin{aligned} \label{eq:pot} u_{j,\tau} = \sum_{i=1}^{N_s}\alpha_{i,j}^{\dagger} x_{i,\tau-\tau_s:\tau-1}+b_j.\end{aligned}$$In (\[eq:pot\]), the $\tau_s \times 1$ vector $\alpha_{i,j}$ is the so-called *synaptic kernel*, which describes the operation of the synapse from neuron $i$ to neuron $j$, with $\dagger$ denoting the transpose operator; $b_j$ is a bias parameter; and $x_{i,\tau-\tau_s:\tau-1}$ collects the past $\tau_s$ samples of the $i$th input neuron.
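As a minimal numerical sketch of the GLM firing rule in (\[eq:pot\]) (function and variable names are ours, not from the paper):

```python
import numpy as np

def glm_spike_prob(x_hist, alpha, b):
    """Spike probability sigma(u_{j,tau}) of one GLM neuron at one step,
    with membrane potential u = sum_i alpha_{i,j}^T x_{i,tau-tau_s:tau-1} + b.
    x_hist: (N_s, tau_s) array of the most recent input spikes;
    alpha:  (N_s, tau_s) synaptic kernels, one row per input neuron;
    b:      bias parameter."""
    u = float(np.sum(alpha * x_hist)) + b
    return 1.0 / (1.0 + np.exp(-u))
```

With an all-zero spike history and zero bias the potential vanishes and the spike probability is $\sigma(0)=1/2$, as expected from the sigmoid.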
As in [@pillow2005prediction], we model the synaptic kernel as a linear combination $\alpha_{i,j}=Bw_{i,j}$ of $K_s$ basis functions, described by the columns of the $\tau_s\times K_s$ matrix $B$, with the $K_s\times 1$ weight vector $w_{i,j}$. We specifically adopt the raised cosine basis functions in [@pillow2005prediction]. \[alg:ftsreinforce\] Policy-Gradient Learning Using First-to-Spike SNN Rule {#sec:polgrad ftssnn} ====================================================== In this section, we propose an on-policy learning algorithm for RL that uses a first-to-spike SNN as a stochastic policy. Although the approach can be generalized, we focus here on the fully connected two-layer SNN shown in Fig. \[fig:optimal policy\](a). In the SNN, the first layer of $N_x$ neurons encodes the current state of the agent $S_t$, as detailed below, while there is one output GLM neuron for every possible action $A_t$ of the agent, with $N_s=N_x$ inputs each. For example, in the grid world of Fig. \[fig:optimal policy\](b), there are four output neurons. The policy $\pi(A|S,\theta)$ is parameterized by the vector $\theta$ of synaptic weights $\{w_{i,j}\}$ and biases $\{b_j\}$ for all the output neurons as defined in (\[eq:pot\]). We now describe the encoding, decoding, and learning rules. **Encoding.** A position $S_t$ is encoded into $N_x$ spike trains, i.e., binary sequences, of duration $T$ samples, each of which is assigned to one of the neurons in the input layer of the SNN. We emphasize that the time duration $T$ is internal to the operation of the SNN, and the agent remains at time-step $t$ while waiting for the outcome of the SNN. In order to explore the trade-off between encoding complexity and accuracy, we partition the grid into $N_x$ sections, or windows, each of size $W\times W$. Each section is encoded by one input neuron, so that increasing $W$ yields a smaller SNN at the cost of a reduced resolution of state encoding.
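One plausible way to index the $W\times W$ windows just described is sketched below (left-to-right, top-to-bottom ordering; this helper and its names are our own illustration, not the paper's code):

```python
def section_indices(r, c, n_cols, W):
    """Map a zero-based grid position (row r, column c) to a one-based
    section index s and a one-based within-section index w, ordering
    both sections and in-section cells left-to-right, top-to-bottom."""
    sec_row, in_row = divmod(r, W)
    sec_col, in_col = divmod(c, W)
    n_sec_cols = -(-n_cols // W)          # ceil(n_cols / W) sections per row
    s = sec_row * n_sec_cols + sec_col + 1
    w = in_row * W + in_col + 1
    return s, w
```

For example, on a grid with 4 columns and $W=2$, position (0, 0) falls in section 1 at in-section position 1, while position (3, 3) falls in section 4 at in-section position 4.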
Each position $S_t$ on the grid can be described in terms of the index $s(S_t)\in\lbrace 1,...,N_x \rbrace$ of the section it belongs to, and the index $w(S_t)\in \lbrace 1,...,W^2 \rbrace$ indicating the location within the section using left-to-right and top-to-bottom ordering. Accordingly, using rate encoding, the input to the $i$th neuron is an i.i.d. spike train with probability of a spike given by $$\begin{aligned} \label{eq:rateencode} p_i= \begin{cases} p_{\text{min}}+\left(\frac{p_{\text{max}}-p_{\text{min}}}{W^2-1}\right)\left(w(S_t)-1\right),& \text{if } i=s(S_t)\\ 0, & \text{otherwise} \end{cases} \end{aligned}$$for given parameters $p_{\text{min}},\ p_{\text{max}}\in[0,1]$ with $p_{\text{max}}\geq p_{\text{min}}$. **Decoding.** We adopt a first-to-spike decoding protocol, so that the output of the SNN directly acts as a stochastic policy, inherently sampling from the distribution $\pi(A|S,\theta)$ induced by the first-to-spike rule. If no output neuron spikes during the input presentation time $T$, no action is taken, while if multiple output neurons spike concurrently, an action is chosen from among them at random. Given the synaptic weights and biases in vector $\theta$, the probability that the $j$th output neuron spikes first, and thus the probability that the network chooses action $A=j$, is given as $\text{Pr}(A=j)=\sum_{\tau=1}^{T} p_{\tau}(j)$, where $$\begin{aligned} p_{\tau}(j)=\prod_{k\neq j}\prod_{\tau'=1}^{\tau}(1-\sigma(u_{k,\tau'}))\sigma(u_{j,\tau})\prod_{\tau'=1}^{\tau-1}(1-\sigma(u_{j,\tau'})) \label{eq:p_t}\end{aligned}$$ is the probability that the $j$th output neuron spikes first at time $\tau$, while the other neurons do not spike up to and including time $\tau$. **Policy-gradient learning.** After an episode is generated by following the first-to-spike policy, the parameters $\theta$ are updated using the policy gradient method [@sutton1998reinforcement].
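The rate encoding in (\[eq:rateencode\]) and the first-to-spike probabilities $\text{Pr}(A=j)$ in (\[eq:p_t\]) can be sketched numerically as follows. This is an illustrative implementation with names of our own choosing; it assumes $W\geq 2$ so that the denominator $W^2-1$ is nonzero, and it takes the membrane potentials as given inputs rather than computing them from trained kernels:

```python
import numpy as np

def rate_encode(s_idx, w_idx, n_x, W, T, p_min=0.5, p_max=1.0, rng=None):
    """i.i.d. Bernoulli spike trains following (eq:rateencode).
    s_idx: zero-based index of the active section; w_idx: one-based
    within-section index; returns an (n_x, T) binary array."""
    if rng is None:
        rng = np.random.default_rng(0)
    p = np.zeros(n_x)
    p[s_idx] = p_min + (p_max - p_min) / (W**2 - 1) * (w_idx - 1)
    return (rng.random((n_x, T)) < p[:, None]).astype(int)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def first_to_spike_probs(U):
    """Pr(A=j) = sum_tau p_tau(j) per (eq:p_t), for a (J, T) array U of
    membrane potentials u_{j,tau}. Residual probability mass corresponds
    to ties or to no neuron spiking within the presentation time T."""
    S = sigmoid(U)                   # per-step spike probabilities
    Q = np.cumprod(1.0 - S, axis=1)  # silence of each neuron through tau
    J, T = U.shape
    probs = np.zeros(J)
    for j in range(J):
        for tau in range(T):
            others_silent = np.prod([Q[k, tau] for k in range(J) if k != j])
            own_silent = Q[j, tau - 1] if tau > 0 else 1.0
            probs[j] += others_silent * S[j, tau] * own_silent
    return probs
```

For two output neurons with zero potentials over a single time sample, each spikes alone with probability $0.5\times 0.5=0.25$, and the remaining mass covers the tie and no-spike events.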
The gradient of the objective function (\[eq:value\]) equals $$\begin{aligned} \label{eq:stochgrad} \nabla_{\theta}V_{\pi}(S_0)= \text{E}_{\pi}[ V_t\nabla_{\theta}\log\pi(A_t|S_t,\theta)],\end{aligned}$$ where $V_t=\sum_{t'=t}^{\infty} \gamma^{t'} R_{t'}$ is the discounted return from the current time-step until the end of the episode and the expectation is taken with respect to the distribution of states and actions under policy $\pi$ (see \[19, Ch. 13\]). The gradient in (\[eq:stochgrad\]) can be computed as [@bagheri2017training] $$\begin{aligned} \label{eq:wgrad} \nabla_{w_{i,k}} \log\pi_{\theta}(A_t=j|S_t,\theta)= \begin{cases} -\sum_{\tau=1}^{T}h_{\tau}\sigma(u_{k,\tau})B^{T}x_{i,\tau-\tau_s:\tau-1} & k\neq j \\ -\sum_{\tau=1}^{T}(h_{\tau}\sigma(u_{j,\tau})-q_{\tau})B^{T}x_{i,\tau-\tau_s:\tau-1} & k = j, \end{cases}\end{aligned}$$ and $$\begin{aligned} \label{eq:bgrad} \nabla_{b_{k}}\log\pi_{\theta}(A_t=j|S_t,\theta)= \begin{cases} -\sum_{\tau=1}^{T}h_{\tau}\sigma(u_{k,\tau}) & k\neq j \\ -\sum_{\tau=1}^{T}\left(h_{\tau}\sigma(u_{j,\tau})-q_{\tau}\right) & k = j \end{cases}\end{aligned}$$ where $$\begin{aligned} h_{\tau} = \sum_{\tau'=\tau}^{T}q_{\tau'},\; \text{and}\; q_{\tau}=\dfrac{p_{\tau}}{\sum_{\tau'=1}^{T}p_{\tau'}}.\end{aligned}$$ The first-to-spike policy gradient algorithm is summarized in Algorithm \[alg:ftsreinforce\], where the gradient (\[eq:stochgrad\]) is approximated using Monte-Carlo sampling in each episode \[19, Ch. 5\]. Baseline SNN Solution {#sec:baselineSNN} ===================== As a baseline solution, we consider the standard approach of converting an offline trained ANN into a deterministic IF SNN. Conversion aims at ensuring that the output spiking rates of the neurons in the SNN are proportional to the numerical values output by the corresponding neurons in the ANN [@diehl2015fast].
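The scalar quantities entering the update above can be sketched as follows (illustrative helper names of our own; a full update would additionally require the membrane potentials and input spike histories):

```python
import numpy as np

def discounted_returns(rewards, gamma):
    """Backward recursion G_t = R_t + gamma * G_{t+1}; here the discount
    is taken relative to step t, whereas the text's expression for V_t
    discounts from the episode start."""
    G, out = 0.0, [0.0] * len(rewards)
    for t in range(len(rewards) - 1, -1, -1):
        G = rewards[t] + gamma * G
        out[t] = G
    return out

def credit_weights(p):
    """q_tau and h_tau as defined above: q normalizes p_tau(j) over the
    presentation window, and h_tau is the tail sum of q from tau to T."""
    p = np.asarray(p, dtype=float)
    q = p / p.sum()
    h = np.cumsum(q[::-1])[::-1]
    return q, h
```

For instance, an episode with rewards $[0, 0, 1]$ and $\gamma=0.5$ yields per-step returns $[0.25, 0.5, 1]$, and $p=[1,1,2]$ yields $q=[0.25,0.25,0.5]$ with tail sums $h=[1,0.75,0.5]$.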
**IF neuron.** The spiking behavior of an IF neuron is determined by an internal membrane potential defined as in (\[eq:pot\]) with the key differences that: (*i*) the synaptic kernels are perfect integrators, that is, they are written as $\alpha_{i,j}=w_{i,j}1$, where $w_{i,j}$ is a trainable synaptic weight and $1$ is an all-one vector of $T$ elements; and (*ii*) the neuron spikes deterministically when the membrane potential is positive, so that parameter $b_j$ plays the role of a negative threshold. **Training of the ANN and Conversion into an IF SNN.** A two-layer ANN with four ReLU output units is trained by using the SARSA learning rule with $\epsilon$-greedy action selection in order to approximate the action-value function of the optimal policy [@sutton1998reinforcement]. The input to each neuron $i$ in the first layer of the ANN during training is given by the probability value, or spiking rate, $p_i$ defined in (\[eq:rateencode\]), which encodes the environment state. Each output neuron of the ANN encodes the estimated value, i.e., the estimated long-term discounted average reward, of one of the four actions for the given input state, and the action with the maximum value is chosen (under $\epsilon$-greedy action selection) with probability $1-\epsilon$. The ANN can then be directly converted into a two-layer IF SNN with the same architecture using the state-of-the-art methods proposed in [@diehl2015fast], to which we refer for details. The converted SNN is used by means of rate decoding: the number of spikes output by each neuron in the second layer is used as a measure of the value of the corresponding action. We emphasize that, unlike in the proposed solution, the resulting (deterministic) IF SNN does not provide a random policy but rather a deterministic action-value function approximator. Results and Discussion {#sec:results} ====================== In this section, we provide numerical results for the grid world example described in Sec.
\[sec:prob def\] with $M=7$, $N=10$, $S_0$ and $S^{\text{G}}$ at positions (4,1) and (4,8) on the grid respectively, ‘wind’ level per column defined by the values $\omega_n$ indicated in Fig. \[fig:optimal policy\](b), and $K_s=\tau_s$ for all simulations. Throughout, we set $p_\text{min}=0.5$ and $p_\text{max}=1$ for encoding in the spike domain, and a learning-rate schedule $\eta_{i} = \eta_{i-1}/(1-k(i-1))$ with $\eta_0=10^{-2}$. Training is done for 25 epochs of 1000 iterations each, with 500 test episodes to evaluate the performance of the policy after each epoch. Hyper-parameters for the SARSA algorithm described in the previous section are selected as recommended in [@mnih2013playing; @lin1993reinforcement]. Apart from the IF SNN solution described in the previous section, we also use as a reference the performance of an ANN trained using the same policy gradient approach as in Algorithm 1 and having the same two-layer architecture as the proposed SNN. In particular, the input to each input neuron $i$ of the ANN is again given by the probability $p_i$ defined in (\[eq:rateencode\]), while the output is given by a softmax non-linearity. The output hence encodes the probability of selecting each action. It is noted that, despite having the same architecture, the ANN has fewer parameters than the proposed first-to-spike SNN: while the SNN has $K_s$ parameters for each synapse given its capability to carry out temporal processing, the ANN has conventionally a single synaptic weight per synapse. This reference method is labeled as “ANN" in the figures. We start by considering the convergence of the learning process along the training episodes in terms of the number of time-steps to reach the goal state. To this end, in Fig.
\[fig:res\], we plot the performance, averaged over the 25 training epochs, of the first-to-spike SNN policy with different values of input presentation duration $T$ and GLM parameters $K_s=\tau_s=4$, as well as that of the reference ANN introduced above, both using encoding window size $W=1$, and hence $N_x=70$ input neurons. We do not show the performance of the IF SNN since this solution carries out offline learning (see Sec. IV). The probabilistic SNN policy is seen to learn more quickly how to reach the goal point in fewer time-steps as $T$ is increased. This improvement stems from the proportional increase in the number of input spikes that can be processed by the SNN, enhancing the accuracy of input encoding. It is also interesting to observe that the ANN strategy is outperformed by the first-to-spike SNN policy. As discussed, this is due to the capability of the SNN to learn synaptic kernels via its additional weights. We further investigate the behavior of the first-to-spike SNN during training in Fig. \[fig:spikesperchoice\], which plots the spike frequency as a function of the training episodes. The initially very low spike frequency can be interpreted as an exploration phase, where the network makes mostly random action choices by largely neglecting the input spikes. The spike frequency then increases as the SNN learns while exploring effective actions dictated by the first-to-spike rule. Finally, after the first one hundred episodes, the SNN learns to exploit optimal actions, hence reducing the number of observed spikes necessary to fire the neuron corresponding to the optimized action. We now turn to the performance evaluated after training. Here we consider also the performance of the conventional IF SNN trained offline as described in Sec. IV. We first analyze the impact of using coarser state encodings as defined by the encoding window size $W$. Considering only test episodes, Fig. 
\[fig:perfVspikef\] plots the number of time-steps to reach the goal (top) and the total number of spikes per episode across the network (bottom), as a function of the number of input neurons, or equivalently of $W$. For all schemes, it is seen that, as long as the window size is no larger than $W=4$ and $T$ is large enough for the SNN-based strategies, no significant increase of time-steps to reach the goal is incurred. Importantly, the IF SNN is observed to require $10\times$ the presentation time and more than $10\times$ the number of spikes per episode of the first-to-spike SNN to achieve the same performance. The test performance comparison between first-to-spike SNN and IF SNN is further studied in Fig. \[fig:perfVT\], which varies the presentation time $T$. In order to discount the advantages of the first-to-spike SNN due to its larger number of synaptic parameters, we set here $K_s=1$, thus reducing the number of synaptic parameters to 1 as for the IF SNN. Fig. \[fig:perfVT\] shows that the gains of the proposed policy are to be largely ascribed to its decision rule learned based on first-to-spike decoding. In contrast, the IF SNN uses conventional rate decoding, which requires a larger value of $T$ in order to obtain a sufficiently good estimate of the value of each state via the spiking rates of the corresponding output neurons. Conclusions =========== This paper has proposed a policy gradient-based online learning strategy for a first-to-spike spiking neural network (SNN). As compared to a conventional approach based on offline learning and conversion of a second generation artificial neural network (ANN) to an integrate-and-fire (IF) SNN with rate decoding, the proposed approach was seen to yield a reduction in presentation time and number of spikes by more than $10\times$ in a standard windy grid world example. 
Thanks to the larger number of trainable parameters associated with each synapse, which enables optimization of the synaptic kernels, performance gains were also observed with respect to a conventional ANN with the same architecture that was trained online using policy gradient. [^1]: O. Simeone is on leave from NJIT [^2]: This research was supported in part by the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Program (Grant Agreement No. 725731) and the U.S. National Science Foundation through grants 1525629 and 1710009.
--- abstract: 'The scattering of random surface gravity waves by topography of small amplitude, and horizontal scales of the order of the wavelength, is investigated theoretically in the presence of an almost uniform irrotational current. This problem is relevant to ocean wave propagation on shallow continental shelves where tidal currents are often significant. Defining the small scale bottom amplitude normalized by the mean water depth, $\eta=h/H$, a perturbation expansion of the wave action to order $\eta^2$ yields an evolution equation for the wave action spectrum. Based on numerical calculations for sinusoidal bars, a mixed surface-bottom bispectrum, that arises at order $\eta$, is unlikely to be significant in most oceanic conditions. Neglecting that term, the present theory yields a closed equation with a scattering source term that gives the rate of exchange of action between spectral wave components that have the same absolute frequency. This source term is proportional to the bottom elevation variance at the resonant wavenumbers, and thus represents a Bragg scattering approximation. With a current, the source term formally combines a direct effect of the bottom topography with an indirect effect of the bottom through the modulation of the surface current and mean surface elevation. For Froude numbers of the order of 0.6 or less, the bottom topography effects dominate. For all Froude numbers, the reflection coefficients for the wave amplitudes that are inferred from the source term are asymptotically identical, as $\eta$ goes to zero, to previous theoretical results for monochromatic waves propagating in one dimension over sinusoidal bars. In particular, the frequency of the waves that experience the maximum reflection is shifted by the current, as the surface wavenumber $k$ changes for a fixed absolute frequency.
Over sandy continental shelves, tidal currents are known to generate sandwaves with scales comparable to those of surface waves, with bottom elevation spectra that roll off sharply at high wavenumbers. Application of the theory to such a real topography suggests that scattering mainly results in a broadening of the directional wave spectrum, i.e. forward scattering, while back-scattering is generally weaker. The current may strongly influence surface gravity wave scattering by selecting different bottom scales, with widely different spectral densities due to the sharp bottom spectrum roll-off.' bibliography: - '../references/wave.bib' date: '8 September 2005 (draft)' nocite: - '[@Heathershaw1982]' - '[@Mei1985]' - '[@Kirby1986a]' - '[@Hasselmann1966]' - '[@Rey1992]' - '[@Rayleigh1896]' - '[@Kirby1988]' - '[@Hasselmann1962]' - '[@Hasselmann1962]' - '[@Janssen2004]' - '[@Priestley1981]' - '[@Longuet-Higgins1950]' - '[@Phillips1977]' - '[@Longuet-Higgins1967]' - '[@Hasselmann1962]' - '[@Rey1992]' - '[@Weber1991b]' - '[@Tolman1991b; @Tolman2002c]' - '[@Magne2005]' - '[@Hasselmann1962]' --- Introduction ============ Following the early observations of Heathershaw (1982), a considerable body of knowledge has been accumulated on the scattering of small amplitude surface gravity waves by periodic bottom topography. An asymptotic theory for small bottom amplitudes, that reproduces the observed scattering of monochromatic waves over a few sinusoidal bars, was put forward by Mei (1985), leading to practical phase-resolving equations that may be used to model this phenomenon for more general bottom shapes (Kirby 1986). For sinusoidal bottoms of wavenumber $l$, Mei (1985) proposed an approximate analytical solution.
In two dimensions (one horizontal and the vertical) this solution yields simple expressions for the wave amplitude reflection coefficient $R$, as a function of the mismatch between the wavenumber of the surface waves $k$ and the resonant value $l/2$, for which $R$ is maximum due to Bragg resonance. Beyond a cut-off value of that mismatch, it was found that the incident and reflected wave amplitudes oscillate in space instead of decreasing monotonically from the incident region. In three dimensions the Bragg resonance condition becomes ${{\boldsymbol k}}= {\boldsymbol{l}}+ {{\boldsymbol k}}'$ and $\omega=\omega'$, with $\omega$ and $\omega'$ the wave radian frequencies corresponding to the wavenumber vectors ${{\boldsymbol k}}$ and ${\boldsymbol{k^{\prime}}}$ through the linear dispersion relation. Other contributions have shown that higher-order theories are necessary to represent the sub-harmonic resonance observed over a bottom that is a superposition of two components of different wavelengths (Guazzelli, Rey & Belzons 1992). Such sub-harmonic resonance was found to have as large an effect as the lowest order resonance for bottom amplitudes of only 25% of the water depth, due to a generally stronger reflection for relatively longer waves. However, these amplitude evolution equations are still prohibitively expensive for investigating the propagation of random waves over distances larger than about 100 wavelengths, and the details of the bottom are typically not available over large areas. Besides, a consistent phase-averaged wave action evolution equation is also necessary for the investigation of the long waves associated with short wave groups (Hara & Mei 1987). The large scale behaviour of the wave field may rather be represented by the evolution of the wave action spectrum assuming random phases. Such an approach was already proposed by Hasselmann (1966) and Elter & Molyneux (1972) for the calculation of wind-wave and tsunami propagation.
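The three-dimensional Bragg resonance condition ${{\boldsymbol k}}= {\boldsymbol{l}}+ {{\boldsymbol k}}'$, $\omega=\omega'$ (no current) can be checked numerically. The following sketch is our own illustrative helper, not code from the papers cited; it evaluates the frequency mismatch between an incident component and the component scattered by one bottom wavenumber:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def sigma(k, H):
    """Intrinsic radian frequency from the linear dispersion relation
    sigma^2 = g k tanh(kH)."""
    return np.sqrt(G * k * np.tanh(k * H))

def bragg_mismatch(k_vec, l_vec, H):
    """Frequency mismatch for scattering of an incident component k by a
    bottom component l into k' = k - l; Bragg resonance corresponds to
    zero mismatch, i.e. |k| = |k'|."""
    k_vec = np.asarray(k_vec, dtype=float)
    kp = k_vec - np.asarray(l_vec, dtype=float)
    return sigma(np.linalg.norm(k_vec), H) - sigma(np.linalg.norm(kp), H)
```

For normal incidence in one horizontal dimension, back-scattering (${{\boldsymbol k}}'=-{{\boldsymbol k}}$) is resonant when $l=2k$, recovering the $l/2$ condition quoted above.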
A proper theory for the evolution of the wave spectrum can be obtained from a solvability condition, a method similar to that of Mei (1985) and Kirby (1988), but applied to the action spectral densities instead of the amplitudes of monochromatic waves. In the absence of currents the correct form of that equation was first obtained by Ardhuin & Herbers (2002, hereinafter referred to as AH) using a two scale approach. They decomposed the water depth $H-h$ in a slowly varying depth $H$, that causes shoaling and refraction, and a rapidly varying perturbation $h$ with zero mean, that causes scattering. This equation is formally similar to general transport equations for waves in random media (e.g. Ryzhik, Papanicolaou & Keller 1996), although the waves considered here propagate only in the two horizontal dimensions. The resulting scattering was shown to be consistent with the dramatic increase of the directional width of the wave spectra observed on the North Carolina continental shelf (Ardhuin  2003a, 2003b). Recently, Magne  (2005, hereinafter referred to as MAHR) showed that AH’s theory gives the same damping of incident waves as the Green function solution of Pihl, Mei & Hancock (2002), applied to any two dimensional topography, random or not. Investigating the applicability limits of the scattering term of AH, MAHR also performed numerical calculations, comparing AH’s theory to the accurate matched-boundary model of Rey (1992) that uses a decomposition of the bottom in a series of steps, including evanescent modes. The numerical results show that AH’s theory is generally limited by the relative bottom amplitude $\eta=\max(h)/H$ rather than the bottom slope. In particular, AH’s theory predicts accurate reflections, with a relative error of order $\eta$, even for isolated steps that have an infinite slope (MAHR). 
The resulting expression of the Bragg scattering term is consistent with results for scattering of acoustic and electromagnetic waves obtained by the small perturbation method, valid in the limit of small $k \max(h)$ where $k$ is the wavenumber of the propagating waves (Rayleigh 1896, see Elfouhaily & Guerin 2004 for a review of this and other approximations). Since there is no scattering for $kH \gg 1$, as the waves do not ‘feel’ the bottom, the small parameter $\eta=\max(h)/H$ may be used in the context of surface gravity waves, instead of the more general $k \max(h)$. For $\eta \ll 1$, the scattering strength is thus entirely determined by the bottom elevation variance spectrum at the bottom scales resonant with the incident waves. Based on these results, Mei’s (1985) theory should yield the same reflection coefficient as AH’s theory in the limit of small bottom amplitudes. Yet, AH predict that the wave amplitude in 2D would decay monotonically, which is not compatible with the oscillatory nature of Mei’s theory for large detunings from resonance. Further, outside of the surf zone and the associated multiple bar systems, the application of AH’s theory is most relevant in areas where the bottom topography changes significantly on the scale of the wavelengths of swells. This often corresponds, over sand, to the presence of sandwaves. These sandwaves are generated by currents, and particularly by tidal currents (e.g. Dalrymple, Knight & Lambiase 1978; Idier, Erhold & Garlan 2002). It is thus logical to include the effects of currents in any theory for wave scattering over a random bottom. Kirby (1988) developed such a theory for monochromatic waves over a sinusoidal bottom and a slowly varying mean current, extending Mei’s (1985) work.
The geometry of the resonant wavenumbers is modified in that case, with incident and reflected waves having the same absolute frequency, but different wavenumber magnitudes if incident and reflected waves propagate at different angles relative to the current direction. Kirby (1988) also considered the short scale fluctuations of the current, due to the sinusoidal bottom, that may be interpreted as a separate scattering mechanism, and generalized further to any irrotational current fluctuations, leading to results similar to those obtained for gravity-capillary waves by Bal & Chou (2002). The present paper thus deals with these two questions. An extension of AH’s theory for surface gravity wave scattering in the presence of irrotational currents with uniform mean velocities is provided in  § 2, and the differences between this theory and those of Mei (1985) and Kirby (1988) are discussed in detail in  § 3. Expected oceanographic effects of scattering in the presence of a current are investigated in  § 4, using a spectral phase-averaged numerical model, predicting the evolution of the wave action spectrum, and detailed measurements of the topography in the southern North Sea. Conclusions follow in  § 5. Theory ====== General formulation ------------------- The variation in the action spectral density due to wave-bottom scattering is derived following the method of AH, now including the effect of a uniform mean current. The method is identical to that of Kirby (1988) with the difference that an equation for the spectral wave action is sought instead of one for the wave amplitudes. Thus intermediate results are identical to those of Kirby (1988). Since the wave action is a quadratic function in the wave amplitude, we will naturally consider the wave potential up to second order in the normalized bottom amplitude $\eta$, in order to have all wave action terms to order $\eta^2$. The only important terms in this type of calculation are the ‘secular terms’, i.e.
the harmonic oscillator solutions for the wave potential forced at resonance, with an amplitude that grows unbounded in time. We shall thus obtain a rate of change of the action from the equality of all the secular terms. The particularity of the random wave approach is also that we will consider all possible couplings between wave components, and not just two wave trains. With random waves, secularity is limited to a sub-space of the wavenumber plane that generally has a zero measure. Thus the near-resonant terms, once integrated across the resonant singularity, are the ones that provide the secular terms for random waves. This integration assumes that the spectral properties are continuous, a real theoretical problem for nonlinear wave-wave interactions (e.g. Benney & Saffmann 1966, Onorato  2004). Here we shall see that the only relevant condition is that the bottom spectrum be continuous, at least in one dimension. This is obviously satisfied by any real topography, since a truly infinite sinusoidal bottom of wavelength $L$, with an infinite spectral density at the wavenumber $2 {\pi}/ L$, is not to be found, even in the laboratory. We consider weakly nonlinear random waves propagating over an irregular bottom with a constant mean depth $H$ and mean current ${\mathbf U}$, and random topography $h({{\boldsymbol x}})$, with ${{\boldsymbol x}}$ the horizontal position vector, so that the bottom elevation is given by $z=-H+h({{\boldsymbol x}})$ where $z$ is the elevation relative to the still water level. The bottom undulations cause a stationary random small-scale current fluctuation $({\mathbf u}({{\boldsymbol x}},z),w({{\boldsymbol x}},z))$ deriving from a potential $\phi_c$. The free surface is at $z=\zeta({{\boldsymbol x}},t)$. Extension to mean current and mean depth variations on a large scale follows from a standard two-scale approximation, identical to that of Kirby (1988).
This is not included in the present derivation for the sake of simplicity. The maximum surface slope is characterized by $\varepsilon$ and we shall assume that $\varepsilon^3 \ll \eta^2$ so that the bottom scattering contributions to the wave action to order $\eta^2$ are much larger than the resonant non-linear four wave interactions (Hasselmann 1962) that shall be neglected. Such interactions could also be allowed in the present calculation, providing an additional source of scattering with the known form due to cubic non-linearities. For shallow water waves ($kH \ll 1$) a stricter inequality is needed to prevent triad wave-wave interactions from entering the action evolution equation at the same order as bottom scattering. ![Definition sketch of the mean water depth $H$, and relative bottom elevation $h$, for one particular case of a sinusoidal bottom investigated in  § 3.[]{data-label="hetH"}](bottomdef.eps){width="\textwidth"} The solution is obtained in a frame of reference moving with the mean current vector ${\boldsymbol{U}}$, which has the advantage of removing the convective terms due to the mean current velocity. The corresponding transformation of the horizontal coordinates is ${{\boldsymbol x}}'={{\boldsymbol x}}+{\boldsymbol{U}}t$, where ${{\boldsymbol x}}$ and ${{\boldsymbol x}}'$ are the coordinates in the moving and fixed frames, respectively. As a result of this transformation, the bottom is moving, and the bottom boundary condition for the velocity potential is modified. The governing equations consist of Laplace’s equation for the velocity potential, which includes both wave and current motions, the bottom kinematic boundary conditions, and Bernoulli’s equation with the free surface kinematic boundary condition.
Assuming that the atmospheric pressure is zero for simplicity, and neglecting surface tension, one has $$\begin{aligned} {\boldsymbol{\nabla}}^{2} \phi +\frac{\partial ^{2}\phi}{\partial z^{2}} & = & 0\quad \mbox{for} \quad -H+h \leq z \leq \zeta, \label{laplace} \\ \frac{\partial \phi}{\partial z} &= &\frac{\partial h}{\partial t}+{\boldsymbol{\nabla}}\phi \cdot {\boldsymbol{\nabla}}h \quad \mbox{at} \quad z=-H+h, \label{fond}\\ \frac{\partial{\phi}}{\partial{t}}+g\zeta& = & -\frac{1}{2} \left|{\boldsymbol{\nabla}}\phi \right|^2 - \frac{1}{2} \left(\frac{\partial\phi}{\partial z}\right)^2 +c (t) \quad \mbox{at} \quad z=\zeta. \label{Bernoulli} \\ \frac{\partial\phi}{\partial z}&=& \frac{\partial{\zeta}}{\partial{t}}+ {\boldsymbol{\nabla}}\phi \cdot {\boldsymbol{\nabla}}\zeta \quad \mbox{at} \quad z=\zeta, \label{surf_cin}\end{aligned}$$ with $c(t)$ a function of time only, to be determined. The symbol ${\boldsymbol{\nabla}}$ represents the usual gradient operator restricted to the two horizontal dimensions. The latter two equations may be combined to remove the linear part in $\zeta$. Taking $\partial (\ref{Bernoulli})/\partial t $ +$g$(\[surf\_cin\]), yields, $$\begin{aligned} \frac{\partial^2{\phi}}{\partial{t^2}}+g\frac{\partial\phi}{\partial z}= g{\boldsymbol{\nabla}}\phi \cdot {\boldsymbol{\nabla}}\zeta - \frac{\partial \zeta}{\partial t}\frac{\partial^2 \phi}{\partial z \partial t}- \left(1+\frac{\partial \zeta}{\partial t}\frac{\partial }{\partial z}\right) \left[{\boldsymbol{\nabla}}\phi \cdot \frac{\partial{{\boldsymbol{\nabla}}\phi}}{\partial t} + \frac{\partial\phi}{\partial z}\frac{\partial^2\phi}{\partial t\partial z}\right]&+&c'(t), \nonumber \\ \quad \mbox{at} \quad z&=&\zeta. \label{comb}\end{aligned}$$ Following Hasselmann (1962), we approximate $h$ and $\phi$ with discrete sums over Fourier components, and take the limit to continuous integrals after deriving expressions for the evolution of the phase-averaged wave action. 
We look for a velocity potential solution in the form $$\label{phi} \phi({{\boldsymbol x}},z,t)=\sum_{{{\boldsymbol k}},s} \widehat{\Phi}^s_{{{\boldsymbol k}}}(z,\gamma t){\mathrm{e}}^{{\mathrm{i}}[{{\boldsymbol k}}{\boldsymbol{\cdot}}{{\boldsymbol x}}-s\sigma t]} = \sum_{{{\boldsymbol k}},s} \Phi^s_{{{\boldsymbol k}}}(t)\frac{\cosh\left[k(z+H)\right]}{\cosh(kH)}{\mathrm{e}}^{{\mathrm{i}}{{\boldsymbol k}}{\boldsymbol{\cdot}}{{\boldsymbol x}}}+\ldots,\label{Spectralplus}$$ where $\sigma$ is the radian frequency in the moving frame, ${{\boldsymbol k}}$ is the surface wavenumber, with magnitude $k$, and $s$ is a sign index equal to 1 or $-1$. In the moving frame of reference, $s=1$ for wave components that propagate in the direction of the vector ${{\boldsymbol k}}$, and $s=-1$ for components that propagate in the opposite direction. Thus the radian frequency in the fixed frame is $\omega=\sigma+s {{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{U}}$. The amplitudes $\widehat{\Phi}^s_{{{\boldsymbol k}}}$ are slowly modulated in time, with a slowness defined by the small parameter $\gamma$. Because $\phi$ is a real quantity we also have $\overline{\widehat{\Phi}^s_{{\boldsymbol k}}}=\widehat{\Phi}^{-s}_{-{{\boldsymbol k}}}$, where the overbar denotes the complex conjugate. Thus the double decomposition made in (\[Spectralplus\]) in wavenumber ${{{\boldsymbol k}}}$ and propagation direction $+$ or $-$ replaces a more general decomposition in wavenumber and frequency that would be necessary if nonlinear dispersive effects were included. Here the frequency $\sigma$ is always related to $k$ via the linear dispersion relation. In the alternative decomposition with amplitudes $\Phi^s_{{{\boldsymbol k}}}$ that contain the fast time variation, only the part of the solution that has the vertical structure of Airy waves has been given explicitly. The other part, represented by ‘$\ldots$’, will be found to be negligible for small bottom amplitudes. 
Our rather archaic use of the $s$ index to distinguish the wave propagation direction is preferred here to the more modern use of the Hamiltonian variables that combine elevation and potential at the free surface, widely used for wave-wave interaction studies (e.g. Janssen 2004). The complexity of the Hamiltonian variables appears unnecessary for the linear waves considered here. The bottom boundary condition and wave potential are expanded in powers of $\eta=\max(h)/H$, $$\phi=\phi_0+ \phi_1+\phi_2+ \ldots, \label{Taylor}$$ where each term $\phi_i$ is of order $\eta^i$. The boundary conditions (\[comb\]) and (\[fond\]) are expressed at $z=0$ and $z=-H$, respectively, using Taylor series of $\phi$ about $z=0$ and $z=-H$. Unless stated otherwise, these potential amplitudes will be random variables. Since we are solving for $\phi$ while seeking an equation for the wave action $N$, we must relate $N$ to $\phi$. Accurate to second order in $\varepsilon$ and $\eta$ (see Andrews & McIntyre 1978 for the general expression of $N$) we have $N=E/\sigma$ for a monochromatic wave of surface elevation variance $E$ and intrinsic frequency $\sigma$, in which, following the common usage in non-accelerated reference frames, the gravity $g$ is left out, so that the action has units of meters squared times second. For general waves, the variance $E$ may be written as $$\begin{aligned} E(t)&=&\left<\left(\zeta_0+ \zeta_1+\zeta_2+ \ldots \right)^2\right> =\left<\zeta^2_0+ 2 \zeta_0 \zeta_1 + \left(\zeta^2_1 + 2 \zeta_0 \zeta_2\right) + \ldots \right>, \label{Eexp}\end{aligned}$$ where $\langle \cdot \rangle$ denotes an average over flow realizations, and $\zeta_i$ is the surface elevation solution of order $\eta^i$, and terms of like order in $\eta$ have been grouped. 
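The relation $N=E/\sigma$ may be illustrated numerically. The following sketch (in Python; the function names and parameter values are illustrative assumptions, not part of the derivation) computes the action of a monochromatic wave, obtaining $\sigma$ from the linear dispersion relation $\sigma^2=gk\tanh(kH)$ used throughout this section.

```python
import math

def intrinsic_frequency(k, H, g=9.81):
    """Positive root of the linear dispersion relation sigma^2 = g k tanh(kH)."""
    return math.sqrt(g * k * math.tanh(k * H))

def monochromatic_action(a, k, H, g=9.81):
    """Wave action N = E / sigma for amplitude a: E = <zeta^2> = a^2 / 2.

    With g left out of the action (as in the text), N has units of m^2 s.
    """
    E = 0.5 * a * a  # elevation variance of a * cos(k x - sigma t)
    return E / intrinsic_frequency(k, H, g)

# Illustrative values: 1 m amplitude, 100 m wavelength, 20 m mean depth.
k = 2 * math.pi / 100.0
N = monochromatic_action(1.0, k, 20.0)
```

In deep water ($kH \gg 1$) the helper reduces to $\sigma \approx \sqrt{gk}$, as expected from the dispersion relation.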
Each of these terms may be expanded in the form $$\left<\zeta^2_0\right>=\sum_{{{\boldsymbol k}},s} \left|Z^s_{0,{{\boldsymbol k}}}\right|^2 = 2 \sum_{{{\boldsymbol k}}} Z^+_{0,{{\boldsymbol k}}} Z^-_{0,-{{\boldsymbol k}}}.$$ For free wave components, the elevation amplitude is proportional to the velocity potential amplitude $$Z^s_{i,{{\boldsymbol k}}}={\mathrm{i}}s \sigma \Phi_{i,{{\boldsymbol k}}}^{s}/g \label{pot_to_elev}$$ so that the elevation co-variances are proportional to the co-variances $F^{\Phi}_{i,j,k}$ of the surface velocity potential, $$\label{Fijkphi} F^{\Phi}_{i,j,k}=\langle \Phi_{i,{{\boldsymbol k}}}^{+}\Phi_{j,-{{\boldsymbol k}}}^{-} +\Phi_{i,-{{\boldsymbol k}}}^{-}\Phi_{j,{{\boldsymbol k}}}^{+} \rangle.\label{covarPhi}$$ The contribution of the complex conjugate pairs of components (${{\boldsymbol k}},+$) and ($-{{\boldsymbol k}},-$) are combined in (\[Fijkphi\]) so that the covariance $F^{\Phi}_{i,j,k}$ corresponds to that of all waves with wavenumber magnitude $k$ propagating in the direction of ${{\boldsymbol k}}$. In the limit of small wavenumber separation, a continuous slowly-varying cross-spectrum can be defined (e.g. Priestley 1981, ch.11; see also AH), $$\label{cov2} F^\Phi_{i,j}({{\boldsymbol k}})=\lim _{|\Delta k| \rightarrow 0}\frac{F^{\Phi}_{i,j,k}}{\Delta k_x \Delta k_y}.$$ The definitions of all spectral densities are chosen so that the integral over the entire wavenumber plane yields the total covariance of $\phi_i$ and $\phi_j$. Finally, $N_{i,j}({{\boldsymbol k}})$ is defined as the $(i+j)^{\mathrm{th}}$ order depth-integrated wave action contribution from correlation between $i^{\mathrm{th}}$ and $j^{\mathrm{th}}$ order components with wavenumber $k$. 
From (\[Eexp\]) and (\[pot\_to\_elev\]) one has $$\label{nrj_ij} N_{i,j}({{\boldsymbol k}})=\frac{k}{g \sigma} F^\Phi_{i,j}({{\boldsymbol k}})\tanh(kH).\label{NfromPhi}$$ The spectral wave action is thus $$\label{nrj} N({{\boldsymbol k}})=\sum_{i=0}^{\infty} N_i ({{\boldsymbol k}}) = \sum_{i=0}^{\infty} \sum_{j=0}^i N_{i,i-j}\label{Action_exp} ({{\boldsymbol k}}).$$ Defining $G_{{\boldsymbol{l}}}$ as the amplitude of the Fourier component of wavenumber ${\boldsymbol{l}}$, the bottom elevation is given by $$h({{\boldsymbol x}},t)=\sum_{{\boldsymbol{l}}} G_{{\boldsymbol{l}}} {\mathrm{e}}^{{\mathrm{i}}{\boldsymbol{l}}{\boldsymbol{\cdot}}[{{\boldsymbol x}}+{\boldsymbol{U}}t]},$$ with a summation on the entire wavenumber plane. Because $h$ is real, $\overline{G_{-{\boldsymbol{l}}}}=G_{{\boldsymbol{l}}}$. The bottom elevation spectrum in discrete form is given by $F^G_{{\boldsymbol{l}}}=\langle G_{{\boldsymbol{l}}} G_{-{\boldsymbol{l}}} \rangle$ and in continuous form by $$F^B({\boldsymbol{l}})= \lim _{|\Delta l| \rightarrow 0} \frac{F^G_{{\boldsymbol{l}}}}{\Delta l_x \Delta l_y},$$ and satisfies $$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F^B({\boldsymbol{l}}){\mathrm{d}}l_x {\mathrm d}l_y= \lim_{L \rightarrow \infty} \frac{1}{L^2} \int_{-L/2}^{L/2} \int_{-L/2}^{L/2}h^2(x,y) {\mathrm{d}}x {\mathrm d}y.$$ Now that the scene is set, we shall solve for the velocity potential $\phi$ in the frame of reference moving with the mean current, and use (\[NfromPhi\]) to estimate the action spectral density at each successive order. In the course of this calculation, $\phi$ will appear as the sum of many terms, some of which are secular (these are the ‘resonant terms’ in Hasselmann’s terminology), i.e. with growing amplitudes in time. Most important among these are those that lead to resonant terms in $N$. All other terms are bounded in time and thus do not contribute to the long-term evolution of the wave spectrum, i.e. 
on the scale of several wave periods, and shall be neglected (see Hasselmann 1962). Zeroth-order solution --------------------- In the moving frame of reference, the governing equations for $\phi_0$ are identical to those in the fixed frame in the absence of current. The solution is thus $$\phi_0=\sum_{{{\boldsymbol k}},s}\frac{\cosh(k(z+H))}{\cosh(kH)}\Phi^s_{0,{{\boldsymbol k}}}{\mathrm{e}}^{{\mathrm{i}}[{{\boldsymbol k}}{\boldsymbol{\cdot}}{{\boldsymbol x}}-s\sigma t]},$$ where the intrinsic frequency $\sigma$ is the positive root of the linear dispersion relation, $$\label{reldispjfm} \sigma^2=gk\tanh(kH).$$ First-order solution -------------------- Surface non-linearity becomes relevant at first order due to a coupling between the zeroth order solution and current-induced first order terms. Including all powers of $\eta$, the expansion of the surface boundary condition to order $\varepsilon^2$ gives, at $z=0$, $$\begin{aligned} \frac{\partial^2 \phi}{\partial t^2}+ g\frac{\partial \phi}{\partial z}& =&- \zeta \frac{\partial^3 \phi}{\partial t^2 \partial z} - g \zeta \frac{\partial^2 \phi}{\partial z^2} -\frac{\partial \zeta}{\partial t}\frac{\partial^2 \phi}{\partial z \partial t} +{\boldsymbol{\nabla}}\phi {\boldsymbol{\cdot}}\left( g{\boldsymbol{\nabla}}\zeta - \frac{\partial {\boldsymbol{\nabla}}\phi}{\partial t}\right) - \frac{\partial \phi}{\partial z} \frac{\partial^2 \phi}{\partial t \partial z} \nonumber\\& & + c_1'(t) + O(\varepsilon^3)\label{surflin}\end{aligned}$$ The equations at order $\eta$ are $$\begin{aligned} {\boldsymbol{\nabla}}^{2}\phi_1+\frac{\partial ^{2}\phi_1}{\partial z^{2}} & = & 0 \quad {\mathrm{for}} \quad -H \leq z \leq 0,\label{laplace2} \\ \frac{\partial \phi_1}{\partial z} & = &-h\frac{\partial ^{2}\phi_0}{\partial z^{2}}+{\boldsymbol{\nabla}}\phi_0 \cdot {\boldsymbol{\nabla}}h +\frac{\partial h}{\partial t} \qquad {\mathrm{at}} \qquad z=-H,\label{fond2}\end{aligned}$$ and, at $z=0$, expansion of (\[surflin\]) to first order in 
$\eta$ yields, $$\begin{aligned} \frac{\partial ^{2}\phi_1}{\partial t^{2}}&+& g\frac{\partial \phi_1}{\partial z} = \stackrel{\mathrm{I}} {\overbrace{g\left( {\boldsymbol{\nabla}}\phi_0 {\boldsymbol{\cdot}}{\boldsymbol{\nabla}}\zeta_1 - \zeta_1 \frac{\partial^2 \phi_0}{\partial z^2} \right)}} + \stackrel{\mathrm{II}} {\overbrace{g\left({\boldsymbol{\nabla}}\phi_1 {\boldsymbol{\cdot}}{\boldsymbol{\nabla}}\zeta_0 - \zeta_0 \frac{\partial^2 \phi_1}{\partial z^2} \right)}} - \stackrel{\mathrm{III}} { {\boldsymbol{\nabla}}\phi_1 {\boldsymbol{\cdot}}\frac{\partial {\boldsymbol{\nabla}}\phi_0}{\partial t}} \nonumber \\ & & \stackrel{\mathrm{IV}} {- {\boldsymbol{\nabla}}\phi_0 {\boldsymbol{\cdot}}\frac{\partial {\boldsymbol{\nabla}}\phi_1}{\partial t}} \stackrel{\mathrm{V}} {- \left( \frac{\partial \phi_1}{\partial z} +\frac{\partial \zeta_1}{\partial t} \right) \frac{\partial^2 \phi_0}{\partial t \partial z}} \stackrel{\mathrm{VI}} {- 2 \frac{\partial \phi_0}{\partial z} \frac{\partial^2 \phi_1}{\partial t \partial z}} \stackrel{\mathrm{VII}} {- \zeta_1 \frac{\partial^3 \phi_0}{\partial t^2 \partial z}} - \stackrel{\mathrm{VIII}} { \zeta_0 \frac{\partial^3 \phi_1}{\partial t^2 \partial z}} + NL_1 \nonumber \\ \label{surf2}\end{aligned}$$ where the terms $NL_1$, not written explicitly (see Hasselmann 1962 eq. 1.11–1.12), are quadratic products of the zeroth-order solution. Since no gravity waves satisfy both $\sigma=\sigma_1 \pm \sigma_2$ and ${{\boldsymbol k}}={{\boldsymbol k}}_1 \pm {{\boldsymbol k}}_2$, $NL_1$ forces a non-resonant wave solution $\phi_1^{\rm nl}$ that will be neglected because it does not modify our second order wave action balance, thanks to the choice $\varepsilon < \eta$. The spatially uniform term $c_1'(t)$ has been incorporated into $NL_1$ and is also of second order in the wave slope, and does not lead to resonances. That term, omitted by Hasselmann (1962), is responsible for generating microseisms (e.g. Longuet-Higgins 1950). 
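In practical evaluations of the zeroth-order solution, the dispersion relation (\[reldispjfm\]) must be inverted numerically to obtain $k$ from $\sigma$ and $H$. The following minimal Newton iteration (in Python; a standard numerical helper of our own, not part of the derivation) does this:

```python
import math

def wavenumber(sigma, H, g=9.81, tol=1e-12, itmax=50):
    """Solve sigma^2 = g k tanh(kH) for the positive root k by Newton's method."""
    k = sigma * sigma / g  # deep-water first guess, k = sigma^2 / g
    for _ in range(itmax):
        t = math.tanh(k * H)
        f = g * k * t - sigma * sigma
        df = g * t + g * k * H * (1.0 - t * t)  # d/dk of g k tanh(kH)
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k
```

The deep-water guess $k=\sigma^2/g$ always underestimates the root, so the monotonically increasing residual makes the iteration converge in a few steps for any $\sigma>0$ and $H>0$.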
The first-order system of equations is non-linear due to the surface boundary condition (\[surf2\]). However, all the right hand side terms of (\[surf2\]) are of order $\varepsilon \eta \phi_0$, and thus negligible, provided that $\phi_1$ is of order $\eta \phi_0$. Without $\partial h/\partial t$ in (\[fond2\]) this would be the case, since the other forcing terms are all proportional to $\eta \phi_0$. However, as suggested by anonymous reviewers, $\partial h/\partial t$ introduces an external forcing. We thus first give the solution $(\phi_{1c},\zeta_{1c})$ forced by $\partial h/\partial t$ only, in the right hand side of (\[fond2\]). This solution is physically identical to the mean current perturbation caused by the bottom topography and given by Kirby (1988, his eq. 2.9) for a sinusoidal bottom. With a more general bottom, it is $$\phi_{1c} = {\mathrm{i}}\sum_{{\boldsymbol{l}}} {\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}}\frac{G_{{\boldsymbol{l}}}}{l \alpha_{{\boldsymbol{l}}}}\left\{\beta_{{\boldsymbol{l}}} \cosh \left[l(z+H)\right] + \alpha_{{\boldsymbol{l}}} \sinh \left[l(z+H)\right]\right\} {\mathrm{e}}^{{\mathrm{i}}{\boldsymbol{l}}{\boldsymbol{\cdot}}\left({{\boldsymbol x}}+{\boldsymbol{U}}t\right)},\label{phi1c}$$ where $$\alpha_{{\boldsymbol{l}}} = \frac{\left({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}}\right)^2}{gl} - \tanh(lH),$$ and $$\beta_{{\boldsymbol{l}}} = 1-\tanh(lH) \frac{\left({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}}\right)^2}{gl}.$$ The corresponding surface elevation oscillations, given by (\[Bernoulli\]), are second order in the Froude number ${\mbox{\textit{Fr}}}= U/(gH)^{1/2}$, and 180$^\circ$ out of phase with the bottom oscillations for slow currents when $\alpha_{{\boldsymbol{l}}}<0$ (Kirby 1988, eq. 
2.10), $$\zeta_{1c} = \sum_{{\boldsymbol{l}}} \frac{\left({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}}\right)^2 G_{{\boldsymbol{l}}}}{g \alpha_{{\boldsymbol{l}}} \cosh(lH)} {\mathrm{e}}^{{\mathrm{i}}{\boldsymbol{l}}{\boldsymbol{\cdot}}\left({{\boldsymbol x}}+{\boldsymbol{U}}t\right)}.$$ From (\[phi1c\]), the following expressions are derived, $$\phi_{1c}(z=0) = {\mathrm{i}}\sum_{{\boldsymbol{l}}} {\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}}\frac{G_{{\boldsymbol{l}}}}{l \alpha_{{\boldsymbol{l}}} \cosh(lH)}{\mathrm{e}}^{{\mathrm{i}}{\boldsymbol{l}}{\boldsymbol{\cdot}}\left({{\boldsymbol x}}+{\boldsymbol{U}}t\right)},$$ $$\frac{\partial \phi_{1c}}{\partial z}(z=0)= \frac{\partial \zeta_{1c}}{\partial t}= {\mathrm{i}}\sum_{{\boldsymbol{l}}} \left({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}}\right)^3 \frac{G_{{\boldsymbol{l}}}}{g \alpha_{{\boldsymbol{l}}} \cosh(lH)}{\mathrm{e}}^{{\mathrm{i}}{\boldsymbol{l}}{\boldsymbol{\cdot}}\left({{\boldsymbol x}}+{\boldsymbol{U}}t\right)}.$$ These expressions will be needed when substituting into (\[surf2\]). We can now obtain the general solution to our equations (\[laplace2\])–(\[surf2\]) by the following superposition of the previous solution with free and bound (i.e. non-resonant) wave components, with amplitudes $\Phi^{s}_{1,{{\boldsymbol k}}}$ and $\Phi^{{\rm si},s}_{1,{{\boldsymbol k}}}$ respectively, $$\phi_1=\phi_{1c}+\sum_{{{\boldsymbol k}},s}\left[ \frac{\cosh\left[k(z+H)\right]}{\cosh(kH)}\Phi^{s}_{1,{{\boldsymbol k}}}(t) + \frac{\sinh\left[k(z+H)\right]}{\cosh(kH)}\Phi^{{\rm si},s}_{1,{{\boldsymbol k}}}(t) \right] {\mathrm{e}}^{{\mathrm{i}}{{\boldsymbol k}}{\boldsymbol{\cdot}}{{\boldsymbol x}}}, \label{formphi2}$$ where the last two terms correspond to the solution to the forcing by all the right hand side terms except for $\partial h/\partial t$. 
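A short numerical illustration of this current-induced response may help fix ideas. The sketch below (in Python; all parameter values are illustrative assumptions, with $H$ the mean depth and ${\boldsymbol{U}}$ taken parallel to ${\boldsymbol{l}}$ so that ${\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}}=Ul$) evaluates $\alpha_{{\boldsymbol{l}}}$ and the $\zeta_{1c}$ amplitude for a single sinusoidal bottom component, confirming that the surface response is out of phase with the bottom for a slow current.

```python
import math

def alpha_l(Ul, l, H, g=9.81):
    """alpha_l = (U.l)^2 / (g l) - tanh(lH), negative for slow currents."""
    return Ul * Ul / (g * l) - math.tanh(l * H)

def zeta_1c_amplitude(Ul, l, H, G, g=9.81):
    """Amplitude of the current-induced surface oscillation over the bumps."""
    return Ul * Ul * G / (g * alpha_l(Ul, l, H) * math.cosh(l * H))

# Illustrative values: 5 m depth, 0.5 m/s current, 20 m bottom wavelength,
# 0.2 m bottom amplitude (hypothetical, not from the text).
H, U, l, G = 5.0, 0.5, 2 * math.pi / 20.0, 0.2
Ul = U * l
a = alpha_l(Ul, l, H)
z1c = zeta_1c_amplitude(Ul, l, H, G)
```

Here $\mbox{\textit{Fr}} \approx 0.07$, $\alpha_{{\boldsymbol{l}}}<0$, and the surface amplitude is negative for positive $G$: the surface dips over the bottom crests, as stated above.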
Because $\phi_{1c}$ and $\zeta_{1c}$ are the only terms that may be larger than $\varepsilon \eta \phi_0$, all others are neglected in the right-hand side of (\[surf2\]). Substitution of (\[formphi2\]) in the bottom boundary condition (\[fond2\]) yields $$\frac{k}{\cosh(kH)} \Phi^{{\rm si},s}_{1,{{\boldsymbol k}}}(t) = -\sum_{{\boldsymbol{k^{\prime}}}} \frac{{\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{{\boldsymbol k}}}{\cosh(k'H)}\Phi^s_{0,{\boldsymbol{k^{\prime}}}}G_{{{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}}{\mathrm{e}}^{{\mathrm{i}}\left[\left({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}\right) {\boldsymbol{\cdot}}{\boldsymbol{U}}-s\sigma'\right]t}.$$ Replacing now (\[formphi2\]) in the surface boundary condition (\[surf2\]) yields an equation for $\Phi^{s}_{1,{{\boldsymbol k}}}$. Using $\omega=\sigma+{{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{U}}$ and $\omega'=\sigma'+{{\boldsymbol k}}' {\boldsymbol{\cdot}}{\boldsymbol{U}}$, it reads $$\label{oscil2} \left( \frac{d^2}{dt^2}+\sigma^2 \right)\Phi^s_{1,{{\boldsymbol k}}}(t)= \sum_{{\boldsymbol{k^{\prime}}}} M^s({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})\Phi^s_{0,{\boldsymbol{k^{\prime}}}}G_{{{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}}{\mathrm{e}}^{{\mathrm{i}}\left[{{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{U}}-s\omega'\right]t},$$ with $$M^s({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})=\left\{ gk-\left[{{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{U}}-s\omega'\right]^2 \tanh(kH) \right\} \frac{{\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{{\boldsymbol k}}}{k} \frac{\cosh(kH)}{\cosh(k'H)}+M^s_{c1}({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})$$ where $M_{c1}^s$ is given by all the right-hand side terms in (\[surf2\]) and thus corresponds to the scattering induced by the current and the current-induced surface elevation variations. 
Anticipating resonance, we only give the form of $M_{c1}^s=M_{c}^s$ for $\sigma=\sigma'-s {\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}$, with ${\boldsymbol{l}}={{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}$, $$M^s_c({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})=\frac{\{s g^2 {\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}}(\stackrel{\rm (a)}{\sigma' {\boldsymbol{l}}{\boldsymbol{\cdot}}{{\boldsymbol k}}} + \stackrel{\rm (b)}{\sigma {\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{k^{\prime}}}}) -\left({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}}\right)^2 [\stackrel{\rm (c) }{g^2 {{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{k^{\prime}}}} - \stackrel{\rm (d)}{\overbrace{\sigma \sigma' (\sigma \sigma' +\left({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}})^2\right)}} ]\}}{l g^2 \alpha_{{\boldsymbol{l}}} \cosh(lh)},\label{Mc}$$ in which the term (a) is given by the term (II) in (\[surflin\]), (b) is given by (III) and (IV), (c) is given by (I), and (d) is given by (V)–(VIII). Because we are first solving the problem to order $\eta$, it is natural that our solution is a linear superposition of the solutions found by Kirby (1988) for a single bottom component. Indeed, $M_c({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})=-4\omega \Omega_c/D$, with $\Omega_c$ the interaction coefficient of Kirby (1988, eq. 4.22b) and $D$ his bottom amplitude, here $G_l = {\rm i}D/2$. The solution to the forced harmonic oscillator equation (\[oscil2\]) is $$\label{phi2sk} \Phi^s_{1,{{\boldsymbol k}}}(t)= \sum_{{\boldsymbol{k^{\prime}}}}M^s({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})\Phi^s_{0,{\boldsymbol{k^{\prime}}}}G_{{{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}} f_1(\sigma,{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}-s\sigma';t),$$ where ${\boldsymbol{l}}= {{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}$, and the function $f_1$ is defined in Appendix A. 
### First order action The lowest order perturbation of the wave action by scattering involves the order $\eta$ covariances $$\label{cov21} F^\Phi_{1,0,{{\boldsymbol k}}}+F^\Phi_{0,1,{{\boldsymbol k}}}=4 {\mbox{Re}}\left(\langle \Phi_{0,{{\boldsymbol k}}}^{+}\Phi_{1,-{{\boldsymbol k}}}^{-} \rangle\right),$$ with ${\mbox{Re}}$ denoting the real part. Including only the secular terms, we get $$F^\Phi_{1,0,{{\boldsymbol k}}}+F^\Phi_{0,1,{{\boldsymbol k}}}= 4{\mbox{Re}}\left[\sum_{{\boldsymbol{k^{\prime}}}} M^+({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}) \langle \Phi^{+}_{0,{\boldsymbol{k^{\prime}}}} \Phi^{-}_{0,-{{\boldsymbol k}}} G_{{{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}}\rangle f_1(\sigma,{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}-\sigma';t) {\mathrm{e}}^{{\mathrm{i}}\sigma t}\right]. \label{F21}$$ Although this term was assumed to be zero in AH, it is not zero for sinusoidal bottoms with partially standing waves, and may become significant at resonance due to the function $f_1$. In uniform conditions, any time evolution of the wave field must arise from the non-stationary terms. Thus $\gamma \approx \eta$ and the non-stationary term is given by AH (their appendix D), $$\frac{\partial \left[N_{1,0}^{\mathrm{ns}}({{\boldsymbol k}}) +N_{0,1}^{\mathrm{ns}}({{\boldsymbol k}})\right]}{\partial t}=-\frac{\partial N_{0}({{\boldsymbol k}})}{\partial t}.\label{nonstat}$$ In order to simplify the discussion, we shall briefly assume that there is no current and that the waves are unidirectional. In that case, ${{\boldsymbol k}}'=-{{\boldsymbol k}}$ and $M({{\boldsymbol k}},{{\boldsymbol k}}')=-g k^2/\cosh^2(kH)$. 
Replacing (\[F21\]) in (\[nrj\_ij\]) and combining it with (\[nonstat\]) yields the action balance $$\frac{\partial N_{0,{{\boldsymbol k}}}}{\partial t}= \frac{\partial }{\partial t} \left[\frac{k}{g \sigma} \tanh(kH)\left(F^\Phi_{1,0,{{\boldsymbol k}}}+F^\Phi_{0,1,{{\boldsymbol k}}}\right)\right]={\mbox{Im}}\left(\frac{-4 k^2 \sigma}{2 g \cosh^2(kH)} \langle \Phi^{+}_{0,{{\boldsymbol k}}} \Phi^{-}_{0,{{\boldsymbol k}}} G_{-2k} \rangle\right), \label{E1}$$ with ${\mbox{Im}}$ denoting the imaginary part. For directionally spread random waves, with a current, and a real bottom (e.g. random or consisting of a finite series of sinusoidal bars), the evaluation of (\[F21\]) is not simple. First of all, resonant terms given by $f_1$ only occur for $\sigma' = \sigma +s {\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}$, that is $\omega=\omega'$. Using $N({{\boldsymbol k}})=N_0({{\boldsymbol k}}) \left[1+O(\eta)\right]$ and taking the limit to continuous surface and bottom spectra yields $$\frac{\partial N({{\boldsymbol k}})}{\partial t}= S_1({{\boldsymbol k}})= \int_{0}^{2 {\pi}} \frac{4 {{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{k^{\prime}}}}{2 g \cosh(kH) \cosh(k'H)} {\mbox{Im}}\left[ Z({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})\right] {\mathrm{d}}k'_x {\mathrm{d}}\theta', \label{E3dir}$$ with the mixed surface bottom bispectrum $Z$ defined by $$Z({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}) = \lim_{\Delta {{\boldsymbol k}}\rightarrow 0}\langle \frac{ \Phi^{+}_{0,{{\boldsymbol k}}} \Phi^{-}_{0,-{\boldsymbol{k^{\prime}}}} G_{-k-k^\prime}}{\Delta {{\boldsymbol k}}\Delta \theta'} \rangle,$$ with ${{\boldsymbol k}}=k(\cos \theta, \sin \theta)$ and ${{\boldsymbol k}}^\prime=k'(\cos \theta^\prime, \sin \theta^\prime)$. $Z$ is similar to a classical bispectrum (e.g. Herbers 2003) with one surface wave amplitude replaced by a bottom amplitude, and a similar expression is found for a non-zero current. 
The action balance (\[E3dir\]) is generally not closed, and requires knowledge of the wave phases, which is not available in a phase-averaged model. The same type of coupling, although due to the large scale topography, also occurs in the stochastic equations for non-linear wave evolution derived by Janssen, Herbers & Battjes (2006). The contribution of the mixed bispectrum will thus be evaluated below, in order to investigate in which cases it may be neglected or parameterized. $S_1$ is expected to be generally negligible: MAHR neglected it, and still found a good agreement of the second order action balance with exact numerical solutions for the wave amplitude reflection coefficient. ### Second order action From the expansion (\[Action\_exp\]), the second order action is $N_2({{\boldsymbol k}})= N_{1,1}({{\boldsymbol k}})+N_{0,2}({{\boldsymbol k}})+N_{2,0}({{\boldsymbol k}})$. The first term can be estimated from $\phi_1$, using the covariance of the velocity potential amplitudes (\[covarPhi\]), $$\label{cov22} F^\Phi_{1,1,{{\boldsymbol k}}}=2\langle \Phi_{1,{{\boldsymbol k}}}^{+}\Phi_{1,-{{\boldsymbol k}}}^{-} \rangle.\label{FPHI11}$$ Using (\[phi2sk\]), (\[FPHI11\]) can be re-written as $$\frac{F^\Phi_{1,1,{{\boldsymbol k}}}}{\Delta {{\boldsymbol k}}}= 2\sum_{{\boldsymbol{k^{\prime}}}}\left|M^+({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})\right|^2 \frac{\langle\left|\Phi^{+}_{0,{\boldsymbol{k^{\prime}}}}\right|^2\rangle}{\Delta {\boldsymbol{k^{\prime}}}} \frac{\langle G_{{{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}}G_{-{{\boldsymbol k}}+{\boldsymbol{k^{\prime}}}} \rangle}{\Delta {{\boldsymbol k}}} \left|f_1(\sigma,{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}-\sigma';t)\right|^2 \Delta {\boldsymbol{k^{\prime}}}. \label{F22}$$ Taking the limit of (\[F22\]) when ${\Delta {{\boldsymbol k}}} \rightarrow 0$, $$\begin{aligned} \label{PO} F^{\Phi}_{1,1}(t,{{\boldsymbol k}}) &=&\int_{-\infty}^\infty \int_{-\infty}^\infty 
\left|M^+({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})\right|^2 F^{\Phi}_{0,0}({\boldsymbol{k^{\prime}}}) F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}) \left|f_1(\sigma,{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}-\sigma';t)\right|^2 {\mathrm{d}}k_x' {\mathrm{d}}k_y'.\nonumber \\\end{aligned}$$ Due to the singularity in $f_1$, and assuming that the rest of the integrand can be approximated by an analytical function in the neighbourhood of the singularity $\omega'=\omega$, which requires both bottom and surface elevation spectra to be continuous, the integral can be evaluated by using $$\langle f_1(\sigma,{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}-\sigma';t)f_1(\sigma,-{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}+\sigma';t)\rangle =\frac{{\pi}t}{4\sigma^2}\left[\delta \left(\sigma'-(\sigma+{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}})\right) + O (1)\right]. \label{f1toDirac}$$ $\delta$ is the one-dimensional Dirac distribution, infinite where the argument is zero, and such that $\int \delta(x) A(x) {\mathrm{d}}x=A(0)$ for any continuous function $A$. In order to evaluate this singular contribution, the argument of $\delta$ may be re-written as $\omega'-\omega$, making explicit all the dependencies on $k'$. Evaluation of the $\delta$ function is then performed by changing integration variables from $(k_x',k_y')$ to $(\omega',\theta')$, with a Jacobian $k' \partial k'/\partial \omega'=k'^2/(k'C_g'+{\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}})$. We thus have $$\begin{aligned} \label{PO2} F^{\Phi}_{1,1}(t,{{\boldsymbol k}}) &=&\frac{{\pi}t }{2\sigma^2}\int_{0}^{2 {\pi}}\int_{\omega'} \left|M^+({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})\right|^2 F^{\Phi}_{0,0}({\boldsymbol{k^{\prime}}}) \frac{k'^2 F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}) }{k' C_g'+{\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}} \delta (\omega' - \omega ) {\mathrm{d}}\omega'{\mathrm{d}}\theta'+ O(1). 
\nonumber \\\end{aligned}$$ When $\omega=\omega'$, the integrand simplifies. $M^s({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})$ is equal to $M({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})$, defined by $$\begin{aligned} M({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})=\frac{g {{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{k^{\prime}}}}{\cosh(kH)\cosh(k'H)}+M_c({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}) \equiv M_b({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})+M_c({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}),\label{M}\end{aligned}$$ with $M_c=M_c^+$ given by (\[Mc\]). Using relation (\[nrj\_ij\]) between velocity potential and action, and evaluating the integral over $\omega'$, one obtains $$\label{fin2} N_{1,1}(t,{{\boldsymbol k}})=\frac{{\pi}t}{2}\int_{0}^{2 {\pi}} M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})\frac{N_{0,0}({\boldsymbol{k^{\prime}}})}{\sigma \sigma'} F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}})\frac{k'^2}{k' C_g'+{\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}} {\mathrm{d}}\theta' + O(1).\label{N11}$$ Again we note the correspondence with the theory of Kirby (1988, eq. 4.21). Specifically, one has $M({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})=-4\omega \Omega_c/D$, with $\Omega_c$ being Kirby’s interaction coefficient. Second order potential and corresponding terms in $N_2$ ------------------------------------------------------- In order to estimate the other two terms that contribute to $N_2$, the second order potential $\phi_2$ must be obtained. 
It is a solution of $$\label{lap3} {\boldsymbol{\nabla}}^{2}\phi _{2}+\frac{\partial ^{2}\phi _{2}}{\partial z^{2}}=0 \qquad {\mathrm{for}} \qquad -H \leq z \leq 0,$$ $$\frac{\partial \phi_2}{\partial z}=-h\frac{\partial ^{2}\phi _{1}}{\partial z^{2}}-\frac{h^2}{2}\frac{\partial ^{3}\phi _{0}}{\partial z^{3}}+{\boldsymbol{\nabla}}\phi _{1}\cdot {\boldsymbol{\nabla}}h +{\boldsymbol{\nabla}}\left(h\frac{\partial \phi _{0}}{\partial z}\right)\cdot {\boldsymbol{\nabla}}h \qquad {\mathrm{at}} \qquad z=-H, \label{bot3JFM}$$ which simplifies because odd vertical derivatives of $\phi_0$ are zero at $z=-H$, $$\label{bottombound3b} \frac{\partial \phi_2}{\partial z}=-h\frac{\partial ^{2}\phi _{1}}{\partial z^{2}}+{\boldsymbol{\nabla}}\phi _{1}\cdot {\boldsymbol{\nabla}}h \qquad {\mathrm{at}} \qquad z=-H,$$ and $$\label{surfbound3} \frac{\partial ^{2}\phi _{2}}{\partial t^{2}}+g\frac{\partial \phi _{2}}{\partial z}=\mathrm{i}\sum_{{\boldsymbol k},s} 2 s \sigma \frac{\partial \Phi_{0,{{\boldsymbol k}}}^{s}}{\partial t}\mathrm{e}^{\mathrm{i}\left( {\boldsymbol k}{\boldsymbol{\cdot}}{\boldsymbol x}-s\sigma t\right)} + {\rm I-VIII} + NL_2 \qquad {\mathrm{at}} \qquad z=0.$$ The terms I–VIII are identical to those in (\[surf2\]) with $\phi_0$, $\zeta_0$, $\phi_1$, $\zeta_1$ replaced by $(\phi_1-\phi_{1c})$, $(\zeta_1-\zeta_{1c})$, $\phi_2$ and $\zeta_2$, respectively. All other non-linear terms have been grouped in $NL_2$. In order to yield contributions to the second order action $N_{2,0}$, terms must correlate with $\phi_0$ to give second-order terms in $\eta$ with non-zero means. For zeroth order components with random phases, inspection shows that $NL_2$ does not contribute to $N_{2,0}$ and will thus be neglected. 
The solution $\phi_2$ has the following form, $$\label{formphi3} \phi_2=\phi_2^{\mathrm{ns}}+\sum_{{{\boldsymbol k}},s}\left[ \frac{\cosh(k(z+H))}{\cosh(kH)}\Phi^{s}_{2,{{\boldsymbol k}}}(t) + \frac{\sinh(k(z+H))}{\cosh(kH)}\Phi^{{\rm si},s}_{2,{{\boldsymbol k}}}(t) \right] {\mathrm{e}}^{{\mathrm{i}}{{\boldsymbol k}}{\boldsymbol{\cdot}}{{\boldsymbol x}}}.$$ The non-stationary term $\phi_2^{\mathrm{ns}}$ leads to the action evolution term (\[nonstat\]), now assuming $\gamma \approx \eta^2$. Following the method used at first order, substitution of (\[formphi3\]) in the bottom boundary condition (\[bottombound3b\]) leads to $$\Phi^{{\rm si},s}_{2,{{\boldsymbol k}}}(t)=-\sum_{{{\boldsymbol k}}'}\frac{{{\boldsymbol k}}'\cdot {{\boldsymbol k}}}{k}\frac{\cosh(kH)}{\cosh(k'H)}\Phi_{1,{{\boldsymbol k}}'}^s(t) G_{{{\boldsymbol k}}-{{\boldsymbol k}}'} {\mathrm{e}}^{{\mathrm{i}}{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}t }.$$ After calculations detailed in Appendix B, $\phi_2$ yields the following contribution to the wave action, $$\label{dE3} N_{2,0}({{\boldsymbol k}}) + N_{0,2}({{\boldsymbol k}})=-\frac{{\pi}t}{2 } \int_0^{2{\pi}} M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}) F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}})\frac{N_{0}({{\boldsymbol k}})}{\sigma \sigma'} \frac{k'^2}{k' C'_g + {\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}}{\mathrm{d}}\theta'+ O(1),\label{N20scat}$$ in which $\sigma'=\sigma -{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}$, $\sigma'^2=gk'\tanh(k'H)$, and $C_g'=\sigma'(1/2+k'H/\sinh(2k'H))/k'$. Action and momentum balances ---------------------------- We shall neglect the first order action contribution $N_1$ given by (\[E3dir\]). The solvability condition imposed on the action spectrum is that $N_2$ remains an order $\eta^2$ smaller than $N_0$ for all times. Thus all secular terms of order $\eta^2$ must cancel. 
Combining (\[nonstat\]), (\[N11\]), and (\[N20scat\]) gives $$\label{solvability} -\frac{{\mathrm{d}}N_0({{\boldsymbol k}})}{{\mathrm{d}}t} +\frac{{\pi}}{2} \int_0^{2{\pi}} M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}) F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}})\frac{N_{0}({\boldsymbol{k^{\prime}}})-N_{0}({{\boldsymbol k}})}{\sigma \sigma'} \frac{k'^2}{k' C'_g + {\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}}{\mathrm{d}}\theta' = 0.$$ Since $N_2$ and $N_1$ remain small, $N({{\boldsymbol k}})=N_0({{\boldsymbol k}}) \left[1+O(\eta^2)\right]$, and one has, $$\label{nrjbalance1} \frac{{\mathrm{d}}N({{\boldsymbol k}})}{{\mathrm{d}}t}=S_{\mathrm{bscat}}({{\boldsymbol k}})\label{action_balance},$$ with the spectral action source term, $$\label{nrjbalance2} S_{\mathrm{bscat}}({{\boldsymbol k}}) = \frac{{\pi}}{2}\int_{0}^{2 {\pi}} \frac{k'^2 M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})}{\sigma \sigma'\left(k' C'_g + {\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}\right)}F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}) \left[N({\boldsymbol{k^{\prime}}})- N({{\boldsymbol k}})\right] {\mathrm{d}}\theta',$$ where $\sigma'=\sigma+{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}$ and ${{\boldsymbol k}}={\boldsymbol{k^{\prime}}}+{\boldsymbol{l}}$. This interaction rule was already given by Kirby (1988). The only waves that can interact share the same absolute frequency $\omega = \sigma + {{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{U}}= \sigma'+ {\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}$. For a given ${{\boldsymbol k}}$ and without current, the resonant ${\boldsymbol{k^{\prime}}}$ and ${\boldsymbol{l}}$ lie on circles in the wavenumber plane (see AH). The current slightly modifies this geometric property. For $U \ll C_g$ the circles become ellipses (Appendix C). 
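The resonance rule $\omega=\omega'$ may be illustrated numerically. The sketch below (in Python; the Newton solver and all parameter values are illustrative assumptions of ours) finds, for each scattered direction $\theta'$, the wavenumber magnitude $k'$ that solves $\sigma(k') + k'U\cos\theta' = \omega$, with the current taken along the $x$-axis.

```python
import math

g = 9.81

def sigma_of(k, H):
    """Intrinsic frequency from the linear dispersion relation."""
    return math.sqrt(g * k * math.tanh(k * H))

def resonant_k(omega, theta_p, H, U, itmax=50):
    """Solve sigma(k') + k' U cos(theta') = omega for k' by Newton iteration."""
    k = omega * omega / g  # deep-water first guess
    for _ in range(itmax):
        t = math.tanh(k * H)
        f = sigma_of(k, H) + k * U * math.cos(theta_p) - omega
        dsig = (g * t + g * k * H * (1 - t * t)) / (2 * sigma_of(k, H))
        k = k - f / (dsig + U * math.cos(theta_p))
    return k

# Illustrative values: 10 m depth, 0.3 m/s current, 50 m incident wavelength.
H, U = 10.0, 0.3
k0 = 2 * math.pi / 50.0
omega = sigma_of(k0, H) + k0 * U              # incident wave along the current
k_back = resonant_k(omega, math.pi, H, U)     # back-scattered component
```

Without current the locus collapses to the circle $k'=k$; with the current, the back-scattered component has $k'>k$, consistent with the deformation of the circles described above.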
For a given value of $\omega$, one may obtain the source term integrated over all directions, $$\begin{aligned} \label{intS} S_{\mathrm{bscat}}(\omega)& = &\int_{0}^{2 {\pi}} k S_{\mathrm{bscat}}({{\boldsymbol k}}) \frac{\partial k}{\partial \omega} {\mathrm{d}}\theta \\ &=&\int_{0}^{2 {\pi}}\int_{0}^{2 {\pi}}\frac{{\pi}}{2}\frac{k^2 k'^2 M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}) }{\sigma \sigma' \left(k' C'_g + {\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}\right)\left(k C_g + {{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{U}}\right)}\left[ N({\boldsymbol{k^{\prime}}})-N({{\boldsymbol k}}) \right] d\theta'{\mathrm{d}}\theta \nonumber \\ &=&\int_{0}^{2 {\pi}} \int_{0}^{2 {\pi}} \frac{{\pi}}{2}\frac{M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}})F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}})}{\sigma \sigma'} \left[ \frac{k^2 N(\omega,\theta')}{k C_g + {{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{U}}}-\frac{k'^2 N(\omega,\theta)}{k' C'_g + {\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}} \right] d\theta'{\mathrm{d}}\theta. \nonumber\end{aligned}$$ This expression is anti-symmetric: it changes sign when $\theta$ and $\theta'$ are exchanged. Thus $S_{\mathrm{bscat}}(\omega)$ is the difference of two equal terms, so that for any bottom and wave spectra $S_{\mathrm{bscat}}(\omega)=0$. In other words, the ‘source term’ is rather an ‘exchange term’, and conserves the wave action at each absolute frequency. This conservation is consistent with the general wave action conservation theorem proved by Andrews & McIntyre (1978), which states that there is no flux of action through an unperturbed boundary (here the bottom). It also appears that $\omega$ and $\theta$ are natural spectral coordinates in which the scattering source term takes a symmetric form.
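The cancellation argument can be checked on a discrete analogue: for any symmetric exchange kernel and any spectrum, the sum of the exchange term over all directions vanishes identically. The kernel values below are random placeholders, not the actual coupling coefficients:

```python
import random

random.seed(0)
na = 36                                   # directional bins at a fixed absolute frequency
# arbitrary symmetric exchange kernel standing in for the factor multiplying [N' - N]
K = [[0.0] * na for _ in range(na)]
for i in range(na):
    for j in range(i, na):
        K[i][j] = K[j][i] = random.random()
N = [random.random() for _ in range(na)]  # arbitrary directional action spectrum

# discrete exchange term: gains from other directions minus losses to them
S = [sum(K[i][j] * (N[j] - N[i]) for j in range(na)) for i in range(na)]
total = sum(S)  # the summand is antisymmetric under i <-> j, so the sum cancels exactly
```

Individual directional bins gain or lose action, but the directional total is conserved, mirroring the continuous result $S_{\mathrm{bscat}}(\omega)=0$.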
Finally, we may consider the equilibrium spectra that satisfy $S_{\mathrm{bscat}}({{\boldsymbol k}})=0$ for all ${{\boldsymbol k}}$. Without current, an equilibrium exists when either $N(\omega,\theta)$ or $N({{\boldsymbol k}})$ is isotropic. With current, the scattering term is uniformly zero if and only if the spectral densities in ${{\boldsymbol k}}$-space, $N({{\boldsymbol k}})$, are uniform along the curves of constant $\omega$. The source term $ S_{\mathrm{bscat}}$ may also be re-written in a form corresponding to that in AH, which now appears much less elegant, $$\label{nrjbalance2bis} S_{\mathrm{bscat}}({{\boldsymbol k}}) = \int_{0}^{2 {\pi}} K(k,k',H) F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}) \left[ N({\boldsymbol{k^{\prime}}})-N({{\boldsymbol k}}) \right] d\theta',$$ with $$K(k,k',H)=\frac{{\pi}k'^2 M^2(k,k')}{2 \sigma \sigma' \left(k' C'_g + {\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}\right)}=\frac{4{\pi}\sigma k k'^3 \cos^2(\theta-\theta') \left[1+O({\mbox{\textit{Fr}}})\right]}{\sinh(2kH)\left[2k'H+\sinh(2k'H)\left(1+2 {\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}/\sigma' \right)\right]}.$$ One may wonder how large is the current-induced scattering represented by $M_c$, our eq. (\[Mc\]), compared to the bottom-induced scattering represented by $M_b$. Since $\sigma'=\sigma + ({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}})$, the (a) and (b) terms in the numerator $M_c$ almost cancel for small Froude numbers, and the (a)+(b) part is of order $Fr^2$. Thus $M_c$ is generally an order $Fr^2$ smaller than $M_b$. For ${{\boldsymbol k}}$ and ${\boldsymbol{k^{\prime}}}$ in opposite directions (i.e. back-scattering), the (a)+(b) part is even smaller, of order $g^2 ({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}})^3 {\boldsymbol{l}}{\boldsymbol{\cdot}}{{\boldsymbol k}}$, and exactly zero in the long wave limit $lH \ll 1$. Thus, for back-scattering, the numerator in $M_c$ is itself of the order of (c), i.e. 
$g^2 ({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}})^2 {{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{k^{\prime}}}$. Interestingly (c) formally comes from the modulations of the surface elevation $\zeta_{1c}$ so that the $O({\mbox{\textit{Fr}}}^2)$ elevation modulation is at least as important as the $O({\mbox{\textit{Fr}}})$ current modulation for this back-scattering situation. In that case, $M_c$ is of the order of $M_b \cosh(kH) \cosh(k'H) ({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}})^2 /[g l \alpha_l \cosh(lH)]$. The relative magnitudes of $M_b$ and $M_c$ thus depend on ${\mbox{\textit{Fr}}}(l)= ({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}})/[g l \tanh(l H)]^{1/2}$ that appears in $({\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{l}})^2/(g l \alpha_l)={\mbox{\textit{Fr}}}^2(l)/[{\mbox{\textit{Fr}}}^2(l)-1]$. This $l$-scale Froude number may be formally close to 1, and thus $M_c$ may be larger than $M_b$. However, scattering is limited by blocking as no scattered waves can propagate when $Cg'<{\boldsymbol{U}}{\boldsymbol{\cdot}}{\boldsymbol{k^{\prime}}}/ k'$. In the long wave limit, ${\mbox{\textit{Fr}}}(l)={\mbox{\textit{Fr}}}$ and for $(1-{\mbox{\textit{Fr}}}) \ll 1$, one has $M_c > M_b$. For oblique scattering, the (a)+(b) term may dominate the numerator of $M_c$ and the situation is more complex. Nevertheless, for Froude numbers typical of continental shelf situations, say $0<{\mbox{\textit{Fr}}}<0.4$, $M_c$ may be neglected in most situations since its $O(Fr^2)$ correction corresponds to only a few percent of the reflection. Obvious exceptions are cases in which $M_b$ is zero, such as when ${{\boldsymbol k}}$ and ${\boldsymbol{k^{\prime}}}$ are perpendicular. Finally, we may also write the evolution equation for the wave pseudo-momentum ${\mathbf M}^w = \rho_w g \int {{\boldsymbol k}}N({{\boldsymbol k}}){\mathrm{d}}{{\boldsymbol k}}$ (see Andrews & McIntyre 1978), where $\rho_w$ is the density of sea water. 
Introducing now the slow medium and wave field variations given by Kirby (1988), that do not interfere with the scattering process, except by probably reducing the surface-bottom bispectrum $Z$, one obtains an extension of the equation of Phillips (1977) $$\frac{\partial M_\alpha^w}{\partial t} + \frac{\partial}{\partial x_\beta}\left[ (U_{\beta}+ C_{g \beta}) M_\alpha^w \right] = -\tau^{\mathrm{bscat}}_\alpha -M_\beta^w \frac{\partial U_\beta}{\partial x_\alpha} - \frac{M_\alpha^w}{k_\alpha} \frac{k \sigma}{\sinh 2kD} \frac{\partial D}{\partial x_\alpha}, \label{wavemom}$$ with $\alpha$ and $\beta$ denoting horizontal components (summation over repeated indices is implied), and the scattering stress vector, $${\tau}^{\mathrm{bscat}}=-\rho_w g \int {{\boldsymbol k}}S_{\mathrm{bscat}} {\mathrm{d}}{{\boldsymbol k}}.\label{Tbscat}$$ This stress has dimensions of force per unit area, and corresponds to a force equal to the divergence of the wave pseudo-momentum flux. Based on the results of Longuet-Higgins (1967) and Hara & Mei (1987), this force does not contribute to the mean flow equilibrium with a balance of the radiation stresses divergence by long waves (or wave set-up in stationary conditions), contrary to the initial proposition of Mei (1985). This force is thus a net flux of momentum through the bottom, arising from a correlation between the non-hydrostatic bottom pressure and the bottom slope. That force is likely related to the pressure under partial standing waves locked in phase with the bottom undulations. Although the part $M_c$ of the coupling coefficient $M$ given by (\[M\]) is formally due to scattering by the current modulations ${\boldsymbol{\nabla}}\phi_{1c}$, and associated surface fluctuations $\zeta_{1c}$, it should be noted that these motions and related pressures are correlated with the bottom slope in the same way as the part represented by $M_b$.
Thus both terms contribute to this force ${\tau}^{\mathrm{bscat}}$ which acts on the bottom and not on the mean flow. Wave scattering in two dimensions ================================= Before considering the full complexity of the 3D wave-bottom scattering in the presence of a current, we first examine the behaviour of the source term in the case of 2D sinusoidal seabeds. Although the bottom spectrum is not continuous along the $y$-axis, continuity in $x$ is sufficient for the use of (\[f1toDirac\]) and the source term can be applied, after proper transformation to remove these singularities. MAHR have investigated the applicability limits of the source term with $U=0$. They proved that for small bottom amplitudes the source term yields accurate reflection estimates, even for localized scatterers, and verified this with test cases. It is thus expected that this also holds for $U \neq 0$. Wave evolution equation in $2$D ------------------------------- We consider here a steady wave field in two dimensions with incident and reflected waves propagating along the $x$-axis. We shall consider in particular the case of $m$ sinusoidal bars of amplitude $b$ and height $2b$, with a wavelength $2{\pi}/l_0$. The bottom elevation is thus $$\begin{aligned} \label{hsin} h(x) & = & b \sin(l_0 x) \quad {\mathrm{for}} \quad 0 < x < L \label{sinbot}\\ h(x) & = & 0 \quad {\mathrm{otherwise}}. \nonumber\end{aligned}$$ Such a bottom is shown in figure 1 for $m=4$. This form is identical to that of the bottom profile chosen by Kirby (1988) but differs, for $0<x<L$, by a ${\pi}/2$ phase shift from the bottom profile chosen by Mei (1985).
The bottom spectrum is of the form $$F^B(l_x,l_y) = F^{B2D}(l_x) \delta(l_y),\label{FB2D}$$ and for the particular bottom given by (\[sinbot\]), $$F^{B2D}(l_x) = \frac{2{\pi}}{L}\left|\frac{1}{2 {\pi}} \int_{-\infty}^{\infty} h(x){\mathrm{e}}^{-{\mathrm{i}}l x} {\mathrm{d}}x\right|^2= \frac{2 b^2 l_0^2}{{\pi}L}\frac{\sin^2(l L/2)}{(l_0^2-l^2)^2},\label{FB2Dsin}$$ with $$F^{B2D}(\pm l_0) = \frac{m b^2}{4 l_0} = \frac{b^2 L}{8 {\pi}}.\label{specatres}$$ Note that this is a double-sided spectrum, with only half of the bottom variance contained in the range $l_x>0$. For a generic bottom, for which $h(x)$ does not go to zero at infinity, the spectrum is obtained using standard spectral analysis methods, for example, from the Fourier transform of the bottom auto-covariance function (see MAHR). In that case $F^{B2D}$ is equivalent to a Wigner distribution (see e.g. Ryzhik et al. 1996). First, replacing (\[FB2D\]) in (\[nrjbalance1\]) removes the angular integral in the source term. Taking ${{\boldsymbol k}}=(k_x,k_y)$, we have $l_y = k_y-k'_y=k \sin \theta-k'\sin \theta'$, thus ${\mathrm{d}}l_y = -k'\cos \theta'\, {\mathrm{d}}\theta'$, and $$S_{\mathrm{bscat}}\left({{\boldsymbol k}},x\right) =\frac{{\pi}k' M^2(k,k') F^{B2D}(k_x-k'_x)}{2 \sigma \sigma' \left|\cos \theta'\right| \left(k' C'_g + {\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}\right)}\left[N({\boldsymbol{k^{\prime}}})-N({{\boldsymbol k}}) \right].$$ Second, assuming now that waves propagate only along the $x$-axis, the wave spectral densities are of the form $$N(k_x,k_y) = N^{2D}(k_x) \delta(k_y)=N^{2D}(k)\delta(\theta-\theta_0)/k,$$ with $\theta_0=0$ for $k_x >0$ and $\theta_0={\pi}$ for $k_x < 0$.
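The peak value (\[specatres\]) can be recovered numerically from this definition; a sketch assuming $h(x)=b\sin(l_0 x)$ on $0<x<L$ and illustrative parameter values:

```python
import cmath
import math

b, l0, m = 0.05, 2.0 * math.pi, 4      # bar amplitude (m), bar wavenumber (rad/m), bar count
L = 2.0 * math.pi * m / l0             # length of the bar patch (integer number of wavelengths)

def FB2D(l, n=20000):
    """Double-sided spectrum (2 pi / L) |FT|^2 of h(x) = b sin(l0 x) on [0, L],
    with FT the (1/2pi)-normalized Fourier integral, evaluated by the midpoint rule."""
    dx = L / n
    ft = sum(b * math.sin(l0 * (j + 0.5) * dx) * cmath.exp(-1j * l * (j + 0.5) * dx)
             for j in range(n)) * dx / (2.0 * math.pi)
    return (2.0 * math.pi / L) * abs(ft) ** 2

peak = FB2D(l0)   # expected: m b^2 / (4 l0) = b^2 L / (8 pi)
```

The two closed-form expressions for the peak agree because $L=2{\pi}m/l_0$.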
Integrating over $\theta$ removes the singularities on $k_y$, and assuming a steady state one obtains $$\label{energybalance1D} \left[\frac{k_x}{k} C_g + U_x \right]\frac{\partial N^{2D}}{\partial x}\left(k_x,x\right)=S_{\mathrm{bscat}}^{2D}\left(k_x,x\right),\label{Nevol_kx}$$ with $$S_{\mathrm{bscat}}^{2D}\left(k_x,x\right)=\frac{{\pi}k' M^2(k,k') F^{B2D}(k_x-k'_x)}{2 \sigma \sigma'\left(k_x' C'_g + k'_x U_x \right)} \left[N^{2D}(k_x',x)-N^{2D}(k_x,x)\right].$$ Although the present theory is formulated for random waves, there is no possible coupling between waves of different frequencies. Mathematically, it is possible to take the limit to an infinitely narrow wave spectrum, such that, $N^{2D}(k,x)=N(x) \delta(\omega-\omega_0)+N'(x) \delta(\omega'-\omega'_0)$ with $k_{0x} >0$ and $k'_{0x} <0$. Using $\partial \omega/\partial k = C_g + k_x U_x/\left|k_x\right|$, the resulting evolution equation is, omitting the 0 subscripts on $k$ and $k'$, $$\begin{aligned} & &\left[\frac{k_x}{k} C_g + U_x \right] \frac{\partial N}{\partial x} \nonumber\\ & &=\frac{{\pi}M^2(k,k') F^{B2D}(k_x-k'_x)}{2 \sigma \sigma' }\left[\frac{ k N'}{k C_g + k_x U_x} - \frac{k' N}{k' C'_g + k'_x U_x}\right],\nonumber\label{Nevol_mono}\\\end{aligned}$$ with a similar equation for $N'$ obtained by exchanging $C_g$ and $C_g'$, and $k'$ and $k$, from which it is easy to verify that the total action is conserved. The stationary evolution equation (\[Nevol\_kx\]) only couples two wave components $N(k)$ and $N(k')$. 
For a uniform mean depth $H$, and uniform bottom spectrum $F^B$, as considered here, we thus have a linear system of two differential equations, that may be written in matrix form for any $k>0$, $$\frac{{\mathrm{d}}}{{\mathrm{d}}x} \left(\begin{array}{c} N(k) \\ N(k')\end{array} \right) = q \sfbsQ \left(\begin{array}{c} N(k) \\ N(k')\end{array}\right),$$ with $$q=\frac{{\pi}M^2(k,k')F^{B2D}(l)}{2 \sigma \sigma' Cg Cg'}\label{defq}$$ Defining $l=k_x-k'_x$, the action advection velocities $V'=C_g'+k'_x U_x$ and $V=C_g+k_x U_x$, the terms of the non-dimensional matrix $\sfbsQ$ are given by $$\begin{aligned} (\sfbsQ)_{1,1} = -\frac{C_g C_g'}{ V^2} &\quad {\rm and} \quad& (\sfbsQ)_{1,2} = \frac{C_g C_g'}{ V V'}, \nonumber \\ (\sfbsQ)_{2,1} = -\frac{C_g C_g'}{V'^2} &\quad {\rm and} \quad& (\sfbsQ)_{2,2} =\frac{C_g C_g'}{V V'},\end{aligned}$$ where $(\sfbsQ)_{i,j}$ is the $i^{\rm th}$ row and $j^{\rm th}$ column term of $\sfbsQ$. The general solution is thus $$\left(\begin{array}{c} N(k,x) \\ N(k',x)\end{array} \right) = {\mathrm{e}}^{q \sfbsQ x} \left(\begin{array}{c} N(k,0) \\ N(k',0)\end{array}\right).$$ The matrix exponential is classically the infinite series $\sum_{n=0}^\infty \left( q \sfbsQ\right)^n/n!$, in which matrix multiplications are used. 
The reflection coefficient for the wave action is found using the boundary condition expressing the absence of incoming waves from beyond the bars, $N(k',L)=0$, giving, $$R_N =\frac{N(k',0)}{N(k,0)}= -\left({\mathrm{e}}^{q \sfbsQ L}\right)_{2,1}/\left({\mathrm{e}}^{q \sfbsQ L}\right)_{2,2}.$$ A reflection coefficient for the modulus of the wave amplitude predicted by the source term is thus, $$R_S = \left[\frac{\sigma' N(k',0)}{\sigma N(k,0)}\right]^{1/2}=\left\{-\sigma'\left({\mathrm{e}}^{q \sfbsQ L}\right)_{2,1}/\left[\sigma \left({\mathrm{e}}^{q \sfbsQ L}\right)_{2,2}\right]\right\}^{1/2} \label{Krsbscat}.$$ The spatial variation of the amplitudes may be linear, oscillatory, or exponential, depending on whether the determinant of $\sfbsQ$ is zero, negative or positive, respectively. That determinant is $C_g^2 C_g'^2 (V^2-V'^2)/(V^3V'^3)$, which vanishes for $U=0$, where $V'=V$. Analytical solution for $U=0$ ----------------------------- In the absence of a mean current, $k'=-k$, and $$\begin{aligned} -(\sfbsQ)_{1,1}=(\sfbsQ)_{1,2}=-(\sfbsQ)_{2,1}=(\sfbsQ)_{2,2}=1.\end{aligned}$$ Thus $\sfbsQ^2=0$ so that its exponential is only the sum of two terms, ${\mathrm{e}}^{q \sfbsQ x}= \sfbsI+q \sfbsQ x$, where $\sfbsI$ is the identity matrix. The solution to (\[Nevol\_mono\]) is simply, $$\begin{aligned} N(k,x)& = & N(k,0) \left[\frac{- q \left(x-L\right) +1}{1+q L}\right] \\ N(-k,x) &= & N(k,0)\left[\frac{- q \left(x-L\right) }{1+q L}\right]. \end{aligned}$$ An example of spatial variation of the wave spectrum from $x=0$ to $x=L$ is shown in Figure \[Sp\_evol\], for $U=0$, and a uniform (white) incident spectrum. The reflected wave energy (at $k<0$ in figure \[Sp\_evol\].*b*) compensates the loss of energy in the transmitted spectrum (at $k>0$ in figure \[Sp\_evol\].*c*).
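The matrix-exponential solution and its $U=0$ limit can be cross-checked with a short script: since $\sfbsQ$ is then nilpotent, the Taylor series terminates and $R_N=qL/(1+qL)$. The values of $q$ and $L$ below are arbitrary:

```python
def mat_mul(A, B):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][n] * B[n][j] for n in range(2)) for j in range(2)] for i in range(2)]

def mat_exp(A, terms=40):
    """exp(A) for a 2x2 matrix by direct Taylor series (adequate for small norms)."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        P = mat_mul(P, A)
        fact *= n
        E = [[E[i][j] + P[i][j] / fact for j in range(2)] for i in range(2)]
    return E

# U = 0 case: Q = [[-1, 1], [-1, 1]] is nilpotent (Q^2 = 0), so exp(qQL) = I + qQL exactly
q, L = 0.3, 4.0
Q = [[-1.0, 1.0], [-1.0, 1.0]]
E = mat_exp([[q * L * Q[i][j] for j in range(2)] for i in range(2)])
R_N = -E[1][0] / E[1][1]   # action reflection coefficient from the N(k', L) = 0 condition
```

With a current, the same code applies with the full matrix $\sfbsQ$; only the nilpotency (and hence the closed form) is lost.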
![Bottom spectrum and evolution of a surface wave spectrum along a field of sinusoidal bars for $U=0$, $b=0.05$ m, $H=0.156$ m, so that $\eta=b/H=0.32$, and $l_0=2 {\pi}$ m$^{-1}$, $m=4$, so that $L=4$ m (bottom shown in figure 1). (a) square root of the bottom spectrum, (b) and (c) normalized square root wave spectrum upwave (at $x<0$) and downwave (at $x>L$) of the bars, respectively. The incident spectrum ($k>0$ at $x=0$) is specified to be white (uniform in wavenumbers).[]{data-label="Sp_evol"}](figure2_bspec.eps){width="\textwidth"} For $k=l/2$, in the limit of small bar amplitudes, replacing (\[specatres\]) in (\[Krsbscat\]) yields $$R_{S} = \left(q L\right)^{1/2} + O (qL) = \frac{k^2 b L}{2 kH + \sinh (2kH)}+ O (qL)$$ which is identical to Mei’s (1985) equation (3.21)–(3.22) for exact resonance, in the limit of $qL \ll 1$, and also converges to the result of Davies & Heathershaw (1984) for that same limit. For large bar amplitudes, the reflection is significant if the bars occupy a length $L$ longer than the localization length $1/q$. However, the reflection coefficient for the wave amplitude only increases with $L$ as $\left[q L / (1+q L)\right]^{1/2}$, which is slower than the exponential asymptote given by Mei (1985) for sinusoidal bars, and predicted by Belzons et al. (1988) from the lowest-order theory applied to a random bottom. The present inclusion of the correlations of second-order and zeroth order terms may be thought of as the representation of multiple reflections that tend to increase the penetration length in the random medium. A deeper understanding of this question is provided by the comparison of numerical estimations of the reflection coefficients for the wave amplitudes $R$. A benchmark estimation for linear waves is provided by the step-wise model of Rey (1995) using integral matching conditions for the free propagating waves and three evanescent modes at the step boundaries.
This model is known to converge to the reflection coefficients given by an exact solution of Laplace’s equation and the boundary conditions, in the limit of an infinite number of steps and evanescent modes. Calculations are performed here with 70 steps and 3 evanescent modes. These numbers are chosen because a larger number of steps or evanescent modes gives indistinguishable results in figure \[CompMei\]. Results of the benchmark model are in good agreement with the measurements of Davies & Heathershaw (1984), except for wave components for which the reflection over the beach, not included in the model, is comparable to the reflection over the bars. An analytical expression $R_{\mathrm{Mei}}$ is given by Mei (1985). $R$ for the present second order theory is given by $R_{S}$ (\[Krsbscat\]). We further compare these estimates to the reflection coefficient $R_{E,{\mathrm{Mei}}}$ that is deduced from the energy evolution given by Hara & Mei (1987), using the approximate solutions of Mei (1985, his equations 3.8–3.23). One may prefer to reformulate the energy evolution from the amplitude evolution equations of Kirby (1988) because he used a continuous water depth $h=b\sin (l_0 x)$, instead of Mei’s $h=b\cos(l_0 x)$ which is discontinuous at $x=0$ and $x=L$[^1]. Yet both Mei’s and Kirby’s equations lead to the same energy exchange between the incident and reflected components. Using Mei’s (1985) notations, the amplitudes of the incident waves, reflected waves, and bottom undulations are $A=2 \sigma \Phi^{+}_{0,{{\boldsymbol k}}}/g$, $B=2\sigma \Phi^{-}_{0,{{\boldsymbol k}}}/g$, and $D=-2{\mathrm{i}}G_{-2k}$, and the ‘cut-off’ frequency is $$\Omega_0=\frac{\sigma k D}{2 \sinh(2kH)}.\label{Om0}$$ The energy evolution of waves propagating over sinusoidal bars along the $x$-axis is given by Hara & Mei (1987).
The reflected wave energy ${BB^\star}/{2}$ should be a solution of $$\frac{\partial }{\partial t} \left(\frac{BB^\star}{2}\right) - C_g \frac{\partial }{\partial x} \left(\frac{BB^\star}{2}\right) = {\mbox{Re}}\left({\mathrm{i}}\Omega_0 B^\star A \right), \label{EMei}$$ where $B^\star$ denotes the complex conjugate of $B$. This is identical to (\[E1\]) for a *monochromatic* bottom except that the imaginary part is replaced by a real part. Equation (\[EMei\]) yields a corresponding energy reflection coefficient, given by the fraction of energy lost by the incoming waves, $$R_{E,{\rm Mei}} =-\frac{1}{C_g}\int_0^L {\mbox{Re}}\left({\mathrm{i}}\Omega_0 B^\star A \right) {\mathrm{d}}x.$$ Simple analytical expressions can be obtained at resonance, where Mei’s (1985) eq. (3.20)–(3.21) give, $$\frac{AB^\star}{A^2(0)}=\frac{-\mathrm{i} \sinh \left(2\tau(1-x/L)\right)}{2\cosh^2\tau}$$ with $\tau = \Omega_0 L / C_g$, so that $$R_{E,{\rm Mei}} = \frac{\cosh 2\tau -1}{4 \cosh^2\tau} = \frac{1}{2} \tanh^2 \tau = \frac{1}{2} R_{{\mathrm{Mei}}}^2,\label{RES1Mei}$$ and $$R_{E,{\rm Mei}}^{1/2} ={2}^{-1/2} R_{{\mathrm{Mei}}}.\label{RS1Mei}$$ It is not surprising that the energy transfer thus computed differs from the energy computed from the amplitude evolution equations. This is typical of small perturbation methods, and was discussed by Hasselmann (1962), among others. Yet, it is remarkable that the ratio of the two is exactly one half. The transfer of energy given by ${\mathrm{i}}\Omega_0 B^\star A$ in (\[EMei\]) thus corresponds to an amplitude reflection coefficient $R_{E,{\rm Mei}}^{1/2}$ that is smaller by a factor $2^{-1/2}$, at resonance, compared to $R_{\mathrm{Mei}}$ (figure 3). This under-prediction of the energy reflection by (\[RES1Mei\]) also has consequences for the analysis and calculation of wave set-up due to wave group propagation over a reflecting bottom.
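The resonant identities above are pure algebra and are easily verified numerically:

```python
import math

def RE_Mei(tau):
    """Resonant energy reflection (cosh 2 tau - 1) / (4 cosh^2 tau)."""
    return (math.cosh(2.0 * tau) - 1.0) / (4.0 * math.cosh(tau) ** 2)

# identity: RE_Mei(tau) = 0.5 tanh^2(tau) = 0.5 R_Mei^2, with R_Mei = tanh(tau)
checks = []
for tau in (0.1, 0.5, 1.0, 2.0):
    R_Mei = math.tanh(tau)
    checks.append((RE_Mei(tau), 0.5 * R_Mei ** 2))
```

Taking the square root recovers the factor $2^{-1/2}$ between the two amplitude reflection estimates.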
Indeed, the estimation of the scattering stress (\[Tbscat\]), which contributes to the driving of long waves, was analyzed by Hara & Mei (1987) using a calculation similar to (\[RES1Mei\]), which is a factor 2 too small. This may explain, in part, their under-prediction of the observed elevation of the long wave travelling with the incident wave group. However, the present theory, compared to that of Hara & Mei (1987), is limited to small bar amplitudes, and fails to reproduce their observation of the transition from oscillatory to exponential decay in the spatial evolution of the wave amplitude. Effects of wave and bottom relative phases ------------------------------------------ The energy exchange coefficient given by the source term always gives energy to the least energetic components (in the absence of currents), and thus the energy evolution is monotonic. The action source term (\[E1\]) of order $\eta$, that was neglected so far, may have any sign, and thus lead to oscillatory evolutions for the wave amplitudes, as predicted by Mei (1985) and observed by Hara & Mei (1987). At resonance, and for $U=0$, it can be seen that the first-order energy product $\Phi^{+}_{0,{{\boldsymbol k}}} \Phi^{-}_{0,{{\boldsymbol k}}} G_{-2k}$ in (\[E1\]) is equal to ${\mathrm{i}}A B^\star D /8$, in the limit of a large number of bars. Based on Mei’s (1985) approximate solution, in the absence of waves coming from across the bars, this quantity is purely real so that its imaginary part is zero and the corresponding reflection coefficient $R_{S1}$ is zero. For $U\neq 0$ this property remains as can be seen by replacing Mei’s (1985) solution with Kirby’s (1988).
However, similar correlation terms were also neglected in the second order energy (Appendix B), so that the oscillations of the amplitude across the bar field, observed by Hara & Mei (1987), may occur due to terms of the same order as the scattering source term, including interactions of the sub-harmonic kind (Guazzelli et al. 1992). Further, the bottom-surface bispectrum in $S_1$ may become significant if there is a large amount of wave energy coming from beyond the bars. This kind of situation, e.g. due to reflection over a beach, was discussed by Yu & Mei (2000). ![Reflection coefficients for the wave amplitudes for $U=0$, $H=0.156$ m, $l_0=2 {\pi}$ m$^{-1}$, $m=4$. In (a) $b=0.05$ m so that $\eta=b/H=0.32$, corresponding to one of the experiments of Davies & Heathershaw (1984), and in (b), $b=0.01$ m, so that $\eta=b/H=0.064$.[]{data-label="CompMei"}](fig3_rev.eps){width="70.00000%"} In the absence of such a reflection, and away from resonance but for small values of the scattering strength parameter $\tau=(qL)^{1/2}=\Omega_0 L/C_g$, the imaginary part of $\Phi^{+}_{0,{{\boldsymbol k}}} \Phi^{-}_{0,{{\boldsymbol k}}} G_{-2k}$ is an order $(qL)^{1/2}$ smaller than the real part and thus contributes a negligible amount to the reflection. Source term and deterministic results for sinusoidal bars --------------------------------------------------------- For large bar amplitudes, such as $\eta=b/H=0.32$ (figure 3.*a*), all theories with linearized bottom boundary conditions fail to capture the shift of the reflection pattern to lower wavenumbers. This effect was discussed by Rey (1992), and attributed to the non-linear nature of the dispersion relation and the rapid changes in the water depth. Reflection coefficients are still relatively well estimated. For these large amplitudes Mei’s (1985) approximate solution is found to be more accurate at resonance compared to the source term.
As expected from MAHR and proved here, $R_{{\mathrm{Mei}}}$ and $R_{S}$ become identical as $\eta=b/H$ goes to zero (figure \[CompMei\].*b*). This fact provides a verification that the first order scattering term $S_1$ is different from Hara & Mei’s (1987) energy transfer term, and only accounts for a small fraction of the reflection, a fraction that goes to zero as $\eta \rightarrow 0$. It is also found that for all bottom amplitudes, the source term expression provides a simple and accurate solution away from resonance. Nevertheless, the scattering source term cannot give an accurate description of the spatial variation of the wave amplitude over a deterministic bottom, as shown in figure \[CompDH10\]. This is related to the fact that, in MAHR, the present reflection coefficient was obtained from the theory of Pihl et al. (2002) after averaging over the auto-correlation scale of the bottom topography. The present theory can only provide an accurate description of the spatial evolution of the wave field over scales larger than this bottom auto-correlation distance. ![Spatial evolution of the incident and reflected wave amplitudes represented by transmission ($T$) and reflection ($R$) coefficients, in the near-resonant case $U=0$, $H=0.156$ m, $l_0=2 {\pi}$ m$^{-1}$, $m=10$ bars, $b=5$ cm, $\eta=b/H=0.12$ and with a wave period $T=1.23$ s. This situation corresponds to one of the experiments of Davies & Heathershaw (1984), and their measurements lie in the shaded area.[]{data-label="CompDH10"}](DH_m10_012.eps){width="70.00000%"} Effects of currents ------------------- A prominent feature of solutions with current is the modification of the resonant condition from $k=k'$ and $l=2k$, to $\sigma'=\sigma + l U$ and $l=k+k'$, discussed in detail by Kirby (1988). This shift was verified in the laboratory by Magne, Rey & Ardhuin (2005).
The magnitude of the resonant peak is also largely enhanced for waves against the current, due to a general conservation of the action fluxes and the variation in the action transport velocity, from $C_g+U$ for the incident waves, to $C_g'-U$ for the reflected waves. Further, the modulation of the current and the surface elevation also introduce an additional scattering, via the $M_c$ term in the coupling coefficient (\[M\]). Notations here assume that ${{\boldsymbol k}}$ is in the direction of the current and ${{\boldsymbol k}}'$ is opposite to the current. At resonance, in the limit $\eta\rightarrow 0$, the amplitude reflection coefficient $R_S$ given by (\[Krsbscat\]) converges to the reflection coefficient given by Kirby (1988). Using our notations, he obtained $$R_{\rm Kirby} =\left[\frac{\sigma'\left(Cg+U\right)}{\sigma\left(Cg'-U\right)}\right]^{1/2}\tanh(QL)\label{KKirby},$$ with $$Q=\frac{\Omega_c \omega}{\left[\sigma \sigma'\left(Cg+U\right)\left(Cg'-U\right) \right]^{1/2}}$$ and $\Omega_c = -M(k,k') b /\left(4 \omega\right)$. Our amplitude reflection coefficient $R_{S}$ is estimated with the approximation ${\mathrm{e}}^{q \sfbsQ L} = \sfbsI+q \sfbsQ L + O\left((qL)^2\right)$, so that, to first order in $qL$, $$R_{S} \approx \left[\frac{\sigma' C_g C_g' q L }{\sigma \left(C_g'-U\right)^2} \right]^{1/2}.$$ Replacing the analytical expression (\[specatres\]) in (\[defq\]) yields $$R_{S} \approx \frac{bLM(k,k')}{4 \left[\sigma^2 (C_g'-U)^2 \right]^{1/2}},$$ which is clearly identical to (\[KKirby\]) at first order in $qL$. For finite values of $qL$, the reflection coefficient (\[Krsbscat\]) corresponding to the solution of (\[Nevol\_mono\]) is obtained by calculating the proper matrix exponential. Anticipating oceanographic conditions with a water depth of 20 m, a strong 2 m s$^{-1}$ current corresponds to a Froude number of 0.17 only.
For such a low value of ${\mbox{\textit{Fr}}}$ in the context of Davies & Heathershaw’s (1984) laboratory experiments, the convergence of the present theory and that of Kirby (1988) is illustrated in figure 5. The reflection coefficient is largely increased for following currents due to the general conservation of the wave action flux. In that case $R$ is enhanced by the factor $\left\{\sigma (Cg+U) / \left[\sigma'(Cg'-U)\right]\right\}^{1/2}$. The overall increase in $R$ for following waves amounts to about 60% at ${\mbox{\textit{Fr}}}=0.17$, for the laboratory sinusoidal bars of Davies & Heathershaw (1984) shown before (figure 3), with a reflected wave energy multiplied by a factor 2.5, compared to the case without current. ![Amplitude reflection coefficients for monochromatic waves over sinusoidal bars for the same settings as in figure 3, with a following (left) or opposing (right) current of magnitude $U=0.2$ m s$^{-1}$. For reference the reflection coefficient without current, as given by the exact model of Rey (1995), is also shown. The position of the resonant wavenumber is indicated with the grey vertical dash-dotted line.[]{data-label="figKirby"}](fig4_rev.eps){width="90.00000%"} For this mild current the contribution of the current fluctuation to the coupling coefficient is small, with a maximum increase of 16% on the action reflection coefficient, and 8% for the wave amplitude. However, for larger Froude numbers, this additional scattering may become significant as illustrated by figure \[figKirby2\]. The present theory and that of Kirby (1988) agree reasonably well for finite values of $\eta$, and we thus expect the source term to represent accurately the scattering of waves over bottom topographies in cases of uniform currents.
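The action-flux enhancement factor quoted above can be estimated for laboratory-like conditions; a sketch assuming $H=0.156$ m, $U=0.2$ m s$^{-1}$ and a 1.23 s wave period (the helper functions and the bracketing interval are ours, and only the order of magnitude is meaningful):

```python
import math

g = 9.81                       # assumed gravity (m/s^2)
H, U, T = 0.156, 0.2, 1.23     # depth (m), current (m/s), wave period (s): assumed lab values
omega = 2.0 * math.pi / T      # absolute frequency, conserved through the scattering

def sigma(k):
    return math.sqrt(g * k * math.tanh(k * H))

def cg(k):
    return sigma(k) * (0.5 + k * H / math.sinh(2.0 * k * H)) / k

def solve_k(s):
    """Bisection for sigma(k) + s*k*U = omega; s=+1: waves with the current, s=-1: against."""
    f = lambda k: sigma(k) + s * k * U - omega
    lo, hi = 1e-9, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

k = solve_k(+1.0)    # incident wavenumber (following the current)
kp = solve_k(-1.0)   # reflected wavenumber (propagating against the current)
factor = math.sqrt(sigma(k) * (cg(k) + U) / (sigma(kp) * (cg(kp) - U)))
```

The reflected wavenumber is larger than the incident one (shorter waves against the current), and the amplitude enhancement factor is of order ten percent for this mild current.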
For $m=4$ sinusoidal bars, the energy reflection coefficient was found to be within 10% of the exact solution for over 90% of the wavenumber range shown in figure 3, for $\eta<0.1$ and ${\mbox{\textit{Fr}}}=0$, and this conclusion is expected to hold for ${\mbox{\textit{Fr}}}< 0.2$, given the agreement with Kirby’s (1988) approximate solution. This accuracy is twice as good as that found for a rectangular step with ${\mbox{\textit{Fr}}}= 0$ (MAHR). The present method has the advantage of a large economy in computing power. This method is also well adapted for natural sea beds, for which continuous bathymetric coverage is only available in restricted areas, and thus only the statistical properties of the bottom topography are accessible, assuming homogeneity. ![Amplitude reflection coefficients for monochromatic waves over sinusoidal bars for the same settings as in figure 3 and 4, with a stronger following current of magnitude $U=0.6$ m s$^{-1}$. The position of the resonant wavenumber is indicated with the grey vertical dash-dotted line. The vertical dashed line corresponds to the wavenumber for which $Cg'=U$. For larger wavenumbers the reflected waves are blocked and cannot propagate against the current.[]{data-label="figKirby2"}](fig4c_rev.eps){width="70.00000%"} Scattering with current on a realistic topography ================================================= Sandwaves in the North Sea -------------------------- A real ocean topography, at least on the continental shelf, generally presents a continuous and broad bottom elevation spectrum. The effects of a mean current on wave scattering are now examined using a bottom spectrum estimated from a detailed bathymetric survey of an area centered on the crest of a sand dune, in the southern North Sea (figure \[bspectra\]). In this region, tidal currents are known to generate a wide array of bedforms, from large scale tidal banks to sand dunes and sand waves (e.g.
Dyer & Huntley 1999; Hulscher & van den Brink 2001). Although sand dunes present a threat to navigation and are closely monitored (Idier et al. 2002), dunes are much larger than typical wind sea and swell wavelengths. These dunes, however, are generally covered with shorter sandwaves. In the surveyed area the sandwaves have a peak wavelength of 250 m, and an elevation variance of 1.7 m$^2$, which should lead to strong oblique scattering of waves with periods of 10 s and longer. Over smaller areas of 3 by 3 km the variance can be as large as 3.3 m$^2$ with a better defined spectral peak, so that our chosen spectrum is expected to be representative of the entire region, including high and low variances on dune crests and troughs, respectively. The southern North Sea is also known for the attenuation of long swells generated in the Norwegian Sea. This attenuation has been generally attributed to the dissipation of wave energy by bottom friction (Weber 1991). The bottom spectrum of the chosen area, like the spectra that were obtained by AH from the North Carolina shelf, rolls off sharply at high wavenumbers, typically like $l^{-3}$ for the directionally-integrated bottom spectrum $F^{B2D}$, and like $l^{-4}$ for the full spectrum $F^B$. Here the maximum variance is found for bottom wavelengths of the order of or larger than 250 m (figure \[bspectra\]). For a typical swell period of 10 s, this corresponds to 2 times the wavelength in 20 m depth, and thus a rather small scattering angle, 30$^\circ$ off from the incident direction. Swells propagating from a distant storm, with fixed absolute frequency $\omega=\sigma + {{\boldsymbol k}}{\boldsymbol{\cdot}}{\boldsymbol{U}}$, should be reflected by bottom undulations with widely different variances as the current changes.
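The consistency of the two roll-off exponents quoted above is easy to verify: for an isotropic full spectrum $F^B\propto l^{-4}$, the direction-integrated spectrum $F^{B2D}(l)=\int_0^{2\pi}F^B\,l\,{\rm d}\theta$ picks up an extra factor $l$ and thus decays like $l^{-3}$. A minimal sketch (the spectral amplitude is an arbitrary illustrative constant):

```python
import math

def FB(l, A=1.0):
    """Assumed isotropic full bottom spectrum, F^B proportional to l^-4
    (A is an illustrative amplitude, not a measured value)."""
    return A * l ** -4

def FB2D(l):
    """Direction-integrated spectrum: integral of F^B * l over direction,
    here simply 2*pi*l*F^B for an isotropic spectrum."""
    return 2.0 * math.pi * l * FB(l)

def loglog_slope(f, l1, l2):
    """Power-law exponent of f between wavenumbers l1 and l2."""
    return math.log(f(l2) / f(l1)) / math.log(l2 / l1)
```

The slope of `FB2D` comes out one power shallower than that of `FB`, matching the $l^{-3}$ versus $l^{-4}$ behaviour stated in the text.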
![(a) high-resolution bathymetry of a sand wave field in the southern North Sea with depths relative to chart datum, and (b) corresponding bottom elevation spectrum with contour values representing $\log_{10}\left( 4{\pi}^{2}F^{B}\right)$. The loci of the interacting bottom and surface wave components are indicated for 12.5 s waves from the North-East in 25 m depth, with $U=0$ (middle circle), $U=2$ m s$^{-1}$ (smaller ellipse), and $U=-2$ m s$^{-1}$ (larger ellipse); $U$ is positive from the North-East. (c) Direction-integrated bottom variance spectra from the North Carolina shelf and the southern North Sea. Vertical lines indicate $k/l$ ratios and incident resonant directions $\theta_I$, assuming an incident wave field of 12.5 s period in 25 m depth and bedforms parallel to the $y$-axis. For such bedforms, the angle between incident and scattered waves is $180^\circ-2\theta_I$.[]{data-label="bspectra"}](Spectre_fond_compresse.eps){width="\textwidth"} Given this bottom spectrum and the mean water depth, simple solutions are available for uniform conditions, because the scattering source term is a linear function of the directional spectrum at a given value of the absolute frequency $\omega$ (see AH for numerical methods). We consider the wave directional spectrum for a frequency $f_0$ and discretize it in $N_a$ directions. This spectrum is thus a vector ${\mathbf E}$ in a space with $N_a$ dimensions. The square matrix $ {\sfbsS}$ such that ${\rm d} {\mathbf E}/{\rm d} t = {\sfbsS} {\mathbf E}$ is symmetric, and can thus be diagonalized, which gives $N_a$ eigenvalues $\lambda_n$ and corresponding eigenvectors ${\mathbf V}_n$, such that ${\sfbsS} {\mathbf V}_n = \lambda_n {\mathbf V}_n$. Thus the time evolution is easily obtained by a projection of ${\mathbf E}$ on the basis $\{{\mathbf V}_n, 1 \leq n \leq N_a \}$, giving a decomposition of ${\mathbf E}$ into elementary components.
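The modal evolution just described can be illustrated with a deliberately simplified scattering matrix. The sketch below is an assumed toy model, not the actual matrix $\sfbsS$ of the paper: it takes $S=a(P-I)$, with $P$ the projector onto the isotropic spectrum and $a$ an illustrative relaxation rate, so that $S$ is symmetric, conserves total action, and has eigenvalues $0$ (isotropic eigenvector) and $-a$ for every anisotropic component, each decaying with half-life $-\ln 2/\lambda$:

```python
import math

N = 12           # number of directional bins (toy resolution)
a = 1.0 / 600.0  # assumed relaxation rate in s^-1 (illustrative only)

def evolve(E0, t):
    """Evolve dE/dt = S E for the toy matrix S = a*(P - I): the isotropic
    component (eigenvalue 0) is constant, every anisotropic component
    (eigenvalue -a) decays exponentially."""
    mean = sum(E0) / N                  # projection on the isotropic eigenvector
    decay = math.exp(-a * t)
    return [mean + (e - mean) * decay for e in E0]

E0 = [1.0 if i == 0 else 0.0 for i in range(N)]  # narrow incident spectrum
half_life = math.log(2) / a                       # -ln 2 / lambda for lambda = -a
```

After one half-life each anisotropic component is halved while the total action is unchanged, and at long times the spectrum relaxes to its isotropic part, mirroring the behaviour of the full eigenvector decomposition.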
Each of these components of the directional spectrum decays exponentially in time, except for the isotropic part of the spectrum, which remains constant because that eigenvector corresponds to $\lambda =0$. The eigenvalues thus give interesting timescales for the evolution of the spectrum toward this isotropic state, with a half-life time of each eigenvector given by $-\ln 2/\lambda_n$. Numerical results are shown here for a mean water depth of 20 m, in order to make the result more visible. For that depth, waves with a period $T=10$ s have a dimensionless depth $kH=1.04$, which is close to the value for which the coupling coefficient $M_b$ is maximum (AH). As a result, scattering is probably stronger than in real conditions where the mean water depth is 30 m. The following results should still provide some understanding of the likely real effects, at least for larger wave periods with similar values of $kH$. Without current, if $kH$ is kept constant, the magnitude of the coupling coefficient $K(k,k',H)$ decreases like $H^{-9/2}$ (AH), but it is compounded by a higher bottom elevation spectral density for small values of $k$. For back-scattering, the bottom wavenumbers are generally in the range where the bottom spectrum rolls off like $l^{-4}$ (figure \[bspectra\]). Therefore, for these back-scattering directions, the evolution time scale of waves with the same value of $kH$, e.g. $T=11.2$ s in 25 m depth or $T=13.2$ s in 35 m depth, is larger by a factor $(25/20)^{1/2}\simeq 1.1$ or $(35/20)^{1/2}\simeq 1.3$, respectively. For incident wave and scattering directions for which the bottom spectrum is more uniform and does not compensate for the reduction in the coupling coefficient, such as forward scattering of waves from the North-West, the time scales increase by $(25/20)^{9/2}\simeq 2.7$ or $(35/20)^{9/2}\simeq 12$, respectively.
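The quoted dimensionless depth follows directly from the linear dispersion relation $\sigma^2=gk\tanh(kH)$. A small bisection solver (illustrative tolerance, not the paper's numerics) reproduces $kH\simeq 1.04$ for $T=10$ s in 20 m depth, and confirms that $T=11.2$ s in 25 m depth has nearly the same $kH$:

```python
import math

G = 9.81  # gravity, m/s^2

def wavenumber(T, H):
    """Solve sigma^2 = g*k*tanh(k*H) for k by bisection, with sigma = 2*pi/T.
    The left-hand side is monotone in k, so bisection always converges."""
    sigma2 = (2.0 * math.pi / T) ** 2
    lo, hi = 1e-6, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if G * mid * math.tanh(mid * H) < sigma2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because $kH$ fixed implies $\sigma\propto H^{-1/2}$, the period with the same $kH$ scales as $H^{1/2}$: $10\sqrt{25/20}\simeq 11.2$ s and $10\sqrt{35/20}\simeq 13.2$ s, as stated in the text.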
With $N_a=120$, corresponding to a directional resolution of $3^\circ$, figure 8 shows that the shortest time scales (large negative values of $\lambda_n$) correspond to directional spectra (eigenvectors) with strong local variations. These eigenvectors are thus associated with scattering at small oblique angles (forward scattering). Only the eigenvectors associated with the last 10 eigenvalues have a rather broad support, corresponding to scattering at much larger angles. Moreover, the strongest scattering corresponds to a half-life time of 430 s, and mostly affects waves from the North-West or South-East, i.e. propagating in a direction along the sandwave crests. The timescale for waves from the North-East or South-West is about five times larger (the corresponding range of indices is $80<n<110$). ![Eigenvalues ordered by magnitude (top), and corresponding eigenvectors (bottom right) of the scattering matrix $ {\sfbsS}$ for $U=0$, $f_0=0.1$ Hz, and $H=20$ m. The first three and last three eigenvectors are shown in more detail in the bottom left.[]{data-label="figEigen"}](eigenvectors_U0.eps){width="80.00000%"} The $n=118$ eigenvector corresponds to an exchange of wave energy between waves travelling in opposite directions across the sandwaves, but the corresponding half-life is 3 hours and 15 minutes. Similar results were found for $N_a=180$ and $N_a=72$, and thus appear to be insensitive to the discretization. Instead of this idealized horizontally uniform situation, practical situations rather correspond to quasi-stationary conditions with spatial gradients in at least one dimension. In this case the simple steady solutions found above for 2D topography are not physical. Indeed, a 3D bottom causes scattering along the transversal direction $y$, and the energy propagating in that direction builds up slowly, up to the point where it becomes as large as the incident wave energy.
This process can take a time much longer than the typical duration of a storm or swell arrival, and dissipative processes are likely to be important as the wave energy increases (e.g. Ardhuin et al. 2003). In order to go beyond qualitative statements on time and spatial scales of spectral relaxation, and short of simulating an actual storm in two dimensions, the effects on the wave spectrum are illustrated with a one-dimensional model configuration. The source term $S_{\mathrm{bscat}}$ was introduced in version $2.22$ of the wave model WAVEWATCH III (Tolman 1991, 2002), based on the wave action evolution equation (\[action\_balance\]) in which the time derivative on the left hand side is now a Lagrangian derivative following a wave packet in physical and spectral space. Bottom scattering is the only source term activated in the present calculation. The model was run with a spectral grid of $30$ frequencies ranging from $0.04$ to $0.788$ Hz and a directional resolution of $3^\circ$. Unfortunately the model spectrum is discretized with components at fixed intrinsic frequencies $\sigma$ and directions $\theta$, which is most appropriate for other processes. Therefore a small amount of numerical diffusion leads to a change of action at each absolute frequency when $U\neq 0$, and the total action is only approximately conserved in that case, with a net change of about $1\%$ of the integral of the absolute value of the source term for $U=\pm 2$ m s$^{-1}$, and four orders of magnitude smaller, i.e. at the round-off error level, for $U=0$. We have chosen to show cases with significant back-scatter, corresponding to waves normally incident over the sandwaves. This choice also corresponds to a weaker forward scattering, compared to waves propagating along the sandwave crests.
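The spectral grid spacing is not fully specified in the text; assuming the geometric frequency progression that is standard in WAVEWATCH III, the quoted bounds imply an increment factor of roughly 1.108 per band. A short sketch of such a grid (the progression itself is an assumption inferred from the stated bounds):

```python
def spectral_grid(f_min=0.04, f_max=0.788, n=30):
    """Geometric progression of n frequencies between f_min and f_max (Hz).
    The increment factor r is inferred from the bounds, not stated in the text."""
    r = (f_max / f_min) ** (1.0 / (n - 1))
    return [f_min * r ** i for i in range(n)]
```

This reproduces the stated 30 bands from 0.04 to 0.788 Hz with a constant frequency ratio between neighbouring bands.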
Scattering of waves normally incident on the sandwaves ------------------------------------------------------ To simplify the interpretation of the results, and the processing of the boundary conditions, a one dimensional (East-West) propagation grid is used for the computations, assuming that the wave field, still fully directional, is uniform in the North-South direction. The waves are propagated over a model grid $100$ km long, with a mean depth of $H=20$ m, and a spatial grid step of 5 km (figure \[incidentwave\].a). As discussed above, this water depth is chosen to make the result more visible, and a significant broadening of the incident peak with a (weaker) back-scatter of waves is also found for $H=35$ m and $f_p=0.1$ Hz (not shown). ![(a) Schematic of the model grid and (b) incident wave spectrum specified at point $F$. Model output is shown below for point $O$. Please note that waves are represented with their arrival direction (direction from, contrary to the standard wind sea convention). The frequency is the relative frequency $\sigma / 2{\pi}$.[]{data-label="incidentwave"}](Incident_spec.eps){width="\textwidth"} A Gaussian incident surface wave spectrum is imposed, with a mean direction from the North-East, a narrow peak directional spread of $12^\circ$, and a peak frequency of $0.08$ Hz (figure \[incidentwave\].b). The source term is integrated with a time step of $120$ s, and the advection in space uses a third order scheme with a time step of $120$ s (Tolman 2002). The scattering source term acts as a diffusion operator with a typical 3-lobe structure, negative at the peak of the wave spectrum, and positive in directions of about 30$^\circ$ on both sides of the peak. This is identical in form, but larger in magnitude, to the effect described by AH. In general the scattering effects are relatively stronger at the lowest frequencies, at least in the range of frequencies used here.
For still lower frequencies the scattering coefficient $K$ decreases (see also AH) so that, on these spatial scales, very little scattering occurs for infra-gravity waves ($f< 0.05$ Hz). In addition to this grazing-angle forward scattering, a significant back-scatter is found, in particular in the case of following currents. ![Computed source terms at the boundary forcing point $F$, (a) for $U=0$, (b) for a following current $U=2$ m s$^{-1}$, (c) for an opposing current $U=-2$ m s$^{-1}$. The frequency is the relative frequency $f=\sigma / 2{\pi}$.[]{data-label="source"}](Sourceterm_rev.eps){width="80.00000%"} ![Computed wave spectra at point $O$, 40 km inside of the model domain, after 5 hours of propagation, (a) for $U=0$, (b) for a following current $U=2$ m s$^{-1}$, (c) for an opposing current $U=-2$ m s$^{-1}$. The frequency is the relative frequency $f=\sigma / 2{\pi}$.[]{data-label="spectra"}](suite_spectrum.eps){width="80.00000%"} For an absolute wave frequency of $0.08$ Hz, the curves followed by the bottom resonant wavenumbers are overlaid on the bottom spectrum (figure \[bspectra\].b). The wavenumbers ${\boldsymbol{l}}$ along these curves satisfy both the relations ${\boldsymbol{k^{\prime}}}+{\boldsymbol{l}}={{\boldsymbol k}}$ and $\sigma'=\sigma+{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}$. Without current the curve is exactly a circle, and it becomes an ellipse for relatively weak currents (Appendix C). This approximation is used in the model to compute the source term. The current imposed here shifts significantly the resonant configuration for the bottom and surface wavenumbers. A current opposed to the waves enlarges the ellipse towards higher wavenumbers, while a following current leads to a ‘sampling’ of smaller wavenumbers, i.e. bottom features of larger scales. Since the bottom topography has the largest variance at low wavenumbers, scattering is strongest for following currents (figure \[source\]).
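The elliptical resonant locus mentioned here (derived in Appendix C) is straightforward to evaluate. The sketch below implements the polar equation $r=P/(1+e\cos\theta)$ with $e=U/C_g$ and $P=r_0(1+e\cos\theta_0)$, which reduces to the circle $r=r_0$ for $U=0$ (all numerical values in the usage note are illustrative):

```python
import math

def resonant_radius(theta, r0, theta0, U_over_Cg):
    """Polar equation r = P / (1 + e*cos(theta)) of the resonant wavenumber
    locus, with eccentricity e = U/Cg and P = r0*(1 + e*cos(theta0))."""
    e = U_over_Cg
    P = r0 * (1.0 + e * math.cos(theta0))
    return P / (1.0 + e * math.cos(theta))
```

By construction the returned radius satisfies the first-order relation $r=r_0+(U/C_g)(r_0\cos\theta_0-r\cos\theta)$ exactly, and for $U=0$ the locus is the circle $r=r_0$.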
With our choice of parameters, there is about a factor 10 reduction in the bottom variance that causes backscatter as $U$ is changed from $2$ m s$^{-1}$ to $-2$ m s$^{-1}$. Besides, the coupling coefficient $K(k',k,H)$ is increased in the case of a following current, as discussed above for the 2D cases. The resulting wave spectra are also modified due to the conservation of the wave action flux, enhancing the reflected wave energies for $U>0$ (figure \[spectra\]). This effect is similar to what was found in the 2D cases considered above, due to the different energy flux velocities $U+C_g$ for the incident waves, and $U-C_g'$ for the reflected waves. In all cases investigated here, the narrow incident wave spectrum is significantly broadened in directions, and that effect is most pronounced for frequencies in the range 0.07–0.10 Hz. Without current or with following currents, spectra in the middle of the model domain exhibit a significant level of back-scattered energy, which increases the significant wave height and the directional spread on the up-wave side of the sandwave field (figure \[spectra\]). This effect should not be very sensitive to the directional spread of the incident wave field, because the projection of the directional spectrum on the corresponding ‘smooth’ eigenvectors of the scattering matrix (figure 8) is insensitive to local variations in the directional spectrum. This reflection should thus occur for a wide range of sea states. At the same time, the incident peak of the wave field broadens in directions as it propagates to the down-wave end of the model domain. 
This broadening is fast close to the forcing boundary (point F), with values of the peak frequency directional spreads $\sigma_{\theta,p}$ larger than $35^\circ$ at a point 5 km inside the domain (not shown), and becomes more gradual as the waves propagate, due to the slower evolution of broad spectra that are associated with smaller eigenvalues in the scattering matrix (see also Ardhuin et al. 2003a, Ardhuin & Herbers 2005). It was also verified that this broadening of the main spectral peak is strongest for waves propagating along the main sandwave crest directions (e.g. from the North-West in our case) due to the larger bottom variance at ${\boldsymbol{l}}={{\boldsymbol k}}-{{\boldsymbol k}}'$ with ${{\boldsymbol k}}\simeq{{\boldsymbol k}}'$, resulting in a significant modification of the mean direction (Magne 2005). Finally, a decrease in significant wave height is found along the grid, indicating an attenuation due to wave-bottom scattering. In reality, bottom friction would likely induce a stronger decay, and that decay would be stronger than in the absence of scattering. Essentially the scattering increases the average time taken by wave energy to cross the domain, and, because of that longer time, bottom friction together with scattering would lead to a larger dissipation than friction alone (Ardhuin et al. 2003). Conclusion ========== The effect of a uniform current on the scattering of random surface gravity waves was investigated theoretically, extending the derivations of Ardhuin & Herbers (2002). Wave scattering may thus be represented by a scattering source term $S_{\rm bscat}({{\boldsymbol k}})$ for each wave component ${{\boldsymbol k}}$, in a closed spectral action balance equation.
That term gives the rate of exchange of wave action between wave components ${{\boldsymbol k}}$ and ${\boldsymbol{k^{\prime}}}$ that have the same absolute frequency, as a result of both water depth variations on the scale of the surface gravity wavelength, and current and mean free surface inhomogeneities induced by the bottom topography. The exchange of action between any two wave components ${{\boldsymbol k}}$ and ${\boldsymbol{k^{\prime}}}$ is proportional to the bottom elevation spectrum at the wavenumber vector ${\boldsymbol{l}}={{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}$, which is characteristic of Bragg scattering. The spectral integral of the corresponding wave pseudo-momentum source term ${{\boldsymbol k}}S_{\rm bscat}$ gives a recoil force exerted by the bottom on the water column, in addition to the hydrostatic pressure force. After Magne et al. (2005a) proved that the source term was applicable to non-random topography and accurate in the limit of small bottom amplitudes, just like Bragg scattering approximations for acoustic or electromagnetic waves (e.g. Elfouhaily & Guerin 2004), it is further found here that monochromatic wave results are recovered by taking the limit of narrow incident and reflected wave spectra. In the absence of a current, for a finite sinusoidal bottom and monochromatic waves, the reflection coefficients given by the source term converge to Mei’s (1985) theory in the limit of small bottom amplitudes. The range of maximum reflection and the side lobe pattern of the reflection coefficient as a function of the incident wavenumber are thus a direct consequence of the shape of the bottom spectrum in that case. With this point of view, there is resonance at all wavenumbers but its strength is proportional to the bottom elevation variance at the corresponding scale. In the presence of a current, reflections converge in the same manner to the more general theory of Kirby (1988).
In two dimensions, the main effects of a current are an enhancement of reflected wave amplitudes when the incident waves propagate with the current, due to the conservation of the wave action flux, and a Doppler-like shift of the resonant wave frequencies that undergo maximum reflection. The two scale approximation was found to hold very well, even for relatively fast evolutions of the wave amplitudes over two wavelengths (e.g. figure 3). However, the source term does not give a good representation of the spatial evolution of the wave field on scales shorter than the bottom correlation length, nor can it give reasonable results when another wave train propagates from beyond the bars. In that latter case, a lower order source term must be considered, and a closed action balance cannot be obtained since that extra term depends on the phase relationship between the incident waves, reflected waves and bottom undulations. In three dimensions and over the shallow areas of the southern North Sea, where large sand waves are found with strong tidal currents, wave scattering is expected to be significant, and largely influenced by currents. Over natural topographies, the bottom typically de-correlates over scales shorter than the scattering-induced attenuation scales, so that a modification of the reflection due to a phase locking of the incident and reflected waves with the bottom can be neglected. The wave scattering theory presented in this paper is thus one more piece in the puzzle of wave propagation over shallow continental shelves, and this process may account for a significant part of the observed attenuation of swells in the southern North Sea. The representation of this phenomenon with a source term in the wave action balance equation is expected to be accurate in many conditions of interest. It is consistent with the wide use of phase-averaged models for engineering and scientific purposes when such large scales are involved.
The alternative use of phase-resolving elliptic refraction-diffraction models (e.g. Belibassakis 2001) is much more expensive in terms of computer resources, due to the necessity to resolve the wave phase and the ellipticity of the problem when back-scattering occurs. For applications to rotational currents, the mean current $U$ should be regarded as the wave advection velocity (Andrews & McIntyre 1978, see Kirby & Chen 1989 for practical approximate expressions), but a detailed derivation including scattering by rotational current fluctuations should be the next logical extension of the present theory. This is probably achievable by coupling the rotational part of the flow to the irrotational part, giving a modified Bernoulli equation (e.g. McWilliams et al. 2004). In practice, non-homogeneities in the bottom spectrum will probably have to be addressed, due to the sharp decrease of the coupling coefficient with water depth, and the generally higher bottom elevation variances in the shallower parts of the sea floor. In particular our limited bathymetric survey shows that sandwaves are modulated by sand dunes, very much like short water waves are modulated by long waves. This research was supported by a joint grant from CNRS and DGA. Bathymetric data were acquired by the French Hydrographic and Oceanographic Service (SHOM). Discussions with Michael McIntyre, Kostas Belibassakis, Vincent Rey, and Thierry Garlan are gratefully acknowledged. The results on the relative effects of current modulations and water depth changes owe much to remarks made by anonymous reviewers, without whom the present paper would have been limited to small Froude numbers.
Harmonic oscillator equation for the first order potential ========================================================== The harmonic oscillator equation (\[oscil2\]) can be written as a linear superposition of equations of the type $$\label{A1} \frac{d^2 f_1}{dt^2}+\omega^2 f_1={\mathrm{e}}^{{\mathrm{i}}\omega't}.$$ In order to specify a unique solution to (\[A1\]), initial conditions must be prescribed. In the limit of large propagation distances, the initial conditions contribute a negligible non-secular term to the solution. Following Hasselmann (1962), we choose $f_1(0)=0$ and $df_1/dt(0)=0$, giving $$f_1(\omega,\omega'; t)= \frac{{\mathrm{e}}^{{\mathrm i}\omega' t}-{\mathrm{e}}^{{\mathrm i}\omega t}+i(\omega-\omega')\sin(\omega t)/\omega}{\omega^2-\omega'^2} \mbox{ for }\omega'^2\neq\omega^2,$$ $$f_1(\omega,\omega'; t)= \frac{t{\mathrm{e}}^{{\mathrm i}\omega't}}{2i\omega'}-\frac{\sin(\omega t)}{2i\omega'\omega} \mbox{ for }\omega'=\pm\omega.$$ Harmonic oscillator equation and energy for the second order potential ====================================================================== Replacing $\phi_1$ (\[formphi2\]) in the surface boundary condition (\[surfbound3\]), $$\left( \frac{ d^2}{dt^2}+\sigma^2\right) \Phi^s_{2,{{\boldsymbol k}}}(t)= -gk\Phi^{{\rm si},s}_{2,{{\boldsymbol k}}} -\tanh(kH) \frac{\partial^2\Phi^{{\rm si},s}_{2,{{\boldsymbol k}}}}{\partial t^2}+{\rm I-VIII},$$ and conserving only the resonant terms of $\Phi_{1,{{\boldsymbol k}}'}^s$, one obtains $$\begin{aligned} \frac{\partial^2\Phi^{{\rm si},s}_{2,{{\boldsymbol k}}}}{\partial t^2}=&& \nonumber\\ -\sum_{{\boldsymbol{k^{\prime}}},{{\boldsymbol k}}''} \frac{{\boldsymbol{k^{\prime}}}\cdot {{\boldsymbol k}}}{k}& &\frac{\cosh(kH)}{\cosh(k'H)} M({\boldsymbol{k^{\prime}}},{{\boldsymbol k}}'')G_{{{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}}G_{{\boldsymbol{k^{\prime}}}-{{\boldsymbol k}}''} \Phi_{0,{{\boldsymbol k}}''}\frac{\partial^2}{\partial t^2} \left(f_1(\sigma',{\boldsymbol{l}}'
{\boldsymbol{\cdot}}{\boldsymbol{U}}-s\sigma''){\mathrm{e}}^{{\mathrm{i}}{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}t} \right), \nonumber\\ \end{aligned}$$ with ${\boldsymbol{l}}'={{\boldsymbol k}}''-{\boldsymbol{k^{\prime}}}$. In order to simplify the algebra we assume that the zeroth-order waves are random, with no correlation between $\Phi_{0,{{\boldsymbol k}}}^s$ and $\Phi_{0,{{\boldsymbol k}}''}^{s''}$ unless ${{\boldsymbol k}}=\pm {{\boldsymbol k}}''$ and $s=\pm s''$. Thus the only contributing terms to $N_{2,0}$ must verify ${{\boldsymbol k}}''={{\boldsymbol k}}$. Only those terms are now written explicitly, the others being grouped in the ’$\ldots$’. The amplitude $\Phi^+_{2,{{\boldsymbol k}}}$ satisfies the following forced harmonic oscillator equation, $$\begin{aligned} \label{dd1} \left( \frac{\partial^2}{\partial t^2}+ \sigma^2\right) \Phi^+_{2,{{\boldsymbol k}}}(t) =\sum_{{\boldsymbol{k^{\prime}}}} M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}) \left|G_{{{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}}\right|^2 \Phi_{1,{{\boldsymbol k}}''}f_1(\sigma',-\sigma-{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}) {\mathrm{e}}^{{\mathrm{i}}{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}t}+\ldots \nonumber\\\end{aligned}$$ This is a sum of equations of the form, $$\label{f2} \left( \frac{ d^2}{dt^2}+\sigma^2\right)f_2=f_1(\sigma',{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}-\sigma;t) {\mathrm{e}}^{{\mathrm{i}}{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}t}.$$ The solution $f_2$ may be written as $$f_2=f_{2,a}+f_{2,b},$$ where $$\label{f2a} f_{2,a}=-\frac{t{\mathrm{e}}^{-{\mathrm{i}}\sigma t}-\sin(\sigma t)/\sigma}{2 {\mathrm{i}}\sigma \left[\sigma'^2-({\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}+\sigma)^2\right]},$$ $$\begin{aligned} \label{f2b} f_{2,b}& =&-\frac{1}{2\sigma'\left[\sigma'-({\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}}+\sigma )\right]} \times \nonumber\\ & &
\left[ \frac{{\mathrm{e}}^{-{\mathrm{i}}(\sigma'-{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}})t}}{\sigma^2-(\sigma'-{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}})^2} -\frac{1}{2\sigma} \left( \frac{{\mathrm{e}}^{{\mathrm i}\sigma t}}{\sigma+(\sigma'-{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}})}+\frac{{\mathrm{e}}^{-{\mathrm{i}}\sigma t}} {\sigma-(\sigma'-{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}})} \right) \right] \end{aligned}$$ The second order action contribution from correlation between the zeroth and first order velocity potential is given by, $$F^{\Phi}_{2,0,{{\boldsymbol k}}}= F^{\Phi}_{0,2,{{\boldsymbol k}}}=2\langle \Phi^{+}_{2,{{\boldsymbol k}}} \Phi^{-}_{0,-{{\boldsymbol k}}}\rangle.$$ This correlation imposes that all non-zero terms must have ${{\boldsymbol k}}''={{\boldsymbol k}}$, which removes the ’$\ldots$’ terms, so that (\[dd1\]) becomes $$\label{dd1b} \frac{F^{\Phi}_{2,0,{{\boldsymbol k}}}}{\Delta {{\boldsymbol k}}} =2\sum_{{\boldsymbol{k^{\prime}}}} M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}) \frac{\langle \left|G_{{{\boldsymbol k}}-{\boldsymbol{k^{\prime}}}}\right|^2\rangle}{\Delta {{\boldsymbol k}}} \frac{\langle\Phi^+_{0,{{\boldsymbol k}}}\Phi^-_{0,-{{\boldsymbol k}}}\rangle}{\Delta {{\boldsymbol k}}} \langle f_2 {\mathrm{e}}^{{\mathrm{i}}\sigma t}\rangle \Delta {{\boldsymbol k}},$$ with $$\label{lim3} \langle f_2 {\mathrm{e}}^{{\mathrm{i}}\sigma t}\rangle =\frac{{\pi}t}{8 \sigma \sigma'}\left\{\delta\left[\sigma'-(\sigma-{\boldsymbol{l}}{\boldsymbol{\cdot}}{\boldsymbol{U}})\right] + O(1)\right\}.$$ Taking the limit when ${\Delta {{\boldsymbol k}}}\rightarrow 0$, and neglecting $O(1)$ terms yields $$\label{dE3b} F^{\Phi}_{2,0}(t,{{\boldsymbol k}})=-\int_{{\boldsymbol{k^{\prime}}}}\frac{{\pi}t }{4 \sigma } M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}) F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}})\frac{F^{\Phi}_{0,0}({{\boldsymbol k}})}{\sigma'} \delta\left(\omega'-\omega\right) 
{\mathrm{d}}{\boldsymbol{k^{\prime}}}.$$ Changing the spectral coordinates from ${{\boldsymbol k}}'$ to $(\omega',\theta')$ allows a simple removal of the singularity, $$\label{dE3c} F^{\Phi}_{2,0}(t,{{\boldsymbol k}})=-\int_{0}^{2{\pi}}\frac{{\pi}t }{4 \sigma } M^2({{\boldsymbol k}},{\boldsymbol{k^{\prime}}}) F^B({{\boldsymbol k}}-{\boldsymbol{k^{\prime}}})\frac{F^{\Phi}_{0,0}({{\boldsymbol k}})}{\sigma'} \frac{k'}{Cg'+{\boldsymbol{k^{\prime}}}{\boldsymbol{\cdot}}{\boldsymbol{U}}/k'} {\mathrm{d}}\theta'.$$ Resonant wavenumber configuration for $U\ll C_g$ ============================================== Under the assumption $U\ll C_g$, and for a current in the $x$ direction, the resonant condition $$\sigma'-\sigma = l_x U,$$ combined with the linear dispersion relation, yields the following Taylor expansion to first order in $\sigma'-\sigma$, $$k'-k =(k_x-k_x') \frac{U}{C_g} +O\left[k \left(\frac{U}{C_g} \right)^2\right].$$ We define $r=k'$, $r_0=k$, $r\cos \theta=k'_x$ and $r_0\cos \theta_0=k_x$, so that $$r=r_0+\frac{U}{C_g}(r_0\cos \theta_0 -r\cos\theta),$$ and thus $$\label{E2} r=\frac{P}{1+e\cos \theta}.$$ This is the parametric equation of an ellipse of semi-major axis $a$, semi-minor axis $b$, half the foci distance $c$, and eccentricity $e$, with $P=r_0\left(1+\frac{U}{C_g}\cos \theta_0\right)=b^2/a$, and $e=U/C_g=c/a$. The interaction between a surface wave with wavenumber ${\boldsymbol{k^{\prime}}}$ and a bottom component with wavenumber ${\boldsymbol{l}}$ excites a surface wave with the sum wavenumber ${{\boldsymbol k}}={\boldsymbol{k^{\prime}}}+{\boldsymbol{l}}$. For a fixed ${{\boldsymbol k}}$ and current $U$, in the limit of $U\ll C_g$ the resonant ${\boldsymbol{k^{\prime}}}$ and ${\boldsymbol{l}}$ follow ellipses described by their polar equation (\[E2\]), that reduce to circles for $U=0$. [^1]: Such a discontinuous bottom has a markedly different spectrum at low and high frequencies.
The present theory, confirmed by calculations with Rey’s (1995) numerical model, yields very different reflection coefficients for waves much shorter and much longer than the resonant waves.
--- address: - 'Lebedev Physical Institute, Leninsky Prospekt 53, 117924 Moscow, Russia' - 'Institut des Sciences Nucléaires, 53 av. des Martyrs, 38026 Grenoble Cedex, France' author: - 'V.A. Karmanov, J. Carbonell$^{\rm b}$ and M. Mangin-Brinet' title: 'Relativistic wave functions and energies for nonzero angular momentum states in light-front dynamics' --- Light-front dynamics (LFD) is a powerful approach to the theory of relativistic composite systems (hadrons in quark models and relativistic nucleons in nuclei). Its explicitly covariant version [@cdkm] has recently been applied with success [@ck99] to describe the new CEBAF/TJNAF data on the deuteron electromagnetic form factors. The solutions used in [@ck99] were however not obtained by solving the LFD equations exactly, but by means of a perturbative calculation with respect to the non relativistic wave function. Since then, a considerable effort has been made to obtain exact solutions of the LFD equations. The first results concerning $J=0$ states in a scalar model have been published in [@MC_00]. The construction of $J \ne 0$ states in LFD is complicated by the following two facts. First, the generators of the spatial rotations contain the interaction and are thus difficult to handle. Second, one is always forced to work in a truncated Fock space, and consequently, the Poincaré group commutation relations between the generators – ensuring the correct properties of the state vector under rotation – are in practice destroyed. In the standard approach, with the light-front plane defined as $t+z=0$, this violation of rotational invariance manifests itself in the fact that the energy depends on the angular momentum projection on the $z$-axis [@cmp]. We present here a method to construct $J\ne0$ states in the explicitly covariant formulation of LFD and show how it leads to a restoration of rotational invariance.
In this approach [@cdkm] the equation for the relativistic two-body bound state wave function $\psi$ reads: $$\label{eq1} [4(\vec{k}\,^2 +m^2)-M^2] \psi(\vec{k},\hat{n})=-{m^2 \over 2 \pi^3} \int {d^3k' \over \varepsilon_{k'} }V(\vec{k},\vec{k'},\hat{n},M^2)\psi(\vec{k'},\hat{n}),$$ where $V$ is the interaction kernel, $\hat{n}$ the three-dimensional unit vector determining the orientation of the light-front plane, $M$ the total mass of the system and $\varepsilon_{k}=\sqrt{m^2+{\vec{k}}^2}$. In general, the wave function depends both on the relative momentum $\vec{k}$ and on $\hat{n}$. In order to solve the problem, we replace the dynamical angular momentum operator by the kinematical one: $$\label{eq2} \vec{J} = -i[\vec{k}\times \partial/\partial\vec{k}\,] -i[\hat{n}\times \partial/\partial\hat{n}].$$ The eigenstates of $\vec{J}^2$ are constructed from the vectors $\vec{k},\hat{n}$ – taken on an equal footing – by using the standard techniques of angular momentum theory. It turns out that the kernel and $\vec{J}$ commute not only with each other, but also with the operator $A^2=(\hat{n}\cdot\vec{J})^2$. Hence, the solutions of (\[eq1\]) are labelled by their mass, angular momentum and projection on the $z$-axis, but also by the eigenvalues $a^2=0,1,\ldots,J^2$ of $A^2$. In a truncated Fock space, the states with different $a$ correspond to different masses $M_a$.
In the $J^{\pi}=1^-$ case for instance, the solutions $\vec{\psi}_{a}$ with $a=0,1$ can be written in the form: $$\vec{\psi}_{a}(\vec{k},\hat{n})= \chi_{a}(\hat{k},\hat{n}) g_{a}(k,z),$$ where $\chi_{a}$ are the normalized eigenfunctions of the operator $A^2$: $$A^2\chi_a(\hat{k},\hat{n}) =a^2 \chi_a(\hat{k},\hat{n}), \qquad \chi_0(\hat{k},\hat{n})=3z\hat{n}, \qquad \chi_1(\hat{k},\hat{n}) =\frac{3\sqrt{2}}{2}(\hat{k}-z\hat{n}),$$ $z=\hat{k}\cdot\hat{n}$, and the functions $g_{a}$ satisfy the equations $$\begin{aligned} &&[4(\vec{k}\,^2 +m^2)-M_0^2] zg_0(k,z)=-\frac{m^2}{2 \pi^3} \int \frac{d^3 k'}{\varepsilon_{k'}} V(\vec{k},\vec{k}\,',\hat{n},M_0^2)z'g_0(k',z'), \nonumber\\ &&[4(\vec{k}\,^2 +m^2)-M_1^2](1-z^2)g_1(k,z) =-\frac{m^2}{2 \pi^3} \int \frac{d^3k'}{ \varepsilon_{k'}} V(\vec{k},\vec{k}\,',\hat{n},M_1^2) (\hat{k}\cdot\hat{k}'-zz') g_1(k',z'), \end{aligned}$$ obtained by inserting $\vec{\psi}_{0,1}$ into (\[eq1\]). As mentioned above, due to the Fock space truncation, the masses $M_{0,1}$ determined in this way differ from each other. The key point of our method is the fact that, in order to ensure the equivalence between the dynamical angular momentum operator and the kinematical one (\[eq2\]), the physical state has to satisfy the so-called angular condition [@VAK_82]. This condition requires the physical wave function to be a superposition of eigenstates of $A^2$: $$\label{eq2a} \psi(\vec{k},\hat{n})=\sum_a c_a \psi_a(\vec{k},\hat{n}) = c_0\chi_0(\hat{k},\hat{n})g_0(k,z)+ c_1\chi_1(\hat{k},\hat{n})g_1(k,z).$$ The unknown coefficients are in principle determined by the angular condition itself, but they can be unambiguously fixed by imposing that in the limit $k\to 0$ the solution does not depend on $\hat{n}$. The corresponding mass is then given by $$\label{eq3} M^2=\sum_a c^2_a M^2_a.$$ The validity of this solution has been checked in the Wick-Cutkosky model, i.e. for two scalar particles interacting by a massless scalar exchange.
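A toy numerical illustration of the mass formula (\[eq3\]): with normalized coefficients, the physical mass always lies between the split masses $M_0$ and $M_1$. The coefficients below are the near-constant values $c_0\approx\sqrt{1/3}$, $c_1\approx\sqrt{2/3}$ quoted in the text for the Wick-Cutkosky solutions; the split masses themselves are placeholders, not results of the paper:

```python
import math

# Physical mass squared as the weighted average M^2 = sum_a c_a^2 M_a^2
# of the split masses found for the (non-physical) eigenstates of A^2.
c = [math.sqrt(1.0 / 3.0), math.sqrt(2.0 / 3.0)]
M_a = [1.90, 1.96]  # hypothetical split masses, in units of m

assert abs(sum(ci**2 for ci in c) - 1.0) < 1e-12  # normalisation
M = math.sqrt(sum(ci**2 * Mi**2 for ci, Mi in zip(c, M_a)))
assert min(M_a) < M < max(M_a)  # M lies between M_0 and M_1
```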
For the $J=1^-$ state, the values of $M_{a}$, of $M$ given by (\[eq3\]) and of the 2s-state mass are shown in Figure (a) as functions of the coupling constant $\alpha$. One can see that, despite the large splitting between $M_0$ and $M_1$, $M$ for the 1p state is very close to the mass of the 2s one, which in turn is very close to the results obtained by solving the Bethe-Salpeter equation (denoted by dots). The curves for $M$ (solid line) and the 2s mass (long-dashed line) are hardly distinguishable. This means that our solution restores quite accurately the 2s-1p degeneracy of the relativistic Coulomb problem, which in the Bethe-Salpeter approach is an exact result. The same situation also takes place in Figure (b) for the 3s-2p states. It is worth noticing that in the whole range of coupling constants considered – implying large binding energies – the coefficients $c_{0,1}$ are very close to $\sqrt{1/3}$ and $\sqrt{2/3}$. To summarize, we emphasize that the physical state (\[eq2a\]) is found as a superposition of non-physical solutions $\psi_a$. In the exact case, these solutions would be degenerate, but they are split in a truncated Fock space due to the effective violation of rotational invariance. By taking as a solution a superposition satisfying the angular condition – in spite of the mass splitting – we restore the general properties of the wave function that are violated by the truncation of the Fock space. This solution, which fulfills the correct transformation laws under rotation, approaches the exact one. Our method thus provides a high-accuracy solution of the relativistic two-body problem.\
[Figure (a) \[2s1p\_Mphys\]: 2s and 1p masses versus the coupling constant $\alpha$. Figure (b) \[3s2p\_Mphys\]: 3s and 2p masses versus the coupling constant $\alpha$.] [9]{} J. Carbonell, B. Desplanques, V.A. Karmanov and J.-F. Mathiot, Phys. Reports 300 (1998) 215; nucl-th/9804029. J. Carbonell and V.A. Karmanov, Eur. Phys. J. A 6 (1999) 9; nucl-th/9902053. M. Mangin-Brinet and J. Carbonell, Phys. Lett. B 474 (2000) 237; nucl-th/9912050. J.R. Cooke, G.A. Miller and D.R. Phillips, Phys. Rev. C 61 (2000); nucl-th/9910013. V.A. Karmanov, Sov. Phys. JETP 56 (1982) 1.
--- abstract: 'We propose dynamical optimal transport (OT) problems constrained in a parameterized probability subset. In applied problems such as deep learning, the probability distribution is often generated by a parameterized mapping function. In this case, we derive a formulation for the constrained dynamical OT.' address: 'Department of Mathematics, University of California, Los Angeles.' author: - Wuchen Li and Stanley Osher bibliography: - 'LF.bib' title: Constrained dynamical optimal transport and its Lagrangian formulation --- [^1] Introduction ============ Dynamical optimal transport problems play vital roles in fluid dynamics [@SE] and mean field games [@MFG]. They provide a type of statistical distance and interesting differential structures on the set of probability densities [@vil2008]. The full probability set is often intractable when the dimension of the sample space is large. For this reason, parameterized probability subsets have been widely considered, especially in machine learning problems and information geometry [@Amari; @IG; @IG2; @tryphon; @modin; @GP]. We are interested in studying dynamical OT problems over a parameterized probability subset. In this note, we follow a series of works [@NG2; @LiG; @LiG2; @NG1; @LNG2] and introduce general constrained dynamical OT problems in a parametrized probability subset. As in deep learning [@WGAN], the probability subset is often constructed by a parameterized mapping. In this case, we demonstrate that the constrained dynamical OT problems exhibit simple variational structures. We arrange this note as follows: in section \[section2\], we briefly review the dynamical OT in both Eulerian and Lagrangian coordinates[^2]. Using Eulerian coordinates, we propose the constrained dynamical OT over a parameterized probability subset. In section \[section3\], we derive an equivalent Lagrangian formulation for the constrained problem.
Constrained dynamical OT {#section2} ======================== In this section, we briefly review the dynamical OT in a full probability set via both Eulerian and Lagrangian formalisms. Using the Eulerian coordinates, we propose the dynamical OT in a parameterized probability subset. For simplicity of exposition, all our derivations assume smoothness. Consider densities $\rho^0,\rho^1\in \mathcal{P}_+(\Omega)=\Big\{\rho(x)\in C^{\infty}(\Omega)\colon \rho(x)>0,~\int_\Omega\rho(x)dx=1\Big\}$, where $\Omega$ is an $n$-dimensional sample space. Here $\Omega$ can be $\mathbb{R}^n$, or a convex compact region in $\mathbb{R}^n$ with zero flux conditions or periodic boundary conditions. Dynamical OT studies a variational problem in density space, known as the Benamou-Brenier formula [@BB]. Given a Lagrangian function $L\colon T\Omega \rightarrow [0,+\infty)$ under suitable conditions, consider \[BBL\] $$\label{BB} C(\rho^0,\rho^1):=\inf_{v_t}~\int_0^1 \mathbb{E}L(X_t(\omega), v(t,X_t(\omega))) dt,$$ where $\mathbb{E}$ is the expectation operator over realizations $\omega$ in event space and the infimum is taken over all vector fields $v_t=v(t,\cdot)$, such that $$\label{BB2} \dot X_t(\omega)=v(t, X_t(\omega)),\quad X_0\sim\rho^0,\quad X_1\sim\rho^1.$$ Here $X_i\sim \rho^i$ denotes that $X_i(\omega)$ is distributed according to the probability density $\rho^i(x)$, for $i=0,1$. Equivalently, denote the density of particles $X_t(\omega)$ at position $x$ and time $t$ by $\rho(t,x)$. Then the problem above becomes a variational problem in density space: \[BBE\] $$\label{BB} C(\rho^0,\rho^1):=\inf_{v_t}~\int_0^1\int_\Omega L(x, v(t,x))\rho(t,x) dx dt,$$ where the infimum is taken over all Borel vector fields $v_t=v(t,\cdot)$, such that the density function $\rho(t,x)$ satisfies the continuity equation: $$\label{BB2} \frac{\partial \rho(t,x)}{\partial t}+\nabla\cdot (\rho(t,x)v(t,x))=0,\quad \rho(0,x)=\rho^0(x),\quad \rho(1,x)=\rho^1(x).$$ Here $\nabla\cdot$ is the divergence operator in $\Omega$.
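As a concrete check of these definitions, in one dimension the optimal value with $L(x,v)=\|v\|^p$ has a closed form via quantile functions, which for equal-size samples reduces to matching sorted draws. A minimal sketch for $p=2$ and two Gaussians, where the closed form $W_2^2=(m_0-m_1)^2+(s_0-s_1)^2$ is available (all numbers illustrative):

```python
import numpy as np

# Empirical 1-D W_2 by sorted-sample matching vs. the Gaussian closed form.
rng = np.random.default_rng(0)
m0, s0, m1, s1 = 0.0, 1.0, 2.0, 0.5
n = 200_000
x0 = np.sort(rng.normal(m0, s0, n))
x1 = np.sort(rng.normal(m1, s1, n))

w2_mc = np.sqrt(np.mean((x0 - x1) ** 2))          # empirical W_2
w2_exact = np.sqrt((m0 - m1) ** 2 + (s0 - s1) ** 2)
assert abs(w2_mc - w2_exact) < 0.05
```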
In the language of fluid dynamics, the first formulation is the Lagrangian formalism, while the second is the associated Eulerian formalism; see details in [@vil2008]. The Lagrangian formalism focuses on the motion of each individual particle, while the Eulerian formalism describes the global behavior of all particles. The two are equivalent since they represent the same variational problem in different coordinate systems. In addition, one often considers $L(x,v)=\|v\|^p$, $p\geq 1$, where $\|\cdot\|$ is the Euclidean norm. In this case, the optimal value of the variational problem defines a distance function on the set of probability densities. Denote $W_p(\rho^0,\rho^1):=C(\rho^0,\rho^1)^{\frac{1}{p}}$, where $W_p$ is called the $L^p$-Wasserstein distance. We next study the variational problem constrained on a parameterized probability density set. In other words, consider a parameter space $\Theta\subset \mathbb{R}^d$ with $$\mathcal{P}_{\Theta}=\Big\{\rho(\theta, x)\in C^{\infty}(\Omega)\colon \theta\in \Theta,~\int_\Omega \rho(\theta,x)dx=1,~\rho(\theta,x)>0, ~x\in \Omega\Big\}.$$ Here we assume that $\rho\colon \Theta \rightarrow \mathcal{P}_+(\Omega)$ is an injective mapping[^3]. We introduce the constrained dynamical OT as follows: \[cons\] $$c(\theta_0,\theta_1):=\inf_{v_t}~\int_0^1\int_\Omega L(x, v(t,x))\rho(\theta_t,x) dx dt,$$ where $\theta_t=\theta(t)\in \Theta$, $t\in[0,1]$, is a path in parameter space, and the infimum is taken over the Borel vector fields $v_t=v(t,\cdot)$, such that the [*constrained continuity equation*]{} holds: $$\label{3b} \frac{\partial}{\partial t}\rho(\theta_t,x)+\nabla\cdot (\rho(\theta_t,x)v(t,x))=0,\quad\textrm{$\theta_0, \theta_1$ are fixed}.$$ We notice that the infimum of the constrained problem is taken over density paths lying in the parameterized probability set, i.e. $\rho(\theta_t, \cdot)\in \rho(\Theta)$. Here the rate of change of the density is confined to a finite-dimensional set of directions, i.e.
$ \frac{\partial}{\partial t}\rho(\theta_t,x)=(\nabla_{\theta_t}\rho(\theta_t, x), \frac{d\theta_t}{dt})$, where $(\cdot,\cdot)$ is an inner product in $\mathbb{R}^d$. A natural question arises. The variational problem above, together with its constrained version, is written in Eulerian coordinates. Both unavoidably involve the entire probability density function. For practical reasons, can we find Lagrangian coordinates for the constrained problem? In other words, what is the analogue of the particle formulation in $\rho(\Theta)$? We next demonstrate an answer to this question. We show that there is an explicit expression for the motion of particles whose density path moves according to the constrained continuity equation. Lagrangian formulations {#section3} ======================= In this section, we show the main result of this note, namely that the constrained dynamical OT has a simple Lagrangian formulation, given in Proposition \[thm\]. Consider a parameterized mapping, or implicit generative model, as follows. Given an input space $Z\subset \mathbb{R}^{n_1}$, $n_1\leq n$, let $$g_\theta\colon Z\rightarrow \Omega,\quad x=g(\theta, z)\in \Omega,\quad\textrm{for $z\in Z$.}$$ Here $g_\theta$ is a mapping function depending on parameters $\theta\in \Theta$. Given realizations $\omega$ in event space, we assume that the random variable $z(\omega)$ satisfies a density function $\mu(z)\in \mathcal{P}_+(Z)$, and denote by $x(\omega)=g(\theta, z(\omega))$ the random variable satisfying the density function $\rho(\theta,x)$.
This means that the map $g_\theta$ pushes forward $\mu(z)$ to $\rho(\theta,x)$, denoted by $\rho(\theta, x)=g_\theta\sharp\mu(z)$: $$\label{a} \int_{Z}f(g(\theta, z))\mu(z)dz=\int_{\Omega}f(x)\rho(\theta,x)dx, \quad\textrm{for any $f\in C_c^{\infty}(\Omega)$.}$$ In this case, the parameterized probability set is given as follows: $$\rho(\Theta)=\Big\{\rho(\theta, x)\in C^{\infty}(\Omega)\colon \theta\in \Theta,~\rho(\theta, x)=g_\theta\sharp\mu(z)\Big\}.$$ We next present the constrained dynamical OT in Lagrangian coordinates. We use the fact that the vector field along the optimal density path satisfies $$v(t,x)=D_pH(x,\nabla\Phi(t,x)),$$ where $$H(x,p)=\sup_{v\in T_x\Omega} p v- L(x,v)$$ is the Hamiltonian function associated with $L$. \[thm\] The constrained dynamical OT has the following formulation: $$\label{Lag} \begin{split} c(\theta_0, \theta_1)=\inf \Big\{&\int_0^1\mathbb{E}_{z\sim\mu}L(g(\theta_t,z), \frac{d}{dt}g(\theta_t, z))dt\colon\\ & \frac{d}{dt}g(\theta_t, z)=D_p H(g(\theta_t,z), \nabla_x\Phi(t, g(\theta_t,z))),~\theta(0)=\theta_0,~\theta(1)=\theta_1\Big\}, \end{split}$$ where the infimum is taken over all feasible potential functions $\Phi\colon [0, 1]\times \Omega\rightarrow \mathbb{R}$ and parameter paths $\theta\colon [0,1]\rightarrow \mathbb{R}^d$.
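The push-forward identity above can be verified by Monte Carlo for a simple map. The location-scale map $g$, the reference measure $\mu$, and the test function $f$ below are illustrative assumptions, not part of the note:

```python
import numpy as np

# Check int f(g(theta,z)) mu(z) dz = int f(x) rho(theta,x) dx for
# g(theta, z) = theta_0 + theta_1 * z with mu = N(0,1), so that
# rho(theta, .) = N(theta_0, theta_1^2). With f(x) = x^2 the right-hand
# side is theta_0^2 + theta_1^2 in closed form.
rng = np.random.default_rng(1)
theta = (0.7, 1.3)

def g(theta, z):
    return theta[0] + theta[1] * z

z = rng.normal(size=500_000)
lhs = np.mean(g(theta, z) ** 2)          # Monte Carlo over z ~ mu
rhs = theta[0] ** 2 + theta[1] ** 2      # exact expectation under rho(theta, .)
assert abs(lhs - rhs) < 0.02
```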
Denote $$\frac{d}{dt}g(\theta_t,z)=v(t,g(\theta_t,z)), \quad \textrm{with}\quad v(t, g(\theta_t,z))=D_p H(g(\theta_t,z), \nabla\Phi(t, g(\theta_t,z))).$$ We show that the probability density of $g(\theta_t,z)$ evolves according to the [*constrained continuity equation*]{} $$\label{1} \frac{\partial}{\partial t}\rho(\theta_t, x)+\nabla\cdot(\rho(\theta_t, x)v(t,x))=0,$$ and $$\label{2} \mathbb{E}_{z\sim\mu}L(g(\theta_t,z), \frac{d}{dt}g(\theta_t,z))=\int_{\Omega}L(x, v(t,x))\rho(\theta_t,x)dx.$$ On the one hand, consider $f\in C^{\infty}_c(\Omega)$, then $$\label{form1} \begin{split} \frac{d}{dt}\mathbb{E}_{z\sim\mu} f(g(\theta_t,z))=&\frac{d}{dt}\int_{Z} f(g(\theta_t,z))\mu(z)dz\\ =&\frac{d}{dt}\int_\Omega f(x)\rho(\theta_t,x)dx\\ =&\int_\Omega f(x) \frac{\partial}{\partial t}\rho(\theta_t,x)dx, \end{split}$$ where the second equality holds from the push-forward relation. On the other hand, consider $$\label{form2} \begin{split} \frac{d}{dt}\mathbb{E}_{z\sim\mu} f(g(\theta_t,z))=&\lim_{\Delta t\rightarrow 0}\mathbb{E}_{z\sim\mu}\frac{f(g(\theta_{t+\Delta t}, z))-f(g(\theta_t,z))}{\Delta t}\\ =&\lim_{\Delta t\rightarrow 0}\int_Z\frac{f(g(\theta_{t+\Delta t}, z))-f(g(\theta_t, z))}{\Delta t}\mu(z)dz\\ =&\int_Z \nabla f(g(\theta_t,z)) \frac{d}{dt}g(\theta_t,z)\mu(z)dz\\ =&\int_Z \nabla f(g(\theta_t, z)) v(t, g(\theta_t, z))\mu(z)dz\\ =&\int_\Omega \nabla f(x) v(t,x)\rho(\theta_t, x)dx\\ =&-\int_\Omega f(x) \nabla\cdot(v(t,x)\rho(\theta_t,x))dx, \end{split}$$ where $\nabla$, $\nabla\cdot$ are the gradient and divergence operators w.r.t. $x\in\Omega$. The second-to-last equality holds from the push-forward relation, and the last equality holds using integration by parts w.r.t. $x$. Since $\eqref{form1}= \eqref{form2}$ for any $f\in C_c^{\infty}(\Omega)$, we have proven the constrained continuity equation.
In addition, by the definition of the push-forward operator, we have $$\begin{split} \mathbb{E}_{z\sim\mu} L(g(\theta_t, z), \frac{d}{dt}g(\theta_t,z))=&\int_{Z} L(g(\theta_t,z), v(t, g(\theta_t,z)))\mu(z) dz\\ =&\int_\Omega L(x, v(t,x))\rho(\theta_t,x)dx. \end{split}$$ This proves the second identity. It is interesting to compare the Eulerian and Lagrangian variational problems. We can view $g(\theta_t, z)\in \Omega$ as “parameterized” particles, whose density function is constrained in the parameterized probability set $\rho(\Theta)$. Their motions result in the evolution of probability transition densities in $\rho(\Theta)$, satisfying the constrained continuity equation. For this reason, we call this the Lagrangian formalism of the constrained dynamical OT. It is also worth noting that each movement of $g(\theta_t, z)$ results in a motion of the density path $\rho(\theta_t,x)$. The change of the density path identifies a potential function $\Phi(t,x)$ depending on $\theta_t$. In addition, the cost functional in dynamical OT can involve general potential energies, such as the linear potential energy: $$\mathcal{V}(\rho)=\int_{\Omega}V(x)\rho(x)dx,$$ and the interaction energy: $$\mathcal{W}(\rho)=\int_{\Omega}\int_{\Omega}w(x,y)\rho(x)\rho(y)dxdy.$$ Here $V(x)$ is a linear potential, and $w(x,y)=w(y,x)$ is a symmetric interaction potential.
If $\rho(\theta, x)\in \rho(\Theta)$, then $$\begin{split} \mathcal{V}(\rho(\theta, \cdot))=&\int_{\Omega}V(x)\rho(\theta, x)dx\\ =&\int_{Z} V(g(\theta,z))\mu(z)dz\\ =&\mathbb{E}_{z\sim\mu}V(g(\theta, z)), \end{split}$$ and $$\begin{split} \mathcal{W}(\rho(\theta, \cdot))=&\int_{\Omega}\int_{\Omega} w(x,y) \rho(\theta, x)\rho(\theta, y)dxdy\\ =&\int_{Z}\int_{Z}w(g(\theta, z_1), g(\theta, z_2))\mu(z_1)\mu(z_2)dz_1dz_2\\ =&\mathbb{E}_{(z_1,z_2)\sim \mu\times \mu}w(g(\theta, z_1), g(\theta, z_2)), \end{split}$$ where the second equality in each of the above formulas holds because of the push-forward relation, and $\mu\times \mu$ represents an independent joint density function supported on $Z\times Z$ with marginals $\mu(z_1)$, $\mu(z_2)$. Similarly to Proposition \[thm\], we have $$\label{MFG} \begin{split} c(\theta_0,\theta_1)=&\inf_{v_t,~\theta(0)=\theta_0,~\theta(1)=\theta_1}~\Big\{\int_0^1\Big[\int_\Omega L(x, v(t,x))\rho(\theta_t,x)dx-\mathcal{V}(\rho(\theta_t, \cdot))-\mathcal{W}(\rho(\theta_t,\cdot))\Big]dt\colon\\ &\hspace{4cm} \frac{\partial}{\partial t}\rho(\theta_t,x)+\nabla\cdot (\rho(\theta_t,x)v(t,x))=0\Big\} \\ =&\inf_{\Phi_t,~\theta_t,~\theta(0)=\theta_0,~\theta(1)=\theta_1}~\Big\{\int_0^1\Big[\int_\Omega L(x, D_p H(x,\nabla\Phi(t,x)))\rho(\theta_t,x)dx-\mathcal{V}(\rho(\theta_t, \cdot))-\mathcal{W}(\rho(\theta_t,\cdot))\Big]dt\colon\\ &\hspace{4cm} \frac{\partial}{\partial t}\rho(\theta_t,x)+\nabla\cdot (\rho(\theta_t,x)v(t,x))=0\Big\} \\ =&\inf_{\Phi_t,~\theta_t,~\theta(0)=\theta_0,~\theta(1)=\theta_1}\Big\{\int_0^1~[\mathbb{E}_{z\sim\mu}L(g(\theta_t,z), \frac{d}{dt}g(\theta_t, z))-\mathbb{E}_{z\sim\mu}V(g(\theta_t, z))\\ &\hspace{3.5cm}-\mathbb{E}_{(z_1,z_2)\sim \mu\times \mu}w(g(\theta_t, z_1), g(\theta_t, z_2))]dt\colon\\ &\hspace{4cm}\frac{d}{dt}g(\theta_t, z)=D_p H(g(\theta_t,z), \nabla_x\Phi(t, g(\theta_t,z)))\Big\}. \end{split}$$ We next demonstrate an example of constrained dynamical OT problems.
\[example2\] Let $L(x,v)=\|v\|^2$ and denote $d_{W_2}(\theta_0,\theta_1)=c(\theta_0,\theta_1)^{\frac{1}{2}}$, then $$\begin{split} d_{W_2}(\theta_0, \theta_1)^2=\inf \Big\{&\int_0^1\mathbb{E}_{z\sim\mu}\|\frac{d}{dt}g(\theta_t, z)\|^2dt\colon\frac{d}{dt}g(\theta_t, z)=\nabla\Phi(t, g(\theta_t,z)),~\theta(0)=\theta_0,~\theta(1)=\theta_1\Big\}. \end{split}$$ Observe that this defines a geometric action functional in parameter space $\Theta$, from which the metric tensor can be extracted explicitly. In other words, denote $G(\theta)\in\mathbb{R}^{d\times d}$ by $$\dot\theta^{{\mathsf{T}}}G(\theta)\dot\theta=\dot\theta^{{\mathsf{T}}}\mathbb{E}_\mu(\nabla_\theta g(\theta,z)\nabla_\theta g(\theta,z)^{{\mathsf{T}}})\dot\theta,$$ with the constraint $$(\dot\theta, \nabla_\theta g(\theta ,z))= \nabla_x\Phi(g(\theta,z)).$$ Here $\nabla_\theta g(\theta,z)\in \mathbb{R}^{d\times n}$, $\Phi$ is a potential function satisfying $$-\nabla\cdot(\rho(\theta,x)\nabla\Phi(x))=(\nabla_\theta\rho(\theta,x), \dot\theta),$$ and $G(\theta)=\mathbb{E}_{z\sim\mu}(\nabla_\theta g(\theta,z) \nabla_\theta g(\theta, z)^{{\mathsf{T}}})\in \mathbb{R}^{d\times d}$ is a positive semi-definite matrix. [^1]: The research is supported by AFOSR MURI proposal number 18RT0073. [^2]: In fluid dynamics, the Eulerian coordinates represent the evolution of the probability density function of particles, while the Lagrangian coordinates describe the motion of particles. In learning problems, the Eulerian coordinates naturally connect with the minimization problem in terms of probability densities, while the Lagrangian coordinates refer to the variational problem formulated in terms of samples, whose analogues are particles. “In learning, we model problems in Eulerian, and compute them in Lagrangian”. In other words, we often write the objective function in terms of densities and compute it via samples. [^3]: We abuse the notation of $\rho$.
Notice that $\rho(\theta, x)$ is a probability distribution parameterized by $\theta\in \Theta$, while $\rho(x)$ is a probability distribution function in the full probability set.
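For the location-scale map $g(\theta,z)=\theta_0+\theta_1 z$ with $\mu=N(0,1)$ (an illustrative choice, not taken from the note), the metric tensor $G(\theta)=\mathbb{E}_\mu[\nabla_\theta g\,\nabla_\theta g^{\mathsf{T}}]$ of Example \[example2\] can be evaluated in closed form and checked by sampling:

```python
import numpy as np

# Here grad_theta g = (1, z), so G = E[(1, z)(1, z)^T] = [[1, 0], [0, 1]]
# since E[z] = 0 and E[z^2] = 1: the constrained W_2 geometry on the
# (mean, std) parameters of a 1-D Gaussian is flat in these coordinates.
rng = np.random.default_rng(2)
z = rng.normal(size=1_000_000)
grad = np.stack([np.ones_like(z), z])     # grad_theta g, shape (2, n)
G = grad @ grad.T / z.size                # Monte Carlo expectation
assert np.allclose(G, np.eye(2), atol=5e-3)
```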
--- abstract: 'Direct imaging searches have revealed many very low-mass objects, including a small number of planetary mass objects, as wide-orbit companions to young stars. The formation mechanism of these objects remains uncertain. In this paper we present the predictions of the disc fragmentation model regarding the properties of the discs around such low-mass objects. We find that the discs around objects that have formed by fragmentation in discs hosted by Sun-like stars (referred to as [*parent*]{} discs and [*parent*]{} stars) are more massive than expected from the ${M}_{\rm disc}-M_*$ relation (which is derived for stars with masses $M_*>0.2~{\rm M}_{\sun}$). Accordingly, the accretion rates onto these objects are also higher than expected from the $\dot{M}_*-M_*$ relation. Moreover, there is no significant correlation between the mass of the brown dwarf or planet and the mass of its disc, nor with the accretion rate from the disc onto it. The discs around objects that form by disc fragmentation have larger-than-expected masses as they accrete gas from the disc of their parent star during the first few kyr after they form. The amount of gas that they accrete and therefore their mass depend on how they move in their parent disc and how they interact with it. Observations of disc masses and accretion rates onto very low-mass objects are consistent with the predictions of the disc fragmentation model. Future observations (e.g. by ALMA) of disc masses and accretion rates onto substellar objects that have even lower masses (young planets and young, low-mass brown dwarfs), where the scaling relations predicted by the disc fragmentation model diverge significantly from the corresponding relations established for higher-mass stars, will test the predictions of this model.' author: - | Dimitris Stamatellos$^{1,}$[^1], Gregory J.
Herczeg$^{2}$\ $^1$ Jeremiah Horrocks Institute for Mathematics, Physics & Astronomy, University of Central Lancashire, Preston, PR1 2HE, UK\ $^2$ Kavli Institute for Astronomy and Astrophysics, Peking University, Yi He Yuan Lu 5, Haidian District, Beijing 100871, China\ bibliography: - '../../bibliography.bib' date: 'Accepted 2014 . Received 2014 July 14; in original form 2014 July 14' title: The properties of discs around planets and brown dwarfs as evidence for disc fragmentation --- \[firstpage\] Stars: formation, low-mass, brown dwarfs – accretion, accretion discs, protoplanetary discs – Methods: Numerical, Hydrodynamics Introduction ============ Many very low-mass objects, including a small number of planetary-mass objects, have been observed by direct imaging as companions to young stars at distances from a few tens to a few hundred AU [@Kraus:2008a; @Kraus:2013a; @Marois:2008a; @Faherty:2009a; @Ireland:2011a; @Kuzuhara:2011a; @Kuzuhara:2013a; @Aller:2013a; @Bailey:2013a; @Rameau:2013a; @Naud:2014a; @Galicher:2014a]. The dominant mechanism for the formation of low-mass stellar and substellar objects (low-mass hydrogen-burning stars, brown dwarfs and giant planets) is still uncertain [e.g. @Chabrier:2014a; @Stamatellos:2014a]. It is believed that such objects may form in three ways: (i) by collapsing molecular cloud cores, i.e. the same way as Sun-like stars [@Padoan:2004a; @Hennebelle:2008c; @Hennebelle:2009b; @Hopkins:2013b], (ii) by fragmentation of protostellar discs [@Boss:1997a; @Stamatellos:2007c; @Attwood:2009a; @Stamatellos:2009a; @Boley:2009a], which may not even be centrifugally supported [@Offner:2010d; @Offner:2012a], and (iii) by ejection of proto-stellar embryos from their natal cloud cores [@Reipurth:2001a; @Bate:2002a; @Goodwin:2004a]. Additionally, gas giant planets also form by core accretion, i.e. 
by coagulation of dust particles into progressively larger bodies [@Safronov:1969a; @Goldreich:1973a; @Mizuno:1980a; @Bodenheimer:1986a; @Pollack:1996a]. Objects formed by core accretion may even become deuterium-burning brown dwarfs [e.g. @Molliere:2012a]. However, gas giants on wide orbits ($\stackrel{>}{_\sim}100-300~{\rm AU}$) are not believed to be able to form, at least in-situ, by core accretion. It is difficult for substellar objects to form in the same way as Sun-like stars, and it has been argued that a different mechanism may in fact be at play [e.g. @Whitworth:2007a; @Thies:2007a; @Reggiani:2013a]. A low-mass pre-(sub)stellar core has to be very dense and compact in order to be gravitationally unstable. Up to now, only one clear-cut self-gravitating brown dwarf-mass core has been observed [@Andre:2012a], but such cores are small and faint, making them difficult to observe. Another way to reach the high densities that are required for the formation of substellar objects is in the discs around young stars. This model has been studied extensively and has been shown to reproduce critical observational constraints such as the low-mass IMF, the brown dwarf desert, and the binary statistics of low-mass objects [@Stamatellos:2009a; @Lomax:2014a; @Lomax:2014b]. In the third formation scenario mentioned in the previous paragraph, formation by ejection of proto-stellar embryos, objects that were destined to become Sun-like stars fail to fulfil their potential as they are ejected from their natal cloud before they accrete enough mass to become hydrogen-burning stars. The presence of discs around substellar objects (and associated phenomena, i.e. accretion and outflows) was initially thought to favour a Sun-like formation mechanism (i.e. turbulent fragmentation and collapse of pre-substellar cores). However, all three main formation mechanisms produce substellar objects that are surrounded by discs, albeit with different disc fractions.
In the turbulent fragmentation scenario substellar objects almost always form with discs [e.g. @Machida:2009a]. Substellar objects that form by disc fragmentation also most likely form with discs, but these discs may be disrupted as the objects are liberated from the disc in which they formed [@Stamatellos:2009a]. In the ejection scenario discs are also likely to be disrupted, but quite a few still survive. [@Bate:2009b] finds that at least 10% of the very low-mass objects formed in his simulations have discs with sizes larger than $40$ AU. Although the presence of discs around substellar objects is consistent with all three formation theories, the properties of these discs may hide clues regarding their formation mechanism. Recently, many authors [@Andrews:2013a; @Mohanty:2013a; @Ricci:2014a; @Kraus:2014a] have estimated the masses of discs around young low-mass stellar and substellar objects down to a limit of $\sim 10^{-3}~{\rm M}_{\sun}$, using submillimetre observations. The accretion rates onto many low-mass objects have also been determined down to $10^{-13}$ M$_{\sun}\ {\rm yr}^{-1}$ [@Natta:2004a; @Calvet:2004a; @Mohanty:2005a; @Muzerolle:2005a; @Herczeg:2008a; @Antoniucci:2011a; @Rigliaco:2011a; @Biazzo:2012a]. The goal of this paper is to compare these observations with the theoretical predictions of the disc fragmentation model. This is particularly topical as the discovery of many planetary-mass objects at wide separations (a few tens to a few hundred AU) from their host stars by direct imaging [@Kraus:2008a; @Kraus:2013a; @Marois:2008a; @Faherty:2009a; @Ireland:2011a; @Kuzuhara:2011a; @Kuzuhara:2013a; @Aller:2013a; @Bailey:2013a; @Rameau:2013a; @Naud:2014a; @Galicher:2014a] has renewed the debate over whether these objects have formed by core accretion or by fragmentation in the discs of their parent stars, or have formed otherwise and were later captured by the parent stars [@Perets:2012a].
It is also uncertain whether such companions may have formed differently from field objects. More wide-orbit substellar objects are bound to be discovered with focused surveys looking for giant planets (Gemini Planet Imager, @Macintosh:2014a; SPHERE/VLT, @Beuzit:2008a; HiCIAO/SUBARU, @Suzuki:2009a), and therefore their properties and the properties of their probable discs may be better determined in the near future, providing tighter constraints for theoretical models. In this paper we present the predictions of the disc fragmentation model regarding the masses of discs around low-mass stellar and substellar objects (brown dwarfs and planets) that are either companions to higher-mass stars or free-floating. We also determine the accretion rates onto low-mass objects and compare them with observations. In Section \[sec:model\] we briefly review the hydrodynamic simulations that we use for this study, and in Section \[disc:evolution\] we discuss how we compute the evolution of the discs around brown dwarfs and planets, after these discs have separated from the discs of their parent stars. In Section \[sec:discmass\] we present the results of the model regarding the disc masses of low-mass objects and discuss how they fit with observations, and in Section \[sec:accretion\] we discuss the accretion rates onto low-mass objects. Finally, in Section \[sec:conclusions\] we summarize the main results of this work. Simulations of the formation of wide-orbit planets and brown dwarfs by disc fragmentation {#sec:model} ========================================================================================= Overview -------- The properties of the low-mass stellar and sub-stellar objects (planets, brown dwarfs, and low-mass hydrogen burning stars) formed by disc fragmentation have been studied in detail by Stamatellos et al. in a series of papers [@Stamatellos:2007c; @Stamatellos:2009a; @Stamatellos:2009d; @Stamatellos:2011a].
In this paper we use the results of the simulations of [@Stamatellos:2009a] to determine the properties of the discs around wide-orbit planets, brown dwarfs and low-mass stars that form in the discs of Sun-like stars, and the accretion rates onto these objects. Initial Conditions ------------------ [@Stamatellos:2009a] performed 12 simulations of gravitationally unstable discs around Sun-like stars. These simulations are different realisations of the same star-disc system, i.e. the properties of the system are the same in all simulations; the only difference is the random seed used to construct each disc. The star has an initial mass of $M_*=0.7$ M$_{\sun}$. The disc around it has an initial mass of $M_{\rm D}=0.7$ M$_{\sun}$ and a radius of $R_{\rm D}=400~$AU. The surface density of the disc is $$\Sigma_{_0}(R)=\frac{0.014\,{\rm M}_{\sun}}{{\rm AU}^2}\,\left(\frac{R}{\rm AU}\right)^{-7/4}\,,$$ and its temperature $$\label{EQN:TBG} T_{_0}(R)=250\,{\rm K}\,\left(\frac{R}{\rm AU}\right)^{-1/2}+10~{\rm K}\,.$$ The disc has an initial Toomre parameter $Q\sim 0.9$ and therefore it is gravitationally unstable by construction. In a realistic situation the disc forms around a young protostar and grows in mass by accreting infalling material from the envelope [e.g. @Attwood:2009a; @Stamatellos:2011e; @Stamatellos:2012a; @Lomax:2014a]. The disc fragments once it has grown enough to become gravitationally unstable at distances of $\sim 100$ AU from its parent star, and this happens before it can reach the mass assumed by [@Stamatellos:2009a]. In fact even discs with masses $\sim0.25$ M$_{\sun}$ and radii $100~$AU can fragment [@Stamatellos:2011d]. Such disc masses are comparable to the observed disc masses in young (Class 0, Class I) objects [e.g. @Jorgensen:2009a; @Tobin:2012b; @Murillo:2013a; @Favre:2014a]. In any case, any evolutionary period with such a massive disc is short-lived as the disc quickly (within a few thousand years) fragments.
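As an order-of-magnitude check of the quoted instability, the Toomre parameter $Q = c_s\Omega/(\pi G\Sigma)$ can be evaluated from the initial profiles above. Using the Keplerian $\Omega$ around the star alone (neglecting disc self-gravity) and a mean molecular weight $\mu=2.35$ are simplifying assumptions of this sketch:

```python
import math

# Toomre Q at R = 100 AU from the quoted Sigma_0(R) and T_0(R) profiles.
G_cgs = 6.674e-8       # cm^3 g^-1 s^-2
k_B   = 1.381e-16      # erg K^-1
m_H   = 1.673e-24      # g
M_sun = 1.989e33       # g
AU    = 1.496e13       # cm

R_AU  = 100.0
Sigma = 0.014 * M_sun / AU**2 * R_AU**-1.75             # g cm^-2
T     = 250.0 * R_AU**-0.5 + 10.0                       # K
c_s   = math.sqrt(k_B * T / (2.35 * m_H))               # sound speed, cm s^-1
Omega = math.sqrt(G_cgs * 0.7 * M_sun / (R_AU * AU)**3) # Keplerian, s^-1
Q     = c_s * Omega / (math.pi * G_cgs * Sigma)
assert Q < 1.0  # gravitationally unstable at ~100 AU, where fragmentation occurs
```

The sketch gives $Q\sim0.7$ at 100 AU; including the enclosed disc mass shifts the number somewhat, consistent with the quoted $Q\sim0.9$.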
The large disc mass and size assumed by [@Stamatellos:2009a] ensure that more low-mass objects form in the disc to improve the statistical analysis of the results, but the properties of these objects (mass, disc mass, disc size) are similar to the ones formed in lower mass discs [@Stamatellos:2011d]. This is because the characteristic initial mass of objects formed by disc fragmentation is set by the opacity limit, which is thought to be $\sim1-5~{\rm M_{\rm J}}$ [@Low:1976a; @Rees:1976a; @Silk:1977a; @Boss:1988a; @Boyd:2005a; @Whitworth:2006a; @Boley:2010b; @Kratter:2010b; @Forgan:2011b; @Rogers:2012a]. Therefore, the typical initial mass of the objects formed by fragmentation is the same for lower and higher parent disc masses. The parent disc mass (in lower mass discs) is distributed among fewer objects and therefore the masses of these objects and the masses of their discs are similar to the ones that form in higher mass discs. The simulations that we use start off with already formed discs; therefore disc loading and other interactions with the star forming cloud (which may lead to non-axisymmetric discs) are ignored. Simulations that take these effects into account [e.g. @Tsukamoto:2015a] have given similar results to the simulations of [@Stamatellos:2009a] used in the present paper. We therefore do not anticipate the choice of the specific set of disc simulations to significantly alter the main conclusions of this paper. Numerical Method ---------------- The evolution and fragmentation of the disc of the parent star is followed using the SPH code [dragon]{}, which treats the radiation transport within the disc with the diffusion approximation of [@Stamatellos:2007b] [see also @Forgan:2009b]. The radiation feedback from the parent star is also taken into account. The code uses time-dependent viscosity with parameters $\alpha=0.1$, $\beta=2\alpha$ (Morris & Monaghan 1997) and a Balsara switch (Balsara 1995).
Results ------- The parent disc is unstable and therefore within a few kyr it fragments into 5-11 secondary objects. In the 12 simulations a total of 96 objects are formed. Some of them escape and others remain bound to the parent star at wide orbits [see @Stamatellos:2009a; @Stamatellos:2011a]. Most of these objects are brown dwarfs (67%; $13~{\rm M}_{\rm J}< M<80~{\rm M}_{\rm J}$) and the rest are low-mass hydrogen burning stars (30%; $M>80~{\rm M}_{\rm J}$), and planets (3%; $M<13~{\rm M}_{\rm J}$). These mass ranges are set by the hydrogen-burning limit ($\sim 80~{\rm M}_{\rm J}$), and the deuterium-burning limit ($\sim 13~{\rm M}_{\rm J}$). Stars can sustain hydrogen burning, whereas brown dwarfs can sustain only deuterium burning. Planets cannot sustain deuterium burning. However, there is no reason for gas fragmentation to stop either at the hydrogen-burning limit or the deuterium-burning limit: the minimum mass of an object that forms by gas fragmentation is given by the opacity limit for fragmentation ($\sim1-5~{\rm M_{\rm J}}$). On the other hand, planets that form by core accretion may have masses $>13~{\rm M}_{\rm J}$ [e.g. @Molliere:2012a]. In this paper, we use the term [*planet*]{} to refer to objects with mass $<13~{\rm M}_{\rm J}$ regardless of their formation mechanism. About 70% of the secondary objects that form in the parent disc are attended by their own individual discs. These discs have masses up to a few tens of ${\rm M}_{\rm J}$ and radii of a few tens of AU [see @Stamatellos:2009a]. Out of these secondary objects with discs we select 34 single objects (i.e. they are not in a binary system with another secondary object formed in the parent disc but they may still be bound to the parent star) for which the properties of the discs can be determined (i.e. the discs are nearly Keplerian). The rest of the objects were either binaries, or were attended by disc-like structures whose properties could not be obtained (e.g.
discs that were perturbed). Almost all of the objects in the sample (33 out of 34) are still bound to the parent star albeit in most cases at very wide orbits (see Fig. \[fig:starmass.r\]). Eventually many of these will be liberated and will become field objects [@Stamatellos:2009a]. Therefore, in Sections \[sec:discmass\] & \[sec:accretion\], the properties of the discs of these objects and the accretion rates onto them will be compared with the observed properties of objects that are either wide-orbit companions to other stars, or field objects. Fig. \[fig:starmass.r\] presents the relation between the masses and the semi-major axes of the orbits of these objects. Most of these objects are brown dwarfs (with a few of them near the brown dwarf-planet boundary of 13 ${\rm M}_{\rm J}$) and a few of them planets and low-mass hydrogen-burning stars. Low-mass hydrogen burning stars tend to be closer to their parent stars than brown dwarfs and planets [the brown dwarf desert; @Marcy:2000a; @Grether:2006a; @Sahlmann:2011a; @Ma:2014a]. There are many brown dwarf companions to Sun-like stars but these tend to be at wide separations [@Kraus:2008a; @Faherty:2009a; @Faherty:2010a; @Kraus:2011a; @Evans:2012a; @Reggiani:2013a; @Duchene:2013b]. As the above types of objects all form by the same mechanism in the disc irrespective of their mass we will analyse their disc properties collectively. Fig. \[fig:discmass.r\] presents the mass of each disc versus the semi-major axis of its host object. There is no significant correlation between the two. The disc masses are determined by how these objects move within the disc of the parent star and accrete mass from it, rather than where they form in the parent disc.
The evolution of the discs around brown dwarfs and planets {#disc:evolution} ========================================================== The hydrodynamic simulations provide the properties of the discs around wide-orbit companions to Sun-like stars at the time when 70-80% of the disc around the parent Sun-like star has been accreted, either onto the parent star or onto the low-mass objects that form in the parent disc. This typically happens within $10-20$ kyr from the start of each simulation. By this point the mass of the parent disc has been reduced to $<0.01~{\rm M}_{\sun}$. Considering that the secondary objects that formed in the parent disc are on wide orbits around the parent star, we do not expect interactions between the parent disc and the secondary discs to be important. Additionally, in a cluster environment such wide orbits are likely to be disrupted by stellar flybys, and the objects will become free-floating [@Heggie:1975a; @Kroupa:2003a; @Parker:2009a; @Parker:2009b; @Spurzem:2009a; @Malmberg:2011a; @Hao:2013a]. Therefore, we assume that at this point (i) the secondary discs (i.e. the discs around the low-mass objects that form in the parent disc) have separated from their parent disc, (ii) they evolve independently (i.e. there are no dynamical interactions between them and the parent disc, or other objects that form in the parent disc), and (iii) no further mass from the parent disc is accreted onto them. These assumptions are not critical, as the accretion of additional material onto the secondary discs would reinforce our conclusions. To compare the properties of the discs around these low-mass companions with the observed disc properties of companions in nearby young stellar clusters (age $\sim 1-15$ Myr), these properties need to be evolved in time. As this cannot be done with hydrodynamic simulations due to the large computational cost, we have employed an analytic model of viscous disc evolution.
We ignore any disc clearing due to photo-evaporation by radiation from the low-mass object hosting the disc [see @Alexander:2013a and references therein]. Photo-evaporation of discs around low-mass objects ($\stackrel{<}{_\sim}0.15$ M$_{\sun}$) could happen [e.g. @Alexander:2006a], but because of our limited knowledge of how UV and X-ray emission from low-mass objects would affect their discs, it is difficult to ascertain how important photo-evaporation is for disc dispersal. The analytic model we employ assumes that the disc (around a secondary object) is geometrically thin and evolves viscously under the influence of the central object’s gravity [e.g. @Lynden-Bell:1974a], which in this case is the planet or brown dwarf (represented as a point mass in the model). The surface density of such a disc, $\Sigma(R,t)$, at polar radius $R$ and time $t$, evolves as $$\frac{\partial\Sigma}{\partial t}=\frac{3}{R}\frac{\partial}{\partial R}\left[ R^{1/2}\frac{\partial}{\partial R}(\nu\Sigma R^{1/2})\right]\,,$$ where $\nu(R,t)$ is the kinematic viscosity [@Pringle:1981a]. In this equation (and in subsequent equations) $t=0$ corresponds to the time when these discs decouple from their parent discs (i.e. the end of the hydrodynamic simulations).
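The behaviour of this equation can be illustrated with a minimal explicit finite-difference integration. The sketch below is our own illustration in code units, with $\nu=R$ and zero-flux boundaries (so the total disc mass is conserved exactly by construction); the grid, time step, and the initial ring profile are arbitrary choices:

```python
import numpy as np

# Explicit integration of  dSigma/dt = (3/R) d/dR[ R^(1/2) d/dR( nu Sigma R^(1/2) ) ]
# in code units, with nu = R (i.e. gamma = 1) and zero-flux boundaries.
N  = 100
R  = np.linspace(0.25, 25.0, N)        # radial grid (code units)
dR = R[1] - R[0]
nu = R.copy()                          # kinematic viscosity, nu ~ R
Sigma = np.exp(-(R - 5.0)**2)          # initial ring of gas (illustrative)

def mass(S):
    return np.sum(2.0*np.pi*R*S)*dR    # total disc mass (midpoint rule)

M0, dt = mass(Sigma), 1.0e-4           # dt well below the diffusive limit
for _ in range(4000):
    g = nu*Sigma*np.sqrt(R)
    F = np.zeros(N+1)                  # F_{i+1/2} = R^(1/2) dg/dR at cell faces
    F[1:-1] = np.sqrt(0.5*(R[1:]+R[:-1]))*(g[1:]-g[:-1])/dR
    Sigma += dt*(3.0/R)*(F[1:]-F[:-1])/dR   # zero flux at both ends
```

Because the update telescopes over the face fluxes, the discrete total mass is conserved to round-off while the ring spreads diffusively, the qualitative behaviour the similarity solution below describes.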
Assuming that the viscosity is independent of time and can be expressed as a power law in $R$, $\nu\propto R^\gamma$, the above evolution equation has a similarity solution [@Lynden-Bell:1974a; @Hartmann:1998a] $$\Sigma(R,t)=\frac{M_{\rm d}(0)(2-\gamma)}{2\pi R^2_0 r^\gamma}\tau^{-(5/2-\gamma)/(2-\gamma)} \exp\left[-\frac{r^{2-\gamma}}{\tau}\right]\,,$$ where $r=R/R_0$ ($R_0$ is the radius within which 60% of the disc mass is contained initially), and $$\label{eq:time} \tau=t/t_\nu+1 \,,$$ where $$\label{eq:tvisc} t_\nu=R_0^2/ [3(2-\gamma)^2\nu(R_0)]\, .$$ The accretion rate onto the central object is then $$\label{eq:accretion} \dot{M}_*=\frac{M_{\rm d}(0)}{2(2-\gamma) t_\nu}\tau^{-(5/2-\gamma)/(2-\gamma)}\,,$$ and the disc mass $$\label{eq:dmass} M_{\rm d}(t)=M_{\rm d}(0) \tau^{-1/[2(2-\gamma)]}\,.$$ It has been argued that observations of the discs of T Tauri stars suggest that $\gamma\sim1$ [@Hartmann:1998a] (i.e. $\nu\propto R$), and therefore we will adopt this value in the present study. The choice of $\gamma$ is not critical for the conclusions of this paper. We use the $\alpha$-viscosity parametrisation [@Shakura:1973a] $$\label{eq:visc} \nu=\alpha c_{\rm s} H\,,$$ where $c_{\rm s}$ is the sound speed in the disc, $H$ is the disc scale-height, and $\alpha$ the viscosity parameter. Assuming that the disc is locally vertically isothermal we obtain $ H={c_{\rm s}}/{\Omega(R)}, $ which when used in Eq. (\[eq:visc\]) and assuming Keplerian rotation, i.e. $\Omega(R)=(GM_*/R^3)^{1/2}$, gives $$\label{eq:visc2} \nu \propto \alpha\ T_d\ R^{3/2} M_*^{-1/2}\,.$$ Using Eq. (\[eq:visc2\]) in Eq. (\[eq:tvisc\]) and assuming $T_d(R)\propto R^{-1/2}$ (consistent with $\gamma=1$) we obtain $$\label{eq:tvisc2} t_\nu=8\times 10^4 \left(\frac{\alpha}{10^{-2}}\right)^{-1} \left(\frac{R_0}{10~{\rm AU}}\right) \left(\frac{M_*}{0.5~{\rm M}_{\sun}}\right)^{1/2} \left(\frac{T_d}{10~{\rm K}}\right)^{-1} {\rm yr}\,,$$ where $T_d$ is the disc temperature at 100 AU. We can therefore calculate the disc mass and the accretion rate onto the central object that hosts the disc (planet or brown dwarf) at any given time, using the initial disc mass $M_{\rm d}(0)$ obtained from the SPH simulations, and Eqs. (\[eq:dmass\]), (\[eq:accretion\]), and (\[eq:tvisc2\]). The masses of discs around low-mass stellar and substellar objects {#sec:discmass} ================================================================== Observations of disc masses [e.g. @Andrews:2013a; @Mohanty:2013a] over a wide range of host stellar and substellar masses, from intermediate mass stars to planetary-mass objects, suggest a linear correlation between object mass and disc mass, i.e. $M_{\rm d}\propto M_*$. Using 3 different evolutionary models to calculate stellar masses, [@Andrews:2013a] find that disc mass scales almost linearly with stellar[^2] mass, $M_{\rm d}\approx 10^\kappa M_*^{\lambda}$, where $\kappa=-2.3 \pm 0.3, -2.7 \pm 0.2, -2.5 \pm 0.2$, and $\lambda=1.4\pm0.5, 1.0\pm0.4, 1.1\pm0.4$, when using the [@DAntona:1997a] (hereafter DM97), [@Baraffe:1998a] (BCAH98) and [@Siess:2000a] (SDF00) models, respectively. [@Mohanty:2013a] follow a similar approach, using the SDF00 models for stars with mass $>1.4~{\rm M}_{\sun}$, the BCAH98 model for stars with masses $0.08-1.4~{\rm M}_{\sun}$ and the dusty models of [@Chabrier:2000a] for stellar masses $<0.08~{\rm M}_{\sun}$, and similarly find that $M_{\rm d}\approx 10^{-2.4}M_*$. We note however that both [@Andrews:2013a] and [@Mohanty:2013a] have assumed that the scatter in disc mass is constant for all objects irrespective of their mass; this may not be the case [@Alexander:2006c]. It is evident (see Fig. 9 of @Andrews:2013a and Fig.
9 of @Mohanty:2013a) that (i) there is a considerable scatter in the ${M}_{\rm disc}-M_*$ relation, and (ii) there are only a few definite detections of discs around stars with masses $<0.1~{\rm M}_{\sun}$. For example, in the sample of [@Andrews:2013a] using the BCAH98 model there are just 15 definite disc detections around stars with mass $<0.1~{\rm M}_{\sun}$ (for 42 objects only upper limits for the disc masses were derived; these upper limits vary from $6\times10^{-4}$ to $1.3\times10^{-2}~{\rm M}_{\sun}$). The large scatter in the data points and the small number of data points at low masses cast doubt on the suggestion that there is a simple relation between stellar and disc mass that holds from intermediate-mass stars all the way to brown dwarfs and planetary-mass objects. In other words, these data do not exclude different scaling relations for low- and high-mass objects. Another complication comes from the fact that when calculating the relation between stellar and disc masses the ages of these objects are not taken into account: disc masses decrease with time either due to viscous evolution or due to photo-evaporation from the host star. Thus, considering that discs around low-mass objects have masses that are low and near the detection limits of current observational facilities, they are more likely to be observed while still young (and therefore more massive). Therefore, it may be expected that the discs around brown dwarfs and planets are more massive than what a simple extrapolation from the ${M}_{\rm disc}-M_*$ relation for higher mass stars would suggest. The exact effect that the object ages have on the analyses of [@Andrews:2013a] and [@Mohanty:2013a] is difficult to estimate as stellar ages cannot be determined accurately enough [@Soderblom:2013a]. The disc masses of the objects formed by disc fragmentation in the [@Stamatellos:2009a] simulations are plotted against the masses of the objects in Fig. \[fig:sdmass\].
The disc masses are calculated from the disc masses in the hydrodynamic simulations of [@Stamatellos:2009a] assuming that the discs evolve viscously, using Eq. (\[eq:dmass\]) with $\alpha=0.01$. The time for which each disc is evolved is chosen randomly between $1-10$ Myr, so as to emulate the age spread of observed discs. The same figure shows the relations derived by [@Andrews:2013a] using different evolutionary models to calculate the masses of the host star (DM97, BCAH98, and SDF00, as marked on the graph). There is scatter in the calculated disc masses of objects formed by disc fragmentation due to differences in the initial disc masses (i.e. the mass they have when they separate from the disc of the parent star) and in their ages. Most of these discs are more massive than expected from the ${M}_{\rm disc}-M_*$ scaling relation (which is mainly determined by higher mass stars), by more than an order of magnitude in a few cases. Additionally, there is no significant correlation between disc mass and stellar mass, in contrast with higher-mass systems; we find a relation $\log (M_{\rm d})=-3.7-0.005\, \log(M_*)$ with a standard deviation of $\sigma=0.27$. Both of the above characteristics are consequences of formation by disc fragmentation. When a low-mass object forms from gas condensing out in the parent disc, its properties (and its disc properties) are initially similar to those of an object that forms from a collapsing core in isolation. However, as this object/disc system moves within the parent disc (but before it separates from the parent disc) it accretes more gas, and therefore its mass increases. This mass is initially accreted onto the object’s disc and then slowly flows onto the object. Therefore, when a young object that has formed by disc fragmentation separates from its parent disc and evolves independently, it has a more massive disc than it would have if it had formed in isolation in a collapsing core.
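The evolution recipe described above can be written down directly from Eqs. (\[eq:dmass\]), (\[eq:accretion\]) and (\[eq:tvisc2\]). In the sketch below the helper names and the example disc/object masses are our own illustrative choices, not values taken from the simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def t_visc(alpha, R0_AU, Mstar_Msun, Td_K):
    """Viscous time of Eq. (tvisc2) in yr (gamma = 1)."""
    return 8.0e4 * (alpha/1e-2)**-1 * (R0_AU/10.0) \
           * (Mstar_Msun/0.5)**0.5 * (Td_K/10.0)**-1

def evolve(Md0, t, tnu, gamma=1.0):
    """Disc mass and accretion rate at time t (Eqs. dmass, accretion)."""
    tau  = t/tnu + 1.0
    Md   = Md0 * tau**(-1.0/(2.0*(2.0-gamma)))
    Mdot = Md0 / (2.0*(2.0-gamma)*tnu) * tau**(-(2.5-gamma)/(2.0-gamma))
    return Md, Mdot

# Example: a 20 M_J disc around a 30 M_J brown dwarf, with R0 = 10 AU,
# Td = 10 K, alpha = 0.01, evolved to a random age between 1 and 10 Myr
# (illustrative numbers only).
Mjup = 9.54e-4                       # M_J in M_sun
tnu  = t_visc(0.01, 10.0, 30*Mjup, 10.0)
age  = rng.uniform(1e6, 1e7)         # yr, emulating the observed age spread
Md, Mdot = evolve(20*Mjup, age, tnu)
```

For $\gamma=1$ the disc mass simply falls off as $\tau^{-1/2}$ and the accretion rate as $\tau^{-3/2}$, so a disc evolved for three viscous times retains exactly half its initial mass.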
This scenario is consistent with the observations of [@Andrews:2013a] and [@Mohanty:2013a]; at least a few discs around young low-mass objects are more massive than expected. In their samples the detection limit is around $\sim 10^{-3}~{\rm M}_{\sun}$ and a few of the low-mass objects that they observed either have lower-mass discs or no discs at all. These may be objects that have either formed by the collapse of a low-mass pre-(sub)stellar core like Sun-like stars, or objects that have formed by disc fragmentation but have lost their discs (through evolution with time, see Fig. \[fig:dmasst\], or due to interactions within the disc). The presence of low-mass discs around low-mass objects is consistent with both formation scenarios, but the presence of relatively high-mass discs is indicative of formation by disc fragmentation. Observations of disc masses around very low-mass objects ($\stackrel{<}{_\sim}10~{\rm M}_{\rm J}$), where the predicted ${M}_{\rm disc}-M_*$ relation for young objects diverges significantly from the established ${M}_{\rm disc}-M_*$ relation derived for higher mass stars, will further test the disc fragmentation model. ALMA has the required sensitivity and spatial resolution to observe such small discs. For example, [@Ricci:2014a] have estimated disc masses down to $\sim 0.8-2.1~{\rm M}_{\rm J}$ in three young low-mass objects in the Taurus star forming region. Accretion rates onto wide-orbit low-mass objects {#sec:accretion} ================================================ The accretion rates onto low- and higher-mass objects may also relate to their formation mechanism. In some cases it is possible to derive accretion rates even when the disc that provides the material for accretion is not detectable in the sub-mm, where disc masses are usually measured [@Herczeg:2009a; @Joergens:2013a; @Zhou:2014a].
For example, [@Herczeg:2009a] and [@Zhou:2014a] estimate the accretion luminosity from the excess line and continuum emission; for low-mass objects they can estimate accretion rates down to $\sim10^{-13}$ M$_{\sun}\ {\rm yr}^{-1}$. It has been argued that, similarly to the ${M}_{\rm disc}-M_*$ relation mentioned in the previous section, there is a relation between the accretion rate onto a star and its mass. It has been suggested that this relation holds from intermediate-mass stars down to brown dwarfs, namely that $\dot{M}_*\propto M_*^{\alpha}$, where $\alpha\sim1.0-2.8$, albeit with a large scatter [@Natta:2004a; @Calvet:2004a; @Mohanty:2005a; @Muzerolle:2005a; @Herczeg:2008a; @Antoniucci:2011a; @Biazzo:2012a]. The accretion rates onto stars for a wide range of stellar masses are plotted against the stellar masses in Fig. \[fig:accretion\]. The accretion rates shown here have all been measured directly from excess Balmer continuum emission in the U-band [@Gullbring:1998a; @Herczeg:2008a; @Herczeg:2009a; @Rigliaco:2011a; @Rigliaco:2012a; @Ingleby:2013a; @Alcala:2014a; @Zhou:2014a]. In the same figure the best fit line that was calculated by [@Zhou:2014a] is also plotted. It is evident from the graph that there is considerable scatter in the $\dot{M}_*-M_*$ relation, which may reflect differences in the disc initial conditions [@Alexander:2006c; @Dullemond:2006a]. A part of the scatter could also be attributed to the different ages of the systems plotted in Fig. \[fig:accretion\]; accretion rates drop as stars age (see Fig. \[fig:t-mdot\]). The detection limits of accretion rates are relatively low for planets and brown dwarfs. Most objects with excess emission in the IR also have measured U-band accretion rates; thus it is expected that there is no bias towards detecting only younger objects with higher accretion rates. In fact most of the observed objects exhibit low accretion rates.
The estimated accretion rates for most of the low-mass objects ($<0.1~{\rm M}_{\sun}$) are consistent with the $\dot{M}_*-M_*$ scaling relation derived for higher-mass stars; in some cases the accretion rates are even lower than expected. However, for a few objects, such as the three planetary-mass companions observed by [@Zhou:2014a] (GSC 06214-00210 b, GQ Lup b, and DH Tau b), the accretion rates are an order of magnitude higher than expected from the $\dot{M}_*-M_*$ relation. In Fig. \[fig:accretion\] we also plot the accretion rates of the objects formed by disc fragmentation in the simulations of [@Stamatellos:2009a]. These accretion rates are calculated using the viscous evolution model (Eq. (\[eq:accretion\])) with $\alpha=0.01$. The time for which each disc is evolved is chosen randomly between $1-10$ Myr, so as to emulate the age spread of observed discs. There is no significant correlation between the accretion rate and the mass of the object; we find a relation $\log (\dot{M}_*)=-10.5-0.12\, \log(M_*)$, with a standard deviation of $\sigma=0.3$. Moreover, in a few cases the accretion rates are higher than expected from the $\dot{M}_*-M_*$ scaling relation. In the model that we present here, this is due to the higher initial mass of the discs of these objects. As mentioned in the previous section, these secondary discs grow in mass as they move within the discs of their parent stars (before they start evolving independently). Therefore, we suggest that the relatively high accretion rates are indicative of formation by disc fragmentation. On the other hand, low accretion rates are consistent with both formation by disc fragmentation and formation by the collapse of low-mass pre-(sub)stellar cores. In the former case low accretion rates could be due to time evolution (the accretion rate drops with time; see Fig. \[fig:t-mdot\]) or due to disruption by interactions with other objects in the parent disc.
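A relation of this form is obtained by an ordinary least-squares fit in log-log space together with the standard deviation of the residuals. A minimal sketch follows; the data pairs below are invented purely for illustration and are not the simulation values:

```python
import numpy as np

# Hypothetical (M_star [M_sun], Mdot [M_sun/yr]) pairs standing in for the
# model points of Fig. [fig:accretion]; made-up numbers for illustration.
M    = np.array([0.01, 0.02, 0.05, 0.08, 0.1, 0.15])
Mdot = np.array([2e-9, 1e-9, 3e-9, 8e-10, 2e-9, 1.5e-9])

# Least-squares fit of log10(Mdot) = a + b*log10(M)
b, a = np.polyfit(np.log10(M), np.log10(Mdot), 1)

# Scatter about the fit, quoted in the text as a standard deviation sigma
resid = np.log10(Mdot) - (a + b*np.log10(M))
sigma = resid.std()
```

A near-zero slope $b$, as found for the fragmentation-formed objects, signals that the accretion rate is essentially independent of the object mass.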
Observations of accretion rates onto very low-mass objects [$\stackrel{<}{_\sim}~10~{\rm M}_{\rm J}$; e.g. @Zhou:2014a], where the predicted $\dot{M}_*-M_*$ relation diverges significantly from the established $\dot{M}_*-M_*$ relation derived for higher mass stars, will further test the model presented here. The effect of the viscosity of secondary discs ============================================== We have so far assumed in our analysis that the physical processes for redistributing angular momentum are the same for discs of T Tauri stars and for discs of lower mass objects (brown dwarfs, planets). However, this may not be the case. It has been argued that the magneto-rotational instability may not be effective in discs around low-mass objects [@Keith:2014b; @Szulagyi:2014a; @Fujii:2014a], which means that the effective viscosity in such discs should be smaller than the one presumed for T Tauri star discs ($\alpha=0.01$). However, these studies have focused on discs around Jovian planets on Jovian orbits, i.e. orbits relatively close to the central stars [e.g. @Gressel:2013b]. In our study we focus on wide-orbit low-mass companions (see Fig. \[fig:discmass.r\]), whose discs are more extended as they are not limited by the Hill radii of their host secondary objects [see @Stamatellos:2009a]. These discs could be massive enough for angular momentum to be effectively transported by gravitational torques. Nevertheless, our knowledge of the effective viscosity in such discs is limited, and it is important to examine the effect that the assumed disc viscosity has on the conclusions of our study. In Figs. \[fig:vdmass\] and \[fig:vaccretion\] we present the predictions of our model for low-viscosity discs ($\alpha=0.001$) and for high-viscosity discs ($\alpha=0.05$). As expected, low-viscosity discs evolve slower and their masses and accretion rates remain higher for longer.
Therefore, in this case the differences between the predicted ${M}_{\rm disc}-M_*$ and $\dot{M}_*-M_*$ relations for disc fragmentation and the observed relations for higher mass stars are more pronounced (see black lines in Figs. \[fig:vdmass\], \[fig:vaccretion\]). The opposite holds for high-viscosity discs ($\alpha=0.05$; see brown lines in Figs. \[fig:vdmass\], \[fig:vaccretion\]). Conclusions {#sec:conclusions} =========== We suggest that substellar (planetary-mass objects and brown dwarfs) and low-mass stellar objects (low-mass hydrogen burning stars) that form by disc fragmentation have disc masses and accretion rates that (i) are independent of the mass of the host object, and (ii) are higher than what is expected from scaling relations derived from their intermediate and higher-mass counterparts. These low-mass objects form, similarly to higher-mass objects, from self-gravitating gas, but as they move within the gas-rich parent disc their individual discs accrete additional material; therefore, before these objects separate from their parent discs and evolve independently (i.e. within a few kyr), their discs grow more massive and the accretion rates onto them are higher than if they had formed in isolation in collapsing low-mass pre-(sub)stellar cores. The assumption of independent evolution is not critical: if these secondary discs were still interacting with their parent discs they would accrete additional material, reinforcing the above conclusion. However, we do not expect additional accretion to be important. Observations of disc masses and accretion rates of low-mass objects are consistent with the predictions of the disc fragmentation model.
Although the presence of low-mass discs (or lack of discs) and low accretion rates (or no accretion at all) may be attributed to disc evolution and/or disc disruption due to interactions with other objects within the parent disc, relatively high disc masses and high accretion rates are suggestive of formation due to disc fragmentation. We therefore suggest that low-mass objects that have discs with masses higher than expected (or equivalently accretion rates onto them higher than expected), such as GSC 06214-00210 b, GQ Lup b, and DH Tau b [@Zhou:2014a], are young objects that have formed by disc fragmentation. The disc fragmentation model can further be tested by observations of disc masses and accretion rates of very low-mass objects ($\stackrel{<}{_\sim}10~{\rm M}_{\rm J}$). At these very low masses the ${M}_{\rm disc}-M_*$ and $\dot{M}_*-M_*$ relations predicted by the model presented here diverge significantly from the corresponding relations established for higher-mass stars. We suggest that future analyses of the ${M}_{\rm disc}-M_*$ and $\dot{M}_*-M_*$ relations should separate the sample into two subgroups, low-mass ($<0.2~{\rm M}_{\sun}$) and higher-mass ($>0.2~{\rm M}_{\sun}$) objects, so as to test whether these objects obey different scaling relations. The intense interest in wide-orbit and free-floating planets has given momentum to the development of instruments with high sensitivity and good spatial resolution. Therefore observations in the near future are expected to deliver many more such low-mass objects. ALMA is already delivering such observations [@Ricci:2014a; @Kraus:2014a]. The study of these objects, their disc properties and the accretion rates onto them (if they are still young) will provide further constraints regarding their formation mechanism.
Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank the referee for many insightful comments that helped to improve the paper and clarify our main conclusions. We thank Yifan Zhou for important input in the paper and Richard Alexander for useful comments on photo-evaporation. DS thanks Thijs Kouwenhoven for his hospitality during a visit to the Kavli Institute for Astronomy & Astrophysics at Peking University, where part of this work was completed. \[lastpage\] [^1]: E-mail:D.Stamatellos@astro.cf.ac.uk [^2]: In this context the terms [*star*]{} and [*stellar*]{} are used to refer to any objects formed by gravitational instability, therefore including brown dwarfs and planets, as well as hydrogen-burning stars [@Whitworth:2007a].
--- abstract: 'Based on the chiral perturbation theory (ChPT) with the hidden local symmetry, we propose a methodology to calculate a part of the $1/N_c$ corrections in the holographic QCD (HQCD). As an example, we apply the method to an HQCD model recently proposed by Sakai and Sugimoto. We show that the $\rho$-$\pi$-$\pi$ coupling comes into good agreement with the experiment once the $1/N_c$-subleading corrections are included.' author: - '[**Masayasu Harada**]{}' - '[**Shinya Matsuzaki**]{}' - '[**Koichi Yamawaki**]{}' title: '\' --- Introduction ============ Recently the duality in string/gauge theory [@Maldacena:1997re] has provided us with a new perspective for solving the problem of strongly coupled gauge theories: a strongly coupled gauge theory can be reformulated in terms of a weakly coupled string theory based on the AdS/CFT correspondence [@AdS/CFT]. Some important qualitative features of the dynamics of QCD, such as confinement and chiral symmetry breaking, have been reproduced from this holographic point of view, so-called holographic QCD (HQCD), although the theory in the UV region is substantially different from QCD, i.e., it lacks asymptotic freedom. Several authors [@SaSu; @HQCD] have proposed models of HQCD in which the chiral symmetry breaking is realized. In particular, starting with a stringy setting, Sakai and Sugimoto (SS) [@SaSu] have succeeded in producing the realistic chiral symmetry breaking of $U(N_f)_L\times U(N_f)_R$ down to $U(N_f)_V$ and also a natural emergence of the hidden local symmetry (HLS) [@Bando:1984ej] for vector/axialvector mesons. Moreover, most of these models [@SaSu; @HQCD] analyze observables of QCD related to the pion and the vector mesons in the large $N_c$ limit, such as $m_{\rho}^2/(g_{\rho\pi\pi}^2F_{\pi}^2) \simeq 3.0$, where $g_{\rho\pi\pi}$ denotes the $\rho$-$\pi$-$\pi$ coupling, and $F_{\pi}$ the pion decay constant.
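For comparison, the experimental value of this ratio can be estimated from the measured $\rho\to\pi\pi$ width. The back-of-the-envelope sketch below uses approximate PDG-like central values and assumes the standard p-wave two-body width formula $\Gamma = g_{\rho\pi\pi}^2\, p^3/(6\pi m_\rho^2)$:

```python
import numpy as np

# Approximate experimental inputs (PDG-like central values; assumptions
# of this back-of-the-envelope check, all in MeV):
m_rho = 775.3     # rho meson mass
m_pi  = 139.6     # charged pion mass
Gamma = 149.1     # rho -> pi pi width
F_pi  = 92.2      # pion decay constant in the F_pi ~ 92 MeV convention

# Extract g_{rho pi pi} from the p-wave width,
# Gamma = g^2 p^3 / (6 pi m_rho^2), p = pion momentum in the rho rest frame
p  = np.sqrt(m_rho**2/4.0 - m_pi**2)
g2 = Gamma * 6.0*np.pi * m_rho**2 / p**3

ratio = m_rho**2 / (g2 * F_pi**2)   # experimental m_rho^2/(g^2 F_pi^2), ~2
```

This gives $g_{\rho\pi\pi}\simeq 6$ and a ratio close to 2, to be contrasted with the large-$N_c$ holographic value of $\simeq 3$ quoted above.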
This, however, substantially deviates from one of the celebrated KSRF relations (KSRF II), $m_{\rho}^2/(g_{\rho\pi\pi}^2F_{\pi}^2) = 2$, which agrees with the experiment. Since the holographic result is only the one at the leading order in the $1/N_c$ expansion, the deviation may be cured by subleading effects in the $1/N_c$ expansion. So far, however, no holographic model has succeeded in including the effects of the $1/N_c$ corrections. On the other hand, it is well known that meson loops yield the next order in the $1/N_c$ expansion. In this paper [^1] we propose a methodology for calculating a part of the $1/N_c$ corrections to the HQCD through the meson loop, based on the “HLS chiral perturbation theory (ChPT)” [@Harada:1992bu; @Tanabashi:1993sr; @Harada:1999zj; @HY:PRep] which incorporates the vector meson loops into the ChPT [@Gas:84] through the HLS model [@Bando:1984ej]. It is important to note [@Georgi:1989gp] that the [*HLS is crucial for the systematic power counting*]{} when the vector meson mass is light (see, for a review, Ref. [@HY:PRep]). As an example, we apply our method to the HQCD model proposed by SS [@SaSu]. We show that the $1/N_c$ corrections bring the ratio $m_{\rho}^2/(g_{\rho\pi\pi}^2F_{\pi}^2)$ into good agreement with the experimental value, or the KSRF II relation. The formalism proposed in this paper is applicable to other models holographically dual to strongly coupled gauge theories, which will provide further implications for HQCD. Review of a holographic model ============================= Let us start with the low-energy effective action on the 5-dimensional space-time induced from a holographic model, based on the $N_f$ D8-$\overline{\rm D8}$ branes transverse to the $N_c$ D4-branes, proposed by the authors in Ref.
[@SaSu]: $$\begin{aligned} S_{D8}&=& N_c G \int d^4x dz \Bigg( - \frac{1}{2} K^{-1/3}(z) {\rm tr}[ F_{\mu\nu} F^{\mu\nu} ] \nonumber \\ && + K(z) M_{\rm KK}^2 {\rm tr }[ F_{\mu z} F^{\mu z} ] + \mathcal{O}(F^3) \Bigg) \,, \label{D8 brane action}\end{aligned}$$ where $K(z)$ is the induced measure of the 5-dimensional space-time given by $$K(z) = 1 + z^2 \,.$$ The coupling $G$ is the rescaled ’t Hooft coupling expressed as $$G = \frac{N_cg_{\rm YM}^2}{108 \pi^3} \,,$$ where $g_{\rm YM}$ is the gauge coupling of the $U(N_c)$ gauge symmetry on the $N_c$ D4-branes. It should be noted that the mass scale $M_{\rm KK}$ in Eq.(\[D8 brane action\]) is related to the scale of the compactification of the $N_c$ D4-branes onto the $S^1$. The five dimensional gauge field $A_{M}$ transforms as $$\begin{aligned} A_{M}(x^{\mu},z) & \rightarrow & g(x^{\mu},z) \cdot A_{M}(x^{\mu},z) \cdot g^{\dagger}(x^{\mu},z) \nonumber \\ && - i \partial_{M} g(x^{\mu},z) \cdot g^{\dagger}(x^{\mu},z) \,, \end{aligned}$$ where $g(x^{\mu}, z)$ is the transformation matrix of the 5-dimensional gauge symmetry. We choose the same boundary condition of the 5-dimensional gauge field $A_{M}$ as done in Ref. [@SaSu]: $$A_{M}(x^{\mu},z = \pm \infty) = 0 \,,$$ which reduces the local chiral symmetry to the global symmetry $g_{R,L}\in \mbox{U}_{R,L}(N_f)$. The chiral field $U$ defined in Ref. [@SaSu]: $$U(x^{\mu}) = {\rm P} \exp \left[ i \int_{-\infty}^{\infty} dz' A_z(x^{\mu},z') \right] \,$$ is parameterized by the NG boson field $\pi$ as $$U(x^{\mu}) = e^{\frac{2i \pi(x^{\mu})}{F_{\pi}}} \, ,$$ where $F_\pi$ denotes the decay constant of $\pi$.
$U$ is divided as $$U(x^{\mu}) = \xi_L^{\dagger}(x^{\mu}) \cdot \xi_R(x^{\mu}) \,,$$ where $$\begin{aligned} \xi_{R,L}(x^{\mu}) = {\rm P} \exp \left[ i \int_{0}^{\pm\infty} dz' A_z(x^{\mu},z') \right] \,.\end{aligned}$$ $\xi_{R,L}$ transform as [@Bando:1984ej] $$\xi_{R,L}^\prime = h(x^{\mu}) \cdot \xi_{R,L} \cdot g_{R,L}^{\dagger} \, ,$$ where $h(x^\mu)=g(x^{\mu}, z=0)$ is a transformation of the HLS. We further parameterize $\xi_{R,L}$ as [@Bando:1984ej] $$\xi_{R,L}(x^\mu) = e^{\frac{i\sigma(x^\mu)}{F_\sigma}} \cdot e^{\pm \frac{i\pi(x^\mu)}{F_\pi}} \,,$$ where $\sigma$ denote the NG bosons associated with the spontaneous breaking of the HLS, and $F_\sigma$ the decay constant of $\sigma$. The $\sigma$ are absorbed into the gauge bosons of the HLS, which acquire mass through the Higgs mechanism. It is convenient to work in the $A_z(x^\mu,z)\equiv 0$ gauge [@SaSu]. There still exists a residual gauge symmetry, $h(x^\mu)=g(x^{\mu}, z=0)$, which was identified with the hidden local symmetry (HLS) in Ref. [@SaSu]. The 5-dimensional gauge field $A_{\mu}$ transforms under the residual gauge symmetry (HLS) as $$\begin{aligned} A_{\mu}(x^{\mu},z) &\rightarrow& h(x^{\mu}) \cdot A_{\mu}(x^{\mu},z) \cdot h^{\dagger}(x^{\mu}) \nonumber \\ && \hspace{20pt} - i \partial_{\mu} h(x^{\mu}) \cdot h^{\dagger}(x^{\mu}) \,. \label{trans amu}\end{aligned}$$ In this gauge the NG boson fields are included in the boundary condition for the 5-dimensional gauge field $A_{\mu}$ as $$A_{\mu}(x^{\mu}, z= \pm \infty) = \alpha^{R,L}_{\mu}(x^{\mu}) \,,$$ where $$\alpha^{R,L}_{\mu} (x^{\mu}) = i \xi_{R,L} (x^{\mu}) \partial_{\mu} \xi^{\dagger}_{R,L}(x^{\mu}) \, ,$$ which transform under the HLS in the same way as in Eq. (\[trans amu\]). Relation to HLS in the large $N_c$ limit ==================================== In contrast to Ref.
[@SaSu] where vector meson fields are identified with the CCWZ matter fields transforming [*homogeneously*]{} under HLS, we here introduce vector meson fields as an infinite tower of the HLS [*gauge fields*]{} $V_\mu^{(k)}$ ($k=1,2,\ldots$), which transform [*inhomogeneously*]{} under the HLS as in Eq. (\[trans amu\]) [@HY:PRep]. Using $V_\mu^{(k)}$ together with $\alpha^{R,L}_{\mu}$, we expand the 5-dimensional gauge field $A_{\mu}$ as $$\begin{aligned} A_{\mu}(x^{\mu},z) &=& \alpha^R_{\mu}(x^{\mu}) \phi^r(z) + \alpha^L_{\mu}(x^{\mu}) \phi^l(z) \nonumber \\ && + \sum_{k \ge 1} V^{(k)}_{\mu}(x^{\mu}) \phi_k(z) \, , \label{A expansion}\end{aligned}$$ where the functions $\phi^r$, $\phi^l$ and $\phi_k$ ($k=1,2,\ldots$) form a complete set in the $z$-coordinate space. These functions {$\phi^r$, $\phi^l$, $\phi_k$} are different from the eigenfunctions $\psi_n$ in [@SaSu] which satisfy the eigenvalue equation $$-K^{1/3} \partial_z \left( K \partial_z \psi_n \right) = \lambda_n \psi_n \, ,$$ with the eigenvalues $\lambda_n$. Then, the functions {$\phi^r$, $\phi^l$, $\phi_k$ } are not separately the solutions of the eigenvalue equation but are expressed by linear combinations of the solutions, as we will see later. Substituting Eq.(\[A expansion\]) into the action (\[D8 brane action\]), we obtain the 4-dimensional theory with an infinite tower of the massive vector and axialvector mesons and the NG bosons associated with the chiral symmetry breaking. We would like to stress that, since the 5-dimensional gauge field $A_\mu$ is expanded in terms of the HLS gauge fields $V_{\mu}^{(k)}$, the action (\[D8 brane action\]) is expressed as the form [*manifestly gauge invariant*]{} under the HLS, which enables us to calculate the $1/N_c$-subleading correction in a systematic way. Let us concentrate on the lightest vector meson together with the NG bosons by integrating out the heavy vector and axialvector meson fields [^2]. 
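The eigenvalue equation above is easy to attack numerically. The following finite-difference sketch is our own illustration, not code from the paper: the infrared cutoff $L$, grid size, and the restriction to the even-parity sector (Neumann condition at $z=0$, Dirichlet at $z=L$) are all our choices. It extracts the lowest even eigenvalue, i.e. the $\lambda_1$ associated with the lightest vector meson.

```python
import numpy as np

# Finite-difference sketch of the eigenvalue equation
#   -K^{1/3} d/dz ( K dpsi/dz ) = lambda psi ,   K(z) = 1 + z^2,
# restricted to even modes: Neumann at z = 0, Dirichlet at a cutoff z = L.
# L, N and the even-parity reduction are our choices, not taken from the paper.
L, N = 60.0, 1200
h = L / N
z = np.arange(N) * h                       # nodes on [0, L); Dirichlet at z = L
Khalf = 1.0 + (z + 0.5 * h) ** 2           # K at half-grid points z_{i+1/2}
w = (1.0 + z * z) ** (-1.0 / 3.0)          # weight K^{-1/3}
w[0] *= 0.5                                # half weight on the Neumann row
diag = np.empty(N)
diag[0] = Khalf[0] / h**2
diag[1:] = (Khalf[1:] + Khalf[:-1]) / h**2
off = -Khalf[:-1] / h**2
s = 1.0 / np.sqrt(w)                       # reduce A psi = lambda B psi to standard form
C = np.diag(diag * s * s)
i = np.arange(N - 1)
C[i, i + 1] = C[i + 1, i] = off * s[:-1] * s[1:]
lam1 = np.linalg.eigvalsh(C)[0]            # lowest even eigenvalue
print(lam1)
```

With these settings the result comes out close to 0.67 (our numerical estimate); this is the eigenvalue that sets the vector-meson mass scale in the discussion below.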
As a result, the HLS gauge field $V_{\mu}$ corresponding to the lightest vector meson is embedded into $A_{\mu}$ as $$\begin{aligned} A_{\mu}(x^{\mu},z) &=& \alpha_{\mu}^R (x^{\mu}) \varphi^r(z) + \alpha^L_{\mu}(x^{\mu}) \varphi^l(z) \nonumber \\ && + V_{\mu}(x^{\mu}) \varphi(z) \, , \label{A expansion1}\end{aligned}$$ where $\varphi^r$, $\varphi^l$ and $\varphi$ denote the wave functions modified by integrating out the heavier mesons. Note that they satisfy the following constraint: $$\begin{aligned} \varphi^r(z) + \varphi^l(z) + \varphi(z) = 1 \,, \label{constraint for n equal 1} \end{aligned}$$ which follows from the consistency condition between the transformation properties ([*inhomogeneous term*]{}) of the left and right hand sides of Eq.(\[A expansion1\]). The relations between $\{\varphi^r , \varphi^l , \varphi \}$ and the eigenfunctions of the eigenvalue equation are obtained in the following way: We introduce the 1-forms $\widehat{\alpha}_{\mu||}$ and $\widehat{\alpha}_{\mu\perp}$ defined as $$\begin{aligned} {\widehat{\alpha}}_{\mu||}(x^{\mu}) &=& \frac{\alpha^R_{\mu}(x^{\mu})+\alpha^L_{\mu}(x^{\mu})}{2} - V_{\mu}(x^{\mu}) \,, \\ {\widehat{\alpha}}_{\mu\perp}(x^{\mu}) &=& \frac{\alpha^R_{\mu}(x^{\mu})-\alpha^L_{\mu}(x^{\mu})}{2} \,. \end{aligned}$$ Then Eq.(\[A expansion1\]) is rewritten into the following form: $$\begin{aligned} A_{\mu}(x^{\mu},z) &=& {\widehat{\alpha}}_{\mu\perp}(x^{\mu}) \left(\varphi^r(z)- \varphi^l(z) \right) \nonumber \\ && + \left( \widehat{\alpha}_{\mu||}(x^{\mu})+V_{\mu}(x^{\mu})\right) \left(\varphi^r(z)+ \varphi^l(z) \right) \nonumber \\ && +V_{\mu}(x^{\mu}) \varphi(z) \,. 
\label{A expansion2}\end{aligned}$$ Since the 1-form $\widehat{\alpha}_{\mu\perp}$ includes the NG boson field as $\widehat{\alpha}_{\mu \perp} = \frac{1}{F_{\pi}} \partial_{\mu} \pi + \cdots$, we identify the combination $\varphi^r-\varphi^l$ with the eigenfunction $\psi_0$ for the zero eigenvalue as $$\varphi^r(z) - \varphi^l(z) = \psi_0(z)= \frac{2}{\pi} \tan^{-1}z \,.$$ On the other hand, since the HLS gauge field $V_{\mu}$ corresponds to the lightest vector meson, we identify the wave function $\varphi$ with the eigenfunction of the first excited KK mode, $$\varphi(z) = - \psi_1(z) \, .$$ Then, by using Eq.(\[constraint for n equal 1\]), the wave functions $\varphi^r$ and $\varphi^l$ are expressed in terms of the eigenfunctions $\psi_0$ and $\psi_1$ as $$\begin{aligned} \varphi^{r,l}(z) &=& \frac{1}{2} \pm \frac{1}{2} \psi_0(z) + \frac{1}{2}\psi_1(z) \,. \end{aligned}$$ By using this, Eq.(\[A expansion2\]) is rewritten into the following form: $$\begin{aligned} A_{\mu}(x^{\mu},z) &=& \widehat{\alpha}_{\mu\perp}(x^{\mu}) \psi_0(z) + \left( \widehat{\alpha}_{\mu||}(x^{\mu}) + V_{\mu}(x^{\mu}) \right) \nonumber \\ && + \widehat{\alpha}_{\mu||}(x^{\mu}) \psi_1(z) \,. \end{aligned}$$ It should be noticed that neither the wave function $\varphi^r$ nor $\varphi^l$ is the eigenfunction for the zero eigenvalue. This is the reflection of the well-known fact that the massless photon field is given by a linear combination of the HLS gauge field and the gauge field corresponding to the chiral symmetry [@Bando:1984ej; @HY:PRep]. Now, since we introduce the vector meson field as the gauge field of the HLS, the derivative expansion of the Lagrangian becomes possible. This is an important difference compared with the formulation done in Ref. [@SaSu]. 
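As a one-line consistency check (ours, using only the definitions above), $\psi_0 = \frac{2}{\pi}\tan^{-1}z$ is indeed the zero mode of the eigenvalue equation:

```latex
K(z)\,\partial_z \psi_0(z)
  = (1+z^2)\cdot\frac{2}{\pi}\cdot\frac{1}{1+z^2}
  = \frac{2}{\pi} = \mathrm{const}
\;\;\Longrightarrow\;\;
-K^{1/3}(z)\,\partial_z\bigl(K(z)\,\partial_z\psi_0(z)\bigr) = 0 \,,
```

so $\lambda_0 = 0$, and $\psi_0(z\to\pm\infty) = \pm 1$ reproduces the boundary values carried by the combination $\varphi^r - \varphi^l$.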
Then, the leading Lagrangian counted as $\mathcal{O}(p^2)$ in the derivative expansion is constructed by the terms generated from the $F_{\mu z}F^{\mu z}$ term in the action (\[D8 brane action\]) together with the kinetic term of the HLS gauge field $V_{\mu}$ from the $F_{\mu\nu}F^{\mu\nu}$ term. On the other hand, the $\mathcal{O}(p^4)$ terms come from the remainder of the $F_{\mu\nu}F^{\mu\nu}$ term in the action (\[D8 brane action\]). The resultant Lagrangian takes the form of the HLS model [@Bando:1984ej; @HY:PRep]: $$\begin{aligned} \mathcal{L} &=& F_{\pi}^2 {\rm tr}[ \widehat{\alpha}_{\mu\perp} \widehat{\alpha}^{\mu}_{\perp} ] + F_\sigma^2 {\rm tr}[ \widehat{\alpha}_{\mu||} \widehat{\alpha}^{\mu}_{||} ] \nonumber \\ && - \frac{1}{2g^2}{\rm tr}[ V_{\mu\nu} V^{\mu\nu} ] + \mathcal{L}_{(4)} \,, \label{HLS Lag} \end{aligned}$$ where $\mathcal{L}_{(4)}$ is constructed by the ${\mathcal O}(p^4)$ terms [@Tanabashi:1993sr; @HY:PRep]: $$\begin{aligned} \mathcal{L}_{(4)} &=& y_1 \, {\rm tr}[ \widehat{\alpha}_{\mu\perp} \widehat{\alpha}^{\mu}_{\perp} \widehat{\alpha}_{\nu\perp} \widehat{\alpha}^{\nu}_{\perp} ] + y_2 \, {\rm tr}[ \widehat{\alpha}_{\mu\perp} \widehat{\alpha}_{\nu\perp} \widehat{\alpha}^{\mu}_{\perp} \widehat{\alpha}^{\nu}_{\perp} ] \nonumber \\ && + y_3 \, {\rm tr}[ \widehat{\alpha}_{\mu||} \widehat{\alpha}^{\mu}_{||} \widehat{\alpha}_{\nu||} \widehat{\alpha}^{\nu}_{||} ] + y_4 \, {\rm tr}[ \widehat{\alpha}_{\mu||} \widehat{\alpha}_{\nu||} \widehat{\alpha}^{\mu}_{||} \widehat{\alpha}^{\nu}_{||} ] \nonumber \\ && + y_5 \, {\rm tr}[ \widehat{\alpha}_{\mu\perp} \widehat{\alpha}^{\mu}_{\perp} \widehat{\alpha}_{\nu||} \widehat{\alpha}^{\nu}_{||} ] + y_6 \, {\rm tr}[ \widehat{\alpha}_{\mu\perp} \widehat{\alpha}_{\nu\perp} \widehat{\alpha}^{\mu}_{||} \widehat{\alpha}^{\nu}_{||} ] \nonumber \\ && + y_7 \, {\rm tr}[ \widehat{\alpha}_{\mu\perp} \widehat{\alpha}_{\nu\perp} \widehat{\alpha}^{\nu}_{||} \widehat{\alpha}^{\mu}_{||} ] \nonumber \\ && + y_8 \, \Bigg\{ 
{\rm tr}[ \widehat{\alpha}_{\mu\perp} \widehat{\alpha}^{\mu}_{||} \widehat{\alpha}_{\nu\perp} \widehat{\alpha}^{\nu}_{||} ] + {\rm tr}[ \widehat{\alpha}_{\mu\perp} \widehat{\alpha}^{\nu}_{||} \widehat{\alpha}_{\nu\perp} \widehat{\alpha}^{\mu}_{||} ] \Bigg\} \nonumber \\ && + y_9 \, {\rm tr}[ \widehat{\alpha}_{\mu\perp} \widehat{\alpha}^{\nu}_{||} \widehat{\alpha}_{\mu\perp} \widehat{\alpha}^{\nu}_{||} ] \nonumber \\ && + i z_4 \, {\rm tr}[ V_{\mu\nu} \widehat{\alpha}^{\mu}_{\perp} \widehat{\alpha}^{\nu}_{\perp} ] + i z_5 \, {\rm tr}[ V_{\mu\nu} \widehat{\alpha}^{\mu}_{||} \widehat{\alpha}^{\nu}_{||} ] \,. \end{aligned}$$ Note that all the parameters in the Lagrangian are expressed in terms of the parameters of the 5-dimensional gauge theory as $$\begin{aligned} F_{\pi}^2 &=& N_c GM_{KK}^2 \int dz K(z) \left[ \dot{\psi}_0(z) \right]^2 \,, \label{Fpi} \\ F_{\sigma}^2 &=& N_c GM_{KK}^2 \lambda_1 \langle \psi^2_1 \rangle \, , \label{Fsigma} \\ \frac{1}{g^2} &=& N_cG \langle \psi_1^2 \rangle \,, \label{g} \\ y_1 &=&-y_2 = -N_cG \cdot \langle 1 + \psi_1 - \psi_0^2 \rangle \,, \label{y1} \\ y_3 &=& -y_4= -N_cG \cdot \langle \psi^2_1 \left( 1 + \psi_1 \right)^2 \rangle \,, \label{y3} \\ y_5 &=&2 y_8=-y_9= -2N_cG \cdot \langle \psi_1^2 \psi_0^2 \rangle \, , \label{y5} \\ y_6 &=& -y_5-y_7 \,, \label{y6} \\ y_7 &=& 2N_cG \cdot \langle \psi_1 \left ( 1 + \psi_1 \right) \left( 1 + \psi_1 - \psi_0^2 \right) \rangle \,, \label{y7} \\ z_4 &=& -2N_cG \cdot \langle \psi_1 \left( 1 + \psi_1 - \psi_0^2 \right) \rangle \,, \label{z4} \\ z_5 &=& -2N_cG \cdot \langle \psi_1^2 \left( 1 + \psi_1 \right) \rangle \,, \label{z5} \end{aligned}$$ with $\lambda_1$ being the eigenvalue determined by solving the eigenvalue equation, and $$\langle A \rangle \equiv \int dz K^{-1/3}(z) A(z)$$ for a function $A(z)$. 
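The weighted integrals entering these couplings are straightforward numerically. As a sanity check of Eq. (\[Fpi\]): with $\psi_0 = \frac{2}{\pi}\tan^{-1}z$ one has $K\dot\psi_0^2 = (4/\pi^2)/(1+z^2)$, so the integral closes to $4/\pi$. A quick quadrature (the cutoff is our choice) confirms it:

```python
import numpy as np

# Midpoint-rule check of  int dz K(z) psidot0(z)^2 = 4/pi  for psi0 = (2/pi) arctan z.
dz = 0.01
z = np.arange(-1.0e4, 1.0e4, dz) + 0.5 * dz          # midpoints on [-1e4, 1e4]
integrand = (1.0 + z**2) * ((2.0 / np.pi) / (1.0 + z**2)) ** 2
val = integrand.sum() * dz
print(val, 4.0 / np.pi)    # both ~ 1.2732
```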
In Eq.(\[Fsigma\]), we used the identity $$\int dz K(z) \dot{\psi}_1^2(z) = \lambda_1 \int dz K^{-1/3}(z) \psi_1^2(z) \,.$$ We should note that the normalization of the eigenfunction $\psi_1$ is not determined solely from the eigenvalue equation and the boundary condition $\psi_1(\pm \infty)=0$. In addition, the values of the ’t Hooft coupling $G$ and the mass scale $M_{KK}$ are not fixed in the model. As a result, none of the three leading-order parameters of the HLS, ($F_{\pi}, F_{\sigma}, g$), is fixed in the present model: we need three phenomenological inputs to fix their values. Conversely, this implies that several physical predictions can be made from only three phenomenological inputs. It should also be noticed that the Lagrangian (\[HLS Lag\]) has all the parameters consistent with the large-$N_c$ counting rule, although several are absent since the external gauge fields are not incorporated in the model: as is well known, terms including two or more traces are suppressed by $1/N_c$ compared with terms of just one trace in the large $N_c$ limit [@Gas:84]. We note that all the terms in Eq.(\[HLS Lag\]) are of $\mathcal{O}(N_c)$, as one can easily see in Eqs.(22)-(31), since they are constructed from just a single trace.
$\rho$-$\pi$-$\pi$ coupling and KSRF II relation at large $N_c$ limit ===================================================================== As usual in the HLS model [@Bando:1984ej; @HY:PRep], from the Lagrangian (\[HLS Lag\]), we can easily read off the $\rho$ mass square and the $\rho$-$\pi$-$\pi$ coupling: $$\begin{aligned} m_{\rho}^2 & =& a g^2 F_{\pi}^2 \,, \nonumber \\ g_{\rho\pi\pi} &=& \frac{1}{ 2}ag \left( 1 + \frac{1}{2} g^2 z_4 \right) \,, \end{aligned}$$ where phenomenologically important parameter $a$ is defined by $$a \equiv \frac{F_\sigma^2}{F_\pi^2} \ .$$ It should be noted that these quantities are expressed in terms of the parameters of the 5-dimensional gauge theory, by using Eqs.(\[Fpi\])-(\[g\]) and (\[z4\]), as $$\begin{aligned} m_\rho^2 &=& \lambda_1 M_{KK}^2 \,, \\ g_{\rho\pi\pi} &=& \frac{\pi}{4} \frac{\lambda_1 }{\sqrt{N_c G}} \sqrt{ \frac{ \langle \psi_1(1-\psi_0^2) \rangle^2 }{ \langle \psi_1^2 \rangle}} \,. \end{aligned}$$ Since they are independent of the normalization of the eigenfunction $\psi_1$, $m_\rho^2$ and $g_{\rho\pi\pi}$ are completely determined, once the values of $G$ and $M_{KK}$ are fixed. Moreover, the following ratio related to the KSRF II relation is calculable even independently of these inputs: $$\begin{aligned} \frac{m_{\rho}^2}{g_{\rho\pi\pi}^2 F_{\pi}^2} = \frac{4}{a \left( 1 + \frac{1}{2} g^2 z_4 \right)^2} \nonumber &=& \frac{4}{\pi} \frac{ \langle \psi_1^2 \rangle } { \lambda_1 \langle \psi_1(1 - \psi_0^2) \rangle^2 } \\ &\simeq& 3.0 \, , \label{bare KSRFII} \end{aligned}$$ which is roughly 50% larger than the value of the KSRF II relation, $$\frac{m_{\rho}^2}{g_{\rho\pi\pi}^2 F_{\pi}^2} =2 \,,$$ or the experimental value estimated as $$\frac{m_{\rho}^2}{g_{\rho\pi\pi}^2 F_{\pi}^2} \Bigg|_{\rm exp} = 1.96 \,,$$ where use has been made of $F_{\pi}= 92.4\,\,{\rm MeV}$, $m_{\rho} = 775.8 \,\,{\rm MeV} $ and $g_{\rho\pi\pi} = 5.99 $. 
Alternatively, when we use $F_{\pi}(0)=86.4$ MeV in the chiral limit [@HY:PRep], $$\frac{m_{\rho}^2}{g_{\rho\pi\pi}^2 F_{\pi}^2(0)} \Bigg|_{\rm chi} = 2.24 \,.$$ The result coincides with that in Ref. [@SaSu]. This must be so, since different identifications of the $\rho$ meson field, whether the gauge field or the CCWZ matter field, cannot lead to different results [*as far as the tree-level amplitude is concerned*]{} [@HY:PRep]. $1/N_c$-Subleading Corrections ============================== Now we propose a way to include a part of the $1/N_c$ corrections through meson loops as follows: Let us consider the Lagrangian (\[HLS Lag\]), which has the parameters determined in the large $N_c$ limit, as the [*bare Lagrangian*]{} defined at a scale $\Lambda$: $\mathcal{L} = \mathcal{L}(\Lambda)$ [@Harada:1999zj; @HY:PRep]. Then the parameters in the bare Lagrangian are defined as the [*bare parameters*]{} such as $F_{\pi} = F_{\pi}(\Lambda)$, $a=a(\Lambda)$, $g=g(\Lambda)$, and so on. The bare theory is matched to the HQCD at the scale $\Lambda$ which we call the matching scale. Then, the $1/N_c$ corrections are incorporated into physical quantities in such a way that we consider the quantum correction generated from the $\rho$ and $\pi$ loops in HLS ChPT. 
For $m_{\rho}\le \mu \le\Lambda$ the quantum corrections are incorporated through the renormalization group equations (RGEs) for $F_{\pi}(\mu)$, $a(\mu)$, $g(\mu)$, and $z_4(\mu)$ in the HLS theory [*including the quadratic divergence*]{} in the Wilsonian sense [@Harada:1999zj; @HY:PRep]: [^3] $$\begin{aligned} \mu \frac{dF_{\pi}^2}{d\mu} &=& \frac{N_f}{2(4\pi)^2} [3a^2g^2F_{\pi}^2 + 2(2-a) \mu^2] \,, \label{RGE of F pi} \\ \mu \frac{da}{d \mu} &=& -\frac{N_f}{2(4\pi)^2} (a-1) \nonumber \\ && \times \Bigg[ 3a(a+1)g^2 - (3a-1) \frac{\mu^2}{F_{\pi}^2} \Bigg] \,,\label{RGE of a} \\ \mu \frac{dg^2}{d \mu} &=& -\frac{N_f}{2(4\pi)^2} \frac{87-a^2}{6}g^4 \,,\label{RGE of g} \\ \mu \frac{dz_4}{d \mu} &=& \frac{N_f}{2(4\pi)^2} \frac{2 + 3a - a^2}{6} \, . \label{RGE of z4}\end{aligned}$$ Since $z_4(\Lambda)$ is related to $(a(\Lambda), g(\Lambda))$ as $$a(\Lambda) \left(1 + \frac{1}{2} g^2(\Lambda) z_4(\Lambda) \right)^2 \simeq \frac{4}{3} \,, \label{a-z4}$$ through the HQCD result in Eq.(\[bare KSRFII\]), all four parameters in the low-energy region are determined from just three bare parameters $F_\pi({\Lambda})$, $g(\Lambda)$ and $a(\Lambda)$ through the above RGEs. Note that the $\rho$ meson mass $m_\rho$ is determined by the on-shell condition: $$m_\rho^2 = a(m_\rho) F_{\pi}^2(m_\rho) g^2(m_\rho) \, .$$ For $0\le \mu \le m_{\rho}$, on the other hand, the couplings other than $F_{\pi}$ do not run, while $F_{\pi}$ does by the quantum corrections from the $\pi$ loop alone. As a result, the physical decay constant $F_{\pi}=F_{\pi}(0)$ is related to $F_{\pi}(m_\rho)$ via the RGEs [@HY:PRep]: $$\begin{aligned} F_\pi^2(0) &=& F_\pi^2(m_\rho) \Bigg[ 1 - \frac{N_f}{(4\pi)^2} \frac{m_\rho^2}{F_\pi^2(m_\rho)} \nonumber \\ && \hspace{65pt} \times \left( 1 - \frac{a(m_\rho)}{2} \right) \Bigg] \,. \label{fpichi}\end{aligned}$$ Following Ref. 
[@HY:PRep], we take as inputs $N_f=3$, $F_{\pi}(0)=86.4 \pm 9.7$ MeV (value at the chiral limit) and $m_{\rho} = 775.8$ MeV [@PDG], and a particular parameter choice [^4] $$z_4(\Lambda)=0 \,, \quad {\rm i.e.}, \quad a(\Lambda) \simeq \frac{4}{3} \simeq 1.33 \, ,$$ among those satisfying Eq.(\[a-z4\]), so that $z_4(m_{\rho})$ is solely induced by the loop corrections ($1/N_c$ corrections). From these, we determine the values of $F_{\pi}(\Lambda)$ and $g(\Lambda)$ as done in Ref. [@HY:PRep]. We choose the matching scale $\Lambda$ as $\Lambda=1.0,1.1,$ and $1.2$ GeV since the effect from the $a_1$ meson is not included. $\Lambda\,\,[\rm GeV]$ $m_{\rho}^2/(g_{\rho\pi\pi}^2F^2_{\pi})$ $g_{\rho\pi\pi}$ ------------------------ ------------------------------------------ ------------------- 1.0 $ 1.98 \pm 1.01 $ $ 6.38 \pm 1.46 $ 1.1 $ 2.01 \pm 1.02 $ $ 6.34 \pm 1.45 $ 1.2 $ 2.04 \pm 1.04 $ $ 6.28 \pm 1.44 $ Exp. $1.96 \pm 0.00 $ $ 5.99 \pm 0.03 $ Chi. $ 2.24 \pm 0.50 $ : Predicted values for the KSRF II relation and $g_{\rho\pi\pi}$ including the $1/N_c$ corrections with $F_\pi(0)$ and $m_\rho$ used as inputs. Value of the ratio $m_{\rho}^2/(g_{\rho\pi\pi}^2F^2_{\pi})$ indicated by “Exp." is obtained with the experimental value $F_{\pi}=92.4 \,\, {\rm MeV}$, while the one by “Chi." is with the value $F_{\pi}(0)=86.4 \pm 9.7\,\,{\rm MeV}$ (at the chiral limit). All errors of the predictions arise from the input value of $F_{\pi}(0)$. []{data-label="rho pi pi"} We should carefully define the physical $\rho$-$\pi$-$\pi$ coupling $g_{\rho\pi\pi}$. One would naively regard the physical $\rho$-$\pi$-$\pi$ coupling as $$g_{\rho\pi\pi} = \frac{1}{2} a(m_\rho) g(m_\rho) \left[1 + \frac{1}{2} g^2(m_\rho) z_4(m_\rho) \right] \, ,$$ where $a(m_\rho)=F_\sigma^2(m_\rho)/F_\pi^2(m_\rho)$. However, $g_{\rho\pi\pi}$ should be defined for the rho meson and the pion both on the mass shell. While $F_\sigma^2$ and $g$ as well as $z_4$ do not run for $\mu<m_\rho$, $F_\pi^2$ does run. 
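The Wilsonian running described by Eqs. (\[RGE of F pi\])-(\[RGE of z4\]) can be integrated numerically. Below is a sketch (ours): a plain RK4 integration in $t=\ln\mu$ from $\Lambda$ down to $m_\rho$ with $N_f=3$. The bare values $F_\pi(\Lambda)=145$ MeV and $g(\Lambda)=3.69$ at $\Lambda=1.1$ GeV are illustrative assumptions of ours, not the paper's fitted numbers; $a(\Lambda)=4/3$ and $z_4(\Lambda)=0$ follow the parameter choice in the text.

```python
import numpy as np

# Integrate the HLS RGEs (RGE of F pi)-(RGE of z4) from Lambda down to m_rho.
# N_f = 3; bare inputs F_pi(Lambda) = 145 MeV and g(Lambda) = 3.69 are
# illustrative assumptions, not the paper's fitted values.
NF = 3
CLOOP = NF / (2.0 * (4.0 * np.pi) ** 2)

def rhs(t, y):
    """d y / d ln(mu) for y = (F_pi^2 [GeV^2], a, g^2, z4)."""
    mu2 = np.exp(2.0 * t)
    F2, a, g2, z4 = y
    dF2 = CLOOP * (3 * a**2 * g2 * F2 + 2 * (2 - a) * mu2)
    da = -CLOOP * (a - 1) * (3 * a * (a + 1) * g2 - (3 * a - 1) * mu2 / F2)
    dg2 = -CLOOP * (87 - a**2) / 6.0 * g2**2
    dz4 = CLOOP * (2 + 3 * a - a**2) / 6.0
    return np.array([dF2, da, dg2, dz4])

Lam, mrho = 1.1, 0.7758                       # GeV
y = np.array([0.145**2, 4.0 / 3.0, 3.69**2, 0.0])
t, t_end, n = np.log(Lam), np.log(mrho), 1000
dt = (t_end - t) / n                          # negative: running toward the IR
for _ in range(n):                            # classic RK4 step
    k1 = rhs(t, y)
    k2 = rhs(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
F2, a, g2, z4 = y
print(np.sqrt(F2), a, np.sqrt(g2), z4)        # F_pi(m_rho), a(m_rho), g(m_rho), z4(m_rho)
```

The qualitative behavior matches the text: toward the infrared the quadratic divergences drive $F_\pi$ down and $g$ up, $z_4$ is pushed negative starting from $z_4(\Lambda)=0$, and $a$ barely moves from its bare value near the fixed point.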
Since the on-shell pion decay constant is given by $F_\pi(0)$, we have to use $F_\pi(0)$ to define the on-shell $\rho$-$\pi$-$\pi$ coupling constant [@HY:PRep]. The resultant expression is given by $$g_{\rho\pi\pi} = \frac{1}{2} a(0) g(m_\rho) \left( 1 + \frac{1}{2} g^2(m_\rho) z_4(m_\rho) \right) \, , \label{grhopipi}$$ where $a(0) \equiv F_{\sigma}^2(m_{\rho})/F_{\pi}^2(0)$ is related to $a(m_\rho)$ through Eq.(\[fpichi\]) as: $$\begin{aligned} \frac{1}{a(0)} &=& \frac{1}{a(m_\rho)} \Bigg[ 1 - \frac{3}{(4\pi)^2} \frac{m_\rho^2}{F_\pi^2(m_\rho)} \nonumber \\ && \hspace{65pt} \times \left( 1 - \frac{a(m_\rho)}{2} \right) \Bigg] \, . \label{azero}\end{aligned}$$ By using the above $g_{\rho\pi\pi}$, the physical quantity related to the KSRF II relation is given by $$\begin{aligned} \frac{m_{\rho}^2}{g_{\rho\pi\pi}^2 F_{\pi}^2(0)} &=& \frac{4}{ a(0) \left(1 + \frac{1}{2} g^2(m_{\rho}) z_4(m_{\rho}) \right)^2} \nonumber \\ &\simeq& 2.0 \, , \label{on-shell g rho pi pi }\end{aligned}$$ in good agreement with the experiment, where we have computed $$a(0) \simeq 2.0 \quad , \quad \frac{1}{2} g^2(m_\rho) z_4(m_\rho) \simeq -8.0 \times 10^{-3}$$ through the RGE analysis for $\Lambda=1.1\,{\rm GeV}$, to be compared with the bare values $a(\Lambda)\simeq 4/3$ and $\frac{1}{2} g^2(\Lambda) z_4(\Lambda)=0$. Eq. (\[on-shell g rho pi pi \]) is our main result, to be compared with the holographic result Eq.(\[bare KSRFII\]). We note that those corrections are of $\mathcal{O}(1/N_c)$. Actually, we may set $a(m_\rho)\simeq a(\Lambda)$ in Eq.(\[azero\]), since $a(\mu)$ hardly runs for $m_\rho < \mu < \Lambda$ due to the fact that the bare value $a(\Lambda) \simeq 1.33$ is close to the fixed point value $a=1$ of the RGE (\[RGE of a\]) (see also Fig. 17 of Ref. [@HY:PRep]).
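The numbers quoted here are easy to verify by direct arithmetic. A quick check of ours, using only Eqs. (\[fpichi\]) and (\[azero\]) with the stated inputs $N_f=3$, $F_\pi(0)=86.4$ MeV, $m_\rho=775.8$ MeV and $a(m_\rho)\simeq a(\Lambda)=4/3$, together with the quoted loop value $\frac{1}{2}g^2 z_4 \simeq -8.0\times 10^{-3}$:

```python
import numpy as np

# Arithmetic check of Eqs. (fpichi) and (azero) with the paper's inputs:
# N_f = 3, F_pi(0) = 86.4 MeV, m_rho = 775.8 MeV, a(m_rho) ~ a(Lambda) = 4/3.
Nf, Fpi0, mrho, a_mrho = 3, 86.4, 775.8, 4.0 / 3.0
# invert Eq. (fpichi): F_pi^2(0) = F_pi^2(m_rho) - (Nf/(4 pi)^2) m_rho^2 (1 - a/2)
Fpi_mrho2 = Fpi0**2 + Nf / (4 * np.pi) ** 2 * mrho**2 * (1 - a_mrho / 2)
Fpi_mrho = Fpi_mrho2 ** 0.5
# Eq. (azero): 1/a(0) = (1/a(m_rho)) [1 - (Nf/(4 pi)^2)(m_rho^2/F_pi^2(m_rho))(1 - a/2)]
a0 = a_mrho / (1 - Nf / (4 * np.pi) ** 2 * mrho**2 / Fpi_mrho2 * (1 - a_mrho / 2))
# KSRF II ratio with the quoted loop-induced value (1/2) g^2 z4 ~ -8.0e-3
ksrf = 4.0 / (a0 * (1 - 8.0e-3) ** 2)
print(Fpi_mrho, a0, ksrf)   # ~ 106 MeV, ~ 2.0, ~ 2.0
```

This reproduces $F_\pi(m_\rho)\simeq 106$ MeV, $a(0)\simeq 2.0$ and the on-shell KSRF II ratio $\simeq 2.0$ quoted above.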
Then $$\begin{aligned} \frac{1}{a(0)} &\simeq& \frac{1}{a(\Lambda)} \Bigg[ 1 - \frac{3}{(4\pi)^2} \frac{m_\rho^2}{F_\pi^2(m_\rho)} \nonumber \\ && \hspace{65pt} \times \left( 1 - \frac{a(\Lambda)}{2} \right) \Bigg] \label{azero rev}\end{aligned}$$ whose second term in the bracket with $m_\rho^2/F_\pi^2$ ($\sim 1/N_c$) is nothing but the $\mathcal{O}(1/N_c)$ correction essentially coming from the pion loop contributions for $0 < \mu < m_\rho$. In Table \[rho pi pi\], we show the predicted values of $m_{\rho}^2/(g_{\rho\pi\pi}^2F^2_{\pi}(0))$ and of $g_{\rho\pi\pi}$ for $\Lambda=1.0, 1.1,1.2$ GeV in good agreement with the experiment within the errors coming from the input value $F_{\pi}(0)$ evaluated at the chiral limit [@HY:PRep]. The result is fairly insensitive to the choice of the matching scale $\Lambda$. This implies that $1/N_c$ corrections actually improve the HQCD prediction, Eq.(\[bare KSRFII\]), $ m_{\rho}^2/(g_{\rho\pi\pi}^2 F_{\pi}^2)|_\Lambda \simeq 3.0$, into the realistic value $\simeq 2.0$. It should be emphasized that the $1/N_c$ corrections make the value always closer to the experimental value for a wide range of the value of the parameter $a(\Lambda)$ not restricted to the present one $a(\Lambda)\simeq 4/3$.\ By introducing external field, SS [@SaSu] obtained “vector meson dominance” for the pion electromagnetic form factor, though not the celebrated “$\rho$ dominance” due to significant contributions from higher resonances, particularly the $\rho^\prime$. The above peculiarity is closely related to its prediction of $g_\rho$, the $\rho$-$\gamma$ mixing strength, or the pion form factor just on the $\rho$ pole in the [*time-like region*]{}, namely a wrong KSRF I relation, $g_\rho/(g_{\rho\pi\pi} F_\pi^2) \simeq 4$ [@SaSu], which is a factor 2 larger than the correct one. These problems will be dealt with in the forthcoming paper [@forthcoming]. 
Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Sekhar Chivukula, Shigeki Sugimoto and Masaharu Tanabashi for useful comments and discussions. This work was supported in part by The Mitsubishi Foundation and JSPS Grant-in-Aid for Scientific Research (B) 18340059, and by the 21st Century COE Program of Nagoya University provided by JSPS (15COEG01). It was also supported in part by the Daiko Foundation \#9099 (M.H.) and JSPS Grant-in-Aid for Scientific Research (C)(2) 16540241 (M.H.). [99]{} J. M. Maldacena, Adv. Theor. Math. Phys.  [**2**]{}, 231 (1998); Int. J. Theor. Phys.  [**38**]{}, 1113 (1999); S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Phys. Lett. B [**428**]{}, 105 (1998); E. Witten, Adv. Theor. Math. Phys.  [**2**]{}, 253 (1998). O. Aharony, S. S. Gubser, J. M. Maldacena, H. Ooguri and Y. Oz, Phys. Rept.  [**323**]{}, 183 (2000) T. Sakai and S. Sugimoto, Prog. Theor. Phys. [**113**]{}, 843 (2005);   [**114**]{}, 1083 (2006). D. T. Son and M. A. Stephanov, Phys. Rev. D [**69**]{}, 065020 (2004); J. Erlich, E. Katz, D. T. Son and M. A. Stephanov, Phys. Rev. Lett.  [**95**]{}, 261602 (2005); S. Hong, S. Yoon and M. J. Strassler, arXiv:hep-ph/0501197; L. Da Rold and A. Pomarol, Nucl. Phys. B [**721**]{} (2005) 79; N. J. Evans and J. P. Shock, Phys. Rev. D [**70**]{}, 046002 (2004); K. Ghoroku and M. Yahiro, Phys. Lett. B [**604**]{}, 235 (2004); K. Ghoroku, N. Maru, M. Tachibana and M. Yahiro, Phys. Lett. B [**633**]{}, 602 (2006); J. Hirn and V. Sanz, JHEP [**0512**]{}, 030 (2005); J. Hirn, N. Rius and V. Sanz, Phys. Rev. D [**73**]{}, 085005 (2006); S. J. Brodsky and G. F. de Teramond, Phys. Rev. Lett.  [**96**]{}, 201601 (2006). M. Bando, T. Kugo, S. Uehara, K. Yamawaki and T. Yanagida, Phys. Rev. Lett.  [**54**]{}, 1215 (1985); M. Bando, T. Kugo and K. Yamawaki, Nucl. Phys. B [**259**]{}, 493 (1985); T. Fujiwara, T. Kugo, H. Terao, S. Uehara and K. Yamawaki, Prog. Theor. Phys.  [**73**]{}, 926 (1985); M. 
Bando, T. Fujiwara, and K. Yamawaki, Prog. Theor. Phys. [**79**]{} (1988) 1140; M. Bando, T. Kugo and K. Yamawaki, Phys. Rept.  [**164**]{}, 217 (1988). M. Harada and K. Yamawaki, Phys. Lett. B [**297**]{}, 151 (1992); M. Harada, T. Kugo and K. Yamawaki, Phys. Rev. Lett.  [**71**]{}, 1299 (1993); Prog. Theor. Phys.  [**91**]{}, 801 (1994). M. Tanabashi, arXiv:hep-ph/9306237; Phys. Lett. B [**316**]{}, 534 (1993). M. Harada and K. Yamawaki, Phys. Rev. Lett.  [**83**]{}, 3374 (1999);   [**86**]{}, 757 (2001); Phys. Rev. D [**64**]{}, 014023 (2001). M. Harada and K. Yamawaki, Phys. Rept.  [**381**]{}, 1 (2003). J. Gasser and H. Leutwyler, Annals Phys.  [**158**]{}, 142 (1984); Nucl. Phys. B [**250**]{}, 465 (1985). H. Georgi, Phys. Rev. Lett.  [**63**]{} (1989) 1917; Nucl. Phys. [**B 331**]{} (1990) 311. S. Eidelman [*et al.*]{} \[Particle Data Group\], Phys. Lett. B [**592**]{}, 1 (2004). M. Harada, S. Matsuzaki, and K. Yamawaki, in preparation. [^1]: The result of this paper was presented at the annual meeting of the Physical Society of Japan (held at Tokyo University of Science at Noda), March 24-27, 2005, and at a workshop on “Progress in the Particle Physics 2005” held at Yukawa Institute for Theoretical Physics, Kyoto University, June 20-24, 2005. [^2]: This is contrasted with simply putting the heavy fields $V_{\mu}^{(k)}$ $(k \ge 2) =0 $ in Eq.(\[A expansion\]). The wave functions $\phi_k(z)$ are thus modified, when we integrate out the heavier fields [@forthcoming]. [^3]: Coefficients of RGEs for all the $\mathcal{O}(p^4)$ terms including $z_4$ are given in Appendix D, Table 20 of Ref. [@HY:PRep]. [^4]: $a=4/3$ implies the $\rho$ dominance of the $\pi$ $\pi$ scattering [@HY:PRep].
--- abstract: 'We have obtained empirical relations between the p-mode frequency shift and the [*change*]{} in solar activity indices. The empirical relations are determined on the basis of frequencies obtained from BBSO and GONG stations during solar cycle 22. These relations are applied to estimate the change in mean frequency for the cycle 21 and 23. A remarkable agreement between the calculated and observed frequency shifts for the ascending phase of cycle 23, indicates that the derived relations are independent of epoch and do not change significantly from cycle to cycle. We propose that these relations could be used to estimate the shift in p-mode frequencies for past, present and future solar activity cycles, if the solar activity index is known. The maximum frequency shift for cycle 23 is estimated to be 265 $\pm$ 90 nHz, corresponding to the predicted maximum smoothed sunspot number 118.1 $\pm$ 35.' author: - 'Kiran , S. C. , A. and Brajesh' title: 'EMPIRICAL ESTIMATE OF p-MODE FREQUENCY SHIFT FOR SOLAR CYCLE 23' --- Dr. Kiran Jain\ Udaipur Solar Observatory\ Off Bari Road, Dewali, P. B. No. 198,\ Udaipur - 313001,\ India\ e-mail:kiran@uso.ernet.in Introduction ============ It is now well established that the solar p-mode oscillation frequencies vary with solar activity. The first evidence of this effect came from the analysis of low degree acoustic frequencies derived from solar irradiance data by the Active Cavity Radiometer instrument on board [*Solar Maximum Mission*]{} satellite [@wn85]. Using the Doppler velocity data, also showed that the frequency shifts are well correlated with solar activity cycle and obtained a shift of 0.44 $\pm$ 0.06 $\mu$Hz between the minimum and maximum of solar cycle 21. Later, several other authors have extended these studies to different epochs with new and improved data sets for intermediate (; ) and low degree modes (Jiménez-Reyes [*et al.*]{}, 1998 and [*references therein*]{}). 
A consistent and continuous data set of intermediate degree p-mode frequencies has recently been made available from the Global Oscillation Network Group (GONG) for the period May 1995 to October 1998, with an interval of 108 days. Using some of these data sets between August 1995 and August 1997, obtained a decrease of 0.06 $\mu$Hz in the mean frequency during the descending phase of the solar cycle 22 and an increase of 0.04 $\mu$Hz during the ascending phase of the cycle 23. , using a subset of GONG frequencies, also demonstrated that the p-mode frequencies vary with solar activity cycle. The prime motivation of this work is to derive empirical relations between the shift in frequency and the [*change*]{} in the level of the activity indices. The derived relations could be used to estimate the frequency shift for past and future solar cycles. In this paper, we have used the [*change*]{} in activity indices corresponding to the same epoch as the frequency shifts instead of the [*actual*]{} value of activity. We find that the correlation between the change in activity and the frequency shifts is better than when the actual values of the activity indices are used. Using a similar approach, showed that the frequency shifts between 1981 and 1989 are correlated with the [*change*]{} in various activity indices; e.g., the sunspot number, sunspot area, and irradiance measurements.

Observational data
==================

The mode frequencies for cycle 22 in the intermediate degree range are available from the Big Bear Solar Observatory (BBSO), the LOWL instrument, South Pole expeditions and the GONG project. As shown by , frequencies derived from South Pole observations are systematically higher than those of the other three data sets. Similarly, frequencies derived from the LOWL instrument are from one-year power spectra, which may not be valid for the study of short-period solar cycle variation.
Thus, in this study we use p-mode frequencies obtained from BBSO and GONG for the period 1986 to 1996; these are summarised in Table I. It may be noted that we have used only the continuous and independent data sets from the GONG network. Since the mode frequency shift was shown to depend strongly on frequency, we have used only the common modes available between the GONG and BBSO data sets. This selection criterion generated a total of 412 common modes in the frequency range 1500 to 3500 $\mu$Hz and $\ell$ between 5 and 99. Further, we have categorised the data sets into two groups according to the data source, as shown in Table I.

  Data Set   Period                 Extent (days)   Number of Modes (in 0 $\leq$ $\ell$ $\leq$ 100)
  ---------- ---------------------- --------------- -------------------------------------------------
  BBSO86     Mar-Aug 1986           131             1095
  BBSO88     Mar-Sep 1988           183             1095
  BBSO89     Mar-Sep 1989           182             1095
  BBSO90     Mar-Sep 1990           180             1095
  GM2        7 May-22 Aug 1995      108             1079
  GM5        23 Aug-8 Dec 1995      108             1078
  GM8        9 Dec-25 Mar 1996      108             1579
  GM11       26 Mar-11 July 1996    108             1143
  GM14       12 July-27 Oct 1996    108             1055

Analysis and results
====================

The mean frequency shift is calculated by taking a simple difference between any two data sets chosen from the same group. With 9 frequency data sets (see Table I), the possible combinations (choosing any two at a time, out of the total data sets from an individual group) produced 16 values of the mean frequency shifts. In order to investigate how the frequency shifts are correlated with the [*change*]{} in the level of activity indices and to derive empirical relations between them, we have used seven different activity indices representing the magnetic and radiative indices.
These indices are: R$_I$, the unsmoothed International sunspot number obtained from the Solar Geophysical Data (SGD); KPMI, the Kitt Peak Magnetic Index from Kitt Peak full disk magnetograms; MPSI, the Magnetic Plage Strength Index from Mount Wilson magnetograms [@ulr91]; FI, the total flare index from SGD and Ataç (1999); He I, the equivalent width of the Helium I 10830Å line [@har84], averaged over the whole disk from Kitt Peak; F$_{10}$, the integrated radio flux at 10.7 cm from SGD; and R$_s$, the smoothed International sunspot number obtained from SGD. A mean value for each activity index was computed for the epoch corresponding to the actual frequency interval. To study the relative variation in the mean frequency shift $\delta\nu$ with the change in activity index $\delta i$, we assume a linear relationship of the form: $$\delta\nu = a~\delta i + b ,$$ where the slope $a$ and intercept $b$ are obtained by performing a linear least-squares fit, as shown in Figure 1. The solid line represents the best regression fit and confirms that the data sets are consistent with the assumption of a linear relationship. The bars represent the 1$\sigma$ error in fitting. The linear relationship given in Equation (1) is further tested by calculating the $\chi^2$, the parametric Pearson’s coefficient, $r_P$, the rank correlation coefficient, $r_S$, and their probabilities $P_p$ and $P_s$ respectively. Table II summarises the correlation statistics for all the data sets included in the fitting and shows that a positive correlation exists for all the activity indices. We further note that the best correlation is obtained for F$_{10}$, which confirms earlier results that the radiative indices are better correlated with the frequency shifts.
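The fitting procedure of Equation (1) can be sketched as follows. The data points below are synthetic placeholders of ours, not the BBSO/GONG shifts; the block only illustrates the least-squares determination of the slope $a$, the intercept $b$, and the parametric Pearson coefficient $r_P$.

```python
import numpy as np

# Sketch of the fit delta_nu = a * delta_i + b (Eq. 1) plus Pearson's r_P.
# The data below are synthetic placeholders, not the observed frequency shifts.
di = np.array([0.0, 12.0, 35.0, 60.0, 88.0, 110.0])   # change in activity index
dnu = 2.41 * di - 0.48 \
      + np.array([3.0, -2.0, 1.5, -1.0, 2.0, -2.5])   # mock shifts in nHz, with scatter
a, b = np.polyfit(di, dnu, 1)                          # linear least-squares fit
r_P = np.corrcoef(di, dnu)[0, 1]                       # parametric Pearson coefficient
print(a, b, r_P)
```

A rank correlation $r_S$ would be obtained the same way after replacing both series by their ranks.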
From the slope $a$ and the intercept $b$ obtained from the regression fitting, the following empirical relations between the shift $\delta\nu$ and the [*change*]{} in activity indices $\delta i$ are formulated:

  Activity Index   $\chi^2$   $r_{P}$   $P_{p}$    $r_{S}$   $P_{s}$
  R$_I$            13         0.99      1.4E-12    0.97      4.7E-10
  KPMI             72         0.91      7.1E-07    0.73      1.2E-03
  F$_{10}$         30         1.00      3.9E-19    0.99      8.1E-13
  He I             12         0.99      3.2E-14    0.93      1.9E-07
  FI               52         0.98      5.0E-12    0.94      8.5E-08
  MPSI             25         0.97      3.7E-10    0.81      1.4E-04
  R$_s$            38         0.99      1.0E-13    0.95      1.4E-08

$$\begin{aligned} \delta\nu & = &(2.44 \pm 0.18)~\delta R_I - (6.00 \pm 1.36) \\ \delta\nu &= &(18.70 \pm 1.62)~\delta KPMI - (0.82 \pm 1.77) \\ \delta\nu &= &(2.66 \pm 0.20)~\delta F_{10} - (4.67 \pm 1.44) \\ \delta\nu & =& (9.82 \pm 0.71)~\delta HeI + (0.83 \pm 1.68) \\ \delta\nu & = &(23.03 \pm 1.86)~\delta FI - (11.10 \pm 1.22) \\ \delta\nu & = &(159.01 \pm 11.90)~\delta MPSI - (36.10 \pm 1.86) \\ \delta\nu & =& (2.41 \pm 0.19)~\delta R_s - (0.48 \pm 1.68) \end{aligned}$$ where $\delta\nu$ is given in nHz and the changes in the activity indices are in their standard units. The numbers following the $\pm$ signs are the 1$\sigma$ errors. As mentioned previously, these relations are valid in the frequency range 1500 to 3500 $\mu$Hz and for $\ell$ between 5 and 99. These relations are used to estimate the mean frequency shifts for solar cycles 21 through 23, taking the frequency data set GM2 as the reference point. As an example, the estimated frequency shift obtained from Equation (8) for the smoothed sunspot number is plotted in Figure 2 and yields a shift of 0.37 $\pm$ 0.03 $\mu$Hz between the minimum and maximum of solar cycle 21.
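The empirical relations (2)-(8) can be packaged as a small lookup for estimating shifts. The example change in R$_s$ (about 155, roughly the min-to-max range of cycle 21) is an assumed illustrative value:

```python
# Slopes (nHz per unit change of the index) and intercepts (nHz) from
# Eqs. (2)-(8); keys follow the activity indices in the text.
relations = {
    "R_I":  (2.44, -6.00),
    "KPMI": (18.70, -0.82),
    "F10":  (2.66, -4.67),
    "HeI":  (9.82, 0.83),
    "FI":   (23.03, -11.10),
    "MPSI": (159.01, -36.10),
    "R_s":  (2.41, -0.48),
}

def shift_nHz(index, delta_i):
    """Mean p-mode frequency shift (nHz) for a change delta_i in the index."""
    a, b = relations[index]
    return a * delta_i + b

# Illustrative: a rise of ~155 in the smoothed sunspot number (an assumed
# min-to-max change for cycle 21) gives a shift of ~0.37 muHz.
print(round(shift_nHz("R_s", 155.0) / 1000.0, 2))  # 0.37
```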
This value is consistent with the earlier results by and , who obtained shifts of 0.44 $\pm$ 0.06 $\mu$Hz and 0.46 $\pm$ 0.06 $\mu$Hz, respectively, for the same period using Doppler velocity data of low-degree ($\ell \leq 3$) modes. From Figure 2, we notice that very good agreement exists between the calculated values of $\delta\nu$ and the observed GONG frequencies (diamonds); however, a small difference is noticed for BBSO (triangles) and LOWL (squares). This difference may be attributed to the use of different spectral lines in the BBSO and LOWL instruments [@howe99] or to different data reduction techniques.

Comparison between cycles 22 and 23
-----------------------------------

To investigate the validity of the empirical relations for different cycles, we calculated the fitting parameters $a$ and $b$ for cycle 23, using the available GONG data in the frequency range 1500-3500 $\mu$Hz and $\ell$ between 5 and 99. These parameters for solar cycles 22 and 23 are given in Table III, and reasonable agreement is found for both cycles.

  Activity Index   $a$ (Cycle 22)       $b$ (Cycle 22)        $a$ (Cycle 23)      $b$ (Cycle 23)
  R$_I$            2.44 $\pm$ 0.18      $-$6.00 $\pm$ 1.36    2.01 $\pm$ 0.07     $-$4.20 $\pm$ 0.56
  KPMI             18.70 $\pm$ 1.62     $-$0.82 $\pm$ 1.77    22.6 $\pm$ 0.79     $-$1.86 $\pm$ 0.56
  F$_{10}$         2.66 $\pm$ 0.20      $-$4.67 $\pm$ 1.44    2.72 $\pm$ 0.09     $-$3.25 $\pm$ 0.56
  HeI              9.82 $\pm$ 0.71      $+$0.83 $\pm$ 1.68    6.84 $\pm$ 0.25     $+$5.56 $\pm$ 1.31
  FI               23.03 $\pm$ 1.86     $-$11.10 $\pm$ 1.22   37.70 $\pm$ 1.32    $-$1.71 $\pm$ 1.49
  MPSI             159.01 $\pm$ 11.90   $-$36.10 $\pm$ 1.86   107.00 $\pm$ 4.17   $-$15.12 $\pm$ 2.06

Therefore, we propose that the derived linear relations are independent of the solar cycle and can be used to estimate the frequency shifts for past, present and future solar cycles. This is illustrated in Figure 3, wherein we have plotted the frequency shifts calculated using Equations (2) and (4) together with the observed frequency shifts for cycle 23.
The dashed and solid lines represent the estimated $\delta\nu$ obtained from the relations for the International sunspot number and the 10.7 cm radio flux, respectively, whereas the frequency shifts from the GONG data, referenced to GM2, are shown as triangles. It is clear that the observed frequency shifts for cycle 23 are in close agreement with those obtained from the relations derived for cycle 22. In Figure 4, we have plotted the estimated frequency shifts for the current solar cycle 23, using predicted smoothed sunspot numbers and 10.7 cm flux. The solid line in Figure 4$a$ shows the predicted shift using R$_s$, as listed on the Solar Geophysical Data web page (http://www.ngdc.noaa/gov/stp/stp.html). The dashed lines show the errors propagated from the uncertainty in R$_s$. Each smoothed sunspot number represents the average of two adjacent 12-month running means of monthly means. The predicted sunspot number is based on the actual smoothed sunspot number for February 1999, which utilises the averaged monthly means from August 1998 through August 1999. The predicted frequency shift for solar cycle 23 is estimated to be 265 $\pm$ 90 nHz, corresponding to the predicted maximum sunspot number of 118.1 $\pm$ 35. In Figures 4$b$ and 4$c$, we also show the estimated $\delta\nu$ for the current solar cycle using the predicted smoothed sunspot number and 10.7 cm radio flux obtained from http://wwwssl.msfc.nasa.gov/ssl/pad/solar/predict.htm. In summary, we have obtained empirical relations between the [*change*]{} in solar activity indices and the shift in p-mode frequencies for cycle 22. From these empirical relations, we have estimated the p-mode frequency shift for the current cycle 23. It will be of interest to see how our predicted value agrees with the actual observed value during the solar maximum in 2000.
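The quoted uncertainty of the cycle 23 prediction is consistent with first-order error propagation through Equation (8); the change in R$_s$ relative to the GM2 epoch (taken as about 110 here) is an assumption for illustration:

```python
import math

# First-order error propagation for delta_nu = a*di + b using Eq. (8):
# a = 2.41 +/- 0.19 nHz per unit of R_s, b = -0.48 +/- 1.68 nHz.
# di ~ 110 relative to the GM2 epoch is an assumed value; its error
# follows the predicted maximum sunspot number 118.1 +/- 35.
a, sig_a = 2.41, 0.19
b, sig_b = -0.48, 1.68
di, sig_i = 110.0, 35.0

dnu = a * di + b
sig_nu = math.sqrt((di * sig_a) ** 2 + (a * sig_i) ** 2 + sig_b ** 2)
print(round(dnu), round(sig_nu))  # roughly 265 +/- 87 nHz
```

The error budget is dominated by the uncertainty in the predicted sunspot number ($a\,\sigma_i \approx 84$ nHz), in line with the quoted $\pm 90$ nHz.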
It is further shown that these relations are independent of the solar cycle and hence can be used to estimate the change in frequency for past, present and future epochs, if the solar activity index is known.

We thank T. Ataç and R. K. Ulrich for supplying us the flare index and MPSI data, respectively. The BBSO p-mode data were acquired by Ken Libbrecht and Martin Woodard, Big Bear Solar Observatory, Caltech. LOWL data were obtained from (http://www.hao.ucar.edu/public/research/mlso/LowL/lowl.html). NSO/Kitt Peak magnetic and Helium measurements used here are produced cooperatively by NSF/NOAO, NASA/GSFC, and NOAA/SEL. This work utilizes data obtained by the Global Oscillation Network Group project, managed by the National Solar Observatory, a Division of the National Optical Astronomy Observatories, which is operated by AURA, Inc. under a cooperative agreement with the National Science Foundation. The data were acquired by instruments operated by Big Bear Solar Observatory, High Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory, Instituto de Astrofísica de Canarias, and Cerro Tololo Interamerican Observatory. This work is partially supported under the CSIR Emeritus Scientist Scheme and the Indo-US collaborative programme (NSF Grant INT-9710279).

Ataç, T.: 1999, private communication.

Bachmann, K. T. and Brown, T. M.: 1993, , L45.

Bhatnagar, A., Jain, K., and Tripathy, S. C.: 1999, , 885.

Elsworth, Y., Howe, R., Isaak, G. R., Mcleod, C. P., and New, R.: 1990, Nature [**345**]{}, 322.

Harvey, J. W.: 1984, in B. La Bonte, G. Chapman, H. Hudson, and R. C. Wilson (eds.), Workshop on Solar Irradiance Variations on Active Region Time Scales, NASA CP-2310, NASA, Washington, p. 197.

Howe, R., Komm, R., and Hill, F.: 1999, , 1084.

Jain, K., Tripathy, S. C., Kumar, B., and Bhatnagar, A.: 1999, Bull. Astron. Soc. India (in press).

Jiménez-Reyes, S. J., Régulo, C., Pallé, P. L., and Roca Cortés, T.: 1998, , 1119.

Libbrecht, K. G. and Woodard, M. F.: 1990, in Osaki, Y. and Shibahashi, H. (eds.), Progress of Seismology of the Sun and Stars, Springer, p. 145.

Pallé, P. L., Régulo, C., and Roca Cortés, T.: 1989, , 253.

Rhodes, E. J., Jr., Cacciani, A., Korzennik, S. G., and Ulrich, R. K.: 1993, , 714.

Ulrich, R. K.: 1991, Adv. Space Res. [**11(4)**]{}, 217.

Woodard, M. F., Kuhn, J. R., Murray, N., and Libbrecht, K. G.: 1991, , L81.

Woodard, M. F. and Noyes, R. W.: 1985, Nature [**318**]{}, 449.
--- abstract: 'In this paper, we provide a comprehensive system model of a wireless-powered sensor network (WPSN) based on experimental results on a real-life testbed. In the WPSN, a sensor node is wirelessly powered by the RF energy transfer from a dedicated RF power source. We define the behavior of each component comprising the WPSN and analyze the interaction between these components to set up a realistic WPSN model from the systematic point of view. Towards this, we implement a real-life and full-fledged testbed for the WPSN and conduct extensive experiments to obtain model parameters and to validate the proposed model. Based on this WPSN model, we propose an energy management scheme for the WPSN, which maximizes RF energy transfer efficiency while guaranteeing energy neutral operation. We implement the proposed energy management scheme in a real testbed and show its operation and performance.' author: - 'Dedi Setiawan, Arif Abdul Aziz, Dong In Kim, and Kae Won Choi, [^1] [^2] [^3] [^4]' bibliography: - 'IEEEabrv.bib' - 'WPSN\_BIB.bib' title: | Experiment, Modeling, and Analysis\ of Wireless-Powered Sensor Network\ for Energy Neutral Power Management --- Wireless-powered communication networks, RF energy transfer, sensor networks, stored energy evolution model, energy neutral operation, duty cycling Introduction {#section:introduction} ============ The wireless power transfer technology enables a power source to wirelessly transfer electrical energy to another device by means of electromagnetic fields. While near-field wireless power transfer technologies such as inductive and magnetic resonant coupling can deliver energy to only close-by devices, the radio frequency (RF) energy transfer technology is able to realize far-field power transfer to remotely located devices [@Huang:2015]. 
However, the end-to-end efficiency of the RF energy transfer is typically very low since a receive antenna captures only a very small fraction of the electromagnetic energy that is spread out in space. Therefore, one of the most promising application areas of RF energy transfer is sensor networks, since sensor nodes are generally sustainable with a relatively small amount of energy provision [@Xie:2013]. In this paper, we focus on studying a wireless-powered sensor network (WPSN) that is powered by the RF energy transfer from a dedicated RF power source. In recent years, a number of research works have been published by the communications society in the area of wireless-powered communication networks (WPCNs) [@Bi:2015; @Lu:2016]. The WPCN is defined as a wireless network consisting of a hybrid access point (H-AP), which acts as both a communication gateway and a power source, and communication nodes, which are powered by the RF energy transfer from the H-AP. Most of the works regarding the WPCN focus on theoretical investigation into radio resource allocation (e.g., [@Lu:2015Dec]), beamforming (e.g., [@Choi:2015]), cooperative communications (e.g., [@Chen:2015]), and full-duplex communications (e.g., [@Kang:2015]). The weakness of these theoretical works is that they do not incorporate realistic power transfer, energy harvesting, and energy consumption models. For example, most of these works ideally assume that the dominant cause of energy drain is the power used for data transmission, while the circuit power consumption is more significant in most practical applications. In addition, these works assume that the RF energy harvesting efficiency is constant, even though the actual RF energy harvesting efficiency follows a complex non-linear curve because of non-ideal diode behavior. Therefore, it is of paramount importance to set up a practical WPCN model based on real experiments on a testbed.
Other than theoretical studies on the WPCN, practical circuit design issues of the RF energy transfer have been investigated in [@Dolgov:2010; @Popovic:2013; @Visser:2012; @Farinholt:2009]. In these works, the authors design and implement RF energy harvesting circuit components such as a rectenna and a maximum power point tracking (MPPT) module. Among these works, [@Dolgov:2010] and [@Popovic:2013] aim to harvest energy from ambient power sources (e.g., TV and cell towers), whereas a dedicated RF power source is considered in [@Visser:2012] and [@Farinholt:2009]. Since these works focus their efforts on designing individual circuit components, they do not give a systematic and integrated view of sensor networks powered by the RF energy transfer. In this paper, we provide a comprehensive system model of WPSNs based on real-life experiments. The WPSN under consideration consists of a power beacon (i.e., an RF power source) and a sensor node. The power beacon and the sensor node again consist of many circuit components, for example, an amplifier, an RF transceiver, a wireless energy harvester, and an energy storage, etc. Rather than delving into each component, we model the operation of the whole WPSN system by defining the behavior of each component and the interaction between components. Moreover, we set up a real-life WPSN testbed and conduct extensive experiments to obtain model parameters and validate the proposed model. To the best of our knowledge, there has been no research work that provides an experiment-based model of the WPSN. Therefore, our WPSN model can be used as a baseline model for computer simulation and theoretical analysis of the WPSN. In addition, this model is expected to facilitate the understanding of the WPSN and to highlight potentials and limitations of the WPSN. Another contribution of this paper is to propose an energy management scheme for energy neutral operation based on the WPSN model.
Energy neutral operation ensures that power consumption does not exceed the harvested power, so that a sensor node stays alive. A large number of research works have been conducted on energy harvesting sensor networks harnessing energy from ambient energy sources [@Sudevalayam:2011]. Since ambient energy harvesting is typically uncontrollable, an energy management scheme controls the duty cycle of a sensor node to guarantee energy neutral operation. For example, adaptive energy management schemes are proposed for sensor networks exploiting solar power (e.g., [@Moser:2010; @Renner:2014]) and ambient RF power (e.g., [@Shigeta:2013; @Vyas:2013]). In contrast to these works assuming uncontrollable ambient power sources, our WPSN model adopts a controllable dedicated RF power source. This controllability of the power source adds a new dimension to the traditional energy neutral operation problem. The proposed energy management scheme aims at minimizing the power consumption of the amplifier in the RF power source while guaranteeing the energy neutral operation. The proposed energy management scheme achieves its goal by adaptively controlling the RF transmit power so that the RF energy transfer efficiency is maximized. We implement the proposed energy management scheme in a real testbed and show its operation and performance. In our companion work [@Choi:2016], we have studied the multi-antenna WPSN, in which the power beacon concentrates the RF energy on the sensor node by using an antenna array to enhance the RF energy transfer efficiency. The main focus of this companion work is to propose an energy beamforming algorithm that dynamically steers the microwave beam and to conduct experiments on the multi-antenna WPSN testbed for testing the algorithm performance. Unlike the companion work, this paper aims to set up an integrated system model of the single-antenna WPSN and to propose an efficient energy management scheme. The rest of the paper is organized as follows.
We present the WPSN system model in Section \[section:system model\]. In Section \[section:measurement\], we explain our testbed implementation and present measurement results. The proposed energy management scheme is described in Section \[section:energy management\]. Section \[section:result\] presents detailed experimental results, and the paper is concluded in Section \[section:conclusion\].

System Model {#section:system model}
============

Wireless-Powered Sensor Network Model
-------------------------------------

In this section, we introduce an overall system model of the WPSN under consideration. In Fig. \[fig:model\], we show the block diagram of the WPSN model. The WPSN consists of one power beacon and one sensor node. The basic role of the power beacon is to wirelessly supply energy to the sensor node by means of the RF energy transfer technique. The power beacon is connected to a power grid. On the other hand, the sensor node relies solely on the energy wirelessly supplied by the power beacon, without any other power source. The power beacon gathers sensor measurement results from the sensor node and controls the operation of the sensor node through a data communication link. In the power beacon, the signal generator generates a weak continuous wave (CW) RF source signal and feeds the RF source signal to the amplifier. The amplifier amplifies the weak RF source signal from the signal generator and sends a high-power RF signal to the air through the transmit antenna. For amplification, the amplifier consumes DC power from the DC power supply. Therefore, the amplifier performs DC-to-RF power conversion. The controller of the power beacon conducts a power management function that keeps the stored energy of the sensor node at a sufficient level while minimizing the power consumption of the amplifier. For the power management, the controller is able to control the gain of the amplifier.
In addition, the controller can send a control command to the sensor node via the RF transceiver for the power management. The control decision made by the controller is based on the sensor measurement report from the sensor node, which is delivered by the RF transceiver. In the sensor node, the wireless energy harvester receives the RF signal from the power beacon through the receive antenna. The wireless energy harvester performs RF-to-DC conversion by using a rectifier. The DC power from the wireless energy harvester is stored in the energy storage. We adopt a supercapacitor as the energy storage rather than a rechargeable battery. A supercapacitor is considered as a suitable energy storage device for energy harvesting sensor nodes since it can endure rapid charging and discharging cycles. The micro controller unit (MCU), which consists of a central processing unit (CPU) and peripherals, controls the whole sensor node. The MCU obtains sensing results from the sensors and sends the sensor measurement report to the power beacon via the RF transceiver. The MCU, the RF transceiver, and the sensor consume energy stored in the energy storage. The WPSN system model has two RF channels: the RF energy transfer channel and the data communication channel. These two channels use different frequency bands, and therefore there is no interference between these channels. Power Beacon and RF Energy Transfer Channel Model ------------------------------------------------- In this subsection, we provide a more detailed model of the power beacon and the RF energy transfer channel. The RF source signal generated by the signal generator is a CW signal with the frequency $f_o$ and the power $p_\text{src}$. The frequency $f_o$ and the source power $p_\text{src}$ are fixed. The amplifier amplifies this weak RF source signal to generate a transmit RF signal. Henceforth, the power of the transmit RF signal will be called transmit power. 
We assume that the amplifier can be dynamically turned on and off by the controller. The transmit power, denoted by $p_\text{tx}$, is given by $$\begin{aligned} \label{eq:amptx} p_\text{tx} = \chi_\text{amp}\cdot g_\text{amp}(p_\text{src})\cdot p_\text{src},\end{aligned}$$ where $g_\text{amp}(p)$ is the amplifier gain as a function of the amplifier input power $p$, and $\chi_\text{amp}$ is the indicator that is one if the amplifier is turned on, and zero otherwise. The transmit power cannot exceed the maximum output power of the amplifier. The power amplifier consumes power from the DC power supply. Let $p_\text{cons}$ denote the amplifier power consumption. When the amplifier is turned on, the power added efficiency (PAE) of the amplifier is given by $$\begin{aligned} \label{eq:pae} \text{PAE} = \frac{p_\text{tx}-p_\text{src}}{p_\text{cons}} \simeq \frac{p_\text{tx}}{p_\text{cons}},\end{aligned}$$ since $p_\text{src}$ is negligibly small compared to $p_\text{tx}$. Since the PAE is a function of the output power of the amplifier (i.e., the transmit power), we define $\theta(p)$ as the PAE function mapping the output power $p$ to the PAE. The amplifier power consumption is zero when the amplifier is turned off (i.e., when $\chi_\text{amp}=0$). From \[eq:pae\], the amplifier power consumption is given as a function of $p_\text{tx}$ by $$\begin{aligned} \label{eq:ampcons} p_\text{cons} = \chi_\text{amp}\cdot \frac{p_\text{tx}}{\theta(p_\text{tx})}.\end{aligned}$$ The transmit RF signal from the amplifier is sent from the transmit antenna of the power beacon. This transmit RF signal goes through the RF energy transfer channel and arrives at the receive antenna of the sensor node. The receive power, denoted by $p_\text{rx}$, is defined as the power of the received RF signal at the sensor node. The receive power is given by $$\begin{aligned} p_\text{rx} = h\cdot p_\text{tx},\end{aligned}$$ where $h$ is the power attenuation of the RF energy transfer channel.
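The amplifier and channel relations above can be sketched numerically as follows. All parameter values (gain, PAE, attenuation) are illustrative assumptions, not measured testbed values:

```python
def transmit_power(p_src, g_amp, amp_on=True, p_max=float("inf")):
    """p_tx = chi_amp * g_amp * p_src, capped at the amplifier's maximum
    output power.  g_amp is the gain at input power p_src."""
    if not amp_on:
        return 0.0
    return min(g_amp * p_src, p_max)

def amp_power_consumption(p_tx, pae):
    """DC power drawn by the amplifier, p_cons = p_tx / theta(p_tx),
    where pae is the power-added efficiency at this output power."""
    return p_tx / pae if p_tx > 0.0 else 0.0

def receive_power(p_tx, h):
    """p_rx = h * p_tx, with h the channel power attenuation."""
    return h * p_tx

# Illustrative numbers: 1 mW source signal, 33 dB gain, PAE of 40%,
# and 30 dB of path attenuation.
p_tx = transmit_power(1e-3, g_amp=2000.0, p_max=3.0)  # 2.0 W
p_cons = amp_power_consumption(p_tx, pae=0.4)         # 5.0 W of DC power
p_rx = receive_power(p_tx, h=1e-3)                    # 2.0 mW received
print(p_tx, p_cons, p_rx)
```

In a full model, `pae` would itself be evaluated from the measured PAE curve $\theta(p_\text{tx})$ rather than taken as a constant.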
The power attenuation depends on the distance between the power beacon and the sensor node, which is denoted by $d$. The power attenuation function $\psi$ defines the relationship between the power attenuation and the distance according to the following path loss formula: $$\begin{aligned} h=\psi(d) = G/d^\nu,\end{aligned}$$ where $d$ is the distance in meters, $\nu$ is a path loss exponent, and $G$ is the power attenuation at one meter distance.

Sensor Node Power Model {#section:powermodel}
-----------------------

In this subsection, we explain the sensor node model in more detail. Fig. \[fig:circuit\] shows an equivalent power circuit model of the sensor node. This model consists of three hardware modules: a wireless energy harvesting module, an energy storage module, and a sensor module. The RF signal from the power beacon is received by the wireless energy harvesting module and is converted to a DC current to supply energy to the sensor node. A supercapacitor is used in the energy storage module. The harvested energy is stored in the supercapacitor when more energy is harvested than the energy consumed by the sensor module. The sensor module obtains sensing results, performs computation, and sends the sensing results to the power beacon. For this operation, the sensor module consumes energy harvested by the wireless energy harvesting module. In the sensor node model in Fig. \[fig:circuit\], all circuit components are simply connected in parallel and have the same voltage, denoted by $V$, dropped across them. Henceforth, we will call $V$ the sensor node voltage. The sensor node voltage is dependent upon the voltage across the supercapacitor, which is in turn decided by the stored energy in the supercapacitor. In our sensor node model, we do not incorporate a DC-DC converter component such as an MPPT module or a voltage regulator.
Although an MPPT module can apply the proper resistance to the wireless energy harvester to obtain the maximum power, an MPPT module consumes some energy by itself. Since only a very small amount of energy can be harvested by the wireless energy harvester, it is not appropriate to use an energy-consuming MPPT module. For the same reason, we do not use a voltage regulator for the sensor module. Since we do not adopt any DC-DC converter component, the input voltage to the wireless energy harvester and the sensor module is equal to the voltage across the supercapacitor. The wireless energy harvester receives the RF signal from the power beacon. The wireless energy harvester generally consists of a matching network and a rectifier. The performance of the wireless energy harvester depends on how these components are designed. Due to the nonlinearity of these components, it is very difficult to derive a meaningful analytic expression for describing the behavior of the wireless energy harvester. Rather than modeling each component of the wireless energy harvester, a current-voltage curve (i.e., an I-V curve) according to a given received power (i.e., $p_\text{rx}$) can be used for fully characterizing the wireless energy harvester. The I-V function of the wireless energy harvester, denoted by $\rho$, is defined as $$\begin{aligned} \label{eq:ehiv} I_\text{WEH} = -\rho(V,p_\text{rx}),\end{aligned}$$ where $p_\text{rx}$ is the received power and $I_\text{WEH}$ is the current through the wireless energy harvester. As shown in Fig. \[fig:circuit\], the supercapacitor of the energy storage module is typically modeled by an equivalent circuit consisting of an ideal capacitor and a leakage resistor. The ideal capacitor has a capacitance of $C$.
The time derivative of the voltage across the ideal capacitor is proportional to the current as $$\begin{aligned} \frac{\mathrm{d}V}{\mathrm{d}t} = \frac{I_\text{cap}}{C},\end{aligned}$$ where $I_\text{cap}$ is the current through the ideal capacitor. The stored energy in the ideal capacitor, denoted by $E$, is given by $$\begin{aligned} \label{eq:sten} E = \frac{1}{2}C V^2.\end{aligned}$$ A leakage resistor is introduced to describe the non-ideal behavior of a real supercapacitor. A supercapacitor discharges by itself even when it is disconnected from other parts of the circuit. If the resistance of the leakage resistor is $R_\text{leak}$, the leakage current is $I_\text{leak} = V/R_\text{leak}$. The supercapacitor under consideration is fully characterized by two parameters: the capacitance $C$ and the leakage resistance $R_\text{leak}$. The sensor module acts as a load that draws energy from the wireless energy harvester and the supercapacitor. The main energy sinks in the sensor module are integrated circuits (ICs) such as the MCU and the RF transceiver. In the sensor module, the RF transceiver is typically compliant with a low-power communication standard such as IEEE 802.15.4. We do not consider the energy consumption of sensors such as a temperature sensor since it is very small compared to the energy consumption of other ICs. The ICs on the sensor module act as different types of loads depending on how they consume energy. Two types of loads are incorporated in this model: a constant resistance load and a constant current load. Typically, the sensor module alternates among several different sensor modes: idle, active, receive, and transmit. The mode of the sensor module is indexed by $m=\text{idle}$, $\text{act}$, $\text{rx}$, and $\text{tx}$ for the idle, active, receive, and transmit mode, respectively.
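A sketch of the energy storage and load relations introduced above; the capacitance, leakage resistance, and the per-mode load parameters $\gamma(m)$ (constant resistance) and $\zeta(m)$ (constant current) are illustrative assumptions, not measured values:

```python
def stored_energy(C, V):
    """E = C * V**2 / 2 (joules)."""
    return 0.5 * C * V ** 2

def leakage_current(V, R_leak):
    """I_leak = V / R_leak (amperes)."""
    return V / R_leak

def sensor_current(V, mode, gamma, zeta):
    """Sensor-module load current: constant-resistance part V/gamma(m)
    plus constant-current part zeta(m)."""
    return V / gamma[mode] + zeta[mode]

# Illustrative per-mode load parameters (ohms, amperes) -- assumptions:
gamma = {"idle": 1e5, "act": 2e3, "rx": 150.0, "tx": 120.0}
zeta = {"idle": 1e-6, "act": 1e-4, "rx": 5e-4, "tx": 5e-4}

C, R_leak, V = 0.1, 40e3, 3.0
print(stored_energy(C, V))                     # 0.45 J at 3 V
print(sensor_current(V, "idle", gamma, zeta))  # tens of microamperes
print(sensor_current(V, "tx", gamma, zeta))    # tens of milliamperes
```

The large gap between idle and transmit currents is what makes duty cycling between modes an effective energy management knob.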
Almost all the ICs are turned off except for minimal functionalities (e.g., timer) in the idle mode, the MCU is activated in the active mode, the RF transceiver is ready to receive data in the receive mode, and the RF transceiver transmits data in the transmit mode. Each mode has different load characteristics. When the sensor module is in mode $m$, the resistance of the constant resistance load and the current of the constant current load are denoted by $\gamma(m)$ and $\zeta(m)$, respectively. The sensor module stops working if the input voltage drops under the minimum voltage required for operation. We define this minimum voltage as the minimum sensor node voltage, denoted by $V_\text{min}$. The wireless energy harvester cuts off the current if the sensor node voltage exceeds the maximum allowed voltage of the supercapacitor or the sensor module to avoid damage to these components. We define this maximum voltage as the maximum sensor node voltage, denoted by $V_\text{max}$. Stored Energy Evolution Model {#section:analysis} ----------------------------- This subsection introduces the derivation of the ordinary differential equation (ODE) that governs the time evolution of the stored energy $E$ in the sensor node. The time evolution model of the stored energy in a supercapacitor can be obtained by analyzing the equivalent circuit [@Mishra:2015; @Renner:2014]. It is noted that the stored energy evolution model in this subsection and the sensor node power model in Section \[section:powermodel\] are partly introduced in [@Choi:2016] as well. However, these models are not experimentally validated in [@Choi:2016]. 
The time derivative of the stored energy in \[eq:sten\] is equal to the power charged to the ideal capacitor, as given in the following equation: $$\begin{aligned} \label{eq:dedt} \frac{\mathrm{d}E}{\mathrm{d}t} = VI_\text{cap}.\end{aligned}$$ By Kirchhoff’s current law, the current through the ideal capacitor $I_\text{cap}$ in \[eq:dedt\] is given by $$\begin{aligned} I_\text{cap} = I_\text{ES} - I_\text{leak}.\label{eq:i1}\end{aligned}$$ Let us obtain the current through the energy storage module $I_\text{ES}$ and the current through the leakage resistor $I_\text{leak}$. By Kirchhoff’s current law, the current through the energy storage module is given by $$\begin{aligned} I_\text{ES} = - I_\text{WEH} - I_\text{sen}.\label{eq:i2}\end{aligned}$$ The current through the wireless energy harvester is $I_\text{WEH} = -\rho(V,p_\text{rx})$ from \[eq:ehiv\], and the current through the sensor module is given by $$\begin{aligned} \label{eq:isen} I_\text{sen} = \frac{V}{\gamma(m)} + \zeta(m).\end{aligned}$$ From \[eq:i2\], \[eq:ehiv\], and \[eq:isen\], the current through the energy storage module is $$\begin{aligned} \label{eq:ies} I_\text{ES} = - I_\text{WEH} - I_\text{sen} =\rho(V,p_\text{rx}) - \frac{V}{\gamma(m)} - \zeta(m).\end{aligned}$$ In addition, we obtain the current through the leakage resistor $I_\text{leak}$ as $$\begin{aligned} \label{eq:ileak} I_\text{leak} = \frac{V}{R_\text{leak}}.\end{aligned}$$ From \[eq:dedt\], \[eq:i1\], \[eq:ies\], and \[eq:ileak\], the time derivative of the stored energy is given by $$\begin{aligned} \label{eq:evol1} \begin{split} \frac{\mathrm{d}E}{\mathrm{d}t} & = VI_\text{cap} = V(I_\text{ES}-I_\text{leak})\\ &=\rho(V,p_\text{rx})V - \frac{V^2}{\gamma(m)} - \zeta(m) V - \frac{V^2}{R_\text{leak}}. \end{split}\end{aligned}$$ In \[eq:evol1\], $\rho(V,p_\text{rx})V$ is the harvested power by the wireless energy harvester. We define the wireless energy harvesting efficiency as the ratio of the harvested power to the receive power.
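The resulting ODE can be integrated numerically, for example with a forward-Euler step. The toy I-V function and all parameter values below are assumptions for illustration; the real $\rho$ is a measured, non-linear characteristic:

```python
import math

def dEdt(E, p_rx, C, R_leak, gamma_m, zeta_m, rho):
    """Right-hand side of the stored-energy ODE: harvested power minus
    sensor-module and leakage power, with V = sqrt(2E/C)."""
    V = math.sqrt(2.0 * E / C)
    return rho(V, p_rx) * V - V ** 2 / gamma_m - zeta_m * V - V ** 2 / R_leak

def evolve(E0, t_end, dt, **kw):
    """Forward-Euler integration of the stored-energy ODE (a sketch;
    in the paper the model parameters come from measurements)."""
    E, t = E0, 0.0
    while t < t_end:
        E = max(E + dt * dEdt(E, **kw), 0.0)
        t += dt
    return E

# Toy I-V function: a constant-efficiency harvester (an assumption; the
# real rho depends non-linearly on both V and p_rx).
rho = lambda V, p_rx: 0.5 * p_rx / V if V > 0 else 0.0

E = evolve(E0=0.45, t_end=60.0, dt=0.01, p_rx=0.01, C=0.1,
           R_leak=40e3, gamma_m=5e3, zeta_m=1e-4, rho=rho)
print(round(E, 3))
```

With these numbers the harvested power exceeds the load and leakage, so the stored energy grows toward an equilibrium where the two sides of the ODE balance.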
Then, the wireless energy harvesting efficiency as a function of $V$ and $p_\text{rx}$ is given by $$\begin{aligned} \label{eq:efficiency} \eta_V(V,p_\text{rx}) = \frac{\rho(V,p_\text{rx})V}{p_\text{rx}}.\end{aligned}$$ Since $V = \sqrt{2E/C}$ from \[eq:sten\], we can rewrite \[eq:evol1\] with respect to the stored energy $E$ as follows: $$\begin{aligned} \label{eq:evol2} \begin{split} \frac{\mathrm{d}E}{\mathrm{d}t} = \phi(E,p_\text{rx}) - \xi_m(E) - \xi_\text{leak}(E). \end{split}\end{aligned}$$ In \[eq:evol2\], $\phi(E,p_\text{rx})$ is the harvested power such that $$\begin{aligned} \label{eq:phieph} \phi(E,p_\text{rx}) = \eta_E(E,p_\text{rx})\cdot p_\text{rx},\end{aligned}$$ where $\eta_E(E,p_\text{rx})$ is the wireless energy harvesting efficiency function with respect to the stored energy such that $$\begin{aligned} \eta_E(E,p_\text{rx}) = \eta_V\big(\sqrt{2E/C},p_\text{rx}\big).\end{aligned}$$ In \[eq:evol2\], $\xi_m(E)$ is the sensor module power consumption in mode $m$ such that $$\begin{aligned} \xi_m(E) = \frac{2}{C\gamma(m)}E + \sqrt{\frac{2 \zeta(m)^2}{C}}\sqrt{E},\end{aligned}$$ and $\xi_\text{leak}(E)$ is the supercapacitor leakage power such that $$\begin{aligned} \xi_\text{leak}(E) = \frac{2}{C\cdot R_\text{leak}}E.\end{aligned}$$ We define the minimum and maximum stored energies corresponding to the minimum and maximum sensor node voltages as $E_\text{min} = C(V_\text{min})^2/2$ and $E_\text{max} = C(V_\text{max})^2/2$, respectively. Then, the stored energy $E$ should be kept above $E_\text{min}$ for continuous sensor node operation. In addition, the stored energy cannot exceed $E_\text{max}$, since no power is charged by the wireless energy harvester in the case that $E \ge E_\text{max}$.

Testbed Setup and Measurement {#section:measurement}
=============================

Testbed Setup
-------------

We set up a real-life WPSN testbed that implements the system model described in Section \[section:system model\].
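For instance, the admissible stored-energy window follows directly from the voltage limits; the values of $C$, $V_\text{min}$, and $V_\text{max}$ below are assumed for illustration:

```python
def energy_bounds(C, V_min, V_max):
    """E_min = C*V_min**2/2 and E_max = C*V_max**2/2, the stored-energy
    window corresponding to the sensor node voltage limits."""
    return 0.5 * C * V_min ** 2, 0.5 * C * V_max ** 2

# Assumed values: a 0.1 F supercapacitor operated between 1.8 V and 5.0 V.
E_min, E_max = energy_bounds(0.1, 1.8, 5.0)
print(round(E_min, 3), E_max)  # 0.162 1.25
```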
This testbed is used for measuring the parameters and the functions of the system model (i.e., $\theta$, $\psi$, $\rho$, $C$, $R_\text{leak}$, $\gamma(m)$, and $\zeta(m)$), verifying the system model, and evaluating the proposed energy management scheme. This testbed has the same architecture as the system model in Fig. \[fig:model\], that is, the testbed consists of a power beacon, which is able to wirelessly send energy via microwave, and a sensor node, which makes use of the energy received from the power beacon. Only off-the-shelf commercially available hardware components are used to build the testbed, and the testbed is controlled by software so that various energy and data transmission experiments can easily be conducted and reproduced. The power beacon consists of a controller, a signal generator, an amplifier, and a DC power supply. In the testbed, the controller comprises a desktop computer, a PXI chassis, and an FPGA module. The signal generator is a Tektronix TSG-4104A, and the amplifier is an RFMD RF2173. The RF source signal generated by the signal generator is fed into the amplifier. The amplifier is controlled by the analog output voltage of the FPGA module, which is connected to the desktop computer via the PXI chassis. In the desktop computer, LabVIEW software is used to control the power beacon and process the measurement data. The amplifier draws power from the DC power supply (i.e., GW Instek GPC-3060D) to amplify the RF source signal. The RF energy transfer operates at 920 MHz. Fig. \[fig:pbpic\] shows a picture of the power beacon in our testbed. We use two types of RF energy transfer channels for the testbed. The first one is the antenna-based channel environment, in which the power beacon sends the RF signal by a Yagi antenna (i.e., Laird PC904N Yagi antenna) with 8 dBi gain, and the sensor node receives the RF signal by a PCB patch antenna with 6.1 dBi gain.
Although we can test actual wireless power transfer over the air in this antenna-based channel environment, it is difficult to obtain reproducible results since accurate control of the power attenuation is not possible. Therefore, we use a step attenuator-based channel environment, in which the RF signal from the power beacon goes through a step attenuator to get to the sensor node. In this channel environment, we can accurately control the power attenuation to simulate the real wireless channel with various distances. Unless noted otherwise, we will mainly use the step attenuator-based channel for reproducibility. The sensor node in the testbed consists of a wireless energy harvesting module, an energy storage module, and a sensor module with the same arrangement as in Fig. \[fig:circuit\]. In the sensor node, a sensor module (i.e., Zolertia Z1 mote [@zolertiaz1]) draws current from a supercapacitor (i.e., Samxon DDL series) that stores energy charged by a wireless energy harvesting module (i.e., Powercast P1110 [@p1110]). The wireless energy harvesting module has an internal RF power sensor and is capable of measuring the receive power. The sensor module can acquire the received power from the wireless energy harvesting module by using analog-to-digital conversion (ADC). The sensor module can also measure the sensor node voltage, which indicates the amount of the energy stored in the supercapacitor. The sensor module adopts TI MSP430 as an MCU and TI CC2420 as an RF transceiver. In Fig. \[fig:snpic\], we show a picture of the sensor node in our testbed. The power beacon and the sensor node exchange information by means of the IEEE 802.15.4 RF transceiver on the 2.4 GHz ISM band. In the sensor node, the IEEE 802.15.4-compliant chip (i.e., TI CC2420) in the sensor module is used for communication. On the other hand, in the power beacon, a Zolertia Z1 mote attached to the desktop computer via serial connection is used for communication. 
By using this communication link, the sensor node can report the receive power and the sensor node voltage to the power beacon. The power beacon can send a command to the sensor node via this communication link. Power Beacon and RF Energy Transfer Channel Measurement ------------------------------------------------------- We have conducted an experiment on the power amplifier to obtain the amplifier power consumption and the PAE according to the transmit power. We set the power of the RF source signal to 6 dBm and the voltage of the DC power supply to 3.5 V. Fig. \[fig:ampcon\] shows that the PAE function (i.e., $\theta$) is an increasing function of the transmit power. In Fig. \[fig:atten\], we show the power attenuation according to the distance in the antenna-based channel environment. This figure plots the receive power for the given transmit power, from which we can derive the formula for the power attenuation according to the distance. By regression analysis, the power attenuation is obtained as $h = \psi(d) = 0.01/d^{3.31}$, where $d$ is the distance in meters. Sensor Node Measurement ----------------------- We have conducted tests on the wireless energy harvester (i.e., Powercast P1110) to obtain the I-V function $\rho$ and the wireless energy harvesting efficiency function $\eta_V$ for various receive power levels. For the test, a signal generator is used to supply a 920 MHz CW signal, and an electronic load (i.e., Mayuno M9711) is used to measure the current at various voltages. Fig. \[fig:vigraph\] shows the measurement result of the I-V function of the wireless energy harvester. The graphs in Fig. \[fig:vigraph\] are drawn for the receive power ranging from $-0.3$ dBm to $15.7$ dBm. This figure defines the I-V function $\rho$ in that can be used to calculate the current through the wireless energy harvester according to the sensor node voltage.
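The step from the measured I-V curves to the efficiency curve is a one-line computation; a minimal sketch, where `rho_demo` is a purely illustrative stand-in for the measured P1110 data (both names are ours):

```python
def harvesting_efficiency(V, p_rx, rho):
    """eta_V(V, p_rx) = V * rho(V, p_rx) / p_rx."""
    return V * rho(V, p_rx) / p_rx

def rho_demo(V, p_rx):
    """Hypothetical I-V shape for illustration only (not the P1110 data)."""
    return max(0.0, 0.6 * p_rx / max(V, 0.5))
```

Applying `harvesting_efficiency` to each measured $(V, p_\text{rx})$ pair yields one point of the efficiency graph.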
From the I-V function, we calculate the wireless energy harvesting efficiency as $\eta_V(V,p_\text{rx}) = V\rho(V,p_\text{rx})/p_\text{rx}$. The wireless energy harvesting efficiency graph is plotted in Fig. \[fig:harveff\] as a function of the receive power. The parameters of the supercapacitor are obtained in the testbed as well. The testbed uses a Samxon DDL series supercapacitor with a capacitance of $C=0.1$ F. In Fig. \[fig:leakage\], a leakage test is conducted by letting the supercapacitor discharge by itself. The ideal capacitor and the leakage resistor in Fig. \[fig:model\] form an RC circuit when disconnected from other parts. The voltage of an RC circuit evolves according to the equation $V(t) = V(0)\cdot \exp(-t/(R_\text{leak}C))$, where $V(t)$ is the voltage at time $t$ and $V(0)$ is the initial voltage at time $t=0$. By fitting this equation to the measured curve in Fig. \[fig:leakage\], the leakage resistance is obtained as $R_\text{leak} = 196$ k$\Omega$. In the testbed, the Zolertia Z1 mote is used as a sensor module. We test the load characteristics of the Z1 mote for different modes of the sensor module. In Fig. \[fig:modepwr\], the power consumption of the Z1 mote is measured according to the voltage. In the idle mode, very small power consumption is observed because of the leakage current of the ICs on the sensor module. In the active mode, only the MCU (i.e., MSP430), which acts as a constant resistance load, is activated. From Fig. \[fig:modepwr\], the load resistance of the MSP430 is calculated to be 0.626 k$\Omega$. In the receive and transmit modes, the RF transceiver (i.e., CC2420) receives and transmits while the MSP430 is activated. The power consumption of the CC2420 is obtained by subtracting the power consumption of the MSP430 from the measured power consumption.
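The leakage fit described above reduces to log-linear least squares under the stated RC-discharge model; a sketch (function name is ours):

```python
import math

def fit_leakage_resistance(times, voltages, C):
    """Estimate R_leak from a self-discharge trace, assuming
    V(t) = V(0) * exp(-t / (R_leak * C)): least squares on ln V(t)."""
    n = len(times)
    ys = [math.log(v) for v in voltages]
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    # slope of ln V versus t is -1 / (R_leak * C)
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys)) \
        / sum((t - t_mean) ** 2 for t in times)
    return -1.0 / (slope * C)
```

On an ideal exponential trace the estimate recovers $R_\text{leak}$ exactly; on measured data it averages out the noise.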
The CC2420 acts as a constant current load [@cc2420], and the current consumption of the CC2420 is calculated to be 15.87 mA in the receive mode and 14.55 mA in the transmit mode. These results are summarized in Table \[tab:chipmode\]. From this table, the resistance of the constant resistance load (CRL) (i.e., $\gamma(m)$) and the current of the constant current load (CCL) (i.e., $\zeta(m)$) are determined for each mode $m$. **Load Type** **Idle** **Active** **Receive** **Transmit** --------------- ---------- ----------------- ----------------- ----------------- CRL $\infty$ 0.626 k$\Omega$ 0.626 k$\Omega$ 0.626 k$\Omega$ CCL 0.035 mA 0.035 mA 15.87 mA 14.55 mA : Measured parameters of the constant resistance and constant current loads for each mode of the Z1 mote. \[tab:chipmode\] Energy Management Scheme for Energy Neutral Operation {#section:energy management} ===================================================== Description of Energy Management Scheme --------------------------------------- In this section, we propose an energy management scheme for the WPSN. The goals of the proposed energy management scheme are threefold. First, the energy management scheme minimizes the amplifier power consumption in the power beacon. Second, the energy management scheme guarantees that the power beacon receives sensor measurement reports from the sensor node as frequently as required by a sensor application. Third, the energy management scheme maintains the stored energy $E$ over the minimum stored energy $E_\text{min}$ for assuring continuous operation of the sensor node. These three goals generally conflict with one another. Therefore, the energy management scheme must be carefully designed so that these three goals are optimally balanced. In the proposed scheme, both the power beacon and the sensor node perform duty cycling. The timing diagram of the proposed energy management scheme is illustrated in Fig. \[fig:energymanagement\].
The basic time unit for all operations of the energy management scheme is a frame, whose length is denoted by $T_\text{frame}$. Each frame is indexed by $k$. The power beacon performs duty cycling on a frame-by-frame basis. During each frame, the power beacon turns on the amplifier for $T_\text{ET}$ and turns off the amplifier for the rest of the frame. Then, the amplifier duty cycle, denoted by $\alpha$, is given by $$\begin{aligned} \alpha = T_\text{ET}/T_\text{frame}.\end{aligned}$$ When the amplifier is turned on, the transmit power is set to $p_\text{tx} = \Upsilon$. Hereafter, we will call $\Upsilon$ energy transfer power. The energy transfer power satisfies $0 < \Upsilon \le \Upsilon_\text{max}$, where $\Upsilon_\text{max}$ denotes the maximum energy transfer power. From , the amplifier power consumption is given by $$\begin{aligned} \label{eq:ampcons2} p_\text{cons} = \Upsilon/\theta(\Upsilon),\end{aligned}$$ when the amplifier is turned on. On the other hand, when the amplifier is turned off, the transmit power and the amplifier power consumption are both zero. Let $\alpha(k)$ and $\Upsilon(k)$ denote the amplifier duty cycle and the energy transfer power in frame $k$. The sensor node performs duty cycling that periodically wakes up the sensor node for a short time and then puts the sensor node into the idle mode to minimize the sensor module power consumption. A frame is a basic time unit of the duty cycling of the sensor node. For each frame, the sensor node can be in the awake state or in the sleep state. If the sensor node is in the awake state during a frame, the sensor node goes through the receive, active, and transmit modes in sequence for $T_\text{rx}$, $T_\text{act}$, and $T_\text{tx}$, respectively, and it goes into the idle mode for the rest of the frame during $T_\text{idle}=T_\text{frame}-T_\text{rx}-T_\text{act}-T_\text{tx}$, as shown in Fig. \[fig:energymanagement\]. 
On the other hand, if the sensor node is in the sleep state during a frame, it stays in the idle mode for the entire frame. Let $a(k)$ denote an awake indicator that is 1 if the sensor node is in the awake state in frame $k$; and is 0, otherwise. Suppose that the sensor node is in the awake state in frame $k$ (i.e., $a(k)=1$). Then, the sensor node sets up the wake-up timer before it goes into the idle mode so that it can wake up again after the wake-up interval. The wake-up interval in frame $k$ is denoted by $\tau(k)$. Let $\sigma(k)$ denote the remaining number of frames until the sensor node wakes up again. In frame $k$ such that $\sigma(k)>0$, the sensor node is in the sleep state (i.e., $a(k)=0$). On the other hand, if $\sigma(k)=0$, the sensor node is in the awake state in frame $k$ (i.e., $a(k)=1$). In each frame $k$ such that $\sigma(k)>0$, $\sigma(k)$ counts down by one (i.e., $\sigma(k+1)=\sigma(k)-1$ if $\sigma(k)>0$). If $\sigma(k)$ becomes zero, the sensor node wakes up (i.e., $a(k)=1$) and it sets $\sigma(k+1)$ to $(\tau(k)-1)$ so that it wakes up again after $\tau(k)$ frames (i.e., $\sigma(k+1)=\tau(k)-1$ if $\sigma(k)=0$). At the start of each frame, the power beacon sends a beacon packet that contains a control command to the sensor node. When the sensor node wakes up at the start of a frame, the sensor node is in the receive mode for time duration $T_\text{rx}$ to receive a beacon packet from the power beacon. The sensor node wakes up slightly before the start of a frame to make sure the sensor node safely receives a beacon packet. After successful reception of a beacon packet, the sensor node turns off the RF transceiver and the mode of the sensor node is changed to the active mode. In the active mode, the sensor node performs sensor measurements and computation for time duration $T_\text{act}$. The sensor measurements include the receive power, the stored energy, and other sensor measurements (e.g., temperature). 
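The per-frame bookkeeping of the awake indicator $a(k)$ and the sleep counter $\sigma(k)$ described above can be sketched as a small state machine (a minimal sketch; the function name is ours):

```python
def step_wakeup(sigma, tau):
    """One-frame update of the sleep counter: returns (a(k), sigma(k+1))
    given the current counter sigma(k) and the wake-up interval tau(k)."""
    if sigma > 0:
        return 0, sigma - 1   # sleep frame: keep counting down
    return 1, tau - 1         # sigma(k) == 0: wake up and rearm the timer
```

Starting from $\sigma = 0$ with $\tau = 3$, the node is awake in one frame out of every three.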
During the time that the sensor node measures the receive power, the amplifier in the power beacon should be turned on. Therefore, the energy management scheme makes the amplifier duty cycle larger than a predefined minimum amplifier duty cycle $\alpha_\text{min}$ so that the amplifier is turned on while the sensor node is in the active mode. The receive power measurement in frame $k$ is denoted by $\overline{S}(k)$. In addition, the sensor node measures the sensor node voltage and calculates the stored energy from . The stored energy measurement in frame $k$ is denoted by $\overline{E}(k)$. The sensor node prepares the sensor measurement report that includes $\overline{S}(k)$, $\overline{E}(k)$, and other sensor measurements. Then, the sensor node sends a data packet containing the sensor measurement report in the transmit mode for time duration $T_\text{tx}$. After the packet transmission is over, the sensor node is put into the idle mode. The sensor node wakes up again after the wake-up interval. The proposed energy management scheme has three control parameters: the amplifier duty cycle, the energy transfer power, and the wake-up interval. The energy management scheme can dynamically decide the amplifier duty cycle, the energy transfer power, and the wake-up interval for each frame in which the sensor node is in the awake state. This decision is based on the receive power measurement $\overline{S}(k)$ and the stored energy measurement $\overline{E}(k)$ included in the sensor measurement report from the sensor node. The controller sends the decided wake-up interval to the sensor node by enclosing it in the control command in the beacon packet, and the sensor node adjusts the wake-up interval according to the received control command. 
Discrete-Time Stored Energy Evolution Model of Energy Management Scheme {#section:evolutionmodel} ----------------------------------------------------------------------- In this subsection, we derive the discrete-time stored energy evolution model when the proposed energy management scheme is applied. The stored energy at the start of frame $k$ is denoted by $E(k)$. From the continuous-time energy transition function in , the discrete-time energy evolution formula is obtained as $$\begin{aligned} \begin{split} E(k+1) &= \min\big\{E(k) + \mbox{$\int^{t(k+1)}_{t(k)}$} \big(\phi(E^{(t)},p_\text{rx}^{(t)})\\ &\quad\quad - \xi_{m^{(t)}}(E^{(t)}) - \xi_\text{leak}(E^{(t)})\big)\mathrm{d}t, E_\text{max}\big\}, \end{split}\end{aligned}$$ where $t(k)$ is the time at the start of frame $k$, and $E^{(t)}$, $p_\text{rx}^{(t)}$, and $m^{(t)}$ are the stored energy, the receive power, and the mode of the sensor module at time $t$, respectively. To simplify the discrete-time energy evolution formula, we assume that the variation of $E^{(t)}$ during one frame does not significantly affect the values of $\phi(E^{(t)},p_\text{rx}^{(t)})$, $\xi_{m^{(t)}}(E^{(t)})$, and $\xi_\text{leak}(E^{(t)})$. Then, we can assume that $\phi(E(k),p_\text{rx}^{(t)})=\phi(E^{(t)},p_\text{rx}^{(t)})$, $\xi_{m^{(t)}}(E(k))=\xi_{m^{(t)}}(E^{(t)})$, and $\xi_\text{leak}(E(k))=\xi_\text{leak}(E^{(t)})$ for $t(k) \le t < t(k+1)$. This assumption is valid when the capacitance of the supercapacitor (i.e., $C$) is sufficiently large. Since $V=\sqrt{2E/C}$, a large capacitance makes the sensor node voltage change slowly, and we can consider that the sensor node voltage does not change during a frame. Then, the above assumption holds since the harvested power, the sensor module power consumption, and the supercapacitor leakage power are all functions of the sensor node voltage.
Under this assumption, the discrete-time energy evolution is given by $$\begin{aligned} \label{eq:discenevol} \begin{split} E(k+1)& = \min\{E(k)+\Phi(E(k),\alpha(k),\Upsilon(k),h)\\ &\qquad\quad-\Theta(E(k),a(k)),\ E_\text{max}\}, \end{split}\end{aligned}$$ where $\Phi(E,\alpha,\Upsilon,h)$ is the harvested energy and $\Theta(E,a)$ is the consumed energy during one frame when the stored energy is $E$, the amplifier duty cycle is $\alpha$, the energy transfer power is $\Upsilon$, the power attenuation is $h$, and the awake indicator is $a$. In , the harvested energy is defined as $$\begin{aligned} \label{eq:harven} \Phi(E,\alpha,\Upsilon,h) = \alpha\cdot\eta_E(E,h\Upsilon)\cdot h\Upsilon\cdot T_\text{frame},\end{aligned}$$ and the consumed energy is defined as $$\begin{aligned} \label{eq:consen} \begin{split} &\Theta(E,a)\\ &=a\cdot\big(\mbox{$\sum_{m\in\{\text{rx,act,tx,idle}\}}$}\xi_m(E)\cdot T_m + \xi_\text{leak}(E)\cdot T_\text{frame}\big)\\ &\quad+ (1-a)\cdot(\xi_\text{idle}(E) + \xi_\text{leak}(E))\cdot T_\text{frame}\\ &= \varphi(E) + \delta(E)\cdot a, \end{split}\end{aligned}$$ where $\varphi(E) = (\xi_\text{idle}(E) + \xi_\text{leak}(E))\cdot T_\text{frame}$ and $\delta(E) = \mbox{$\sum_{m\in\{\text{rx,act,tx}\}}$}(\xi_m(E)-\xi_\text{idle}(E))\cdot T_m$. The power amplifier at the power beacon consumes DC power according to only when it is turned on. Therefore, the average amplifier power consumption is a function of the amplifier duty cycle and the energy transfer power. The average amplifier power consumption is defined as $$\begin{aligned} \label{eq:omega} \Omega(\alpha,\Upsilon) = \alpha\cdot (\Upsilon/\theta(\Upsilon)).\end{aligned}$$ Optimal Energy Transfer Strategy {#section:optstrategy} -------------------------------- The proposed energy management scheme aims to minimize the average amplifier power consumption $\Omega(\alpha,\Upsilon)$ while maintaining the wake-up interval $\tau$ to the target wake-up interval $\tau_\text{tgt}$.
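The per-frame bookkeeping of the preceding subsection can be sketched directly; the mode power functions $\xi_m$, $\xi_\text{leak}$ and the efficiency $\eta_E$ are passed in as callables (a minimal sketch; all names are ours):

```python
def theta(E, a, xi, xi_leak, T_mode, T_frame):
    """Consumed energy per frame: Theta(E, a) = varphi(E) + delta(E) * a."""
    varphi = (xi("idle", E) + xi_leak(E)) * T_frame
    delta = sum((xi(m, E) - xi("idle", E)) * T_mode[m] for m in ("rx", "act", "tx"))
    return varphi + delta * a

def step(E, alpha, Upsilon, h, a, eta_E, xi, xi_leak, T_mode, T_frame, E_max):
    """One-frame stored-energy update E(k) -> E(k+1), clamped at E_max."""
    p_rx = h * Upsilon
    Phi = alpha * eta_E(E, p_rx) * p_rx * T_frame   # harvested energy per frame
    return min(E + Phi - theta(E, a, xi, xi_leak, T_mode, T_frame), E_max)
```

Iterating `step` over frames reproduces the discrete-time evolution, including the saturation at $E_\text{max}$.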
In doing so, the energy management scheme stabilizes the stored energy $E$ at the target stored energy $E_\text{tgt}$, which is higher than the minimum stored energy $E_\text{min}$. Let us define the awake frame ratio, denoted by $r$, as the ratio of the frames in which the sensor node is in the awake state. Since the sensor node is in the awake state in one frame out of $\tau$ frames, the awake frame ratio is $r = 1/\tau$. Then, the average consumed energy is given by $$\begin{aligned} \begin{split} Q(E,r) &= r\cdot \Theta(E,1)+ (1-r)\cdot \Theta(E,0)\\ &= \varphi(E) + \delta(E)\cdot r. \end{split}\end{aligned}$$ While the stored energy and the wake-up interval are maintained to $E_\text{tgt}$ and $\tau_\text{tgt}$, respectively, the energy management scheme finds the optimal amplifier duty cycle $\alpha^*$ and the optimal energy transfer power $\Upsilon^*$ of the following optimization problem: $$\begin{aligned} &&&\text{minimize} & &\Omega(\alpha,\Upsilon)&&&&\label{eq:opttarget}\\ &&&\text{subject to} & &\Phi(E_\text{tgt},\alpha,\Upsilon,h) \ge Q(E_\text{tgt},r_\text{tgt}), &&&&\label{eq:optconst}\end{aligned}$$ where $r_\text{tgt}=1/\tau_\text{tgt}$, $\alpha_\text{min}\le \alpha\le 1$, and $0< \Upsilon \le \Upsilon_\text{max}$. Note that the optimization problem and is not convex. The Lagrangian of the optimization problem and is given by $$\begin{aligned} \label{eq:lagrangian} \begin{split} &L(\alpha,\Upsilon,\mu)\\ &= \Omega(\alpha,\Upsilon) - \mu\Phi(E_\text{tgt},\alpha,\Upsilon,h)+\mu Q(E_\text{tgt},r_\text{tgt})\\ &=\alpha\Upsilon (\theta(\Upsilon)^{-1} - \mu\cdot \eta_E(E_\text{tgt},h\Upsilon) h)+\mu Q(E_\text{tgt},r_\text{tgt}), \end{split}\end{aligned}$$ where $\mu \ge 0$ is the Lagrange multiplier. The dual function is $g(\mu) = \min_{\alpha,\Upsilon} L(\alpha,\Upsilon,\mu)$. It is known that the dual function is always smaller than or equal to the optimal value of and , that is, $g(\mu)\le \Omega(\alpha^*,\Upsilon^*)$ for $\mu \ge 0$.
Let $\widehat{\mu}$ denote the minimum $\mu$ that satisfies $\theta(\Upsilon)^{-1} - \mu\cdot \eta_E(E_\text{tgt},h\Upsilon) h = 0$ for some $\Upsilon$ over $0< \Upsilon \le \Upsilon_\text{max}$. For such $\widehat{\mu}$, we define $\widehat{\Upsilon}$ that satisfies $\theta(\widehat{\Upsilon})^{-1} - \widehat{\mu}\cdot \eta_E(E_\text{tgt},h\widehat{\Upsilon}) h = 0$. Henceforth, we will call $\widehat{\Upsilon}$ maximum efficiency energy transfer power. In addition, we define the maximum efficiency receive power as $\widehat{S} = h\widehat{\Upsilon}$ and the maximum efficiency harvested power as $\widehat{H} = \eta_E(E_\text{tgt},\widehat{S})\cdot \widehat{S}$. The optimal solutions of and satisfy the following theorem. \[theorem:optimal\] If $Q(E_\text{tgt},r_\text{tgt}) \le \widehat{H}$, the optimal solutions are $\alpha^* = Q(E_\text{tgt},r_\text{tgt})/\widehat{H}$ and $\Upsilon^* = \widehat{\Upsilon}$. See Appendix \[proof:optimal\]. The maximum efficiency energy transfer power $\widehat{\Upsilon}$ mainly depends on two efficiency functions, the wireless energy harvesting efficiency function $\eta_E$ and the PAE function $\theta$. Theorem \[theorem:optimal\] states that, if $Q(E_\text{tgt},r_\text{tgt})$ is less than or equal to a threshold $\widehat{H}$, it is efficient to send the RF energy with the maximum efficiency energy transfer power $\widehat{\Upsilon}$ and to set the amplifier duty cycle less than one so that the amplifier power consumption is minimized while the average consumed energy is supported. Adaptive Energy Management Algorithm {#section:emalg} ------------------------------------ In this subsection, we propose an adaptive energy management algorithm that controls the amplifier duty cycle $\alpha(k)$, the energy transfer power $\Upsilon(k)$, and the wake-up interval $\tau(k)$. 
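For reference, the control rule of Theorem \[theorem:optimal\] is a two-line computation; a sketch for the case $Q(E_\text{tgt},r_\text{tgt}) \le \widehat{H}$ (the function name is ours):

```python
def optimal_transfer(Q_tgt, H_hat, Upsilon_hat):
    """Theorem (Case Q <= H_hat): transmit at the maximum-efficiency
    power Upsilon_hat and duty-cycle the amplifier just enough so that
    the average harvested energy covers the average consumed energy."""
    if Q_tgt > H_hat:
        raise ValueError("demand exceeds the max-efficiency harvested power")
    alpha_star = Q_tgt / H_hat   # alpha* = Q(E_tgt, r_tgt) / H_hat
    return alpha_star, Upsilon_hat
```

Because the duty cycle scales linearly with the demand, halving the average consumed energy halves the amplifier on-time.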
The energy management algorithm updates $\alpha(k)$, $\Upsilon(k)$, and $\tau(k)$ only in the frame in which the sensor node is in the awake state (i.e., frame $k$ such that $a(k)=1$). The start of frame $k$ such that $a(k)=1$ is defined as an energy management epoch, and the $i$th energy management epoch will be called epoch $i$. Let $E_i$, $\alpha_i$, $\Upsilon_i$, $\tau_i$, $\overline{E}_i$, and $\overline{S}_i$ denote the stored energy, the amplifier duty cycle, the energy transfer power, the wake-up interval, the stored energy measurement, and the receive power measurement at epoch $i$, respectively. Then, the stored energy evolves over epochs according to the following formula: $$\begin{aligned} \label{eq:epochenevol} \begin{split} E_{i+1}& = \min\{E_i+(\Phi(E_i,\alpha_i,\Upsilon_i,h)\\ &\qquad\qquad\qquad\quad-Q(E_i,1/\tau_i))\cdot\tau_i,\ E_\text{max}\}. \end{split}\end{aligned}$$ The power beacon controls $\alpha_i$, $\Upsilon_i$, and $\tau_i$ based on the receive power measurement $\overline{S}_i$ and the stored energy measurement $\overline{E}_i$ enclosed in the sensor measurement report from the sensor node. Note that this algorithm treats the wake-up interval $\tau_i$ as a real number rather than as an integer. Therefore, the wake-up interval obtained by this algorithm should be rounded off when it is actually applied. The first priority of the energy management algorithm is to maintain the stored energy $E_i$ to the target stored energy $E_\text{tgt}$. The second priority is to keep the wake-up interval $\tau_i$ to the target wake-up interval $\tau_\text{tgt}$. As long as the above two targets are satisfied, the algorithm tries to minimize the average amplifier power consumption according to Theorem \[theorem:optimal\]. According to the average consumed energy, $Q(E_\text{tgt},r_\text{tgt})$, required for achieving $E_\text{tgt}$ and $\tau_\text{tgt}$, we can consider the following three cases.
In Case I, it is satisfied that $Q(E_\text{tgt},r_\text{tgt}) \le \widehat{H}$. This case corresponds to Theorem \[theorem:optimal\]. This is the most common case if the wireless energy harvesting module is designed in such a way that the maximum energy harvesting efficiency is achieved in the desired operating condition. To achieve the optimality in Case I, $\alpha_i$ should be controlled to maintain the target stored energy while $\Upsilon_i$ is set to $\widehat{\Upsilon}$, according to Theorem \[theorem:optimal\]. In Case II, we have $\widehat{H} < Q(E_\text{tgt},r_\text{tgt}) \le \eta_E(E_\text{tgt},h\Upsilon_\text{max})\cdot h\Upsilon_\text{max}$. In this case, $Q(E_\text{tgt},r_\text{tgt})$ cannot be supported with $\Upsilon_i = \widehat{\Upsilon}$ and $\alpha_i = 1$. Therefore, the proposed algorithm controls $\Upsilon_i$ to a value higher than $\widehat{\Upsilon}$ while it sets $\alpha_i=1$. In Case III, we have $Q(E_\text{tgt},r_\text{tgt}) > \eta_E(E_\text{tgt},h\Upsilon_\text{max})\cdot h\Upsilon_\text{max}$. In this case, $Q(E_\text{tgt},r_\text{tgt})$ cannot be supported even with $\Upsilon_i = \Upsilon_\text{max}$ and $\alpha_i = 1$. The only way to maintain the stored energy is to adjust $\tau_i$ to a value higher than $\tau_\text{tgt}$ to reduce the average consumed energy. The proposed algorithm controls $\alpha_i$ for Case I, $\Upsilon_i$ for Case II, and $\tau_i$ for Case III. Since one parameter is controlled at a time in all three cases, we can define one control variable $x_i$ that is mapped to the tuple of three parameters $(\alpha_i,\Upsilon_i,\tau_i)$. 
The mapping function is ${\boldsymbol \omega(x)} = (\omega_\alpha(x),\omega_\Upsilon(x),\omega_\tau(x))$, which is defined as $$\begin{aligned} \label{eq:omegax} \begin{split} {\boldsymbol \omega(x)} =\begin{cases} (x+\alpha_\text{min},\widehat{\Upsilon},\tau_\text{tgt}),&\text{if }0\le x\le \kappa_1\\ (1,\beta_\Upsilon(x-\kappa_1)+\widehat{\Upsilon},\tau_\text{tgt}),&\text{if }\kappa_1< x\le \kappa_2\\ (1,\Upsilon_\text{max},\beta_\tau(x-\kappa_2)+\tau_\text{tgt}),&\text{if }x> \kappa_2, \end{cases} \end{split}\end{aligned}$$ where $\kappa_1 = 1-\alpha_\text{min}$, $\kappa_2 = (\Upsilon_\text{max}-\widehat{\Upsilon})/\beta_\Upsilon+1-\alpha_\text{min}$, and $\beta_\Upsilon$ and $\beta_\tau$ are positive constants. The stored energy evolution in can be rewritten as a function of $x$ as follows. $$\begin{aligned} \label{eq:epochenevol2} \begin{split} E_{i+1}& = \min\{E_i+\Delta(E_i,x_i),E_\text{max}\}, \end{split}\end{aligned}$$ where $\Delta(E,x)$ is defined as $$\begin{aligned} \label{eq:delta} \begin{split} &\Delta(E,x) \\ &= (\Phi(E,\omega_\alpha(x),\omega_\Upsilon(x),h)-Q(E,1/\omega_\tau(x)))\cdot\omega_\tau(x)\\ &= (\omega_\alpha(x)\eta_E(E,h\omega_\Upsilon(x))h\omega_\Upsilon(x)\\ &\qquad\qquad-\varphi(E)-\delta(E)/\omega_\tau(x))\cdot\omega_\tau(x). \end{split}\end{aligned}$$ From , we can see that $\Delta(E,x)$ is a non-decreasing function of $x\ge 0$ for all $E_\text{min}\le E\le E_\text{max}$. The stored energy evolution in is an integrating process. If the plant is an integrating process, the proportional-integral (PI) controller can be used to control $x_i$ for maintaining $E_i$ close to $E_\text{tgt}$. The PI controller is $$\begin{aligned} \label{eq:pic} x_i = C_P (E_\text{tgt}-\overline{E}_i) + C_I \mbox{$\sum_{j=1}^i$} (E_\text{tgt}-\overline{E}_j),\end{aligned}$$ where $C_P$ and $C_I$ are the coefficients for the proportional and integral terms, respectively. 
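The mapping $\boldsymbol{\omega}(x)$ and the PI controller above can be sketched as follows; clamping $x$ at zero is our addition, since the mapping is defined only for $x \ge 0$ (all names are ours):

```python
def omega(x, alpha_min, Ups_hat, Ups_max, tau_tgt, beta_U, beta_tau):
    """Map the scalar control x >= 0 to (alpha, Upsilon, tau)."""
    k1 = 1.0 - alpha_min
    k2 = (Ups_max - Ups_hat) / beta_U + k1
    if x <= k1:
        return x + alpha_min, Ups_hat, tau_tgt                # Case I: vary alpha
    if x <= k2:
        return 1.0, beta_U * (x - k1) + Ups_hat, tau_tgt      # Case II: vary Upsilon
    return 1.0, Ups_max, beta_tau * (x - k2) + tau_tgt        # Case III: vary tau

class PIController:
    """PI control of x_i from the stored-energy error (C_P, C_I as in the text)."""
    def __init__(self, C_P, C_I, E_tgt):
        self.C_P, self.C_I, self.E_tgt = C_P, C_I, E_tgt
        self.err_sum = 0.0
    def update(self, E_meas):
        err = self.E_tgt - E_meas
        self.err_sum += err
        return max(0.0, self.C_P * err + self.C_I * self.err_sum)   # clamp: x >= 0
```

The piecewise map is continuous at the breakpoints $\kappa_1$ and $\kappa_2$, so small PI adjustments never cause jumps in the applied control.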
After calculating , the algorithm can derive $\alpha_i = \omega_\alpha(x_i)$, $\Upsilon_i = \omega_\Upsilon(x_i)$, and $\tau_i = \omega_\tau(x_i)$. In calculating , the algorithm should know the maximum efficiency energy transfer power $\widehat{\Upsilon}$. However, it is difficult to find the exact $\widehat{\Upsilon}$ since $\widehat{\Upsilon}$ depends on nonlinear functions as well as the power attenuation. Therefore, in our experiment, we assume that the maximum efficiency receive power $\widehat{S}$ has a constant value $S_\text{tgt}$. The power attenuation is obtained as $h = \overline{S}_i/\Upsilon_i$ based on the receive power measurement $\overline{S}_i$. Then, the maximum efficiency energy transfer power can be calculated as $\widehat{\Upsilon}=S_\text{tgt}/h$. Experimental Results {#section:result} ==================== In this section, we present experimental results on the WPSN testbed to show the performance of the proposed energy management scheme. We have implemented the energy management scheme on the WPSN testbed as proposed in Section \[section:energy management\]. The length of a frame is $T_\text{frame} = 100$ ms, the maximum energy transfer power is $\Upsilon_\text{max} = 2.3$ W, and the maximum sensor node voltage is 3.0 V. In this section, the power attenuation on the RF energy transfer channel will be given in dB (i.e., $-10\log_{10} h$). In Figs. \[fig:ampdutycycling\] and \[fig:ampharvcons\], we show the efficiency of the RF energy transfer when the amplifier duty cycling is applied. Fig. \[fig:ampduty\] shows the average amplifier power consumption (i.e., $\Omega(\alpha,\Upsilon)=\alpha\cdot(\Upsilon/\theta(\Upsilon))$) on the y-axis and the average transmit power (i.e., $\alpha\Upsilon$) on the x-axis, according to the amplifier duty cycle $\alpha$ and the energy transfer power $\Upsilon$.
In this figure, we can see that lower average amplifier power consumption is achieved for a given average transmit power when the amplifier duty cycling is used with fixed $\Upsilon$, owing to the characteristics of the PAE. In Fig. \[fig:dutyharv\], we show the average harvested power (i.e., $\alpha\cdot\eta_E(E,h\Upsilon)\cdot h\Upsilon$) on the y-axis and the average receive power (i.e., $\alpha\cdot h\Upsilon$) on the x-axis, according to $\alpha$ and the receive power (i.e., $h\Upsilon$). Since the wireless energy harvesting efficiency is maximized around 10 mW from Fig. \[fig:harveff\], we have higher average harvested power when the receive power is fixed to 10 mW. Fig. \[fig:ampharvcons\] shows the average amplifier power consumption for achieving a given average harvested power, which is affected by both the PAE and the wireless energy harvesting efficiency, when the power attenuations are 19.73 dB and 22.65 dB. This figure clearly shows the advantage of the amplifier duty cycling. Fig. \[fig:senscons\] shows the sensor module power consumption over time when the wake-up interval is set to $\tau = 1$ and the sensor node voltage is 3.0 V. In Fig. \[fig:sensconslong\], we can see that the sensor module wakes up every 100 ms, as in the timing diagram in Fig. \[fig:energymanagement\]. From Fig. \[fig:sensconsshort\], we can derive the time duration of each mode such that $T_\text{rx} = 2.34$ ms, $T_\text{act} = 5.01$ ms, and $T_\text{tx} = 1.81$ ms. In Fig. \[fig:intpwr\], we show the average sensor module power consumption according to the wake-up interval. The average sensor module power consumption is given by $(\sum_{m\in \{\text{rx},\text{act},\text{tx},\text{idle}\}} \xi_m(E)\cdot T_m + \xi_\text{idle}(E) \cdot (\tau-1) T_\text{frame})/(\tau T_\text{frame})$. In Fig.
\[fig:intpwr\], ‘expected’ refers to the average sensor module power consumption calculated by this equation based on $T_\text{rx}$, $T_\text{act}$, $T_\text{tx}$, and Table \[tab:chipmode\]. In addition, we have measured the actual sensor node power consumption by measuring the current through the sensor module. We can see that the expected sensor module power consumption matches the measured one well. Fig. \[fig:senschar\] shows the energy storage charging power over time. The energy storage charging power is defined as the power charged to the energy storage module, which is the product of the sensor node voltage and the current through the energy storage module, i.e., $V I_\text{ES}$. For this figure, we set $\alpha = 0.4$, $\Upsilon = 2.3$ W, $\tau = 1$, the power attenuation to 15 dB, and the sensor node voltage to 3.0 V. In Fig. \[fig:senschar\], we can see that the power is discharged around the start of each frame due to the sensor module power consumption and the power is charged after the sensor module is put into the idle mode. In Fig. \[fig:harvcons\], we show the energy storage charging power according to the energy transfer power and the wake-up interval when $\alpha = 1$, the sensor node voltage is 3.0 V, and the power attenuations are 26.89 dB and 30.62 dB. Considering that the supercapacitor leakage power is small, the energy storage can be stable or charged as long as the energy storage charging power is non-negative. In Fig. \[fig:harvcons\], we can see that a high energy transfer power or a long wake-up interval is required for a positive energy storage charging power. In Fig. \[fig:harvconsexmodelval\], we validate the discrete-time stored energy evolution model in Section \[section:evolutionmodel\] by comparing the expected and the measured results.
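The average-power expression above maps directly to code. A minimal sketch (names are ours; the per-mode draws $\xi_m$ are treated as constants at a fixed voltage, and the numbers in the note below are illustrative, not measurements):

```python
def avg_sensor_power(xi, T, tau, T_frame=0.1):
    """Average sensor module power over a wake-up interval of tau frames.

    xi: per-mode power draw in W, keys 'rx', 'act', 'tx', 'idle'
    T:  per-mode duration in s within the active frame (same keys)
    """
    active_frame = sum(xi[m] * T[m] for m in ('rx', 'act', 'tx', 'idle'))
    idle_frames = xi['idle'] * (tau - 1) * T_frame
    return (active_frame + idle_frames) / (tau * T_frame)
```

As $\tau$ grows, the average tends to the idle-mode draw `xi['idle']`, since the sensor module sleeps through all but one of the $\tau$ frames.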
By slightly modifying , the expected energy storage charging power is calculated as $\Phi(E,\alpha,\Upsilon,h)/T_\text{frame} - (\sum_{m\in \{\text{rx},\text{act},\text{tx},\text{idle}\}} \xi_m(E)\cdot T_m + \xi_\text{idle}(E) \cdot (\tau-1) T_\text{frame})/(\tau T_\text{frame})$. In Fig. \[fig:harvconsex\], we can see that the expected energy storage charging power is very similar to the measured one. Fig. \[fig:modelval\] compares the expected and the measured stored energy variation over time. The expected stored energy variation is calculated as $(\Phi(E,\alpha,\Upsilon,h)-Q(E,1/\tau))/T_\text{frame}$. In calculating the expected stored energy variation, the stored energy $E$ is the only measured parameter and all other parameters and functions are given by the stored energy evolution model. To calculate the measured stored energy variation, we have measured the stored energy every second and have taken the difference of two consecutively obtained stored energy measurements. For Fig. \[fig:modelval\], the power attenuation is set to 26.89 dB and we change the parameters $(\alpha,\Upsilon\text{ (in Watt)},\tau)$ every ten seconds in the following sequence: (0,2,1), (1,2,1), (1,2,2), (1,1,2), (0.5,1,2), (0.2,1,2), (0.2,1,5), (0.2,1,10), (1,1,10). In this figure, we can see that the expected stored energy variation well approximates the measured one, which proves the validity of our stored energy evolution model.   In Figs. \[fig:conttime\] and \[fig:conttimeant\], we show the operation of the adaptive energy management algorithm in Section \[section:emalg\]. In these figures, we plot the amplifier duty cycle, the energy transfer power, the wake-up interval, the stored energy, and the amplifier power consumption over time when the adaptive energy management algorithm is used. 
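The measured side of this comparison is just a finite difference of consecutive stored-energy samples; as a trivial sketch (our own helper, with hypothetical sample values in the test):

```python
def measured_variation(E_samples, dt=1.0):
    """Stored-energy variation (W) from stored-energy samples (J) taken
    every `dt` seconds: the difference of consecutive samples over dt."""
    return [(b - a) / dt for a, b in zip(E_samples, E_samples[1:])]
```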
The algorithm parameters are $\alpha_\text{min}=0.1$, $\Upsilon_\text{max}=2300$ mW, $\tau_\text{tgt}=1$, $E_\text{tgt}=380$ mJ, $C_P=0$, $C_I=0.01$, $\beta_{\Upsilon}=100$, and $\beta_{\tau}=1$. We use the step attenuator-based channel environment for Fig. \[fig:conttime\] and the antenna-based channel environment for Fig. \[fig:conttimeant\]. We set $S_\text{tgt}=20$ mW for Fig. \[fig:conttime\] and $S_\text{tgt}=10$ mW for Fig. \[fig:conttimeant\]. For Fig. \[fig:conttime\], we set the power attenuation (in dB) every 100 seconds in the following sequence: 16.69, 20.66, 24.63, 28.62, 32.82, 16.69, 32.82, 24.63, 20.66, 16.69. For Fig. \[fig:conttimeant\], the distance (in meters) is changed every 100 seconds in the following sequence: 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 2.5, 1.0, 3.0, 2.0. In both figures, we can see that the energy management algorithm controls one of $\alpha$, $\Upsilon$, and $\tau$ at a time in order to keep the stored energy $E_i$ at $E_\text{tgt}$. As a result, we can see that the stored energy is maintained at around 380 mJ. Figs. \[fig:energymanagementresult\] and \[fig:contamppow\] compare the performance of the proposed energy management scheme and the no amplifier duty cycling scheme. The no amplifier duty cycling scheme fixes the amplifier duty cycle to one (i.e., $\alpha = 1$) while it controls $\Upsilon$ and $\tau$ in the same way as the proposed algorithm does. Fig. \[fig:energymanagementresult\] plots $\alpha$, $\Upsilon$, and $\tau$ after convergence according to the power attenuation. For Fig. \[fig:contprop20\], we set $S_\text{tgt}=20$ mW. In Fig. \[fig:contprop20\], we can see that $\Upsilon$ is controlled first, and then $\alpha$ and $\tau$ are controlled as the power attenuation becomes more severe. Fig. \[fig:contamppow\] shows the amplifier power consumption of the proposed energy management scheme and the no amplifier duty cycling scheme.
In this figure, we can see that the proposed scheme outperforms the no amplifier duty cycling scheme. When $S_\text{tgt}=20$ mW and the power attenuation is less than 23 dB, the proposed scheme consumes less than half of the power consumed by the no amplifier duty cycling scheme.

Conclusion {#section:conclusion}
==========

In this paper, we have built a full-fledged WPSN testbed and have conducted extensive experiments on the testbed. By using the testbed, we have obtained the parameters for the WPSN model and have validated the usefulness and practical importance of the model. Based on the stored energy evolution model, we have proposed the energy management scheme for the energy neutral operation of the sensor node. The experimental results demonstrated that the proposed scheme adaptively controls the WPSN to achieve energy neutrality and high-efficiency RF power transfer.

Proof of Theorem \[theorem:optimal\] {#proof:optimal}
====================================

If $Q(E_\text{tgt},r_\text{tgt}) \le \widehat{H}$, we can define $\widehat{\alpha} = Q(E_\text{tgt},r_\text{tgt})/\widehat{H}$ such that $0\le \widehat{\alpha} \le 1$. For such $\widehat{\alpha}$, we have $\Phi(E_\text{tgt},\widehat{\alpha},\widehat{\Upsilon},h) = \widehat{\alpha}\widehat{H} = Q(E_\text{tgt},r_\text{tgt})$. According to the definition of $\widehat{\Upsilon}$, we have $L(\widehat{\alpha},\widehat{\Upsilon},\widehat{\mu})=g(\widehat{\mu})\le \Omega(\alpha^*,\Upsilon^*)$. In addition, we also have $L(\widehat{\alpha},\widehat{\Upsilon},\widehat{\mu})=\Omega(\widehat{\alpha},\widehat{\Upsilon}) - \widehat{\mu}(\Phi(E_\text{tgt},\widehat{\alpha},\widehat{\Upsilon},h)- Q(E_\text{tgt},r_\text{tgt}))=\Omega(\widehat{\alpha},\widehat{\Upsilon})$. Therefore, we can conclude that $\Omega(\widehat{\alpha},\widehat{\Upsilon}) \le \Omega(\alpha^*,\Upsilon^*)$, which leads to $\widehat{\alpha}=\alpha^*$ and $\widehat{\Upsilon} = \Upsilon^*$.

[^1]: D.
Setiawan is with the Convergence Institute of Biomedical Engineering & Biomaterials, Seoul National University of Science and Technology, Korea (email: morethanubabe@gmail.com).

[^2]: A. A. Aziz is with the Dept. of Computer Science and Engineering, Seoul National University of Science and Technology, Korea (email: arif.abdul.aziz92@gmail.com).

[^3]: D. I. Kim and K. W. Choi are with the School of Information and Communication Engineering, Sungkyunkwan University (SKKU), Suwon, Korea (email: dikim@skku.ac.kr and kaewon.choi@gmail.com).

[^4]: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (2014R1A5A1011478).
---
abstract: 'In this project[^1], we first study the Gaussian-based hidden Markov random field (HMRF) model and its expectation-maximization (EM) algorithm. Then we generalize it to Gaussian mixture model-based hidden Markov random field. The algorithm is implemented in [MATLAB]{}. We also apply this algorithm to color image segmentation problems and 3D volume segmentation problems.'
author:
- |
  Quan Wang\
  Signal Analysis and Machine Perception Laboratory\
  Electrical, Computer, and Systems Engineering\
  Rensselaer Polytechnic Institute\
  [wangq10@rpi.edu]{}
bibliography:
- 'egbib.bib'
title: |
  GMM-Based Hidden Markov Random Field\
  for Color Image and 3D Volume Segmentation
---

Introduction
============

Markov random fields (MRFs) have been widely used for computer vision problems, such as image segmentation [@zhanglei], surface reconstruction [@surface] and depth inference [@depth]. Much of their success is attributed to efficient algorithms, such as Iterated Conditional Modes [@ICM], and to their consideration of both “data faithfulness” and “model smoothness” [@AGSM]. The HMRF-EM framework was first proposed for segmentation of brain MR images [@HMRF-EM]. For simplicity, we first assume that the image is 2D gray-level, and that the intensity distribution of each region to be segmented follows a Gaussian distribution. Given an image $\textbf{Y}=(y_1,\dots,y_N)$ where $N$ is the number of pixels and each $y_i$ is the gray-level intensity of a pixel, we want to infer a configuration of labels $\textbf{X}=(x_1,\dots,x_N)$ where $x_i \in L$ and $L$ is the set of all possible labels. In a binary segmentation problem, $L= \lbrace 0,1 \rbrace$.
According to the MAP criterion, we seek the labeling $\textbf{X}^\star$ which satisfies: $$\label{eq:MAP} \textbf{X}^\star=\underset{\textbf{X}}{\operatorname{argmax}} \; \lbrace P(\textbf{Y}|\textbf{X},\Theta)P(\textbf{X}) \rbrace .$$ The prior probability $P(\textbf{X})$ is a Gibbs distribution, and the joint likelihood probability is $$\begin{aligned} P(\textbf{Y}|\textbf{X},\Theta)&=&\prod\limits_i P(y_i|\textbf{X},\Theta) \nonumber \\ &=&\prod\limits_i P(y_i|x_i,\theta_{x_i}) ,\end{aligned}$$ where $P(y_i|x_i,\theta_{x_i})$ is a Gaussian distribution with parameters $\theta_{x_i}=(\mu_{x_i},\sigma_{x_i})$. In MRF problems, people usually learn the parameter set $\Theta=\lbrace \theta_l | l \in L \rbrace$ from the training data. For example, in image segmentation problems, prior knowledge of the intensity distributions of the foreground and the background might be consistent within a dataset, especially a domain-specific one. Thus, we can learn the parameters from some images that are manually labeled, and use these parameters to run the MRF to segment the other images. The major difference between MRF and HMRF is that, in HMRF, the parameter set $\Theta$ is learned in an unsupervised manner. In an HMRF image segmentation problem, there is no training stage, and we assume no prior knowledge is known about the foreground/background intensity distribution. Thus, a natural proposal for solving an HMRF problem is to use the EM algorithm, where the parameter set $\Theta$ and the label configuration $\textbf{X}$ are estimated alternately.

EM Algorithm for Parameters {#sec:EM}
===========================

We still use the 2D gray-level and Gaussian distribution assumption. We use the EM algorithm to estimate the parameter set $\Theta=\lbrace \theta_l | l \in L \rbrace$. We describe the EM algorithm by the following [@hmrf-em-image]: 1. *Start:* Assume we have an initial parameter set $\Theta^{(0)}$. 2.
*E-step:* At the $t$th iteration, we have $\Theta^{(t)}$, and we calculate the conditional expectation: $$\begin{aligned} \hspace{-5mm} Q(\Theta | \Theta^{(t)}) &=& E\left[ \ln P(\textbf{X},\textbf{Y}|\Theta) | \textbf{Y},\Theta^{(t)} \right] \nonumber \\ &=& \sum\limits_{\textbf{X}\in \chi} P(\textbf{X}|\textbf{Y},\Theta^{(t)}) \ln P(\textbf{X},\textbf{Y}|\Theta) ,\end{aligned}$$ where $\chi$ is the set of all possible configurations of labels. 3. *M-step:* Now maximize $Q(\Theta | \Theta^{(t)})$ to obtain the next estimate: $$\Theta^{(t+1)}=\underset{\Theta}{\operatorname{argmax}} \; Q(\Theta | \Theta^{(t)}) .$$ Then let $\Theta^{(t+1)} \rightarrow \Theta^{(t)}$ and repeat from the E-step. Let $G(z;\theta_l)$ denote a Gaussian distribution function with parameters $\theta_l=(\mu_l,\sigma_l)$: $$\label{eq:gaussian} G(z;\theta_l)=\dfrac{1}{\sqrt{2\pi\sigma_l^2}} \exp\left( -\dfrac{(z-\mu_l)^2}{2\sigma_l^2} \right) .$$ We assume that the prior probability can be written as $$\begin{aligned} P(\textbf{X})=\dfrac{1}{Z}\exp\left( -U(\textbf{X}) \right) ,\end{aligned}$$ where $U(\textbf{x})$ is the prior energy function. We also assume that $$\begin{aligned} P(\textbf{Y}|\textbf{X},\Theta)&=& \prod\limits_i P(y_i|x_i,\theta_{x_i}) \nonumber \\ &=& \prod\limits_i G(y_i;\theta_{x_i}) \nonumber \\ &=& \dfrac{1}{Z'}\exp\left( -U(\textbf{Y}|\textbf{X}) \right) .\end{aligned}$$ With these assumptions, the HMRF-EM algorithm is given below: 1. Start with initial parameter set $\Theta^{(0)}$. 2. Calculate the likelihood distribution $P^{(t)}(y_i|x_i,\theta_{x_i})$. 3. 
Use the current parameter set $\Theta^{(t)}$ to estimate the labels by MAP estimation: $$\begin{aligned} \textbf{X}^{(t)}&=&\underset{\textbf{X}\in \chi}{\operatorname{argmax}} \; \lbrace P(\textbf{Y}|\textbf{X},\Theta^{(t)})P(\textbf{X}) \rbrace \nonumber \\ &=&\underset{\textbf{X}\in \chi}{\operatorname{argmin}} \; \lbrace U(\textbf{Y}|\textbf{X},\Theta^{(t)})+U(\textbf{X}) \rbrace .\end{aligned}$$ The algorithm for the MAP estimation is discussed in Section \[section:MAP\]. 4. Calculate the posterior distribution for all $l\in L$ and all pixels $y_i$ using Bayes’ rule: $$\begin{aligned} P^{(t)}(l | y_i)=\dfrac{G(y_i;\theta_l)P(l | x_{N_i}^{(t)})}{P^{(t)}(y_i)} ,\end{aligned}$$ where $x_{N_i}^{(t)}$ is the neighborhood configuration of $x_i^{(t)}$, and $$P^{(t)}(y_i)=\sum\limits_{l\in L}G(y_i;\theta_l)P(l | x_{N_i}^{(t)}) .$$ Note here we have $$\begin{aligned} P(l | x_{N_i}^{(t)}) &=& \dfrac{1}{Z}\exp \left( -\sum\limits_{j\in N_i} V_c(l,x_j^{(t)})\right) .\end{aligned}$$ 5. Use $P^{(t)}(l | y_i)$ to update the parameters: $$\begin{aligned} \mu_l^{(t+1)}&=&\dfrac{\sum\limits_{i}P^{(t)}(l|y_i)y_i}{\sum\limits_{i}P^{(t)}(l|y_i)} \\ (\sigma_l^{(t+1)})^2&=& \dfrac{\sum\limits_{i}P^{(t)}(l|y_i)(y_i-\mu_l^{(t+1)})^2}{\sum\limits_{i}P^{(t)}(l|y_i)} .\end{aligned}$$

MAP Estimation for Labels {#section:MAP}
=========================

In the EM algorithm, we need to solve for $\textbf{X}^\star$ that minimizes the total posterior energy $$\label{eq:MAP2} \textbf{X}^{\star} =\underset{\textbf{X}\in \chi}{\operatorname{argmin}} \; \lbrace U(\textbf{Y}|\textbf{X},\Theta)+U(\textbf{X}) \rbrace$$ with given $\textbf{Y}$ and $\Theta$, where the likelihood energy (also called the unary potential) is $$\begin{aligned} \label{eq:like_energy} U(\textbf{Y}|\textbf{X},\Theta)&=& \sum\limits_{i} U(y_i|x_i,\Theta) \nonumber \\ &=& \sum\limits_{i} \left[ \dfrac{(y_i-\mu_{x_i})^2}{2\sigma_{x_i}^2}+\ln \sigma_{x_i} \right] .\end{aligned}$$ The prior energy function (also called the pairwise
potential) $U(\textbf{X})$ has the form $$U(\textbf{X})=\sum\limits_{c\in C} V_c(\textbf{X}) ,$$ where $V_c(\textbf{X})$ is the clique potential and $C$ is the set of all possible cliques. In the image domain, we assume that one pixel has at most 4 neighbors: the pixels in its 4-neighborhood. Then the clique potential is defined on pairs of neighboring pixels: $$\begin{aligned} \label{eq:Vc} V_c(x_i,x_j)=\dfrac{1}{2}(1-I_{x_i,x_j}) ,\end{aligned}$$ where $$I_{x_i,x_j}= \left\{\begin{array}{c} 0 \qquad \textrm{if $x_i \neq x_j$}\\ 1 \qquad \textrm{if $x_i = x_j$} \end{array}\right. .$$ Note that in Eq. (\[eq:Vc\]), the constant coefficient $1/2$ can be replaced by a variable coefficient $\beta$. We just follow [@HMRF-EM] in using the $1/2$ constant, which proves effective in many of our experiments. We developed an iterative algorithm to solve (\[eq:MAP2\]): 1. To start with, we have an initial estimate $\textbf{X}^{(0)}$, which can be from the previous loop of the EM algorithm. 2. \[item:MAP\_iter\] Provided $\textbf{X}^{(k)}$, for all $1\leq i\leq N$, we find $$\label{eq:MAP_iter} x_i^{(k+1)}= \underset{l \in L}{\operatorname{argmin}} \; \lbrace U(y_i | l)+\sum\limits_{j\in N_i} V_c(l,x_j^{(k)}) \rbrace .$$ 3. Repeat step \[item:MAP\_iter\] until $U(\textbf{Y}|\textbf{X},\Theta)+U(\textbf{X})$ stops changing significantly or a maximum $k$ is achieved.

GMM-Based HMRF
==============

In previous sections, we have been assuming that the intensity distribution of each region to be segmented follows a Gaussian distribution with parameters $\theta_{x_i}=(\mu_{x_i},\sigma_{x_i})$. However, this is a very strong hypothesis which is insufficient to model the complexity of the intensity distribution of real-life objects, especially for objects with multimodal distributions. A Gaussian mixture model (GMM), in contrast, is much more powerful for modeling complex distributions than a single Gaussian distribution.
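The iterative MAP algorithm above is essentially Iterated Conditional Modes. A minimal NumPy sketch (our own illustration, not the paper's MATLAB code): it updates all pixels simultaneously, a common vectorized variant of the per-pixel update, and `np.roll` wraps at the image border, a simplification compared to mirrored boundary handling:

```python
import numpy as np

def icm_map(y, mu, sigma, x0, n_iter=10, beta=0.5):
    """ICM for the MAP step: greedily relabel each pixel to minimize the
    unary Gaussian energy plus the 4-neighborhood clique potential."""
    x = x0.copy()
    L = len(mu)
    for _ in range(n_iter):
        # unary: U(y_i | l) = (y_i - mu_l)^2 / (2 sigma_l^2) + ln(sigma_l)
        energy = np.stack([(y - mu[l]) ** 2 / (2.0 * sigma[l] ** 2)
                           + np.log(sigma[l]) for l in range(L)])
        # pairwise: beta * (1 - I[l == x_j]) summed over the 4-neighborhood
        # (np.roll wraps around the border -- a simplification)
        for l in range(L):
            for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
                energy[l] += beta * (np.roll(x, shift, axis=axis) != l)
        x = np.argmin(energy, axis=0)
    return x
```

On a simple two-region image with well-separated means, a few iterations suffice for the labeling to stabilize.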
A Gaussian mixture model with $g$ components can be represented by parameters: $$\begin{aligned} \label{eq:gmm_para} \theta_l=\lbrace (\mu_{l,1},\sigma_{l,1},w_{l,1}),\dots,(\mu_{l,g},\sigma_{l,g},w_{l,g})\rbrace . \end{aligned}$$ Compared with Eq. (\[eq:gaussian\]), the GMM now has a weighted probability $$\begin{aligned} G_{\textrm{mix}}(z;\theta_l)=\sum\limits_{c=1}^g w_{l,c}G(z;\mu_{l,c},\sigma_{l,c}) .\end{aligned}$$ Now, the M-step of the EM algorithm described in Section \[sec:EM\] changes to a Gaussian mixture model fitting problem. The GMM fitting problem itself can also be solved using an EM algorithm. In the E-step, we determine which data should belong to which Gaussian component; in the M-step, we recompute the GMM parameters.

Experiment Results
==================

We use the above-mentioned GMM-based HMRF for two applications: color image segmentation and 3D volume segmentation. For each application, minor modifications need to be made.

Color Image Segmentation
------------------------

The difference between color image segmentation and gray-level image segmentation is that, for a color image, the pixel intensity is no longer a number, but a 3-dimensional vector of RGB values: $\textbf{Y}=(\textbf{y}_1,\dots,\textbf{y}_N)$, and $\textbf{y}_i=(y_{iR},y_{iG},y_{iB}){^\mathsf{T}}$. The parameters of a Gaussian mixture model now become $$\begin{aligned} \label{eq:gmm_para_rgb} \theta_l=\lbrace (\boldsymbol\mu_{l,1},\boldsymbol\Sigma_{l,1},w_{l,1}),\dots,(\boldsymbol\mu_{l,g},\boldsymbol\Sigma_{l,g},w_{l,g})\rbrace , \end{aligned}$$ which can be compared with Eq. (\[eq:gmm\_para\]). Also, the likelihood energy Eq. (\[eq:like\_energy\]) becomes $$\begin{aligned} \label{eq:like_energy_rgb} U(\textbf{Y}|\textbf{X},\Theta)&=& \sum\limits_{i} U(\textbf{y}_i|x_i,\Theta) \nonumber \\ &=& \sum\limits_{i} \left[ \dfrac{1}{2}(\textbf{y}_i-\boldsymbol\mu_{x_i}){^\mathsf{T}}\boldsymbol\Sigma_{x_i}^{-1} (\textbf{y}_i-\boldsymbol\mu_{x_i}) \right.
\nonumber \\ &&+ \left. \ln |\boldsymbol\Sigma_{x_i}|^{\frac{1}{2}} \right] .\end{aligned}$$ Example color image segmentation results are shown in Figures \[fig:color\_result\_1\], \[fig:color\_result\_2\], and \[fig:color\_result\_3\].

3D Volume Segmentation
----------------------

The only difference between 2D image segmentation and 3D image segmentation is the neighborhood system. In 2D images, we usually use the 4-neighborhood system or the 8-neighborhood system; in 3D images, we usually use the 6-neighborhood system or the 26-neighborhood system. The difference is shown in Figure \[fig:neighborhood\].

![Neighborhood system in 2D and 3D images. []{data-label="fig:neighborhood"}](neighborhood){width="48.00000%"}

To validate our algorithm for 3D volume segmentation, we generate a synthetic 3D image of size $50\times50\times50$ with a foreground sphere of radius $20$ at the center. The intensity of the background is $0$, and that of the foreground is $100$. Random noise uniformly distributed within $[0,120]$ is added to the entire image, at all positions. Thus, clustering methods such as k-means will not guarantee spatial continuity of the segmentation results. A comparison of k-means segmentation and HMRF segmentation is shown in Figure \[fig:3d\_result\]. With 10 EM iterations and 10 MAP iterations, while setting $g=1$ for the GMM, the 3D segmentation takes about 14 seconds on a 2.53 GHz Intel(R) Core(TM) i5 CPU.

Code Documentation
==================

We provide the name and usage of each file of our [MATLAB]{} implementation in Tables \[table:code1\] and \[table:code2\], including both color image segmentation and 3D volume segmentation.

  File                      Type       Usage
  ------------------------- ---------- ---------------------------------------------
                                       A color image segmentation example. Users can run this file directly.
                                       The k-means algorithm for 2D color images. This will generate an initial segmentation.
  [HMRF\_EM.m]{}            Function   The HMRF-EM algorithm.
  [MRF\_MAP.m]{}            Function   The MAP algorithm.
  [gaussianBlur.m]{}        Function   Blurring an image using Gaussian kernel.
  [gaussianMask.m]{}        Function   Obtaining the mask of Gaussian kernel.
  [ind2ij.m]{}              Function   Index to 2D image coordinates conversion.
  [get\_GMM.m]{}            Function   Fitting Gaussian mixture model to data.
  [BoundMirrorExpand.m]{}   Function   Expanding an image.
  [BoundMirrorShrink.m]{}   Function   Shrinking an image.
  [385028.jpg]{}            Image      An example input color image.

  File                        Type              Usage
  --------------------------- ----------------- ---------------------------------------------
                                                A 3D volume segmentation example. Users can run this file directly.
  [generate\_3D\_image.m]{}   Runnable script   Generating the synthetic input 3D image.
                                                The k-means algorithm for 3D volumes. This will generate an initial segmentation.
  [HMRF\_EM.m]{}              Function          The HMRF-EM algorithm.
  [MRF\_MAP.m]{}              Function          The MAP algorithm.
  [ind2ijq.m]{}               Function          Index to 3D image coordinates conversion.
  [get\_GMM.m]{}              Function          Fitting Gaussian mixture model to data.
  [Image.raw]{}               Raw 3D image      An example input raw 3D image.

Discussion
==========

In this project, we have studied the hidden Markov random field and its expectation-maximization algorithm. The basic idea of HMRF is combining “data faithfulness” and “model smoothness”, which is very similar to active contours [@snakes], gradient vector flow (GVF) [@gvf], graph cuts [@gc], and random walks [@rw]. We also combined the HMRF-EM framework with Gaussian mixture models, and applied it to color image segmentation and 3D volume segmentation problems. The algorithms are implemented in [MATLAB]{}. In the color image segmentation experiments, we can see that the HMRF segmentation results are much smoother than the results of direct k-means clustering. In the 3D volume segmentation results, the segmented object is much closer to the original shape than with clustering.
This is because the Markov random field imposes strong spatial constraints on the segmented regions, while clustering-based segmentation only considers pixel/voxel intensities.

[^1]: This work originally appears as the final project of Prof. [Qiang Ji](http://www.ecse.rpi.edu/~qji/)’s course *Introduction to Probabilistic Graphical Models* at RPI.
[**Schröder Paths and Pattern Avoiding Partitions** ]{}\
Sherry H.F. Yan\
Department of Mathematics, Zhejiang Normal University\
Jinhua 321004, P.R. China\
huifangyan@hotmail.com

[**Abstract.** ]{} In this paper, we show that both $12312$-avoiding partitions and $12321$-avoiding partitions of the set $[n+1]$ are in one-to-one correspondence with Schröder paths of semilength $n$ without peaks at even level. As a consequence, the refined enumeration of $12312$-avoiding (resp. $12321$-avoiding) partitions according to the number of blocks can be reduced to the enumeration of certain Schröder paths according to the number of peaks. Furthermore, we get the enumeration of irreducible $12312$-avoiding (resp. $12321$-avoiding) partitions, which are closely related to skew Dyck paths.

[**AMS Classification:**]{} 05A15, 05A19

[**Keywords:**]{} Schröder path, pattern avoiding partition, skew Dyck path.

Introduction and notations
==========================

A [*Schröder path*]{} of semilength $n$ is a lattice path on the plane from $(0,0)$ to $(2n,0)$ that does not go below the $x$-axis and consists of up steps $U=(1,1)$, down steps $D=(1,-1)$, and horizontal steps $H=(2,0)$. They are counted by the larger Schröder numbers (A006318 in [@Seq]). A [*UH-free*]{} Schröder path is a Schröder path with no up step immediately followed by a horizontal step. A UH-free Schröder path of semilength $12$ is illustrated in Figure \[fig1\].
[Figure \[fig1\]: a UH-free Schröder path of semilength $12$.]

An up step followed by a down step in a path is called a [*peak*]{}. The [*level*]{} of an up step (a horizontal step) is defined as the larger $y$ coordinate of the step. The [*level*]{} of a peak is defined as the level of the up step in the peak. Denote by $\mathcal{SE}_n$ and $\mathcal{SH}_n$ the set of Schröder paths of semilength $n$ without peaks at even level and the set of UH-free Schröder paths of semilength $n$, respectively. A partition $\pi$ of the set $[n]=\{1,2,\ldots, n\}$ is a collection $B_1, B_2, \ldots, B_k$ of nonempty disjoint subsets of $[n]$. The elements of a partition are called blocks. We assume that $B_1, B_2, \ldots, B_k$ are listed in the increasing order of their minimum elements, that is, $\min B_1 < \min B_2 < \cdots < \min B_k$. A partition $\pi$ of $[n]$ with $k$ blocks can also be represented by a sequence $\pi_1\pi_2\ldots\pi_n$ on the set $\{1, 2, \ldots, k\}$ such that $\pi_i=j$ if and only if $i\in B_j$. Such a representation is called the [*Davenport-Schinzel sequence*]{} or the [*canonical sequential form*]{}. In this paper, we will always represent a partition by its canonical sequential form. In the terminology of canonical sequential forms, we say that a partition $\pi$ [*avoids*]{} a partition $\tau$, or is $\tau$-[*avoiding*]{}, if there is no subsequence of $\pi$ which is order-isomorphic to $\tau$. In this context, $\tau$ is usually called a pattern.
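Encoding a path as a string over {'U', 'D', 'H'}, these definitions translate directly into code (a small sketch with our own helper names):

```python
def peak_levels(path):
    """Levels of all peaks: a peak is an up step immediately followed by a
    down step, and its level is the level of that up step."""
    levels, level, prev = [], 0, ''
    for s in path:
        if s == 'U':
            level += 1
        elif s == 'D':
            if prev == 'U':          # a peak ends here
                levels.append(level)
            level -= 1
        # an 'H' step does not change the level
        prev = s
    return levels

def is_uh_free(path):
    """True if no up step is immediately followed by a horizontal step."""
    return 'UH' not in path
```

A path lies in $\mathcal{SE}_n$ exactly when `all(lv % 2 == 1 for lv in peak_levels(path))`, and in $\mathcal{SH}_n$ exactly when `is_uh_free(path)`.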
The set of $\tau$-avoiding partitions of $[n]$ is denoted $\mathcal{P}_n(\tau)$. The enumeration of pattern-avoiding partitions has received extensive attention from several authors; see [@chen2; @Goyt; @Jet; @Mansour; @Sagan] and references therein. By using the kernel method, Mansour and Severini [@Mansour] deduced that the number of $12312$-avoiding partitions of $[n+1]$ is equal to the number of Schröder paths of semilength $n$ without peaks at even level (A007317 in [@Seq]). Recently, Jelinek and Mansour [@Jet] proved that the cardinality of $\mathcal{P}_n(12312)$ is equal to that of $\mathcal{P}_n(12321)$. In this paper, we will provide a bijection between the set of $12312$-avoiding partitions of $[n+1]$ and the set of UH-free Schröder paths of semilength $n$. By making a simple variation of this bijection, we get a bijection between the set of $12321$-avoiding partitions of $[n+1]$ and the set of UH-free Schröder paths of semilength $n$. A bijection between the set of UH-free Schröder paths of semilength $n$ and the set of Schröder paths of semilength $n$ without peaks at even level is also provided, which leads to a bijection between $12312$-avoiding (resp. $12321$-avoiding) partitions of $[n+1]$ and the set of Schröder paths of semilength $n$ without peaks at even level. As a consequence, the refined enumeration of $12312$-avoiding (resp. $12321$-avoiding) partitions according to the number of blocks can be reduced to the enumeration of certain Schröder paths according to the number of peaks. Furthermore, we also get the enumeration of irreducible $12312$-avoiding (resp. $12321$-avoiding) partitions, which are closely related to skew Dyck paths.

Bijection between $\mathcal{P}_{n+1}(12312)$ and $\mathcal{SE}_n$
=================================================================

In this section, we will provide a bijection between the set of $12312$-avoiding (resp. $12321$-avoiding) partitions of $[n+1]$ and the set of UH-free Schröder paths of semilength $n$.
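For small $n$, $\tau$-avoidance of a canonical sequential form can be checked by brute force over subsequences; a sketch (our own helper, exponential in general but fine for illustration):

```python
from itertools import combinations

def order_isomorphic(a, b):
    """Two equal-length sequences are order-isomorphic if corresponding
    entries compare the same way (equalities included)."""
    return all((a[i] < a[j]) == (b[i] < b[j]) and
               (a[i] == a[j]) == (b[i] == b[j])
               for i in range(len(a)) for j in range(len(a)))

def avoids(seq, pattern):
    """True if no subsequence of seq is order-isomorphic to pattern."""
    return not any(order_isomorphic(sub, pattern)
                   for sub in combinations(seq, len(pattern)))
```

For instance, the canonical form $11232343411$ avoids $12312$ but contains an occurrence of $12321$.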
A bijection between the set of UH-free Schröder paths of semilength $n$ and the set of Schröder paths of semilength $n$ without peaks at even level is also given, which leads to a bijection between the set of $12312$-avoiding (resp. $12321$-avoiding) partitions of $[n+1]$ and the set of Schröder paths of semilength $n$ without peaks at even level. Let $\pi$ be a nonempty partition of $[n+1]$ with $k$ blocks. Then $\pi$ can be uniquely decomposed as $$\label{eq.1} 1w_1 2w_2\ldots iw_i\ldots kw_k,$$ where $1,2,\ldots, k$ are the [*left-to-right maxima*]{} of $\pi$ and each $w_i$ is a possibly empty word on $[i]$. For $1\leq i\leq k$, denote by $w_i\setminus \{i\}$ the word obtained from $w_i$ by deleting all the $i$'s. The following property of 12312-avoiding (resp. 12321-avoiding) partitions can be verified easily and we omit the proof here. \[lem1\] A partition $\pi$ is a 12312-avoiding (resp. 12321-avoiding) partition with $k$ blocks if and only if the word $w_1\setminus \{1\}w_2\setminus \{2\}\ldots w_k\setminus \{k\}$ is in weakly decreasing (increasing) order. Now, we proceed to construct a map $\sigma$ from $\mathcal{P}_{n+1}(12312)$ to $\mathcal{SH}_n$. Given a $12312$-avoiding partition $\pi$ of $[n+1]$ with $k$ blocks, if $\pi=1$, then let $\sigma(\pi)$ be the empty path. Otherwise, suppose that $\pi$ is decomposed as (\[eq.1\]) and, for $i=1,2,\ldots, k-1$, denote by $d_i$ the number of occurrences of $i$ to the right of the first occurrence of $i+1$. We read the decomposition from left to right and generate a path $\sigma(\pi)$ as follows: when a left-to-right maximum $i$ ($i\geq 2$) is read, we adjoin $d_{i-1}+1$ successive up steps followed by one down step; when an element less than $i$ in a word $w_i$ ($1\leq i\leq k$) is read, we adjoin one down step; when an element $i$ in a word $w_i$ ($1\leq i\leq k$) is read, we adjoin one horizontal step.
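The construction of $\sigma$ reads off directly as code (our own Python encoding, with steps written as 'U', 'D', 'H'; the input is the canonical sequential form of the partition):

```python
def sigma_path(pi):
    """Sketch of the map sigma described above.

    pi: canonical sequential form of a partition, e.g. [1, 1, 2, ...].
    Returns the step sequence of the corresponding Schroeder path.
    """
    k = max(pi)
    first = {}
    for idx, v in enumerate(pi):
        first.setdefault(v, idx)       # first occurrence of each value
    # d[i]: occurrences of i to the right of the first occurrence of i+1
    d = {i: sum(1 for idx, v in enumerate(pi) if v == i and idx > first[i + 1])
         for i in range(1, k)}
    steps, cur_max = [], 0
    for v in pi:
        if v == cur_max + 1:           # a left-to-right maximum
            cur_max = v
            if v >= 2:
                steps += ['U'] * (d[v - 1] + 1) + ['D']
        elif v == cur_max:             # an occurrence of i inside w_i
            steps.append('H')
        else:                          # an element smaller than i
            steps.append('D')
    return ''.join(steps)
```

For the example below, $\pi=11232343411$, this produces the semilength-$10$ UH-free path of Figure \[UH-free\].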
Lemma \[lem1\] ensures that the obtained path $\sigma(\pi)$ is a well-defined UH-free Schröder path of semilength $n$. For instance, the $12312$-avoiding partition $\pi=11232343411$ of $[11]$ can be decomposed as $1w_12w_23w_34w_4$, where $w_1=1, w_3=23, w_4=3411, d_1=2, d_2=1, d_3=1,$ and $w_2$ is empty. The corresponding UH-free Schröder path $\sigma(\pi)$ of semilength $10$ is illustrated in Figure \[UH-free\].

[Figure \[UH-free\]: the UH-free Schröder path $\sigma(\pi)$ of semilength $10$.]

Conversely, we can get a $12312$-avoiding partition of $[n+1]$ from a UH-free Schröder path $P$ of semilength $n$. If $P$ is empty, then let $\sigma^{-1}(P)=1$; otherwise, suppose that $P$ has $k$ peaks. Then we can get a word $\sigma^{-1}(P)$ by the following procedure.

- Firstly, add a peak at the very beginning of $P$ and denote by $P'$ the obtained path;

- Secondly, label the up steps in the peaks of $P'$ with $1,2,\ldots, k+1$ from left to right, and label each remaining up step $s$ and each horizontal step $h$ with the largest label appearing to its left;

- Thirdly, if a down step $s$ is in a peak, label $s$ with the same label as the up step in that peak; otherwise, suppose that $L^{U}$ (resp. $L^{D}$) is the multiset of all the labels of the up (resp. down) steps to the left of the step $s$.
Then label $s$ with the maximum element of the multiset obtained from $L^{U}$ by removing all the elements of $L^{D}$; - Lastly, let $\sigma^{-1}(P)$ be the word obtained by reading the labels of all the down steps and horizontal steps of $P'$ successively. Obviously, the obtained word $\sigma^{-1}(P)$ is a $12312$-avoiding partition of $[n+1]$. An example of the reverse map of $\sigma$ is shown in Figure \[12312\]. (Figure \[12312\]: a UH-free Schröder path $P$, the labelled path $P'$ obtained by adding an initial peak, and the resulting partition $\pi=1122232323143$.) \[sigma\] The map $\sigma$ is a
bijection between the set of $12312$-avoiding partitions of $[n+1]$ and the set of UH-free Schröder paths of semilength $n$. We define a map $\phi$ from $\mathcal{P}_{n+1}(12321)$ to $\mathcal{SH}_n$ in the same way as the map $\sigma$, and define the reverse of $\phi$ in the same way as the reverse of $\sigma$, except that in Step 3 we label a down step $s$ not in a peak with the minimum element of the multiset obtained from $L^{U}$ by removing all the elements of $L^{D}$. It is easy to check that $\phi$ is a bijection between the set of $12321$-avoiding partitions of $[n+1]$ and the set of UH-free Schröder paths of semilength $n$. An example of the reverse map of $\phi$ is illustrated in Figure \[12321\]. (Figure \[12321\]: a UH-free Schröder path $P$, the labelled path $P'$ obtained by adding an initial peak, and the resulting partition $\pi=1122231323243$.) \[phi\] The map $\phi$ is a bijection between the set of UH-free Schröder paths of semilength $n$ and $\mathcal{P}_{n+1}(12321)$. In order to get a bijection between $\mathcal{P}_{n+1}(12312)$ and $\mathcal{SE}_n$, we need a bijection between $\mathcal{SH}_n$ and $\mathcal{SE}_n$. Now we proceed to construct a map $\psi$ from $\mathcal{SH}_n$ to $\mathcal{SE}_n$. Given a UH-free Schröder path $P\in \mathcal{SH}_n$, if it is empty, then let $\psi(P)$ be the empty path. Otherwise, we obtain $\psi(P)$ recursively as follows: - If $P=HP'$, then let $\psi(P)=H\psi(P')$, where $P'$ is a possibly empty UH-free Schröder path; - If $P=UDP'$, then let $\psi(P)=UD\psi(P')$, where $P'$ is a possibly empty UH-free Schröder path; - If $P=U^kDP_1DP_2\ldots DP_k$, where $k\geq 2$, $U^k$ denotes $k$ consecutive up steps and, for $1\leq i\leq k$, each $P_i$ is a possibly empty UH-free Schröder path, then let $\psi(P)=UP'_1P'_2\ldots P'_{k-1}D\psi(P_k)$, where, for $1\leq i\leq k-1$, $P'_{i}=H$ if $P_i$ is empty and $P'_i=U\psi(P_i)D$ otherwise. Obviously, the obtained path $\psi(P)$ is a Schröder path of semilength $n$ without peaks at even level. It is easy to check that the map $\psi$ is reversible. For simplicity, we omit the description of the reverse map of $\psi$.
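The recursion defining $\psi$ translates directly into code. The following Python function is an illustration of ours (not from the paper): paths are strings over $U$, $D$, $H$, and the components $P_1,\ldots,P_k$ of the third case are split off by tracking heights.

```python
# Illustrative sketch (ours): the map psi on UH-free Schroeder paths,
# represented as strings over U, D, H.
def psi(p):
    if not p:
        return ''
    if p[0] == 'H':                       # P = H P'
        return 'H' + psi(p[1:])
    k = len(p) - len(p.lstrip('U'))       # length of the initial run of up steps
    if k == 1:                            # P = U D P'
        return 'UD' + psi(p[2:])
    # P = U^k D P_1 D P_2 ... D P_k: recover P_1, ..., P_k by height bookkeeping
    parts, cur, h, base = [], '', k - 1, k - 1
    for step in p[k + 1:]:
        if step == 'D' and h == base:     # the separating D dropping below the baseline
            parts.append(cur)
            cur, h, base = '', h - 1, base - 1
        else:
            cur += step
            h += {'U': 1, 'D': -1, 'H': 0}[step]
    parts.append(cur)                     # the last component P_k
    out = 'U'
    for q in parts[:-1]:                  # P_1, ..., P_{k-1}
        out += 'H' if not q else 'U' + psi(q) + 'D'
    return out + 'D' + psi(parts[-1])
```

For example, $\psi$ sends $UUUDDHUDDUD$ to $UHUHUDDDUD$, whose two peaks sit at the odd levels $3$ and $1$.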
(Figure \[sch\]: the map $\psi$ sends the UH-free Schröder path $UUUDDHUDDUD$ to the Schröder path $UHUHUDDDUD$, which has no peaks at even level.) \[sch\] The map $\psi$ is a bijection between the set of UH-free Schröder paths of semilength $n$ and the set of Schröder paths of semilength $n$ without peaks at even level. Combining Theorems \[sigma\], \[phi\] and \[sch\], we have the following result. The map $\psi\cdot \sigma $ (resp. $\psi\cdot \phi $) is a bijection between the set of $12312$-avoiding (resp. $12321$-avoiding) partitions of $[n+1]$ and the set of Schröder paths of semilength $n$ without peaks at even level. Refined enumerations ==================== In this section, we aim to get the refined enumeration of $12312$-avoiding (resp. $12321$-avoiding) partitions according to the number of blocks. By restricting the peaks in Schröder paths, we also obtain the enumeration of irreducible $12312$-avoiding (resp. $12321$-avoiding) partitions. From the construction of the maps $\sigma$ and $\phi$, we see that each block apart from the first block of a $12312$-avoiding (resp. $12321$-avoiding) partition $\pi$ gives rise to a peak in its corresponding UH-free Schröder path $\sigma(\pi)$ (resp. $\phi(\pi)$). Hence, we get the following result. \[coro1\] Let $\pi$ be a $12312$-avoiding (resp. $12321$-avoiding) partition of $[n+1]$ with $k+1$ blocks. Then $\sigma(\pi)$ (resp. $\phi(\pi)$) is a UH-free Schröder path of semilength $n$ with $k$ peaks.
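Corollary \[coro1\] can be confirmed by brute force for small $n$. The sketch below is ours: it represents partitions by their canonical words (restricted growth strings) and compares the block statistic on $12312$-avoiding partitions of $[n+1]$ with the peak statistic on UH-free Schröder paths of semilength $n$.

```python
# Brute-force check (ours) of the correspondence "k+1 blocks <-> k peaks".
from collections import Counter
from itertools import combinations

def rgs(n):
    """All canonical words (restricted growth strings) of length n."""
    words = [[1]]
    for _ in range(n - 1):
        words = [w + [v] for w in words for v in range(1, max(w) + 2)]
    return words

def contains_12312(w):
    """True iff w has a subsequence order-isomorphic to 12312 (i.e. a b c a b)."""
    return any(w[a] == w[d] and w[b] == w[e] and w[a] != w[b]
               and w[c] not in (w[a], w[b])
               for a, b, c, d, e in combinations(range(len(w)), 5))

def uh_free_paths(n):
    """All UH-free Schroeder paths of semilength n (no H right after a U)."""
    def build(p, h, rem):
        if rem == 0:
            yield p + 'D' * h                      # forced descent to the x-axis
            return
        if not p or p[-1] != 'U':
            yield from build(p + 'H', h, rem - 1)
        yield from build(p + 'U', h + 1, rem - 1)
        if h > 0:
            yield from build(p + 'D', h - 1, rem)
    return list(build('', 0, n))

for n in range(1, 5):
    parts = [w for w in rgs(n + 1) if not contains_12312(w)]
    blocks = Counter(max(w) for w in parts)        # number of blocks per partition
    peaks = Counter(p.count('UD') for p in uh_free_paths(n))
    assert len(parts) == len(uh_free_paths(n))
    assert all(blocks[k + 1] == peaks[k] for k in peaks)
```

For $n=4$ both sides give $51$ objects ($52$ partitions of $[5]$ minus the single one equal to the pattern $12312$ itself), distributed as $1,15,24,10,1$ over $k=0,\ldots,4$ peaks.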
A [*Dyck path*]{} of semilength $n$ is a lattice path in the plane from $(0,0)$ to $(2n,0)$ that does not go below the $x$-axis and consists of up steps $U=(1,1)$ and down steps $D=(1,-1)$. The number of Dyck paths of semilength $n$ with $k$ peaks is counted by the Narayana number $$N_{n,k}={1\over n}{n\choose k}{n\choose k-1}.$$ Note that any UH-free Schröder path of semilength $n$ with $k$ peaks can be obtained from a Dyck path of semilength $j$ $(0\leq j\leq n)$ with $k$ peaks by inserting $n-j$ horizontal steps into the positions after down steps and the position at the very beginning of the Dyck path. The number of such arrangements is equal to ${n\choose j}$. Hence, the number of UH-free Schröder paths of semilength $n$ with $k$ peaks is $\sum_{j=k}^{n}{1\over j}{j\choose k}{j\choose k-1}{n\choose j}$. From Corollary \[coro1\], we get the following result. The number of $12312$-avoiding (resp. $12321$-avoiding) partitions of $[n+1]$ with $k+1$ ($k\geq 1$) blocks is equal to $$\sum_{j=k}^{n}{1\over j}{j\choose k-1}{j\choose k} {n\choose j}.$$ A partition $P$ of $[n]$ is called an [*irreducible partition*]{} if for any $m\in [n-1]$, $P$ cannot be reduced to two smaller partitions $P_1$ and $P_2$ such that $P_1$ is a partition of $[m]$ and $P_2$ is a partition of $\{m+1, m+2, \ldots, n\}$. Irreducible partitions have been studied by Lehner [@Le]. In fact, a partition $\pi$ of $[n]$ is irreducible if and only if for any element $i\in [n]$ there is at least one occurrence of an element $j$ which is less than $i$ and lies to the right of the first occurrence of $i$. Hence, by the construction of the maps $\sigma$ and $\phi$, we see that if $\pi$ is irreducible, then its corresponding Schröder path $\sigma(\pi)$ (resp. $\phi(\pi)$) has no peaks at level one. The map $\sigma$ (resp. $\phi$) is a bijection between the set of irreducible $12312$-avoiding (resp.
$12321$-avoiding) partitions of $[n+1]$ and the set of UH-free Schröder paths of semilength $n$ without peaks at level one. Denote by $\mathcal{SH'}_n$ the set of UH-free Schröder paths of semilength $n$ without peaks at level one. Let $s_n$ and $s'_n$ denote the cardinalities of $\mathcal{SH}_n$ and $\mathcal{SH'}_n$, respectively, and let $f(x)=\sum_{n=0}^{\infty}s_nx^n$ and $f'(x)=\sum_{n=0}^{\infty}s'_nx^n$, where $s_0=1$ and $s'_0=1$. Then it is easy to get the following recurrence relations: $$f(x)=1+2xf(x)+xf(x)(f(x)-1-xf(x)),$$ and $$f'(x)=1+xf'(x)+xf'(x)(f(x)-1-xf(x)).$$ Hence, we have $$f(x)={1-x-\sqrt{1-6x+5x^2}\over 2(x-x^2)},$$ and $$f'(x)={1\over 1-x(1-x)f(x)}={2\over 1+x+\sqrt{1-6x+5x^2} },$$ which is the generating function for skew Dyck paths of semilength $n$ ending with a down step, see [@Seq A033321]. A [*skew Dyck path*]{} is a path in the first quadrant which begins at the origin, ends on the $x$-axis, and consists of steps $U=(1,1)$, $D=(1,-1)$, and $L=(-1,-1)$ such that it never lies below the $x$-axis and up and left steps do not overlap. The number of irreducible $12312$-avoiding (resp. $12321$-avoiding) partitions of $[n+1]$ is equal to the number of skew Dyck paths of semilength $n$ ending with a down step. [99]{} W.Y.C. Chen, T. Mansour and S.H.F. Yan, Matchings avoiding partial patterns, [*Electron. J. Combin.*]{} [**13**]{} (2006) R112. A.M. Goyt, Avoidance of partitions of a three-element set, [*Adv. Appl. Math.*]{} [**41**]{} (2008) 95–114. V. Jelinek, T. Mansour, On pattern-avoiding partitions, [*Electron. J. Combin.*]{} [**15**]{} (2008) R39. F. Lehner, Free cumulants and enumeration of connected partitions, [*Europ. J. Combin.*]{} [**23**]{} (2002) 1025–1031. T. Mansour, S. Severini, Enumeration of $(k, 2)$-noncrossing partitions, [*Discrete Math.*]{} [**308**]{} (2008) 4570–4577. B.E. Sagan, Pattern avoidance in set partitions, arXiv:math.CO/0604292. N.J.A.
Sloane, The On-Line Encyclopedia of Integer Sequences, http://www.research.att.com/~njas/sequences.
--- abstract: 'Two strings $x$ and $y$ over $\Sigma \cup \Pi$ of equal length are said to *parameterized match* (*p-match*) if there is a renaming bijection $f:\Sigma \cup \Pi \rightarrow \Sigma \cup \Pi$ that is the identity on $\Sigma$ and transforms $x$ to $y$ (or vice versa). The *p-matching* problem is to look for substrings in a text that p-match a given pattern. In this paper, we propose *parameterized suffix automata* (*p-suffix automata*) and *parameterized directed acyclic word graphs* (*PDAWGs*), which are the p-matching versions of suffix automata and DAWGs. While suffix automata and DAWGs are equivalent for standard strings, we show that p-suffix automata can have $\Theta(n^2)$ nodes and edges but PDAWGs have only $O(n)$ nodes and edges, where $n$ is the length of an input string. We also give an $O(n |\Pi| \log (|\Pi| + |\Sigma|))$-time, $O(n)$-space algorithm that builds the PDAWG in a left-to-right online manner. We then show that an *implicit* representation of the PDAWG can be built in $O(n \log (|\Pi| + |\Sigma|))$ time and $O(n)$ space from left to right. As a byproduct, it is shown that the *parameterized suffix tree* for the reversed string can also be built in the same time and space, in a right-to-left online manner. We also discuss *parameterized compact DAWGs*.' author: - Katsuhito Nakashima - Noriki Fujisato - Diptarama Hendrian - Yuto Nakashima - Ryo Yoshinaka - Shunsuke Inenaga - Hideo Bannai - Ayumi Shinohara - Masayuki Takeda bibliography: - 'ref.bib' title: | DAWGs for parameterized matching:\ online construction and related indexing structures ---
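As a concrete illustration of the p-match relation defined in the abstract above, here is a small Python sketch of ours (independent of the paper's indexing structures) that decides whether two strings p-match by building the renaming bijection on the fly in a single pass.

```python
# Illustrative sketch (ours): decide the p-matching relation directly from
# its definition, maintaining forward and backward renaming maps.
def p_match(x, y, static):
    """True iff some bijection that fixes every symbol in `static`
    maps x onto y position by position."""
    if len(x) != len(y):
        return False
    fwd, bwd = {}, {}
    for a, b in zip(x, y):
        if (a in static) != (b in static):
            return False                  # static symbols must align with static symbols
        if a in static:
            if a != b:                    # f is the identity on the static alphabet
                return False
        elif fwd.setdefault(a, b) != b or bwd.setdefault(b, a) != a:
            return False                  # parameter renaming must stay a bijection
    return True
```

For instance, with static alphabet $\{a\}$ the strings $uavau$ and $wavaw$ p-match (via $u\mapsto w$, $v\mapsto v$), while $uav$ and $uau$ do not, since $u$ and $v$ would both have to map to $u$.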
--- abstract: | In this paper we study the Riesz transform on complete and connected Riemannian manifolds $M$ with a certain spectral gap in the $L^2$ spectrum of the Laplacian. We show that on such manifolds the Riesz transform is $L^p$ bounded for all $p \in (1,\infty)$. This generalizes a result by Mandouvalos and Marias and extends a result by Auscher, Coulhon, Duong, and Hofmann to the case where zero is an isolated point of the $L^2$ spectrum of the Laplacian.\ [*Keywords:*]{} Riesz transform on Riemannian manifolds, Spectral gap. author: - | Lizhen Ji[^1]\ [Department of Mathematics, University of Michigan]{} - | Peer Kunstmann[^2]\ [Institut für Analysis, Universität Karlsruhe (TH)]{} - | Andreas Weber[^3]\ [Institut für Algebra und Geometrie, Universität Karlsruhe (TH)]{} bibliography: - 'dissertation.bib' - 'hypercyclic.bib' - 'symmetricSpaces.bib' title: Riesz Transform on Locally Symmetric Spaces and Riemannian Manifolds with a Spectral Gap --- Introduction and Main Result ============================ Investigations concerning the question whether the Riesz transform $ \nabla \Delta^{-1/2}$ on a general complete Riemannian manifold $M$ is a bounded operator from $L^p(M)$ to the space of $L^p$ vector fields, $1< p < \infty,$ began with the paper [@MR705991] by Strichartz (note that we denote by $\nabla$ the gradient and by $\Delta$ the Laplace-Beltrami operator on $M$). He proved in this paper that the $L^2$ Riesz transform is an isometry for any complete Riemannian manifold $M$ and, furthermore, he showed boundedness on $L^p, 1<p<\infty$, if $M$ is a rank one symmetric space of non-compact type. In general, however, the Riesz transform need not be bounded for all $p > 1$; see e.g. [@MR1458299], where an example of a Riemannian manifold $M$ is given such that the Riesz transform is unbounded on $L^p$ for all $p>2$.
Note that the $L^p$ boundedness of the Riesz transform on a Riemannian manifold is equivalent to the existence of a constant $C_p>0$ such that $$\| |\nabla f| \|_{L^p} \leq C_p \| \Delta^{1/2}f \|_{L^p}$$ holds for all $f\in C_0^{\infty}(M)$. In [@Mandouvalos:2009yq] Mandouvalos and Marias studied the Riesz transform on certain locally symmetric spaces. Inspired by their paper and [@MR2119242], we show here how the $L^p$ boundedness of the Riesz transform for a larger class of connected Riemannian manifolds $M$ can be obtained. Our only assumptions are a certain spectral gap in the $L^2$ spectrum of the Laplacian and the following properties of $M$: - The Riemannian manifold $M$ satisfies the [*exponential growth property*]{}: For all $r_0>0, x\in M, \theta >1,$ and $0< r < r_0$ the volume $V(x,r)$ of a ball with center $x$ and radius $r$ satisfies $$\label{exponential growth property} V(x,\theta r) \leq Ce^{c\theta}V(x,r)$$ for some constants $C\geq 0, c>0$ that only depend on $r_0$. Note that this condition implies the local volume doubling property, cf. [@MR2119242]. - If $p(t,x,y)$ denotes the heat kernel of the heat semigroup $e^{-t\Delta}: L^2(M) \to L^2(M)$, we assume that there exists $C > 0$ such that for all $x,y \in M$ and all $t \in (0,1)$ we have $$\label{DUB} p(t,x,x) \leq \frac{C}{V(x,\sqrt{t})}$$ and $$\label{gradient} \left|\nabla_x p(t,x,y)\right| \leq \frac{C}{\sqrt{t}\,V(y,\sqrt{t})}.$$ More precisely, we prove the following theorem.
\[main theorem\] Let $M$ denote a complete and connected Riemannian manifold satisfying properties (\[exponential growth property\]), (\[DUB\]), and (\[gradient\]) from above, and assume that there is a constant $a>0$ such that the following condition for the $L^2$ spectrum $\sigma(\Delta)$ of $\Delta$ holds: $$\sigma(\Delta) \subset \{0\} \cup [a,\infty).$$ Then there exist for any $p\in (1,\infty)$ constants $c_p,C_p >0$ such that $$\label{inequality Riesz} c_p \| \Delta^{1/2}f \|_{L^p} \leq \| |\nabla f| \|_{L^p} \leq C_p \| \Delta^{1/2}f \|_{L^p}$$ for any $f\in C_0^{\infty}(M)$. This result extends [@MR2119242 Theorem 1.9]. The new feature is that we also allow $0\in \sigma(\Delta)$. Note that similar ideas as in our proof have been used in [@MR1458299 Theorem 1.3]. Important examples of manifolds that satisfy the conditions in Theorem \[main theorem\] will be given in Section \[section proofs\]. Note that properties (\[exponential growth property\]), (\[DUB\]), and (\[gradient\]) are always satisfied if the Ricci curvature of $M$ is bounded from below (cf. also the discussion in [@MR2119242 Section 1.3]) and hence these conditions are fulfilled if $M$ is a locally symmetric space. Let $M$ denote a complete and connected Riemannian manifold whose Ricci curvature is bounded from below, and assume that there is a constant $a>0$ with $$\sigma(\Delta) \subset \{0\} \cup [a,\infty).$$ Then there exist for any $p\in (1,\infty)$ constants $c_p,C_p >0$ such that $$c_p \| \Delta^{1/2}f \|_{L^p} \leq \| |\nabla f| \|_{L^p} \leq C_p \| \Delta^{1/2}f \|_{L^p}$$ holds for any $f\in C_0^{\infty}(M)$. While Mandouvalos and Marias in [@Mandouvalos:2009yq] proved boundedness of the $L^p$ Riesz transform only for $p\in (p_1,p_2)$ with certain $1<p_1<2<p_2<\infty$, we allow here $p\in (1,\infty)$.
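To make the role of the spectral gap transparent, the following computation (our sketch, for the case $0\notin\sigma(\Delta)$; the case of an isolated zero is reduced to it in the proof below by splitting off the constants) shows how a gap yields the $L^p$ boundedness of $\Delta^{-1/2}$:

```latex
% Sketch (ours). Spectral theorem on L^2 and contractivity on L^1, L^\infty:
%   \| e^{-t\Delta} \|_{L^2 \to L^2} \le e^{-at},
%   \| e^{-t\Delta} \|_{L^q \to L^q} \le 1 \quad (q = 1, \infty).
% Riesz--Thorin interpolation and duality give, with c_p := 2\min\{1/p, 1/p'\},
\[
  \| e^{-t\Delta} \|_{L^p \to L^p} \le e^{-c_p a t}, \qquad 1 < p < \infty,
\]
% and subordination then bounds the inverse square root:
\[
  \| \Delta^{-1/2} f \|_{L^p}
  \le \frac{1}{\Gamma(1/2)} \int_0^\infty t^{-1/2}
        \| e^{-t\Delta} f \|_{L^p} \, dt
  \le \frac{\| f \|_{L^p}}{\sqrt{\pi}} \int_0^\infty t^{-1/2} e^{-c_p a t} \, dt
  = \frac{\| f \|_{L^p}}{\sqrt{c_p a}} .
\]
```

This is the mechanism behind Lemma \[lemma fractional\] below.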
A standing assumption in the paper [@Mandouvalos:2009yq] is that the $L^2$ spectrum of the Laplacian on a non-compact locally symmetric space $M=\Gamma\backslash X$ is of the form $\{\lambda_0,\ldots, \lambda_r\} \cup [||\rho||^2,\infty)$, where $\lambda_i < ||\rho||^2, i=0,\ldots,r,$ are eigenvalues of finite multiplicity and $\rho$ denotes half the sum of the positive roots. Note that this assumption need not be satisfied for all non-compact locally symmetric spaces. If e.g. the universal covering $X$ is a higher rank symmetric space, the absolutely continuous part of the $L^2$ spectrum of a non-compact finite volume quotient $M=\Gamma\backslash X$ is in many cases of the form $[a,\infty)$ where $0<a< ||\rho||^2$, [@Ji:2007fk]. However, it is known that the above-cited condition from [@Mandouvalos:2009yq] holds true for all non-compact, geometrically finite hyperbolic manifolds $\Gamma\backslash {\mathbb{H}}^n$, [@MR661875].\ In a first step towards a proof of the boundedness of the Riesz transform, the authors show in [@Mandouvalos:2009yq] that any $L^2$ eigenfunction $\varphi_{\lambda_i}$ with respect to some eigenvalue $\lambda_i < ||\rho||^2$ is contained in $L^p(M)$ for $p$ in an interval $(p_1,p_2)$ where $$\begin{aligned} p_1 & = & 2 \left( 1 + \left(1 - \frac{\lambda_i}{|| \rho ||^2} \right)^{1/2} \right)^{-1},\\ p_2 & = & 2 \left( 1 + \left(1 - \frac{\lambda_i}{|| \rho ||^2} \right)^{1/2} \right),\\ \end{aligned}$$ cf. [@Mandouvalos:2009yq Theorem 1]. Note that the authors use $\lambda_r$ instead of $\lambda_i \leq \lambda_r$ in the statement of their result. The proof, however, shows that $p_1$ and $p_2$ from above can be used. Note further that $p_1' \geq p_2$, where $p_1'$ denotes the conjugate of $p_1$, i.e. $\frac{1}{p_1} + \frac{1}{p_1'} = 1$. In the proof of this result the authors state that the volume of balls grows exponentially (formula 2.11) with respect to the radius.
Unfortunately, this assumption is often not satisfied if $M$ is a locally symmetric space with non-trivial fundamental group. In the case of finite volume it is certainly false. And even if the volume is infinite, the volume growth may be linear (the following example was communicated to us by Richard D. Canary): if $M$ is a compact hyperbolic $3$-manifold that fibers over the circle, we may ‘unwrap’ this manifold along the circle such that only the fundamental group coming from the fibers survives. The resulting hyperbolic manifold $\tilde{M}$ is a covering of $M$ that has an infinite cyclic subgroup of isometries with compact quotient and hence linear volume growth. The existence of hyperbolic $3$-manifolds that fiber over the circle follows from Thurston’s work in [@Thurston:yq] and, of course, from Perelman’s proof of Thurston’s geometrization conjecture using Ricci flow. However, a detailed look at the proof of [@Mandouvalos:2009yq Theorem 1] shows that only an exponential upper bound on the volume growth is needed, and this condition is always satisfied for complete locally symmetric spaces $M$.\ In our proof of the boundedness of the Riesz transform we do not need to make any of the above assumptions, and we will not need specific information about the $L^2$ eigenfunctions. Nevertheless, we want to mention briefly that in some cases it is possible to enlarge the interval $(p_1,p_2)$: If $M$ is a locally symmetric space with ${\mathbb{Q}}$-rank one, it has been shown in [@Ji:2007fk Theorem 4.1] that $p_1=1$ and $p_2 = 2( 1 + (1 - \frac{\lambda_i}{|| \rho_{{{\bf P}}} ||^2})^{1/2})^{-1}$ can be chosen.\ If the injectivity radius of $M$ is strictly positive, it follows immediately from Taylor’s work on the $L^p$ spectrum of the Laplacian [@MR1016445 Proposition 3.3] that the interval can be enlarged on the right-hand side; more precisely, we can choose $p_2 = p_1' = 2 ( 1 - (1 - \frac{\lambda_i}{|| \rho ||^2} )^{1/2} )^{-1}$.
Note that the condition on the injectivity radius implies in particular that $M$ has infinite volume.\ Proof and Examples {#section proofs} ================== The following theorem due to Auscher, Coulhon, Duong, and Hofmann is a main ingredient in our proof of Theorem \[main theorem\]. \[ACDH\] Let $M$ denote a complete Riemannian manifold satisfying the properties (\[exponential growth property\]), (\[DUB\]), and (\[gradient\]), and let $p\in (1,\infty)$. Then there exists a constant $C_p > 0$ such that the inequality $$\| | \nabla f | \|_{L^p} + \| f \|_{L^p} \leq C_p \Big( \| \Delta^{1/2}f \|_{L^p} + \| f \|_{L^p} \Big)$$ holds for all $f\in C_0^{\infty}(M)$. The following lemma can basically be found in the proof of [@MR1458299 Theorem 1.3]. For the sake of completeness, and because we need a variant of the argument below, we recall its short proof. \[lemma fractional\] Assume that $0\notin \sigma(\Delta)$. Then for any $\alpha \in (0,1)$ the operator $\Delta^{-\alpha}$ defines a bounded operator on $L^p(M), p\in (1,\infty)$. Since $\Delta$ is a selfadjoint operator on $L^2(M)$ and $\sigma(\Delta)\subset [a,\infty)$ for some $a>0$, it follows from the spectral theorem that $\| e^{-t\Delta} \|_{L^2\to L^2} \leq e^{-at}$. Furthermore, the semigroup $e^{-t\Delta}: L^2(M)\to L^2(M)$ is positive and $L^{\infty}(M)$ contractive. Hence, this semigroup extends to a strongly continuous contraction semigroup $e^{-t\Delta}: L^p(M)\to L^p(M)$ for any $p\in [1,\infty)$. By interpolation and duality it follows that for any $p\in [1,\infty)$ we have $\| e^{-t\Delta} \|_{L^p\to L^p} \leq e^{- 2 \min\{1/p,1/p'\} a t }$ and hence the integral $$\Delta^{-\alpha} = \frac{1}{\Gamma(\alpha)}\int_0^{\infty} t^{\alpha-1} e^{-t\Delta}\, dt$$ defines for all $\alpha\in (0,1)$ a bounded operator on $L^p(M), p\in (1,\infty)$. Before we proceed with the proof of Theorem \[main theorem\], we recall the following general result that has been proved in [@MR705991].
Note that we denote by $-\Delta_p$ the generator of the heat semigroup $T_p(t) = e^{-t\Delta}$ on $L^p(M)$. \[core\] For any $1<p<\infty$ and any complete Riemannian manifold $M$, the space $C^\infty_0(M)$ is a core for the operator $\Delta_p$ in $L^p(M)$, i.e. $C^\infty_0(M)$ is dense in ${\cal D}(\Delta_p)$ with respect to the graph norm. In particular, $C^\infty_0(M)$ is a core for the operator $\Delta_p^{1/2}$. Letting $L_p$ denote the $L^p(M)$ closure of $\Delta$, defined on $C^\infty_0(M)$, it is shown in [@MR705991], beginning of the proof of Theorem 3.5, that $-L_p$ is dissipative and generates a contraction semigroup in $L^p(M)$. This semigroup coincides with the heat semigroup, which implies $L_p=\Delta_p$. #### Proof of Theorem \[main theorem\]. {#proof-of-theorem-main-theorem. .unnumbered} It is well known (cf. [@MR889472; @MR990472]) that by duality the first inequality in (\[inequality Riesz\]) is implied by the second and hence we will concentrate on the second.\ The case $0\notin \sigma(\Delta)$ was already proved in [@MR2119242 Theorem 1.9] (cf. also [@MR1458299 Theorem 1.3]): By Lemma \[lemma fractional\] the inverse square root $\Delta^{-1/2}$ defines a bounded operator on $L^p(M)$ for any $p\in (1,\infty)$ and hence, the claim follows from Theorem \[ACDH\] together with $\| f \|_{L^p} \leq C \| \Delta^{1/2}f \|_{L^p}$ for $f\in C_0^{\infty}(M)$.\ Assume now $0\in\sigma(\Delta)$. Then $0$ is an isolated point in the spectrum of a self-adjoint operator and hence an eigenvalue. If $f\in L^2(M)$ is a corresponding eigenfunction we may conclude that $f=const.$ as $0=\langle \Delta f,f \rangle = \langle\nabla f, \nabla f \rangle$ and thus $vol(M) < \infty$. 
From Hölder’s inequality it follows in particular that $L^p(M) \hookrightarrow L^q(M)$ if $1\leq q\leq p\leq \infty$.\ Let us define for $p\in [1,\infty)$ $$L^p_0(M) = \left\{ f\in L^p(M) : \langle 1, f\rangle=\int_M f dx = 0 \right\}.$$ Then $L^p_0(M)$ is an invariant subspace for the semigroup $T_p(t)=e^{-t\Delta}: L^p(M)\to L^p(M)$ and $L^p(M) = L^p_0(M) \oplus {\mathrm{span}}\{ 1\}$. Furthermore, if we denote by $-\Delta_0$ the generator of $e^{-t\Delta}: L^2_0(M) \to L^2_0(M)$, we have $\Delta_0 = \Delta\big|_{{\cal D}(\Delta_2)\cap L^2_0(M)}$. As $\sigma(\Delta_0) \subset [a,\infty)$ for some $a>0$, we can prove as in Lemma \[lemma fractional\] that $\Delta_0^{-\alpha}$ is bounded on $L^p_0(M)$ for all $\alpha\in (0,1)$ and $p\in (1,\infty)$. Note that we need to ensure here that the spaces $L^p_0(M), p\in [1,\infty),$ interpolate correctly, i.e. $[L^{p_1}_0(M), L^{p_2}_0(M)]_{\theta} = L^{p_\theta}_0(M)$ (see [@MR1328645]), where $\frac{1}{p_{\theta}} = \frac{1-\theta}{p_1} + \frac{\theta}{p_2}$. This, however, follows from the fact that $$P: L^p(M) \to L^p(M), \, f\mapsto f - \frac{1}{vol(M)} \int_M f dx$$ is a continuous projection onto $L^p_0(M)$, together with [@MR1328645 1.2.4]. If $M$ is compact, then inequality (\[inequality Riesz\]) follows from Theorem \[ACDH\] since $Pf\in C^\infty_0(M)\cap L^p_0(M)$ and (\[inequality Riesz\]) holds for the constant function $g:=f-Pf$.\ If $M$ is non-compact but of finite volume, then $Pf\in C^\infty_0(M)$ is no longer true in general. But by Lemma \[core\], Theorem \[ACDH\] holds by approximation for all $g\in {\cal D}(\Delta_p^{1/2})$ with the same constant. In particular, it holds for all $g\in C^\infty_0(M)+{\mathrm{span}}\{1\}$. Now we can finish the proof as in the compact case since $f\in C^\infty_0(M)$ implies $Pf\in \left(C^\infty_0(M)+{\mathrm{span}}\{1\}\right)\cap L^p_0(M)$. Examples {#examples .unnumbered} -------- The conditions in Theorem \[main theorem\] are satisfied for the following manifolds.
#### Compact manifolds. {#compact-manifolds. .unnumbered} If $M$ is compact, its $L^2$ spectrum is discrete and $0\in \sigma(\Delta)$. #### Locally symmetric spaces. {#locally-symmetric-spaces. .unnumbered} Let $M= \Gamma\backslash G/K$ denote a locally symmetric space whose universal covering is a symmetric space of non-compact type.\ If the critical exponent $\delta(\Gamma)$ satisfies $\delta(\Gamma) < 2||\rho||$ ($\rho$ denotes half the sum of the positive roots), then the bottom of the $L^2$ spectrum is bounded from below by a strictly positive constant [@MR2019974; @Weber:2007fk]. Note that in this case $vol(M)=\infty$.\ If $M$ is non-compact with finite volume, the $L^2$ spectrum equals $\sigma(\Delta)= \{0, \lambda_1,\ldots, \lambda_r\} \cup [b,\infty)$ for some $b>0$, see e.g. Müller’s article in [@:2008xq].\ Important examples in this context are all non-compact hyperbolic manifolds $M=\Gamma\backslash {\mathbb{H}}^n$ where $\Gamma$ is geometrically finite. Then, in the case of finite as well as infinite volume of $M$, the $L^2$ spectrum is of the form $\{\lambda_1,\ldots, \lambda_r\} \cup \left[\frac{(n-1)^2}{4},\infty\right)$, cf. [@MR661875]. #### Manifolds with cusps of rank one. {#manifolds-with-cusps-of-rank-one. .unnumbered} These non-compact manifolds with finite volume also satisfy the condition $\sigma(\Delta)= \{0, \lambda_1,\ldots, \lambda_r\} \cup [b,\infty)$ for some $b>0$, cf. [@MR891654]. #### Homogeneous Spaces. {#homogeneous-spaces. .unnumbered} Consider $M = \Gamma\backslash G$ where $G$ is a semisimple Lie group endowed with an invariant metric and $ \Gamma \subset G$ is a non-uniform arithmetic lattice in $G$. Then the spectrum equals $\sigma(\Delta)= \{0, \lambda_1,\ldots, \lambda_r\} \cup [b,\infty)$ for some $b>0$, cf. [@MR701563].\ Note that the continuous $L^2$ spectrum of the Laplacian on a complete Riemannian manifold is invariant under compact perturbations of the Riemannian metric.
This follows from the so-called decomposition principle in [@MR544241]. Hence, the spectral gap condition in the above mentioned cases is still satisfied after compact perturbations. #### Is zero in the spectrum? {#is-zero-in-the-spectrum .unnumbered} This question was raised in various contexts. Let us recall the following results which immediately give further examples. Let $M$ have infinite volume. Suppose that there is a constant $\kappa\geq 0$ such that $Ricci_M \geq -\kappa^2$. Then $0\notin \sigma(\Delta)$ if and only if $M$ is open at infinity. Recall that a manifold with infinite volume is called open at infinity if and only if the Cheeger constant is strictly positive. Let $X$ be a normal covering of a compact manifold $M$ with covering group $\Gamma$. Then $0\in \sigma(\Delta)$ if and only if $\Gamma$ is amenable. Let $M$ denote a Riemannian manifold with Ricci curvature bounded from below and with empty cut locus. If there is a point $x_0\in M$ such that the volume form $\sqrt{g(x_0,\zeta)}$ of the Riemannian metric grows exponentially in every direction (with respect to geodesic normal coordinates $(x,\zeta)$), the bottom of the $L^2$ spectrum is strictly positive. #### Acknowledgements {#acknowledgements .unnumbered} We want to thank Michel Marias for valuable remarks on a first version of this paper. We also want to thank the referee for many suggestions which led to a great improvement of this paper. Lizhen Ji was partially supported by NSF grant DMS 0604878. [^1]: Email: lji@umich.edu, Address: 1834 East Hall, Ann Arbor, MI 48109-1043, USA. [^2]: Email: peer.kunstmann@math.uni-karlsruhe.de, Address: Kaiserstr. 89 - 93, 76128 Karlsruhe, Germany. [^3]: Email: andreas.weber@math.uni-karlsruhe.de, Address: Kaiserstr. 89 - 93, 76128 Karlsruhe, Germany.
--- abstract: 'We introduce a new method to estimate the Markov equivalence class of a directed acyclic graph (DAG) in the presence of hidden variables, in settings where the underlying DAG among the observed variables is sparse, and there are a few hidden variables that have a direct effect on many of the observed ones. Building on the so-called low rank plus sparse framework, we suggest a two-stage approach which first removes unwanted variation using latent Gaussian graphical model selection, and then estimates the Markov equivalence class of the underlying DAG by applying GES. This approach is consistent in certain high-dimensional regimes and performs favourably when compared to the state of the art, both in terms of graphical structure recovery and total causal effect estimation.' address: - 'Seminar für Statistik, ETH Zürich, Zürich, Switzerland' - 'Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, USA' - 'Seminar für Statistik, ETH Zürich, Zürich, Switzerland' author: - Benjamin Frot - Preetam Nandy - 'Marloes H. Maathuis' title: Learning Directed Acyclic Graphs with Hidden Variables via Latent Gaussian Graphical Model Selection --- [^1] Introduction {#section:introduction} ============ Problem Statement {#section:problem_statement} ================= Theoretical Results {#section:theoretical_results} =================== Performances on Simulated Data {#section:simulations} ============================== Applications {#section:application} ============ Discussion ========== [^1]: BF and PN are equally contributing authors.
--- abstract: 'The luminosity of young giant planets can inform about their formation and accretion history. The directly imaged planets detected so far are consistent with the “hot-start" scenario of high entropy and luminosity. If nebular gas passes through a shock front before being accreted into a protoplanet, the entropy can be substantially altered. To investigate this, we present high resolution, 3D radiative hydrodynamic simulations of accreting giant planets. The accreted gas is found to fall with supersonic speed in the gap from the circumstellar disk’s upper layers onto the surface of the circumplanetary disk and the polar region of the protoplanet. There it shocks, creating an extended hot supercritical shock surface. This shock front is optically thick; therefore, it can conceal the planet’s intrinsic luminosity beneath. The gas in the vertical influx has a high entropy, which decreases significantly when passing through the shock front, as the gas becomes part of the disk and protoplanet. This shows that circumplanetary disks play a key role in regulating a planet’s thermodynamic state. Our simulations furthermore indicate that extended regions of atomic – sometimes ionized – hydrogen develop around the shock surface. Therefore circumplanetary disk shock surfaces could significantly influence the observational appearance of forming gas giants.' author: - | J. Szulágyi$^{1}$[^1] & C. Mordasini$^{2}$\ $^{1}$ETH Zürich, Institute for Astronomy, Wolfgang-Pauli-Strasse 27, CH-8093, Zürich, Switzerland\ $^{2}$Physikalisches Institut, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland\ date: 'Accepted XX. 
Received XX; in original form XX' title: 'Thermodynamics of Giant Planet Formation: Shocking Hot Surfaces on Circumplanetary Disks' --- \[firstpage\] accretion, accretion discs – hydrodynamics – methods: numerical – planets and satellites: formation – planet-disc interactions Introduction ============ Giant planets are thought to form either via the core-accretion [@Pollack96] or the gravitational instability scenario [@Boss97]. To get a handle on which formation scenario led to an observed gas giant, the post-formation entropy of the planet was initially thought to distinguish between the two cases [@Burrows97; @Marley07]. Traditionally, planets formed by core accretion were thought to have a low luminosity and entropy ($\la$9.5 $\mathrm{k_B}$/baryon) – corresponding to the so-called “cold-start" scenario – whereas gravitational instability was thought to lead to giant planets having a high luminosity and entropy – the “hot-start" scenario ($\ga$9.5 $\mathrm{k_B}$/baryon, @Marley07). Recent studies, however, pointed out that the situation is more complex. The entropy of the planets is affected by whether the accretion of gas onto the planet happens through a supercritical shock front, as indicated by one-dimensional spherically symmetric models [@Marley07]. If the gas forming the planet passes through such an entropy-reducing shock, where a significant part of the accretion luminosity can be radiated away, the planet itself will have a low entropy, consistent with the “cold-start" scenario, regardless of which formation mechanism builds the gas giant [@Mordasini12]. Moreover, the mass of the solid planetary core can alter the post-formation entropy of the planet as well, with higher-mass cores leading to hotter planets [@Mordasini13; @bodenheimerdangelo2013]. Finally, @OM16 showed that the presence of a circumplanetary disk around the planet will funnel hot gas to the planet that can inflate the outer layers of the gas giant, enhancing the planet’s entropy. 
Observations of directly imaged planets allow the measurement of the luminosities of young [e.g., @Marois08; @lagrangebonnefoy2010] and recently also of still forming embedded planets [e.g., @KI12; @Quanz15]. This makes it possible to estimate their entropies [@MC14] and conclude whether they are consistent with the “cold-start” or “hot-start" scenarios. A handful of directly imaged gas giants is available for study today, and most seem to be luminous and consistent [@MC14; @OM16] with the “hot-start" scenario (which could in principle also be an observational bias, as fainter planets are more difficult to detect). However, gravitational instability as a formation mechanism appears unlikely in several of these cases due to various factors, such as the rather small semi-major axis or a rather low circumstellar disk mass [@FR13]. Furthermore, the luminosity estimations of forming embedded planets from direct imaging observations can be contaminated by the luminosity of the circumplanetary disk or accretion [@Zhu15; @Szulagyi16], enhancing the observed overall luminosity, which can make the planet look like a “hot-start" case. The observation that the currently known directly observed planets seem to be consistent with a “hot-start" scenario, while some of them may have formed via core accretion (e.g., $\beta$ Pic b, @mordasinimolliere2015), indicates that the low versus high entropy state of an observed gas giant alone cannot conclusively distinguish between the two planet formation scenarios. This is partially due to the lack of theoretical studies of the thermodynamics of giant planet formation which predict the post-formation entropy based on multi-dimensional, radiation-hydrodynamical simulations. In this paper we therefore present a thermodynamical study based on 3D, radiative hydrodynamic simulations of forming gas giants with various masses embedded in circumstellar disks. 
The planets form circumplanetary disks or circumplanetary envelopes around them depending on the gas temperature in the planet vicinity [@Szulagyi16]. As described e.g. in @Szulagyi14, the accretion of the gas happens from the vertical direction through the planetary gap. This is because the top layers of the circumstellar disk try to close the gap opened by the giant planet, and, as gas enters the gap, it falls nearly freely onto the circumplanetary disk’s surface and onto the polar regions of the protoplanet. The gas shocks at this surface, and then becomes part of the disk, where it eventually spirals down to the planet. In this work we study the change of important thermodynamic quantities like the entropy or ionization state of the gas as it passes through the shock front. Methods {#sec:methods} ======= Our study is based on three-dimensional, radiative, grid-based hydrodynamic simulations with the JUPITER code [@Borro06; @Szulagyi14; @Szulagyi16], developed by F. Masset & J. Szulágyi. This code is based on a shock-capturing Godunov method using Riemann solvers and has nested meshes, which allow zooming in on the planet’s vicinity with high resolution. The radiative module is based on the flux-limited diffusion approximation [e.g. @Commercon11], and calculates temperatures with viscous heating and radiative cooling included. The opacity of the gas and dust is taken into account through the @BL94 opacity table. This means that, although the simulations are purely gas hydrodynamic, the impact of the dust on the temperature is included through the dust opacities and a fixed dust-to-gas ratio of 1%. More details about the radiative code and the simulations are given in @Szulagyi16 and in a future paper for the 3-10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ gas giants. Here we only mention the most important characteristics. 
The coordinate system is spherical, centered onto the star, and covers the entire circumstellar disk radially between 2.0 and 12.4 AU with an initial surface density $\Sigma=\Sigma_0(\frac{r}{a})^{-0.5}$ with $\Sigma_0=2.22 \times 10^{3} \rm{kg/m^2}$ ($a$ being the semi-major axis, 5.2 AU). This density was chosen to be close to the Minimum Mass Solar Nebula (@Hayashi). Only the lower half of the circumstellar (and circumplanetary) disks is simulated, assuming symmetry with respect to the disk mid-plane. The equation of state (hereafter, EOS) in the code is the ideal gas EOS with an adiabatic index $\gamma$ equal to 1.43, such that $P=(\gamma-1)\epsilon$, where $\epsilon$ is the internal energy of the gas ($\epsilon=\rho C_v T$), and $P$ is the pressure. The mean molecular weight is 2.3, corresponding to the solar value. Due to the fixed adiabatic exponent and mean molecular weight, the ionization and dissociation of hydrogen and helium are not taken into account. The lack of these mechanisms could alter the predicted temperatures, entropies, and compositions; therefore our computations are only estimates obtained with an idealized EOS. Due to the extensive computation time needed for the high-resolution radiative simulations presented here, and due to the limitations of the solver, we had to use an ideal EOS in this work. ![image](pasted2_zoom.png){width="18cm"} We performed simulations with 1, 3, 5, and 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planets for $\sim220$ orbits until steady state was reached. A constant kinematic viscosity was applied with a value of $10^{-5} a^2\Omega_p$, where $a$ is the semi-major axis and $\Omega_p$ is the orbital frequency. This viscosity corresponds to a value of $\alpha=0.004$ at 5.2 AU. Due to the nested meshes centered on the planet, the resolution changes mesh by mesh, with each level of refinement doubling the resolution. 
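As a consistency sketch (not code from the JUPITER code itself), one can verify that the EOS relation $P=(\gamma-1)\epsilon$ with $\epsilon=\rho C_v T$ reproduces the textbook ideal gas law once the specific heat is taken as $C_v = k_B/(\mu m_H(\gamma-1))$; the density and temperature below are illustrative values, not taken from the simulations:

```python
# Consistency check of the ideal-gas EOS used in the simulations (sketch):
# with C_v = k_B / (mu * m_H * (gamma - 1)), the internal energy density
# eps = rho * C_v * T satisfies P = (gamma - 1) * eps = rho * k_B * T / (mu * m_H).
k_B = 1.380649e-23   # Boltzmann constant, J/K
m_H = 1.6735575e-27  # hydrogen atom mass, kg
gamma = 1.43         # adiabatic index from the simulations
mu = 2.3             # mean molecular weight from the simulations

def pressure_from_internal_energy(rho, T):
    """P = (gamma - 1) * eps, with eps = rho * C_v * T (in Pa)."""
    C_v = k_B / (mu * m_H * (gamma - 1))
    eps = rho * C_v * T
    return (gamma - 1) * eps

def pressure_ideal_gas(rho, T):
    """Textbook ideal-gas law P = rho * k_B * T / (mu * m_H) (in Pa)."""
    return rho * k_B * T / (mu * m_H)

# Illustrative values only (kg/m^3, K):
rho, T = 1e-8, 3000.0
assert abs(pressure_from_internal_energy(rho, T) - pressure_ideal_gas(rho, T)) < 1e-12
```

The two expressions agree identically; the choice $P=(\gamma-1)\epsilon$ is thus just the ideal gas law written in terms of the internal energy, which is the quantity the hydrodynamic solver evolves.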
This way the highest level of resolution (reached on level 6) was $7.5\times10^{-4}\,$AU$\,=\,0.8\,\mathrm{d_{{{\textrm{\small \jupiter}}}}}$, where $\mathrm{d_{{{\textrm{\small \jupiter}}}}}$ is the diameter of Jupiter. In the simulations the planet is treated as a point-mass in the corner of 8 cells. In other words, there is only a gravitational potential well; no sphere is modeled for the giant planet. The smoothing lengths were $4.4 \times 10^{-3}$, $8.8\times 10^{-3}$, $8.8\times 10^{-3}$ AU, and $1.8\times 10^{-2}$ AU for the 1, 3, 5, and 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planets, respectively. As shown in @Szulagyi16, in the 1 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ simulation the gas is too hot in the planet’s vicinity to collapse into a circumplanetary disk. Instead, a pressure-supported spherical envelope formed around the planet. In this case, there is no shock front on the surface of the envelope where the vertical influx hits it. We show the entropies of this simulation for comparison, in order to distinguish between cases with and without supercritical shock fronts. Results {#sec:entropy} ======= To estimate the specific entropy ($S$) of the gas, we used four different approaches. All four give the specific entropy based on the pressure and the temperature from the hydrodynamic simulations. The first expression we applied is the classical Sackur-Tetrode entropy formula for an $\mathrm{H_2}$/He mixture with a He mass fraction of 25% [@MC14]: $$S\,(\mathrm{k_B}/\mathrm{baryon})=9.6+\frac{45}{32}\ln (T/1600\,\mathrm{K}) - \frac{7}{16} \ln(P/3\,\mathrm{bar}) \label{eq::Sackur}$$ This expression does not include ionization/dissociation, just like the EOS used in the hydrodynamic code. The left-hand panel of Fig. \[fig::entr\] shows the specific entropy – calculated with the Sackur-Tetrode expression – on a vertical slice through the 3 $M_{{{\textrm{\small \jupiter}}}}$ planet, zooming to the planet vicinity. 
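The Sackur-Tetrode expression above is straightforward to evaluate numerically. The following sketch applies it to the shock conditions of the 3 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ run listed later in Table \[tab::entropy\] (note that the tabulated post-shock $S$ columns refer to conditions behind the spike, not to these shock values):

```python
import math

def sackur_tetrode_entropy(T, P_dyn_cm2):
    """Specific entropy in k_B/baryon for an H2/He mixture with 25% He
    mass fraction (Sackur-Tetrode expression of the text).
    T in K, P in dyn/cm^2."""
    P_bar = P_dyn_cm2 * 1e-6  # 1 bar = 10^6 dyn/cm^2
    return 9.6 + (45.0 / 32.0) * math.log(T / 1600.0) \
               - (7.0 / 16.0) * math.log(P_bar / 3.0)

# Shock conditions of the 3 M_Jup run: T = 3893 K, P = 0.42 dyn/cm^2
S = sackur_tetrode_entropy(3893.0, 0.42)
print(round(S, 2))  # ~17.8 k_B/baryon, within the 17-23 range quoted for the influx/spike
```

The low pressure in the gap drives the large positive $-\frac{7}{16}\ln(P/3\,\mathrm{bar})$ term, which is why the pre-shock gas is so entropy-rich.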
The circumplanetary disk stands out in blue colors, having clearly lower entropy than the supersonic vertical gas influx (red-yellow colors) which feeds the circumplanetary disk. The minimum entropy can be found in a spherical envelope around the planet, within the inner circumplanetary disk. In Fig. \[fig::entr\] we also show the same vertical cut of the density and a further zoomed-in temperature map in the middle and the right-hand panels, respectively. On the temperature color-map one can see, in bright yellow, part of the razor-sharp shock layer (the Zeldovich spike) right above the planet and on the surface of the inner circumplanetary disk. The shock, however, is even more extended on the top layer of the circumplanetary disk, based on maps of the Mach number. The temperature just above and below the Zeldovich spike is identical, meaning that this shock front is supercritical [@Vaytet2013] for all the different-mass planets which are forming circumplanetary disks (3-10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planets). In the case of the 1 Jupiter-mass simulation, as mentioned in the previous section, the circumplanetary gas is too hot to collapse into a disk, therefore an envelope is found (see Fig. 1 in @Szulagyi16). Due to the extended envelope, the vertical influx does not hit it with a large enough velocity (i.e., it arrives subsonically) to create a shock front on the surface of the envelope. ![1D vertical entropy profiles of the gas flow passing through the shock, right below (or above) the planet location. The rightmost value is at the midplane, the leftmost value is in the vertical influx. The peaks in the 3-10 $M_{{{\textrm{\small \jupiter}}}}$ simulations are the Zeldovich spikes at the supercritical shock front. 
The gas undergoes a strong entropy reduction ($> 3\mathrm{k_B}$/baryon) via radiative cooling while passing through the shock.[]{data-label="fig::shockprof_cdp"}](pasted_entro_cpd_planet.png){width="\columnwidth"} The second EOS we used was the Saumon-Chabrier-van Horn EOS developed for the interiors of giant planets [@SCh95], using the same He/H mass fraction. The third way to calculate the specific entropies was to use the CEA Gibbs minimizer [@mcbridegordon1993]. This approach, unlike the Sackur-Tetrode expression, calculates the specific entropy, mean molecular weight, and adiabatic index with the dissociation and ionization of hydrogen and helium taken into account. However, for comparison we also did a calculation with this code enforcing the molecularity and neutrality of the hydrogen-helium gas (only H$_{2}$ and He). This corresponds again to a situation close to the EOS used in the hydrodynamic simulations, except for the inclusion of the variation of the degrees of freedom with temperature. Fig. \[fig::shockprof\_cdp\] shows a 1D vertical profile of the entropy as the gas passes through the shock front right below the planet (vertical cut through radius=5.2 AU, azimuth=0.0). As mentioned in Sect. \[sec:methods\], only the lower half of the circumstellar disk is simulated, therefore the co-latitude ranges in this figure from 1.562 radians to 1.57079 ($\frac{\pi}{2}$), the latter being the midplane value. The entropies shown were calculated with CEA assuming only H$_{2}$ and He. One can observe that the gas loses a significant amount of entropy ($>3\mathrm{k_B}$/baryon) when passing through the shock of the circumplanetary disk. The shock front itself is visible as a spike (the Zeldovich spike). In the vertical influx and at the Zeldovich spike the entropy is very high (17-23 $k_{\rm B}$/baryon), as we are dealing with low-density preheated gas falling from the top of the gap. 
The entropy is minimal at the planet location (around 13 $\mathrm{k_B}$/baryon for each planet). This shows that, also in 3D, an accretion shock plays a crucial role in regulating the planet’s thermodynamic state. Comparing the 3-10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ gas giants, it can be observed that the higher the mass, the stronger the shock, and the larger the radiative entropy reduction. This is visible from the decreasing value of the minimum of $S$ at some distance behind the shock. Also on the mid-plane (right end of the figure) the 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ simulation has the lowest entropy. In the case of the 1 $M_{{{\textrm{\small \jupiter}}}}$ planet, where there is only a circumplanetary envelope without any shock, the entropy decreases almost steadily while approaching the planet. The shock $S_{\rm shock}$ and post-shock $S$ entropies (measured in and immediately after the Zeldovich spike, respectively) for the four different EOS can be found in Table \[tab::entropy\]. The motivation to measure the entropy after the spike is to estimate the entropy of the gas which the planet will eventually accrete. Currently we find high, “hot-start”-like entropies in the ideal gas EOS limit. The Saumon-Chabrier EOS also gives very similar entropy values (within 0.1 $\mathrm{k_B}$/baryon) to the CEA values, as expected. The Sackur-Tetrode expression gives somewhat higher (by 0.6-0.7 $\mathrm{k_B}$/baryon) specific entropies. In contrast to this, in the Zeldovich spike itself (and in a small envelope right around the planet’s position), the entropy values differ significantly when ionization and dissociation are included in CEA relative to the neutral case, since here ionization and dissociation should happen. 
  Planet mass $[\mathrm{M_{{{\textrm{\small \jupiter}}}}}]$   $\mathrm{T_{shock}}$ \[K\]   $\mathrm{P_{shock}}$ \[$\rm{dyn/cm^2}$\]   $\mathrm{S}$ (ST)   $\mathrm{S}$ (SCvH)   $\mathrm{S}$ (CEA-ion)   $\mathrm{S}$ (CEA-neutral)   $\mathrm{S_{shock}}$ (CEA-ion)   $\rm{S_{shock}}$ (CEA-neutral)
  ----------------------------------------------------------- ---------------------------- ------------------------------------------ ------------------- --------------------- ------------------------ ---------------------------- -------------------------------- -------------------------------
  3.0                                                         3893                         0.42                                       15.39               14.73                 14.66                    14.66                        28.39                            17.40
  5.0                                                         4429                         0.19                                       15.74               15.14                 15.05                    15.05                        29.33                            18.00
  10.0                                                        8281                         9.87e-5                                    17.51               16.85                 16.77                    16.77                        62.69                            22.53
  ----------------------------------------------------------- ---------------------------- ------------------------------------------ ------------------- --------------------- ------------------------ ---------------------------- -------------------------------- -------------------------------

We also estimated with CEA the mass fractions of H (dissociation) and H+ (ionization) (Fig. \[fig::ion-diss\]) for the 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planet. For all planetary masses the polar shock surface is so hot ($T>3800$ K) that the H$_2$ molecules are dissociated, and in the 10 Jupiter case ($T>8000$ K) hydrogen is even ionized. The presence of H+ at the shock surface suggests detectable H-$\alpha$ emission from this extended (in comparison to a typical planetary radius of a few Jovian radii) region. This typical accretion tracer was recently detected around the planet LkCa 15 b [@Sallum15]. We calculated the upper limit for the H-$\alpha$ flux from the shock of the 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planet using Eq. 21 in @Zhu15 and found $3\times10^{-2} \rm{L_{\odot}}$. For the planetary accretion, we applied the expression $\rm{L_{acc}}=GM_p\dot{M_p}/R_p$ and its scaling with $L_{\mathrm{H}\alpha}$ used in @Sallum15. The accretion rates onto the planet calculated from our simulations are 9.8, 13.89, and $5.7 \times 10^{-8} \mathrm{M_{{{\textrm{\small \jupiter}}}}/year}$ for the 3, 5, 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planets, which indicate $\sim$ 4 to 8 $\times10^{-5} \rm{L_{\odot}}$ for $\rm{L_{acc}}$. 
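The order of magnitude of the quoted $\rm{L_{acc}}$ values can be cross-checked with a short script. This is only a sketch: the planetary radius, set here to $2\,\mathrm{R_{{{\textrm{\small \jupiter}}}}}$, is an assumption made for illustration and is not a value taken from the simulations.

```python
# Accretion luminosity L_acc = G * M_p * Mdot_p / R_p in solar units.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_jup = 1.898e27     # Jupiter mass, kg
R_jup = 6.9911e7     # Jupiter radius, m
L_sun = 3.828e26     # solar luminosity, W
yr = 3.156e7         # one year in seconds

def L_acc(M_p_jup, Mdot_jup_per_yr, R_p_jup=2.0):
    """L_acc / L_sun; R_p = 2 R_Jup is an assumed radius, not from the paper."""
    M_p = M_p_jup * M_jup
    Mdot = Mdot_jup_per_yr * M_jup / yr
    R_p = R_p_jup * R_jup
    return G * M_p * Mdot / R_p / L_sun

# Accretion rates from the simulations (M_Jup/yr): 9.8e-8, 13.89e-8, 5.7e-8
for m, mdot in [(3.0, 9.8e-8), (5.0, 13.89e-8), (10.0, 5.7e-8)]:
    print(m, f"{L_acc(m, mdot):.1e}")  # order of magnitude of the quoted ~4-8e-5 L_sun
```

With the assumed radius, all three runs land within a factor of a few of $10^{-4.5}\,\rm{L_{\odot}}$, consistent in order of magnitude with the values quoted above.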
This translates to $\sim$ 4 to 7 $\times10^{-6} \mathrm{L_{\odot}}$ for the H-$\alpha$ luminosity, an order of magnitude lower than what was measured in LkCa 15 b. Given that our H-$\alpha$ line flux upper limit is three orders of magnitude higher than the line flux emitted by the planet itself, it is also plausible that the shock of the circumplanetary disk accounts for the difference found by @Sallum15. Our results indicate that, by detecting H-$\alpha$ emission, one cannot necessarily distinguish whether it traces accretion onto the circumplanetary disk or onto the planet itself. ![The ionization (top) and dissociation (bottom) of hydrogen shown in mass fractions, zoomed into the envelope around the planet, where these occur for the 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planet. Only the lower half of the disk is shown, so the mid-plane is at the top of the figures. In this case, ionization occurs at the shock front, which suggests that there is H-$\alpha$ emission. Only dissociation is found at the shock surfaces of the 3 and 5 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ gas giants.[]{data-label="fig::ion-diss"}](ion_diss_10jup.png){width="\columnwidth"} When observing a forming embedded giant planet via direct imaging, one has to be careful about where the detected luminosity originates. As we see on the temperature color map of Fig. \[fig::entr\], part of the shock front on the surface of the circumplanetary disk and protoplanet is very luminous due to shock heating. This part of the shock front extends to $\sim$100-250 $\mathrm{R_{{{\textrm{\small \jupiter}}}}}$ in diameter for the 3 to 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planet simulations. This part is optically thick in our grey approximation, so it is possible that this shock front is the surface that observations detect, at least in some wavelengths, rather than the actual protoplanet below. 
Because the luminosities of forming directly imaged planets are used to distinguish between “hot-start” and “cold-start” scenarios, and also to estimate the planetary mass, it is important to consider a possible contribution from the circumplanetary disk shock luminosity. This is particularly the case if it is not possible to distinguish observationally (spectroscopically) the origin of the radiation, e.g., because hard shock radiation first gets reprocessed in the surrounding disk and re-radiated at longer wavelengths. This highlights that, observationally, even intrinsically cold planets could look like “hot-start” planets during formation if they are surrounded by luminous shocks on the circumplanetary disk surface. In conclusion, circumplanetary disk shock surfaces play a key role not only in regulating a planet’s post-formation thermodynamic state, but also for the observational appearance of protoplanets during formation. Conclusions & Discussions ========================= In this paper we present a study of the thermodynamics found in global three-dimensional radiative hydrodynamical simulations of embedded accreting giant planets of 1, 3, 5, 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$. As described in detail in @Szulagyi14, the accretional gas flow from the circumstellar disk to the planet is as follows. The gas acts to close the gap opened by the planet in the circumstellar disk, especially so in the high co-latitude regions. Gas enters the gap region, and then falls nearly freely in a vertical influx onto the circumplanetary disk and protoplanet. Because this vertical inflow is supersonic (Mach numbers of 6.2, 8.1, and 10.3 for the 3, 5, 10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planets, respectively), it shocks on the surface of the circumplanetary disk and on the polar region of the protoplanet before becoming part of the disk and eventually reaching the protoplanet. 
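For orientation, the strength of shocks at these Mach numbers can be gauged with the standard adiabatic Rankine-Hugoniot jump condition (a sketch using the simulations' $\gamma = 1.43$; the real shocks are supercritical and radiative, so the actual post-shock temperatures behind the Zeldovich spike are lower than these adiabatic estimates):

```python
gamma = 1.43  # adiabatic index used in the simulations

def rh_temperature_jump(M):
    """Adiabatic Rankine-Hugoniot temperature ratio T2/T1 across a
    normal shock of Mach number M (no radiative losses)."""
    M2 = M * M
    return ((2 * gamma * M2 - (gamma - 1)) * ((gamma - 1) * M2 + 2)) \
           / ((gamma + 1) ** 2 * M2)

for M in (6.2, 8.1, 10.3):  # influx Mach numbers for the 3, 5, 10 M_Jup planets
    print(M, round(rh_temperature_jump(M), 1))  # T2/T1 ≈ 8.9, 14.6, 23.0
```

These order-10 adiabatic temperature jumps illustrate why the shock surface heats to thousands of kelvin and why stronger shocks (higher planet mass) allow larger radiative entropy losses.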
In this work we showed that the gas undergoes a significant reduction of the specific entropy (typically more than 3 $\mathrm{k_B}$/baryon) while passing through the shock front, which is found to be supercritical. The vertical influx has a very high entropy, which reaches a minimum value after the shock, in the disk mid-plane. We found that the circumplanetary disk consists of gas of significantly lower entropy than the vertical influx, and the lowest entropies are found in a small spherical envelope around the planet within the inner parts of the circumplanetary disk. We conclude that shocks play a key role in regulating the post-formation entropy. Because the shock front on the circumplanetary disk is hot, optically thick (in our grey approximation), and this luminous region is extended (100-250 $\mathrm{R_{{{\textrm{\small \jupiter}}}}}$), it can contribute strongly to the bolometric luminosity of a directly imaged planet if the gas giant is still accreting. Therefore it is important to disentangle the luminosity of the shock front on the upper layer of the circumplanetary disk from the luminosity of the protoplanet itself beneath the shock surface. Our radiative hydrodynamic simulations compute temperatures taking into account radiative cooling with the inclusion of dust opacities. However, the use of an ideal gas EOS does not take into account ionization and dissociation. This can lead to overestimated temperatures, and means that we cannot yet exactly predict the post-formation entropies. To estimate where dissociation and ionization could occur, we used the CEA code to determine potential H and H+ regions. We found that for the 3-10 $\mathrm{M_{{{\textrm{\small \jupiter}}}}}$ planets dissociation occurs in front of the polar shock surface, indicating that non-ideal effects could be important. 
In the 10 Jupiter-mass simulation the shock surface is hot enough to also produce H+, which means that there could be extended H-$\alpha$ emission from this region, which may be detectable, as seen in LkCa 15 b [@Sallum15]. Shock fronts on the surface of the circumplanetary disk and on the polar region of the protoplanet could therefore also be important for the observational appearance of forming giant planets. Acknowledgments {#acknowledgments .unnumbered} =============== We are thankful to J. Owen, T. Guillot, G. Marleau for useful discussions and to the anonymous referee for improving this work. J. Sz. acknowledges the support from the ETH Post-doctoral Fellowship by the Swiss Federal Institute of Technology. This work has been in part carried out within the frame of the National Centre for Competence in Research “PlanetS" supported by the Swiss National Science Foundation. C.M. acknowledges the support from the Swiss National Science Foundation under grant BSSGI0$\_$155816 “PlanetsInTime”. Computations have been done on the “Mönch" machine (Swiss National Comp. Cent.) [99]{} Bell, K. R., & Lin, D. N. C. 1994, ApJ, 427, 987 Bodenheimer, P., D’Angelo, G., Lissauer, J. J. et al. 2013, ApJ, 770, 120 Boss, A. P. 1997, Science, 276, 1836 Burrows, A., Marley, M., Hubbard, W. B., et al. 1997, ApJ, 491, 856 Commer[ç]{}on, B., Teyssier, R., Audit, E., et al. 2011, A&A, 529, A35 de Val-Borro, M., Edgar, R. G., Artymowicz, P., et al. 2006, MNRAS, 370, 529 Forgan, D., & Rice, K. 2013, MNRAS, 432, 3168 Hayashi, C., 1981, Progress of Theoretical Physics Supplement, 70, 35 Kraus, A. L., & Ireland, M. J. 2012, ApJ, 745, 5 Lagrange, A.-M., Bonnefoy, M., Chauvin, G., et al. 2010, Science, 329, 57 Marois, C., Macintosh, B., Barman, T., et al. 2008, Science, 322, 1348 Marleau, G.-D., & Cumming, A. 2014, MNRAS, 437, 1378 Marley, M. S., Fortney, J. J., Hubickyj, O., et al.  2007, ApJ, 655, 541 McBride, B., Gordon, S., & Reno, M. 
1993, NASA Technical Memorandum, 4513 Mordasini, C., Alibert, Y., Klahr, H., & Henning, T. 2012, A&A, 547, A111 Mordasini, C. 2013, A&A, 558, A113 Mordasini, C., Molli[è]{}re, P., Dittkrist, K.-M., et al.  2015, Int. J. of Astrobiol., 14, 201 Owen, J. E., & Menou, K. 2016, ApJL, 819, L14 Pollack, J. B., Hubickyj, O., Bodenheimer, P., et al., 1996, Icarus, 124, 62 Quanz, S. P., Amara, A., Meyer, M. R., et al. 2015, ApJ, 807, 64 Sallum, S., Follette, K. B., Eisner, J. A., et al. 2015, Nature, 527, 342 Saumon, D., Chabrier, G., & van Horn, H. M. 1995, ApJS, 99, 713 Szul[á]{}gyi, J., Morbidelli, A., Crida, A., & Masset, F. 2014, ApJ, 782, 65 Szul[á]{}gyi, J., Masset, F., Lega, E., et al. 2016, MNRAS, 460, 2853 Vaytet, N., et al.  2013, J. of Quantitative Spectroscopy and Radiative Transfer, 125, 105 Zhu, Z. 2015, ApJ, 799, 16 \[lastpage\] [^1]: E-mail: judit.szulagyi@phys.ethz.ch
--- abstract: 'In order to investigate the effects of in-plane strain on the superconductivity of FeSe, epitaxial thin films of FeSe were fabricated on CaF$_2$ substrates. The films are compressed along the $a$-axis and their superconducting transition temperatures $T_{\mathrm c}^{\mathrm {zero}}$ reach 11.4 K, which is approximately 1.5 times higher than that of bulk crystals. The $T_{\mathrm c}$ values are weakly dependent on the ratio of the lattice constants, $c$/$a$, compared to that of Fe(Se,Te). Our results indicate that even the binary system FeSe has room for improvement, and will open a new route for the application of Fe-based superconductors.' author: - 'Fuyuki Nabeshima$^{1}$, Yoshinori Imai$^{1}$, Masafumi Hanawa$^{2}$, Ichiro Tsukada$^{2}$, and Atsutaka Maeda$^{1}$' title: Enhancement of the Superconducting Transition Temperature in FeSe Epitaxial Thin Films by Anisotropic Compression --- The discovery of iron-based materials with a high superconducting transition temperature $T_{\mathrm {c}}$ has attracted much attention for both fundamental studies and practical applications. [@Kamihara] Iron-chalcogenide superconductors [@HsuFeSe] (Fe$Ch$), Fe(Se,Te), have the simplest crystal structure, consisting of only two-dimensional conducting planes. Although the $T_{\mathrm {c}}$ values of these materials are low compared with those of other families, the $T_{\mathrm {c}}$ values are strongly dependent on the applied pressure. In fact, the onset of the resistive transition, $T_{\mathrm {c}}^{\mathrm {onset}}$, reaches 37 K under a hydrostatic pressure of approximately 4 GPa. [@JPSJ.78.063704; @nmat2491] Therefore, anisotropic pressure effects on $T_{\mathrm {c}}$ of Fe$Ch$ are of great interest. Studies on the film growth of the optimally doped Fe(Se,Te) have suggested that the in-plane ($ab$-plane) compressive strain can increase the $T_{\mathrm {c}}$ value above that of bulk crystals; such studies have also found enhanced superconducting properties. 
[@bellingeri:102512; @iida:202503; @APEX.4.053101; @WSiNatCom] Additionally, under a hydrostatic pressure, [@JPSJ.78.063705] FeSe shows a large increase in $T_{\mathrm {c}}$ compared to optimally doped Fe(Se,Te). Thus, we expect FeSe thin films to have $T_{\mathrm {c}}$ values higher than those of bulk single crystals when in-plane compressive strain is successfully introduced into the films. Recently, high $T_{\mathrm {c}}$ superconductivity was reported in single-unit-cell-thick FeSe films on SrTiO$_3$ substrates. [@0256-307X-29-3-037402] Although it is unclear whether this phenomenon is characteristic of the interface, this report provides another example showing that FeSe has potential as a very high $T_{\mathrm {c}}$ superconductor. Thus, a very important question is whether we can realize this high $T_{\mathrm {c}}$ superconductivity as a bulk property. Additionally, from this viewpoint, we should investigate the effects of anisotropic strain in FeSe films. Several groups have reported the growth of FeSe films using oxide substrates. [@0953-8984-21-23-235702; @nie:242505; @PhysRevLett.103.117002; @jourdan:023913; @Jung20101977; @Chen2011515; @PhysRevLett.108.257003] However, the $T_{\mathrm {c}}$ values of the FeSe films reported to date are rather low, and there are few reports on the fabrication of FeSe thin films with good superconducting properties. The lattice constants of these FeSe films were similar to those of bulk crystals. In a previous study of Fe(Se,Te) thin film fabrication, [@APEX.3.043102] we found that $T_{\mathrm {c}}$ is positively correlated with the ratio of the lattice parameters, $c/a$. A subsequent study [@APEX.4.053101] revealed that compared with oxide substrates, the use of CaF$_2$ substrates can introduce a strong in-plane compressive strain in the films. We expect to observe the same effect for FeSe films and also expect an enhancement of superconducting properties, such as an increased $T_{\mathrm {c}}$. 
In this Letter, we report the fabrication of high-quality epitaxial thin films of FeSe on CaF$_2$ substrates using a pulsed laser deposition (PLD) method. We demonstrate that the films are compressed along the $a$-axis and their superconducting transition temperatures $T_{\mathrm c}^{\mathrm {zero}}$ reach 11.4 K, which is approximately 1.5 times higher than that of bulk crystals. Our results in this binary system are very promising, and will open a new route for the application of Fe-based superconductors. All of the films in this study were grown by the PLD method with a KrF laser. [@JJAP.49.023101; @APEX.3.043102] FeSe polycrystalline pellets were used as targets. The substrate temperature, the laser repetition rate, and the back pressure were 280$^\circ \mathrm{C}$, 10 Hz and 10$^{-6}$ Torr, respectively. Commercially available CaF$_2$ (100) substrates were used for the present experiments. Although some groups have reported that crystal orientation along the (101) direction, accomplished by using substrate temperatures as high as 500$^\circ$C, is the key for the fabrication of FeSe thin films with high $T_{\mathrm {c}}$ values, [@PhysRevLett.103.117002; @Jung20101977] we adopted lower substrate temperatures and obtained preferred $c$-axis orientation. Nevertheless, as will be described later, our films show very good superconducting properties. Indeed, films with $c$-axis orientation are advantageous for measurements of the in-plane ($ab$-plane) conductivity. We prepared eight thin films with different thicknesses. The films were fabricated in a six-terminal shape through the use of a metal mask. The measured area was 0.95 mm long and 0.2 mm wide. The thicknesses of the grown films were measured using a Dektak 6M stylus profiler and were estimated to be 60 - 235 nm. The films are designated as C1 - C8 in order of the film thickness. The specifications of all of the films are summarized in table \[tab:FilmSpec\]. 
The crystal structures and the orientations of the films were characterized by four-circle X-ray diffraction (XRD) with Cu K$\alpha$ radiation at room temperature. The $c$-axis and $a$-axis lattice constants of the films were calculated from the positions of the 001-004 reflections and the 200 reflection, respectively. The electrical resistivity was measured using a physical property measurement system (PPMS) from 2 to 300 K under magnetic fields of up to 9 T applied perpendicularly to the film surface. Superconductivity was also confirmed by magnetization measurements.

  ------ ---------- --------- --------- --------------------------------------- ------------------------------------- --------------------------------------
  Film   $d$ (nm)   $a$ (Å)   $c$ (Å)   $T_{\mathrm c}^{\mathrm {onset}}$ (K)   $T_{\mathrm c}^{\mathrm {mid}}$ (K)   $T_{\mathrm c}^{\mathrm {zero}}$ (K)
  C1     60         3.761     5.537     6.43                                    5.42                                  $<$ 4.2
  C2     75         3.747     5.549     8.60                                    7.48                                  6.11
  C3     85         3.720     5.560     11.67                                   10.51                                 8.41
  C4     92         3.730     5.567     11.53                                   10.80                                 9.33
  C5     120        3.715     5.578     11.71                                   11.46                                 10.82
  C6     150        3.714     5.584     12.35                                   11.92                                 11.38
  C7     205        3.722     5.582     11.74                                   11.44                                 10.38
  C8     235        3.730     5.579     11.24                                   11.03                                 10.47
  ------ ---------- --------- --------- --------------------------------------- ------------------------------------- --------------------------------------

  : \[tab:FilmSpec\]Specifications of the grown films.

Fig.  \[XRD\](a) shows the $\omega$-2$\theta$ X-ray diffraction patterns of the eight FeSe thin films grown on CaF$_2$ (100) substrates (C1 - C8). These films show only the 00$l$ reflections of the tetragonal PbO structure, indicating that the films are well-oriented along the $c$-axis. Fig.  \[XRD\](b) shows the $\phi$ scans of the 101 reflection of film C6. A clear four-fold symmetry reflection was obtained. The full width at half maximum (FWHM) is $\Delta \phi \sim 1.0^\circ$ (shown in the inset of fig.  \[XRD\](b)), which is comparable to those of Fe(Se,Te) films on CaF$_2$ substrates. [@APEX.4.053101] We have also confirmed the in-plane orientation to be FeSe\[100\] $\parallel$ CaF$_2$\[110\], similar to Fe(Se,Te). The calculated $a$- and $c$-axis lengths of the grown films are shown in table \[tab:FilmSpec\]. 
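The axial ratio $c/a$, which is used later as an index of in-plane compression, can be computed directly from the lattice constants in table \[tab:FilmSpec\]. A minimal sketch (values transcribed from the table; the bulk values are those quoted in the text):

```python
# Lattice constants (in Å) transcribed from table [tab:FilmSpec], films C1-C8
a = [3.761, 3.747, 3.720, 3.730, 3.715, 3.714, 3.722, 3.730]
c = [5.537, 5.549, 5.560, 5.567, 5.578, 5.584, 5.582, 5.579]
a_bulk, c_bulk = 3.775, 5.52   # bulk single-crystal values quoted in the text

for i, (ai, ci) in enumerate(zip(a, c), start=1):
    print(f"C{i}: c/a = {ci / ai:.4f}")
print(f"bulk: c/a = {c_bulk / a_bulk:.4f}")

# Every film exceeds the bulk ratio (~1.462), reflecting in-plane
# compression combined with c-axis elongation.
```

Film C6, which shows the highest $T_{\mathrm c}^{\mathrm {zero}}$, also has the largest ratio ($c/a \approx 1.503$), consistent with the correlation discussed later.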
It is clear that the $a$-axis lengths are dependent on the film thickness; the $a$-axis decreases as the film thickness increases up to 150 nm, and then the $a$-axis slightly increases for films with larger thickness. This behavior is similar to that observed in Fe(Se,Te) films on LaAlO$_3$. [@bellingeri:102512] The authors in ref.  [@bellingeri:102512] explained that this thickness dependence is due to Volmer-Weber type growth of the Fe(Se,Te) film, which we believe is also the case with FeSe films on CaF$_2$. Compared with the lattice parameters for bulk single crystals ($a = 3.775 \pm 0.02$ Å, $c = 5.52 \pm 0.03$ Å), [@PhysRevB.79.014522; @Souza; @PhysRevB.83.224502] the films grown on CaF$_2$ are compressed along the $a$-axis and are simultaneously elongated along the $c$-axis. Such short $a$-axis lengths of the FeSe films can not be explained simply by the difference between the lattice constants of the substrate and the overlayer because the lattice constants of CaF$_2$ ($a / \sqrt 2 = 3.863$ Å) are longer than the $a$-axis of FeSe. The penetration of F$^{-}$ ions from the CaF$_2$ substrates into the films has been proposed as a possible mechanism for the contraction of the $a$-axis lengths of the Fe(Se,Te) films. [@IchinoseCaF2] The substitution of small F$^{-}$ ions for large Se$^{2-}$ ions shortens the $a$-axis. Thus it is natural to consider that the short $a$-axis lengths of the FeSe films on CaF$_2$ can be explained by the same mechanism proposed for Fe(Se,Te) films. We should note that the diffusion of other atoms (Fe, Te and Ca) was not detected in the previous measurement. [@IchinoseCaF2] ![(a) XRD patterns of $\omega$-$2\theta$ scans perpendicular to the substrate plane for the eight FeSe thin films on CaF$_2$ (C1 - C8). The asterisks represent the peaks resulting from the substrate, and the number signs represent the unidentified peaks. (b) XRD pattern of the $\phi$ scan of the 101 reflection from film C6. 
(c) Enlargement of (b) around the peak at approximately $\phi$ = 10 deg.[]{data-label="XRD"}](Fig1v5.EPS){width="25em"} Fig.  \[RhoT\](a) shows the temperature dependence of the resistivities of the grown films (C1 - C8). The temperature dependence of the resistivity shows metallic behavior similar to that of the bulk samples. The magnitudes of the resistivity at room temperature are 0.35 - 0.6 m$\Omega \cdot$cm, which are smaller than those of bulk single crystalline samples. [@PhysRevB.79.014522; @Souza; @PhysRevB.83.224502] All of the films show the onset of the superconducting transition at temperatures above 4.2 K, and zero resistivity was observed in all films except film C1. For films with thickness less than 100 nm, the magnitude of the resistivity increases as the thickness becomes smaller, and $T_{\mathrm {c}}$ decreases correspondingly. Although this tendency of $T_{\mathrm {c}}$ to decrease with decreasing thickness has also been reported previously for FeSe films with $c$-axis preferred orientation, [@PhysRevLett.108.257003] our films on CaF$_2$ show higher $T_{\mathrm {c}}$ even for films with much smaller thickness than those reported in ref.  [@PhysRevLett.108.257003]. It should be noted that even for FeSe films with thicknesses less than 100 nm, $T_{\mathrm {c}}$ = 9.33 K is far better than the previously reported results for films on oxide substrates with (101) orientation ($T_{\mathrm {c}}^{\mathrm {zero}} \sim$ 6.5 K for 140 nm, $T_{\mathrm {c}}^{\mathrm {zero}} \sim$ 8 K for 1.5 $\mu$m). [@PhysRevLett.103.117002; @Jung20101977] The thickness dependence of $T_{\mathrm {c}}$ is similar to that of the $a$-axis length of the films; $T_{\mathrm {c}}$ increases with increasing film thickness up to 150 nm, and further increase of thickness results in a decrease in $T_{\mathrm {c}}$. It is remarkable that films C4 - C8 show $T_{\mathrm {c}}^{\mathrm {zero}}$ values higher than those of the bulk crystals. 
In particular, film C6 has a $T_{\mathrm {c}}^{\mathrm {zero}}$ of 11.4 K, which is approximately 1.5 times higher than those of the bulk samples. This is the first report demonstrating that FeSe films show higher $T_{\mathrm {c}}$ values than bulk single crystals, except for possible interface superconductivity [@0256-307X-29-3-037402] between FeSe and SrTiO$_3$. It should also be noted that these films do not require any buffer layers. ![(a) Temperature dependences of the resistivities of eight FeSe films on CaF$_2$ (C1 - C8). (b) Measurement of $\rho$ as a function of $T$ for film C6 under magnetic fields up to $\mu _0 H$ = 9 T applied along the $c$-axis. (c) Plot of $B_{\mathrm {c2}}^{H \parallel c}$ as a function of $T_{\mathrm c} ^{\mathrm {mid}}$ for film C6. The inset shows the linear extrapolation to $T$ = 0 K.[]{data-label="RhoT"}](Fig2v5.EPS){width="25em"} Fig.  \[RhoT\](b) shows the temperature dependence of the resistivity of film C6 under magnetic fields up to $\mu _0 H= 9$ T applied along the $c$-axis. The superconducting transition temperature decreases as the magnetic field increases. In fig. \[RhoT\](c) the upper critical field, $B_{\mathrm {c2}}^{H \parallel c}$, is plotted as a function of $T_{\mathrm {c}}^{\mathrm {mid}}$, the temperature at which the resistivity drops to a half of its value in the normal state. $B_{\mathrm c2}^{H \parallel c}$ increases almost linearly as the temperature decreases. The slight positive curvature of $B_{\mathrm c2}$($T$) observed at very low fields may derive from the multiband nature of this material. [@Bi2Pd] For multiband superconductors, it is hard to predict the upper critical field at 0 K, $B_{\mathrm c2,0}^{H \parallel c}$, from the low field data. Nevertheless, we show some estimated values in order to compare them to the data in the literature. The upper critical field at 0 K is estimated by linear extrapolation to be $B_{\mathrm c2,0}^{H \parallel c}= 33$ T. 
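The extrapolations of $B_{\mathrm c2}^{H \parallel c}$ to $T=0$ K, together with the Werthamer-Helfand-Hohenberg (WHH) estimate and the Ginzburg-Landau coherence length discussed next, can be reproduced numerically. In the sketch below, $T_{\mathrm c}$ and the slope ${\mathrm d}B_{\mathrm c2}/{\mathrm d}T$ near $T_{\mathrm c}$ are assumed values chosen to be consistent with the quoted results, not digitized data:

```python
import math

phi0 = 2.0678e-15     # magnetic flux quantum (Wb)
Tc = 11.9             # K; assumed zero-field Tc(mid) of film C6
slope = -2.78         # T/K; assumed dBc2/dT near Tc (consistent with the text)

B_lin = -slope * Tc           # linear extrapolation to T = 0 K  -> ~33 T
B_whh = -0.69 * Tc * slope    # Werthamer-Helfand-Hohenberg      -> ~22.8 T

# Ginzburg-Landau coherence length from Bc2,0 = phi0 / (2*pi*xi^2)
xi_ab = math.sqrt(phi0 / (2 * math.pi * B_whh))   # in metres

print(B_lin, B_whh, xi_ab * 1e10)   # ~33 T, ~22.8 T, ~38 Å
```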
The conventional Werthamer-Helfand-Hohenberg theory ($B_{\mathrm c2,0} = - 0.69T_{\mathrm c} ({\mathrm d} B_{\mathrm c2} / {\mathrm d} T) \bigr| _{T = T_{\mathrm c}} $) predicts that $B_{\mathrm c2,0}^{H \parallel c} =$ 22.8 T; this value is higher than any reported value for FeSe thin films that was estimated in the same way. [@jourdan:023913; @Chen2011515] Using $B_{\mathrm c2,0}^{H \parallel c} = \mathit{\Phi}_0 / (2 \pi \xi _{ab} (0) ^2)$, we obtain a Ginzburg-Landau coherence length of $\xi_{\mathrm {ab}}(0) \sim 38.0$ Å, which is longer than that of FeSe$_{0.5}$Te$_{0.5}$ thin films on CaF$_2$ substrates. [@APEX.4.053101] These aspects of FeSe thin films on CaF$_2$ should be beneficial for applications of these materials. Our results for this binary system will open a new path for applied studies of iron-based superconductors. Finally, we discuss the relation between $T_{\mathrm {c}}$ and the lattice parameters. As previously described, we expected that the in-plane compression of the films, represented by large $c/a$ values, would increase $T_{\mathrm {c}}$. Indeed, our results support this expectation. In fig. \[VScovera\], we plot the $c/a$ dependence of $T_{\mathrm {c}}^{\mathrm {zero}}$ for the eight films and for bulk single crystals [@PhysRevB.79.014522; @Souza; @PhysRevB.83.224502] and films on oxide substrates. [@Jung20101977] $T_{\mathrm {c}}^{\mathrm {zero}}$ of the FeSe films on CaF$_2$ increases monotonically as $c/a$ increases, similar to that observed for FeSe$_{0.5}$Te$_{0.5}$ films. [@APEX.3.043102] However, in contrast to FeSe$_{0.5}$Te$_{0.5}$, the $T_{\mathrm {c}}$ values of single crystals and films on oxide substrates do not seem to exhibit the same dependence on $c/a$. One possible explanation for this new complexity is the effect of disorder. It is well known that excess Fe, which occupies an additional Fe site, strongly affects the superconductivity in FeSe. 
[@PhysRevB.79.014522] We speculate that because of the high vapor pressure of Se, the ratio of Se to Fe becomes smaller than the stoichiometric value near the surface of the film, leading to an increase in the amount of Fe which occupies the extra Fe site. Indeed, we find that the resistivity increases and $T_{\mathrm {c}}$ decreases with decreasing thickness for films with thickness smaller than 100 nm. We interpret this as indicating that the influence of the regions with excess Fe becomes dominant as the films get thinner. If we consider the superconducting transition width, $\Delta T_{\mathrm {c}}$ ($ = T_{\mathrm {c}}^{\mathrm {onset}} - T_{\mathrm {c}}^{\mathrm {zero}}$), to be an index of the crystal quality and consider only the data for films with $\Delta T_{\mathrm {c}} < 1$ K, $T_{\mathrm {c}}$ seems to correlate with $c/a$, as shown by the orange dashed line in fig.  \[VScovera\]. In this case, the $c/a$ dependence of $T_{\mathrm {c}}$ of FeSe is weak compared to that of Fe(Se,Te), and the $T_{\mathrm {c}}$ values of FeSe films under anisotropic compression are unlikely to reach 37 K, which is accomplished by hydrostatic pressure. [@JPSJ.78.063704; @nmat2491] Therefore, the simultaneous compression of the $c$- and $a$-axis lengths may be a key to realizing a very high $T_{\mathrm {c}}$ material. We should note, however, that the relation between $T_{\mathrm {c}}$ and $c/a$ is a simplification and only a guiding principle in the search for high $T_{\mathrm {c}}$ materials. To understand the relation between $T_{\mathrm {c}}$ and crystal structure, we should evaluate specific structural parameters, such as the $Ch$-Fe-$Ch$ angle [@JPSJ.77.083704] and/or the $Ch$ height from the Fe layer. [@JPSJ.79.102001] ![${T_{\mathrm c}}^{\mathrm {zero}}$ values for eight FeSe films on CaF$_2$ (C1 - C8) as a function of $c/a$. 
The values for bulk single crystals [@PhysRevB.79.014522; @Souza; @PhysRevB.83.224502] and films on oxide substrates [@Jung20101977] are also plotted for comparison.[]{data-label="VScovera"}](Fig3v5.EPS){width="23em"} In summary, in order to investigate the effects of in-plane strain on the superconductivity of FeSe, we fabricated high-quality FeSe epitaxial thin films oriented along the $c$-axis on CaF$_2$ substrates. X-ray diffraction analysis showed that our films have shorter $a$-axis and longer $c$-axis lengths in comparison to bulk single crystals, demonstrating that a large in-plane compressive strain was introduced in the FeSe films. We demonstrated that $T_{\mathrm c}^{\mathrm {zero}}$ can reach 11.4 K, which is approximately 1.5 times greater than the $T_{\mathrm c}^{\mathrm {zero}}$ values of bulk crystals. Further studies of strain effects will lead to higher $T_{\mathrm {c}}$ values. Our results in this binary system are very promising and will open a new route for the application of Fe-based superconductors. We would like to thank S. Komiya and A. Ichinose for fruitful discussions. We also thank K. Fukawa at the Institute of Engineering Innovation, School of Engineering, University of Tokyo for support with the XRD measurements of the films. This research was supported by the Strategic International Collaboration Research Program (SICORP), Japan Science and Technology Agency. 
[26]{}
[1] doi:10.1021/ja800073m
[2] doi:10.1073/pnas.0807325105
[3] doi:10.1143/JPSJ.78.063704
[4] doi:10.1038/nmat2491
[5] doi:10.1063/1.3358148
[6] doi:10.1063/1.3660257
[7] doi:10.1143/APEX.4.053101
[8] doi:10.1038/ncomms2337
[9] doi:10.1143/JPSJ.78.063705
[10] http://stacks.iop.org/0256-307X/29/i=3/a=037402
[11] http://stacks.iop.org/0953-8984/21/i=23/a=235702
[12] doi:10.1063/1.3155441
[13] doi:10.1103/PhysRevLett.103.117002
[14] doi:10.1063/1.3465082
[15] doi:10.1016/j.physc.2010.08.011
[16] doi:10.1016/j.physc.2011.05.248
[17] doi:10.1103/PhysRevLett.108.257003
[18] doi:10.1143/APEX.3.043102
[19] doi:10.1143/JJAP.49.023101
[20] doi:10.1103/PhysRevB.79.014522
[21] doi:10.1140/epjb/e2010-00254-7
[22] doi:10.1103/PhysRevB.83.224502
[23] http://stacks.iop.org/0953-2048/26/i=7/a=075002
[24] doi:10.1143/JPSJ.81.113708
[25] doi:10.1143/JPSJ.77.083704
[26] doi:10.1143/JPSJ.79.102001
--- abstract: 'We present results of non-local and three terminal (3T) spin precession measurements on spin injection devices fabricated on epitaxial graphene on SiC. The measurements were performed before and after an annealing step at 150 $^{\circ}$C for 15 minutes in vacuum. The values of spin relaxation length $L_s$ and spin relaxation time $\tau_s$ obtained after annealing are reduced by a factor 2 and 4, respectively, compared to those before annealing. An apparent discrepancy between spin diffusion constant $D_s$ and charge diffusion constant $D_c$ can be resolved by investigating the temperature dependence of the $g$-factor, which is consistent with a model for paramagnetic magnetic moments.' author: - Bastian Birkner - Daniel Pachniowski - Andreas Sandner - Markus Ostler - Thomas Seyller - Jaroslav Fabian - Mariusz Ciorga - Dieter Weiss - Jonathan Eroms title: 'Annealing-induced magnetic moments detected by spin precession measurements in epitaxial graphene on SiC' --- Apart from its prospects for electronic devices[@Number1; @Number2] single layer graphene (SLG) is also a very promising candidate in the field of spintronics because it is expected that spin information can be passed in graphene over long distances[@Tombros] due to the weak spin-orbit coupling and low hyperfine interaction[@Huertas]. Up to now, however, the measured spin lifetimes in exfoliated SLG (0.5 ns at RT[@Han1], $\approx1$ ns at 4 K[@Han2]) and also in bilayer graphene ($\approx$ 2 ns at RT[@aachen]) on SiO$_2$ are still one order of magnitude smaller than in conventional semiconductor heterostructures. Even if the mobility $\mu$ for graphene on SiO$_2$ is modified by e.g. ligand-bound nanoparticles[@HanMobility] (2700 cm$^2$/Vs - 12000 cm$^2$/Vs) or by using high quality suspended graphene devices[@suspended] ($\mu>100\,000$ cm$^2$/Vs), measured spin lifetimes are below 2 ns. 
Similar values of $\tau_s$, slightly over 2 ns, were also reported for graphene epitaxially grown on a semi-insulating silicon carbide (SiC) substrate[@First; @Emtsev1] using a direct non-local measurement[@Maassen] while a huge $\tau_s$ was obtained by an indirect[@Fert] method. In Ref.  fitting the Hanle curves with a $g$-factor of 2 leads to a drastic difference between charge ($D_c$) and spin diffusion constant ($D_s$). Later the data were reinterpreted in a model employing a modified $g$-factor[@MaassenARXIV]. McCreary [*et al.*]{}[@Kawakamilocalmoments] studied the influence of artificially created paramagnetic moments on spin transport in graphene, and introduced an effective exchange field model leading to an enhanced $g$-factor. This variety of different results both at room and low temperature motivates further experiments on epitaxial graphene to understand the spin relaxation mechanism in order to control the spin information for future spintronic devices. Here we also use epitaxial graphene grown on the Si face of SiC and present non-local and three terminal[@Dash] spin precession measurements. The latter probes the spin accumulation[@Fabian] directly underneath the injector electrode induced electrically by a spin polarized current. We compare the results before and after an annealing step and observe that our measurements after annealing can be well explained with an enhanced $g$-factor assuming that $D_c$ and $D_s$ are equal. As the temperature dependence of this increased $g$-factor shows a clear $1/T$ (paramagnetic) behavior, we believe that annealing creates local magnetic moments which influence the spin transport properties. ![(Color online) **(a)** Schematic drawing of the non-local and 3T measurement setup which is used for electrical spin injection and detection. The magnetic field B$_{z}$ is applied along the z axis. **(b)** SEM picture of the epitaxial graphene spin valve device.[]{data-label="fig:sketch"}](Fig1.eps){width="7.5cm"} Fig. 
\[fig:sketch\][(a)]{} shows a sketch of the applied measurement methods, Fig. \[fig:sketch\][(b)]{} a SEM picture of the used epitaxial graphene spin injection device. The graphene stripes having a width of $W=30~\mu$m and a length of about 750 $\mu$m are produced using a negative resist based electron beam lithography (EBL) step and oxygen plasma etching for 30 s (30 mTorr O$_2$, 50 W). Afterwards a thin tunneling barrier (AlO$_x$) with a thickness of about 1 nm was produced by depositing Al atoms over the entire cooled sample (180 K) in a UHV system ($p\approx10^{-9}$ mbar) and subsequent oxidation in the load lock in pure oxygen atmosphere ($p\approx3\times10^{-2}$ mbar) at RT for 30 minutes. This AlO$_x$ tunneling barrier with a contact resistance $R_c\geq2~k\Omega$ provides high spin injection efficiencies and reduces spin relaxation induced by the contacts[@Pop]. The ferromagnetic (FM) cobalt electrodes (Co 20 nm) with a width of 200 nm (contact A) and 500 nm (contact B) and the non-magnetic palladium contacts (Pd 80 nm) were each patterned using a positive PMMA resist based EBL step. The evaporation is done via electron gun (Co) and thermally (Pd) at a base pressure of about $5\times10^{-7}$ mbar followed by a standard lift-off technique. The distance $L$ between the edges of the FM stripes is 2 $\mu$m. Finally the sample is glued into a chip carrier and the measurements are done using a standard DC setup in a Cryogenics He-4 cryostat ($T=1.6\ldots300$ K) equipped with a vector magnet ($B_{x,y,z}=-1\ldots1$ T). The complete sample fabrication is done without applying a high temperature cleaning step. ![(Color online) **(a)** Hanle spin precession measurement (background removed) with a DC current of +10 $\mu$A at 1.7 K and fit (continuous line) in non-local configuration. **(b)** 3T measurement (background removed) and Lorentzian fit (continuous line). 
Both measurements are done *before* an annealing step.[]{data-label="fig:before"}](Fig2.eps){width="8.5cm"} In Fig. \[fig:before\] typical Hanle curves in the non-local and three terminal setup at 1.7 K are shown when using contact A as an injector. The FM stripes are magnetized in parallel configuration and the magnetic field $B_z$ is applied out-of plane which leads to dephasing of the spin signal. In our convention, $R_{nl}$ is negative for parallel magnetization. The continuous curve for the non-local curve in Fig. \[fig:before\][(a)]{} is the numerical fit with the solution of the following equation[@Ciorga] $$\label{nonlocal} -R_\mathit{nl}=\frac{V_\mathit{nl}}{I}=\frac{P^2R_sL_s}{2W}\int_{0}^{\infty}\frac{cos(\omega_L t)}{\sqrt{4\pi D_s t}}e^{\frac{-(x_2-x_1)^2}{4D_s t}}e^{\frac{-t}{\tau_s}}dt$$ where $P$ is assumed to be the same for both FM stripes and $x_1$, $x_2$ are the points of injection and detection, respectively. $P$ is the spin injection efficiency, $I$ the injection current, $\omega_L=g\mu_BB_z/\hbar$ the Larmor frequency with the Landé factor $g$, $R_s$ is the sheet resistance of graphene, $W$ the width of the graphene stripe, $D_s$ is the spin diffusion constant, finally $\tau_s$ and $L_s=\sqrt{D_s\tau_s}$ are the spin relaxation time and length, respectively. An influence of drift can be neglected due to the low bias current of 10 $\mu$A [@Fabian; @Kameno]. The Hanle signal $R_\mathit{3T}$ in 3T configuration (Fig. \[fig:before\](b)) can be fitted with the following Lorentzian[@Dash] curve: $$\label{R3T} R_\mathit{3T}=\frac{V_\mathit{3T}}{I}=\frac{P^2R_sL_s}{2W(1+(\omega_L\cdot\tau_s)^2)}$$ We observe experimentally that $\tau_s$ from this fit coincides with $\tau_s$ obtained from the non-local measurement[@remark]. As the amplitude of the non-local signal is determined by the product of $P^2$ and $L_s$ these parameters are not independent in the fitting procedure. 
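The integral in Eq. (\[nonlocal\]) has no closed form and is evaluated numerically in the fits. The sketch below does this with plain trapezoidal integration, using parameter values of the order of those quoted in this work; the spin injection efficiency $P$ is a hypothetical placeholder, not a fitted value:

```python
import numpy as np

MU_B = 9.274e-24    # Bohr magneton (J/T)
HBAR = 1.0546e-34   # reduced Planck constant (J s)

def hanle_nonlocal(Bz, P, Rs, W, L, Ds, tau_s, g=2.0, n_steps=200_000):
    """Numerically integrate the non-local Hanle signal -R_nl of Eq. (1)."""
    Ls = np.sqrt(Ds * tau_s)
    omega_L = g * MU_B * Bz / HBAR
    t = np.linspace(1e-16, 50 * tau_s, n_steps)
    y = (np.cos(omega_L * t) / np.sqrt(4 * np.pi * Ds * t)
         * np.exp(-L**2 / (4 * Ds * t)) * np.exp(-t / tau_s))
    dt = t[1] - t[0]
    integral = np.sum(0.5 * (y[:-1] + y[1:])) * dt   # trapezoidal rule
    return P**2 * Rs * Ls / (2 * W) * integral

# Before-annealing numbers from the text; P = 0.1 is a placeholder
args = dict(P=0.1, Rs=1.5e3, W=30e-6, L=2e-6, Ds=171e-4, tau_s=81.3e-12)
R0 = hanle_nonlocal(0.0, **args)
R1 = hanle_nonlocal(0.15, **args)   # signal dephases in a perpendicular field
```

With these numbers $L_s=\sqrt{D_s\tau_s}\approx1.18~\mu$m, reproducing the value quoted in the text.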
Therefore $L_s=1.18~\mu$m is estimated assuming an exponential decay of the spin signal given by $R_\mathit{3T}(B_z=0)$ at $ L=0~\mu$m and $R_\mathit{nl}(B_z=0)$ at $L=2~\mu$m. Now $P$ and $\tau_s$ are the free fitting parameters and $D_s$ can be calculated via $D_s=L^2_s/\tau_s$. In Fig. \[fig:before\] one can see that for both non-local and 3T spin precession measurements the results for $P$, $\tau_s$ and $D_s$ are almost identical. This agreement indicates that these signals originate from an induced spin accumulation into graphene. Slight differences especially in the spin injection efficiency $P$ can be explained by a small anisotropic magneto resistance contribution of about 0.5 $\Omega$ of the FM stripes to $R_\mathit{3T}$ determined in reference measurements (not shown). This small deviation leads to an absolute error of $L_s$ of about $\Delta L_s=200$ nm. Fitting the non-local measurements, we obtain $\tau_s=81.3$ ps which is slightly smaller than in exfoliated SLG[@Tombros; @Pop; @Han]. The resulting spin diffusion constant $D_s=171$ cm$^2$/s is comparable to the charge diffusion constant $D_c=\frac{1}{2}l_pv_F=158$ cm$^2$/s extracted from a reference sample grown with identical parameters and also covered with AlO$_x$ produced with the same processing steps as for the tunneling barriers. This similarity shows that the value of $D_s$ extracted from the Hanle fit is reliable. $l_p=\frac{\hbar}{e}\sqrt{n\pi}\mu$ is the mean free path, $v_F=10^6$ m/s is the Fermi velocity in graphene, $n=5.9\times10^{12}$ cm$^{-2}$ is the charge carrier density and $\mu=1126$ cm$^2$/Vs is the mobility of the reference sample. 
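The charge diffusion constant quoted above follows directly from these numbers; a minimal sketch:

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant (J s)
E = 1.6022e-19      # elementary charge (C)
V_F = 1.0e6         # Fermi velocity in graphene (m/s)

n = 5.9e12 * 1e4    # carrier density of the reference sample (m^-2)
mu = 1126e-4        # mobility of the reference sample (m^2 V^-1 s^-1)

l_p = (HBAR / E) * math.sqrt(n * math.pi) * mu   # mean free path (m)
Dc = 0.5 * l_p * V_F                             # charge diffusion constant (m^2/s)

print(l_p * 1e9, Dc * 1e4)   # ~32 nm and ~160 cm^2/s
```

This reproduces the quoted $D_c=158$ cm$^2$/s within the rounding of the constants used.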
  ---------- ------- ------- -------------- ---------- ------- ------- ---------- ------- ------------------
  $\tau_s$   $D_s$   $g_0$   **Injector**   $\tau_s$   $D_s$   $g_0$   $\tau_s$   $D_s$   $g_\mathit{eff}$
  81.3       171     2       A              95         37      2       22         160     8
  108        208     2       B              165        21      2       22         160     11
  ---------- ------- ------- -------------- ---------- ------- ------- ---------- ------- ------------------

  : $D_s$ \[cm$^2$/s\], $\tau_s$ \[ps\] and $g$-factor before annealing (left three columns) and after annealing (middle three columns: fit with fixed $g=g_0$; right three columns: fit with free $g_\mathit{eff}$ and $D_s=D_c$) for injector contacts A and B at $T=1.7$ K. After annealing the measurements can also be fitted with an enhanced $g_\mathit{eff}>g_0$.

In order to check if annealing influences the charge transport properties and/or the induced spin accumulation, a post annealing step is done at 150 $^{\circ}$C for 15 minutes in vacuum to avoid intercalation of hydrogen[@Speck; @Riedl] via forming gas. Then we repeat the same spin precession measurements as before and, interestingly, we observe that $\tau_s$ increases whereas $D_s$ is decreased by almost a factor of 5 if we assume the same $g$-factor as before annealing ($g=g_0=2$). For the configuration with contact B as an injector we observe an even larger decrease of $D_s$ by a factor of about 10 (Table I), which can be explained by an inhomogeneity of $R_s$ after AlO$_x$ deposition also observed in the reference sample. At this point we conclude that annealing affects the spin transport properties and we observe the same apparent reduction of $D_s$ as in Ref. , where a high temperature annealing step was included in the sample preparation procedure. $L_s=594$ nm is reduced by a factor of 2 after annealing and is again extracted from the exponential decay of the spin signal at the injection point ($R_\mathit{3T}(0)$) and at a distance $L=2$ $\mu$m ($R_{nl}(0)$). The fact that both $L_s$ and the 3T amplitude at zero magnetic field decrease by almost the same factor indicates that the applied post annealing step also affects the induced spin accumulation underneath the injector electrode. 
As the spin transport sample did not allow us to determine the mobility and charge carrier density independently we also annealed the reference sample (covered with AlO$_x$) under the same conditions as the spin transport sample. From low field Hall measurements at 1.7 K we get an enhanced charge carrier density $n=8.4\times10^{12}$ cm$^{-2}$ after annealing whereas the mobility just slightly increases to $\mu=1237$ cm$^2$/Vs. We conclude now that a change in $R_s$ is mainly caused by a change in the charge carrier density. Following the results of the reference sample we ascribe the small sheet resistance increase from $R_s=1.5$ k$\Omega$ before and $R_s=1.7$ k$\Omega$ after annealing of the spin transport sample to a minute reduction in doping. The charge diffusion constant $D_c$ (and also $D_s$) is therefore slightly decreased from 171 cm$^2$/s to 160 cm$^2$/s ($D_c\propto\sqrt{n}$) for the spin transport sample[@Supplemental]. In conclusion, the minor decrease in $D_c$ due to the annealing step cannot explain the strong reduction of $D_s$ extracted from the Hanle fits. That means we have the following situation: $D^\mathit{before}_c\approx D^\mathit{after}_c\gg D^\mathit{after}_s$. In an attempt to understand the discrepancy of $D_c$ and $D_s$ Maassen [*et al.*]{}[@Maassen] first considered localized states in the electrically inert buffer layer[@Emtsev2] (BL) which could provide hopping sites for the spins being able to change the spin transport properties but not the charge transport properties. The difference in $D_s$ and $D_c$ was also recently discussed by McCreary [*et al.*]{}[@Kawakamilocalmoments]. They assume a formation of local magnetic moments by Ar sputtering or from hydrogen adatoms on exfoliated graphene samples which provide an enhanced magnetic field for the diffusing spins, which can be modeled by an effective $g$-factor. 
Maassen [*et al.*]{}[@MaassenARXIV] reinterpreted their experiments[@Maassen] using a model of localized states, where the effective Larmor frequency is increased in the limit of strong coupling, which again can be expressed by an enhanced $g$-factor and then allows one to set $D_c=D_s$. ![(Color online) **(a)** Hanle spin precession measurement (background removed) at 1.7 K and fit (continuous line) in non-local configuration. **(b)** 3T measurement (background removed) and Lorentzian fit (continuous line). Both measurements are done *after* an annealing step. The fits are done treating the $g$-factor as a free parameter.[]{data-label="fig:after"}](Fig3.eps){width="8.5cm"} For this reason we also fit our non-local and 3T data after annealing, treating the $g$-factor in the Larmor frequency (Eqs. (\[nonlocal\]) and (\[R3T\])) as a free parameter, and assuming $D^\mathit{after}_c=D^\mathit{after}_s=160~$cm$^2$/s. Fig. \[fig:after\] shows that our data can also be well fitted with an enhanced $g$-factor of 8 in both the non-local and the 3T setups. The oscillations observed in the Hanle curve (Fig. \[fig:after\][(a)]{}) at higher magnetic fields are phase coherent contributions and vanish at higher temperatures. If we summarize our experimental findings so far (Table I), we can conclude that our measurements after annealing can be explained either by $D^\mathit{after}_c\neq D^\mathit{after}_s$ or by an effective Landé factor $g_\mathit{eff}>2$, as both models reproduce the data equally well since Eqs. (\[nonlocal\]) and (\[R3T\]) contain the $g$-factor implicitly in the Larmor frequency $\omega_L$ and are invariant under a rescaling of $g$, $\tau_s$ and $D_s$ [@MaassenARXIV]. To determine which model (hopping or magnetic moments) is appropriate in our situation, we study the temperature dependence of spin transport. We observe that the enhanced effective $g$-factor, as well as the amplitude of the spin signal, decrease with increasing temperature. 
From the $T$ dependence of the reference sample and the $T$ dependence of $R_s$ of the spin transport sample after annealing we conclude that $D_c$ is weakly influenced by the temperature. This was also included in the Hanle fits (Fig. \[fig:T\]). ![(Color online) Temperature dependent non-local Hanle measurements (background removed) with $g$-factor as further fitting parameter.[]{data-label="fig:T"}](Fig4.eps){width="8.5cm"} If $g_\mathit{eff}$ originates from magnetic moments then its temperature dependence can be described by the following equation[@Kawakamilocalmoments]: $$\label{gfactor} g_\mathit{eff}(T)=g_0+\frac{g_0\eta_M A_\mathit{ex}}{k_BT}$$ This is the low field approximation of the Brillouin function of a spin $1/2$ paramagnetic material. $A_\mathit{ex}$ is the strength of the exchange coupling, $\eta_M$ represents the filling density of the magnetic moments, $g_0=2$ is the $g$-factor for free electrons and $k_B$ is the Boltzmann constant. As it is seen in Fig. \[fig:TAbfall\] the measured temperature dependence of $g_\mathit{eff}$ is well described by Eq. (\[gfactor\]). This temperature dependence is compatible with the effective exchange field model proposed by McCreary [*et al.*]{}[@Kawakamilocalmoments] which describes the enhancement of the magnetic field felt by the diffusing spins due to localized paramagnetic moments. Maassen [*et al.*]{}[@MaassenARXIV], on the other hand, interpret their data by hopping of the diffusing spins into localized states which leads to an apparent enhancement of the $g$-factor in the Hanle fit. In their work the increase of $g$ is most pronounced at room temperature, whereas in our case the maximum $g_\mathit{eff}$ is obtained at low temperature. 
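Eq. (\[gfactor\]) contains $\eta_M$ and $A_\mathit{ex}$ only through their product, so a single Curie-like fit parameter suffices. A minimal sketch, with a hypothetical value of that product chosen so that $g_\mathit{eff}\approx8$ at 1.7 K as in the fits above:

```python
KB = 8.6173e-5   # Boltzmann constant (eV/K)
G0 = 2.0         # free-electron g-factor

def g_eff(T, eta_M_A_ex):
    """Low-field paramagnetic form of Eq. (3); eta_M_A_ex = eta_M * A_ex in eV."""
    return G0 + G0 * eta_M_A_ex / (KB * T)

# Hypothetical product fixing g_eff(1.7 K) = 8, as in the Hanle fits
eta_M_A_ex = (8.0 - G0) * KB * 1.7 / G0   # ~4.4e-4 eV

print(g_eff(1.7, eta_M_A_ex))     # ≈ 8, by construction
print(g_eff(300.0, eta_M_A_ex))   # ≈ 2.03: approaches g0 at room temperature
```

The $1/T$ decay toward $g_0$ at high temperature is what distinguishes this paramagnetic-moment picture from the hopping model discussed below.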
We therefore believe that post annealing creates a number of randomly positioned magnetic moments resulting in an increased effective magnetic field $B_\mathit{eff}=B_z+B_\mathit{ex}$ composed of the applied out-of-plane magnetic field $B_z$ and of an exchange field $B_\mathit{ex}$ coming from the induced magnetic moments[@Supplemental2]. This enhanced magnetic field can be modeled by an effective $g$-factor in the Larmor frequency $\omega_L$. As we obtain nearly the same Landé factor from the non-local Hanle and Lorentzian fits, we cannot decide if the magnetic moments are formed in the graphene/buffer layer transition region or at the AlO$_x$/graphene interface. The difference between our experiment and the work of Maassen [*et al.*]{}[@Maassen] may be due to different annealing conditions. ![(Color online) Temperature dependence of the $g$-factor. The continuous curve is the fit according to Eq. (\[gfactor\]).[]{data-label="fig:TAbfall"}](Fig5.eps){width="8.4cm"} As to the origin of the magnetic moments, we assume that defects or vacancies[@Yazyev] which are already present in our epitaxial graphene are modified via the annealing step at 150 $^{\circ}$C. One example could be step edges, which are known to occur frequently in epitaxial graphene on SiC[@Emtsev1]. This is supported by weak localization measurements on the reference sample, which yield a very short intervalley scattering length of $L_i\approx40$ nm, and also by THz photocurrent experiments on the reference sample where photocurrents were detected in the bulk of the sample[@Ganichev1; @Ganichev2] at normal incidence, which can only be explained by a lowering of the symmetry[@Ganichev3]. Annealing then only changes the termination of the step edges, which influences their magnetic behavior. In conclusion, an electrically induced spin imbalance from ferromagnetic Co stripes can be analyzed via spin precession measurements in both non-local and three terminal configurations. 
By introducing a post annealing step, we observe that the spin relaxation length as well as the non-local and 3T Hanle amplitudes decrease. Fitting of the non-local and 3T data after annealing shows an increase of the $g$-factor if spin and charge diffusion constants are assumed to be the same. The origin of the $g$-factor enhancement lies in local magnetic moments formed by annealing. The reduced spin lifetime and spin relaxation length support this assumption, because local magnetic moments act as an additional spin scattering source. Finally, the temperature dependence gives clear evidence that paramagnetic moments are created, as the effective $g$-factor scales as $1/T$. Support from the DFG within SFB 689 “Spin phenomena in reduced dimension” and SPP 1459 “Graphene” is gratefully acknowledged. We would like to thank F. Fromm, T. Maassen, S. Ganichev, and R. Kawakami for helpful discussions and T. Korn, F. Yaghobian, and C. Schüller for supporting Raman measurements. [99]{} A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. **81**, 109 (2009). K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature (London) **438**, 197 (2005). N. Tombros, C. Józsa, M. Popinciuc, H. T. Jonkman, and B. J. van Wees, Nature (London) **448**, 571 (2007). D. Huertas-Hernando, F. Guinea, and A. Brataas, Phys. Rev. B **74**, 155426 (2006). W. Han, K. Pi, K. M. McCreary, Y. Li, J. J. I. Wong, A. G. Swartz, and R. K. Kawakami, Phys. Rev. Lett. **105**, 167202 (2010). W. Han and R. K. Kawakami, Phys. Rev. Lett. **107**, 047207 (2011). T.-Y. Yang, J. Balakrishnan, F. Volmer, A. Avsar, M. Jaiswal, J. Samm, S. R. Ali, A. Pachoud, M. Zeng, M. Popinciuc, G. Güntherodt, B. Beschoten, and B. Özyilmaz, Phys. Rev. Lett. **107**, 047206 (2011). W. Han, J.-R. Chen, D. Wang, K. M. McCreary, H. Wen, A. G. Swartz, J. Shi, and R. Kawakami, Nano Lett. **12**, 3443 (2012). M. H.
D. Guimarães, A. Veligura, P. J. Zomer, T. Maassen, I. J. Vera-Marun, N. Tombros, and B. J. van Wees, Nano Lett. **12**, 3512 (2012). P. N. First, W. A. de Heer, T. Seyller, C. Berger, J. A. Stroscio, and J.-S. Moon, MRS Bull. **35**, 296 (2010). K. V. Emtsev, A. Bostwick, K. Horn, J. Jobst, G. L. Kellogg, L. Ley, J. L. McChesney, T. Ohta, S. A. Reshanov, J. Röhrl, E. Rotenberg, A. K. Schmid, D. Waldmann, H. B. Weber, and T. Seyller, Nat. Mat. **8**, 203 (2009). T. Maassen, J. J. van den Berg, N. IJbema, F. Fromm, T. Seyller, R. Yakimova, and B. J. van Wees, Nano Lett. **12**, 1498 (2012). B. Dlubak, M.-B. Martin, C. Deranlot, B. Servet, S. Xavier, R. Mattana, M. Sprinkle, C. Berger, W. A. De Heer, F. Petroff, A. Anane, P. Seneor, and A. Fert, Nature Physics **8**, 557 (2012). T. Maassen, J. J. van den Berg, E. H. Huisman, H. Dijkstra, F. Fromm, T. Seyller, and B. J. van Wees, arXiv:1208.3129v1 (2012). K. M. McCreary, A. G. Swartz, W. Han, J. Fabian, and R. K. Kawakami, Phys. Rev. Lett. **109**, 186604 (2012). S. P. Dash, S. Sharma, R. S. Patel, M. P. de Jong, and R. Jansen, Nature **462**, 491 (2009). J. Fabian, A. Matos-Abiague, C. Ertler, P. Stano, and I. Zutic, Acta Phys. Slov. **57**, 565 (2007). M. Popinciuc, C. Józsa, P. J. Zomer, N. Tombros, A. Veligura, H. T. Jonkman, and B. J. van Wees, Phys. Rev. B **80**, 214427 (2009). M. Ciorga, A. Einwanger, U. Wurstbauer, D. Schuh, W. Wegscheider, and D. Weiss, Phys. Rev. B **79**, 165321 (2009). M. Kameno, Y. Ando, E. Shikoh, T. Shinjo, T. Sasaki, T. Oikawa, Y. Suzuki, T. Suzuki, and M. Shiraishi, Appl. Phys. Lett. **101**, 122413 (2012). Strictly speaking, the Lorentzian is only obtained for contacts much wider than the spin relaxation length. In our case, this may lead to an underestimated $\tau_s$ by a factor of 2. W. Han and R. K. Kawakami, Phys. Rev. Lett. **107**, 047207 (2011). C. Riedl, C. Coletti, T. Iwasaki, A. A. Zakharov, and U. Starke, Phys. Rev. Lett. **103**, 246804 (2009). F. Speck, J.
Jobst, F. Fromm, M. Ostler, D. Waldmann, M. Hundhausen, H. B. Weber, T. Seyller, Appl. Phys. Lett. **99**, 122106 (2011). See supplemental material for control measurements and more details on the effective exchange field model. K. V. Emtsev, F. Speck, T. Seyller, L. Ley and J. D. Riley, Phys. Rev. B **77**, 155303 (2008). For further discussion regarding the dip feature around $B=0$ when $B$ is applied along the ferromagnetic stripes see Ref. . O. V. Yazyev, and L. Helm, Phys. Rev. B **75**, 125408 (2007). J. Karch, C. Drexler, P. Olbrich, M. Fehrenbacher, M. Hirmer, M. M. Glazov, S. A. Tarasenko, E. L. Ivchenko, B. Birkner, J. Eroms, D. Weiss, R. Yakimova, S. Lara-Avila, S. Kubatkin, M. Ostler, T. Seyller, and S. D. Ganichev, Phys. Rev. Lett. **107**, 276601 (2011). J. Karch, P. Olbrich, M. Schmalzbauer, C. Zoth, C. Brinsteiner, M. Fehrenbacher, U. Wurstbauer, M. M. Glazov, S. A. Tarasenko, E. L. Ivchenko, D. Weiss, J. Eroms, R. Yakimova, S. Lara-Avila, S. Kubatkin, and S. D. Ganichev, Phys. Rev. Lett. **105**, 227402 (2010). S. D. Ganichev, private communication. W. Han, K. Pi, K. M. McCreary, Y. Li, J. J. I. Wong, A. G. Swartz, and R. K. Kawakami, Phys. Rev. Lett. **105**, 167202 (2010). [**Supplemental Material**]{} We report here on magnetotransport measurements of the reference sample before and after an annealing step and discuss the observability of a dip in the non-local in plane measurements. The strong modification of the Hanle curves before and after annealing can be explained by either an enormous reduction of the diffusion constant after annealing or by the creation of localized moments, resulting in a strongly enhanced $g$-factor of the electrons. To check that annealing barely changes the transport properties, we characterized the reference sample with magnetotransport measurements before and after an annealing step at 150 $^{\circ}$C for 15 minutes in vacuum. 
From these measurements we can extract the charge carrier density $n_s$ and the mobility $\mu$ in order to check the influence of annealing on these parameters. To this end we applied a perpendicular magnetic field and measured the Hall resistance $R_H$ at 1.7 K. We also determined the sheet resistance $R_s$ of graphene by applying the van der Pauw method. Finally, using the Drude formula, we calculated the carrier mobility $\mu=(R_s n_s e)^{-1}$. Both carrier density and mobility before and after annealing are given as insets in Fig. \[fig:SHall\]. From this study we conclude that annealing can indeed change the charge carrier density and the sheet resistance slightly, but the mobility is almost unaffected. That means that a change in the sheet resistance results mainly from a change in the charge carrier density. Taking the results from the reference sample, we draw the following conclusion for the sample in which spin transport was studied. In this sample, the sheet resistance increased slightly upon annealing, which can now be ascribed to a minute reduction in the charge carrier density. Therefore the charge diffusion constant of our spin transport sample is only slightly decreased, as $D_c\propto\sqrt{n_s}$, and cannot explain the strong reduction of the spin diffusion constant extracted from the Hanle data when using the $g$-factor for a free electron. We also performed Hall measurements for a number of temperatures after the annealing step, and determined the mobility $\mu$ at $T=1.7\ldots40$ K. From those $T$-dependent measurements we conclude that the carrier density $n_s$ is barely $T$-dependent, but the mobility increases as the temperature is lowered (Fig. \[fig:S2\]). This information, together with the $T$-dependence of the sheet resistance of the spin transport sample, allows us to model the $T$-dependence of the diffusion constant, which we need to determine the $T$-dependence of the $g$-factor.
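The extraction chain described above (van der Pauw sheet resistance plus Hall density, combined through the Drude relation $\mu=(R_s n_s e)^{-1}$) can be sketched as follows; the sheet resistances and carrier densities are hypothetical illustrative numbers, not the measured values of the reference sample.

```python
# Sketch of the mobility extraction via the Drude relation
# mu = 1/(R_s * n_s * e).  All input numbers are hypothetical,
# chosen only to illustrate the logic of the argument.
E_CHARGE = 1.602e-19  # elementary charge (C)

def drude_mobility(r_s, n_s):
    """Mobility in cm^2/(V s) from R_s (Ohm/sq) and n_s (cm^-2)."""
    return 1.0 / (r_s * n_s * E_CHARGE)

mu_before = drude_mobility(r_s=1.0e3, n_s=1.0e13)
mu_after = drude_mobility(r_s=1.2e3, n_s=0.83e13)  # R_s up, n_s down

# the density change alone implies only a small change of D_c ~ sqrt(n_s)
dc_ratio = (0.83e13 / 1.0e13) ** 0.5
```

With these numbers the mobility stays nearly constant while $R_s$ changes, mirroring the conclusion that the resistance change is density-driven and that $D_c$ is only mildly reduced.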
![Magnetotransport measurement of the reference sample before and after an annealing step.[]{data-label="fig:SHall"}](FigS1.eps){width="50.00000%"} ![Charge carrier density $n_s$ and mobility $\mu$ versus temperature $T$ of the reference sample.[]{data-label="fig:S2"}](FigS2.eps){width="50.00000%"} Furthermore, we discuss the dip feature around $B=0$ in the non-local spin valve measurements which was experimentally observed in Ref. . Kawakami [*et al.*]{} observe a pronounced dip in the non-local resistance when they sweep the magnetic field in the direction along the ferromagnetic electrodes. This feature can be explained assuming the effective exchange-field model, where the spin relaxation time $T^\mathit{total}_1$ is modeled in the following way: $$\frac{1}{T^\mathit{total}_1}=\frac{1}{\tau^{so}}+\frac{1}{\tau^{ex}_1} \label{Eq:T1total}$$ where $$\frac{1}{\tau^{ex}_1} = \frac{\frac{(\Delta B)^2}{\tau_c}\left(\frac{g_e}{g_e^*}\right)^2}{(B_\mathit{app,y})^2+\left(\frac{\hbar}{g_e^*\mu_B \tau_c}\right)^2}.$$ Here, $g_e^*$ is the effective $g$-factor, $g_e=2$ is the bare electron $g$-factor, $\Delta B$ and $\tau_c$ are the fluctuation amplitude and correlation time of the fluctuating field felt by the moving electron spins, and $B_\mathit{app,y}$ is the external field, applied in the direction of the ferromagnetic electrodes. In the case of Kawakami’s experiment, the $B$-independent spin lifetime $\tau^{so}$ obtained from non-local Hanle measurements is quite long, so an ensemble of fluctuating spins with $\Delta B=6.78$ mT and $\tau_c=192$ ps leads to a sizeable reduction of $T^\mathit{total}_1$. Since the spin-flip length is $L_s = \sqrt{D_s\,T^\mathit{total}_1}$ and the non-local signal in the tunneling regime[@Pop; @Han2010] depends on $L_s$ according to $$\label{Rnl} R_\mathit{nl}=\frac{P^2R_sL_s}{2W}\exp(-L/L_s) ,$$ a variation of $T^\mathit{total}_1$ directly modifies the observed non-local signal.
In our case, the spin lifetime before and after annealing is 80 ps and 22 ps, respectively. Annealing creates localized moments, reducing the $B$-independent spin lifetime (modeled by $\tau^{so}$ in Eq. (\[Eq:T1total\])) due to spin-flip scattering with the localized moments, while the additional, $B$-dependent reduction resulting from the effective exchange field model is barely visible. This is shown in Fig. \[fig:S3\] and Fig. \[fig:S4\]. ![$T^\mathit{total}_1$ from Kawakami’s group.[]{data-label="fig:S3"}](FigS3.eps){width="50.00000%"} In Fig. \[fig:S3\] we reproduce the expected $T^\mathit{total}_1$-time for the experiment of the Kawakami group, showing a pronounced dip in $T^\mathit{total}_1$ around $B=0$. If we replace $\tau^{so}$ with the experimental value for our sample after annealing, we find that for the same $\Delta B=6.78$ mT and $\tau_c=192$ ps as in Kawakami’s experiment the dip feature is now almost invisible (Fig. \[fig:S4\]). ![$T^\mathit{total}_1$ with $\tau^{so}$ from our data.[]{data-label="fig:S4"}](FigS4.eps){width="50.00000%"} This also holds if we increase $\Delta B$ to 20 mT and $\tau_c$ to 3 ns (Fig. \[fig:S5\] and Fig. \[fig:S6\]). The corresponding tiny change in $R_{nl}$ of at most a few m$\Omega$ would be unobservable, given the noise level of about 10 m$\Omega$. Indeed, experimentally we did not observe such a feature. ![$T^\mathit{total}_1$ for different $\Delta B$.[]{data-label="fig:S5"}](FigS5.eps){width="50.00000%"} ![$T^\mathit{total}_1$ for different $\tau_c$.[]{data-label="fig:S6"}](FigS6.eps){width="50.00000%"}
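The suppression of the dip by a short $\tau^{so}$ can be reproduced directly from Eq. (\[Eq:T1total\]). $\Delta B$, $\tau_c$ and the 22 ps lifetime are the values quoted above; the "long" $\tau^{so}$ of 2 ns standing in for Kawakami's experiment is an assumed illustrative value.

```python
# Model of Eq. (Eq:T1total): 1/T1 = 1/tau_so + 1/tau_ex(B).
# Delta B = 6.78 mT and tau_c = 192 ps are the quoted values; the 2 ns
# "long" tau_so is an assumed stand-in for Kawakami's experiment.
HBAR = 1.0546e-34  # J s
MU_B = 9.274e-24   # J/T
G_E = 2.0          # bare electron g-factor

def t1_total(b_y, tau_so, d_b=6.78e-3, tau_c=192e-12, g_star=2.0):
    """Total spin lifetime (s) at in-plane field b_y (T)."""
    inv_tau_ex = ((d_b**2 / tau_c) * (G_E / g_star)**2
                  / (b_y**2 + (HBAR / (g_star * MU_B * tau_c))**2))
    return 1.0 / (1.0 / tau_so + inv_tau_ex)

def dip_depth(tau_so):
    """Relative depth of the T1 dip between B = 0 and B = 0.2 T."""
    return 1.0 - t1_total(0.0, tau_so) / t1_total(0.2, tau_so)

dip_long = dip_depth(2e-9)     # long lifetime: pronounced dip
dip_short = dip_depth(22e-12)  # 22 ps after annealing: dip washed out
```

For the long lifetime the dip reaches tens of percent, whereas for the 22 ps lifetime it shrinks below one percent, consistent with the corresponding change in $R_\mathit{nl}$ being buried in the noise.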
--- abstract: 'A unified treatment of mass varying dark matter coupled to cosmon-[*like*]{} dark energy is shown to result in [*effective*]{} generalized Chaplygin gas (GCG) scenarios. The mass varying mechanism is treated as a cosmon field inherent effect. Coupling dark matter with dark energy allows for reproducing the conditions for the present cosmic acceleration and for recovering the stability resulting from a positive squared speed of sound $c_{s}^{2}$, as in the GCG scenario. The scalar field mediates the nontrivial coupling between the dark matter sector and the sector responsible for the accelerated expansion of the universe. The equation of state of perturbations is the same as that of the background cosmology, so that all the effective results from the GCG paradigm are maintained. Our results suggest the mass varying mechanism, when obtained from an exactly soluble field theory, as the agent responsible for the stability and for the cosmic acceleration of the universe.' author: - 'A. E. Bernardini' date: - - title: Mass varying dark matter in effective GCG scenarios --- Introduction ============ The ultimate nature of the dark sector of the universe is the most relevant issue related to the negative pressure component required to understand why and how the universe is undergoing a period of accelerated expansion [@Zla98; @Wan99; @Ste99; @Bar99; @Ber00]. A natural and simplistic explanation for this is obtained in terms of a tiny positive cosmological constant introduced in the Einstein equation for the universe. Since the cosmological constant has a magnitude completely different from that predicted by theoretical arguments, and it is often confronted with conceptual problems, physicists have been compelled to consider other explanations [@Ame02; @Kam02; @Bil02; @Ber02; @Cal03; @Mot04; @Bro06A].
Motivated by high energy physics, an alternative for obtaining a negative pressure equation of state considers that the dark energy can be attributed to the dynamics of a scalar field $\phi$ which realizes the present cosmic acceleration by evolving slowly down its potential $V(\phi)$ [@Pee87; @Rat87]. These models assume that the vacuum energy can vary [@Bro33]. Following theoretical as well as phenomenological arguments, several possibilities have been proposed, such as $k$-essence [@Chi00; @Arm01], phantom energy [@Sch01; @Car03], cosmon fields [@Wet87], and also several types of modifications of gravity [@Def02; @Car04; @Ama06]. One of the most challenging proposals concerns mass varying particles [@Hun00; @Gu03; @Far04; @Bja08] coupled to the dark energy through a dynamical mass dependence on a light scalar field which drives the dark energy evolution in a kind of unified cosmological fluid. The idea in the well-known mass varying mechanism [@Far04; @Pec05; @Bro06A; @Bja08] is to introduce a coupling between a relic particle and the scalar field whose effective potential changes as a function of the relic particle density. This coupled fluid is either interpreted as dark energy plus neutrinos, or as dark energy plus dark matter [@Ber08A; @Ber08B]. Such theories can possess an adiabatic regime in which the scalar field always sits at the minimum of its effective potential, which is set by the local mass varying particle density. The relic particle mass is consequently generated from the vacuum expectation value of the scalar field and becomes linked to its dynamics through $m(\phi)$. A scenario which congregates dark energy and some kind of mass varying dark matter in a unified negative pressure fluid can explain the origin of the cosmic acceleration. In particular, any background cosmological fluid with an effective behaviour like that of the generalized Chaplygin gas (GCG) [@Kam02; @Bil02; @Ber02] naturally offers this possibility.
The GCG is particularly relevant in comparison with other cosmological models, as it is shown to be consistent with the observational constraints from CMB [@Ber03], supernova [@Sup1; @Ber04; @Ber05], gravitational lensing surveys [@Ber03B], and gamma ray bursts [@Ber06B]. Moreover, it has been shown that the GCG model can be accommodated within the standard structure formation mechanism [@Kam02; @Ber02; @Ber04]. In the scope of finding a natural explanation for the cosmic acceleration and the corresponding adequacy to stability conditions for a background cosmological fluid, our purpose is to demonstrate that the GCG just corresponds to an effective description of a coupled fluid composed of dark energy with equation of state given by $p_{\phi} = -\rho_{\phi}$ and a cold dark matter (CDM) component with a dynamical mass driven by the scalar field $\phi$. Once one has consistently obtained the mass dependence on $\phi$, which is model dependent, it can be noticed that the cosmological evolution of the composed fluid is governed by the same dynamics prescribed by the cosmon field equations. It suggests that the mass varying mechanism embedded into the cosmon-[*like*]{} dynamics reproduces the effective behaviour of the GCG. In addition, coupling dark matter with dark energy by means of a dynamical mass driven by such a scalar field allows for reproducing the conditions for the present cosmic acceleration and for recovering the stability prescribed by a positive squared speed of sound $c_{s}^{2}$. At least implicitly, it leads to the conclusion that the dynamical mass behaviour is the main agent of stability and of the cosmic acceleration of the universe. In our approach, the dark matter is approximated by a degenerate fermion gas (DFG). In order to introduce the mass varying behaviour, we analyze the consequences of coupling it with an underlying dark energy scalar field driven by a cosmon-[*type*]{} equation of motion.
We discuss all the relevant constraints on this in section II. In section III, we obtain the energy density and the equation of state for the unified fluid and compare them with the corresponding quantities for the GCG. In section IV, we discuss the stability issue and the accelerated expansion of the universe in the framework here proposed. The pertinent comparisons with a GCG scenario are evaluated. We draw our conclusions in section V by summarizing our findings and discussing their implications. Mass varying mechanism for a DFG coupled to cosmon-[*like*]{} scalar fields =========================================================================== To understand how the mass varying mechanism takes place for different particle species, it is convenient to describe the corresponding particle density, energy density and pressure as functionals of a statistical distribution. This counts the number of particles in a given region around a point of the phase space defined by the conjugate coordinates: momentum, $\mbox{\boldmath$\beta$}$, and position, $x$. The statistical distribution can be defined by a function $f(q)$ in terms of a comoving variable, $q = a\,|\mbox{\boldmath$\beta$}|$, where $a$ is the scale factor (cosmological radius) of the flat FRW universe, for which the metric is given by $ds^{2} = dt^{2} - a^{2}(t)\,\delta_{ij}\,dx^{i}dx^{j}$.
In the flat FRW scenario, the corresponding particle density, energy density and pressure are thus given by $$\begin{aligned} n(a) &=&\frac{1}{\pi^{2}\,a^{3}} \int_{0}^{\infty}{dq\,q^{2}\,f(q)},\nonumber\\ \rho_m(a, \phi) &=&\frac{1}{\pi^{2}\,a^{4}} \int_{0}^{\infty}{dq\,q^{2}\, \left(q^{2}+ m^{2}(\phi)\,a^{2}\right)^{1/2}f(q)},\\ p_m(a, \phi) &=&\frac{1}{3\pi^{2}\,a^{4}}\int_{0}^{\infty}{dq\,q^{4}\, \left(q^{2}+ m^{2}(\phi)\,a^{2}\right)^{-1/2} f(q)},~~~~ \nonumber \label{gcg01}\end{aligned}$$ where the last two can be obtained from the Einstein energy-momentum tensor [@Dod05]. For the case where $f(q)$ is a Fermi-Dirac distribution function, it can be written as $$f(q)= \left\{\exp{\left[(q - q_F)/T_{0}\right]} + 1\right\}^{-1}, \nonumber$$ where $T_{0}$ is the relic particle background temperature at present. In the limit where $T_{0}$ tends to $0$, it becomes a step function that yields an elementary integral for the above equations, with the upper limit equal to the Fermi momentum, here written as $q_{F} = a \,\beta(a)$. It results in the equations for a DFG [@ZelXX]. The equation of state can be expressed in terms of elementary functions of $\beta \equiv \beta(a)$ and $m \equiv m(\phi(a))$, $$\begin{aligned} n(a) &=& \frac{1}{3 \pi^{2}} \beta^{3},\nonumber\\ \rho_m(a) &=& \frac{1}{8 \pi^{2}} \left[\beta(2 \beta^{2} + m^{2})\sqrt{\beta^{2} + m^{2}} - m^{4}\,\mbox{arc}\sinh{\left(\beta/m\right)}\right],\\ p_m(a) &=& \frac{1}{8 \pi^{2}} \left[\beta \left(\frac{2}{3} \beta^{2} - m^{2}\right)\sqrt{\beta^{2} + m^{2}} + m^{4}\,\mbox{arc}\sinh{\left(\beta/m\right)}\right].\nonumber \label{gcg01B}\end{aligned}$$ One can notice that the DFG approach is useful for parameterizing the transition between ultra-relativistic (UR) and non-relativistic (NR) thermodynamic regimes.
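As a consistency check, the closed-form DFG expressions (note the $m^{4}$ factor multiplying the $\mbox{arc}\sinh$ terms) can be verified against a direct numerical integration of the $T_{0}\to 0$ integrals; $\beta$ and $m$ are arbitrary test values in units $\hbar = c = 1$.

```python
import numpy as np

# Numerical check of the closed-form DFG expressions (units hbar = c = 1):
# integrate the T0 -> 0 momentum integrals directly and compare with the
# elementary functions, including the m**4 factor of the arcsinh terms.
def rho_closed(beta, m):
    return (beta * (2 * beta**2 + m**2) * np.sqrt(beta**2 + m**2)
            - m**4 * np.arcsinh(beta / m)) / (8 * np.pi**2)

def p_closed(beta, m):
    return (beta * (2 * beta**2 / 3 - m**2) * np.sqrt(beta**2 + m**2)
            + m**4 * np.arcsinh(beta / m)) / (8 * np.pi**2)

def trapezoid(y, x):
    # simple trapezoidal rule, to avoid any dependence on numpy's name changes
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def rho_numeric(beta, m, n=200001):
    q = np.linspace(0.0, beta, n)
    return trapezoid(q**2 * np.sqrt(q**2 + m**2), q) / np.pi**2

def p_numeric(beta, m, n=200001):
    q = np.linspace(0.0, beta, n)
    return trapezoid(q**4 / np.sqrt(q**2 + m**2), q) / (3 * np.pi**2)

beta, m = 1.3, 0.7  # arbitrary test values
err_rho = abs(rho_numeric(beta, m) / rho_closed(beta, m) - 1.0)
err_p = abs(p_numeric(beta, m) / p_closed(beta, m) - 1.0)
```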
It is not mandatory for connecting the mass varying scenario with the GCG scenario. Simple mathematical manipulations allow one to easily demonstrate that $$n(a) \frac{\partial \rho_m(a)}{\partial n(a)} = \rho_m(a) + p_m(a), \label{gcg02B}$$ and $$m(a) \frac{\partial \rho_m(a)}{\partial m(a)} = \rho_m(a) - 3 p_m(a). \label{gcg02}$$ Noticing that the explicit dependence of $\rho_m$ on $a$ is intermediated by $\beta(a)$ and $m(a) \equiv m(\phi(a))$, one can take the derivative of the energy density with respect to time in order to obtain $$\begin{aligned} \dot{\rho}_m &=& \dot{\beta}(a) \frac{\partial \rho_m(a)}{\partial \beta(a)} + \dot{m}(a) \frac{\partial \rho_m(a)}{\partial m(a)}\nonumber\\ &=& \dot{n}(a) \frac{\partial \rho_m(a)}{\partial n (a)} + \dot{m}(a) \frac{\partial \rho_m(a)}{\partial m(a)}\nonumber\\ &=& - 3\frac{\dot{a}}{a} n(a) \frac{\partial \rho_m(a)}{\partial n (a)} + \dot{\phi}\frac{\mbox{d} m}{\mbox{d} \phi}\frac{\partial \rho_m(a)}{\partial m(a)}, \label{gcg03BB}\end{aligned}$$ where the [*overdot*]{} denotes differentiation with respect to time ($^{\cdot}\, \equiv\, d/dt$). The substitution of Eqs. (\[gcg02B\])-(\[gcg02\]) into the above equation results in the energy conservation equation given by $$\dot{\rho}_m + 3 H (\rho_m + p_m) - \dot{\phi}\,\frac{\mbox{d} \ln{m}}{\mbox{d} \phi}\, (\rho_m - 3 p_m) = 0, \label{gcg03}$$ where $H = \dot{a}/{a}$ is the expansion rate of the universe. If one performs the derivative with respect to $a$ directly from $\rho_m$ in the form given in Eq. (\[gcg01\]), the same result is obtained. The coupling between relic particles and the scalar field as described by Eq. (\[gcg03\]) is effective only for NR fluids.
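The two thermodynamic identities above can be checked numerically with finite differences on the closed-form DFG expressions; $\beta$ and $m$ are arbitrary test values in units $\hbar = c = 1$.

```python
import numpy as np

# Finite-difference check of Eqs. (gcg02B) and (gcg02):
#   n * d(rho_m)/dn = rho_m + p_m   and   m * d(rho_m)/dm = rho_m - 3 p_m
# using the closed-form DFG expressions (units hbar = c = 1).
def rho_m(beta, m):
    return (beta * (2 * beta**2 + m**2) * np.sqrt(beta**2 + m**2)
            - m**4 * np.arcsinh(beta / m)) / (8 * np.pi**2)

def p_m(beta, m):
    return (beta * (2 * beta**2 / 3 - m**2) * np.sqrt(beta**2 + m**2)
            + m**4 * np.arcsinh(beta / m)) / (8 * np.pi**2)

beta, m, h = 1.3, 0.7, 1e-6   # arbitrary test values and step size
n = beta**3 / (3 * np.pi**2)

# chain rule: d(rho)/dn = (d rho/d beta) / (dn/d beta), with dn/dbeta = beta^2/pi^2
drho_dbeta = (rho_m(beta + h, m) - rho_m(beta - h, m)) / (2 * h)
lhs_n = n * drho_dbeta / (beta**2 / np.pi**2)

drho_dm = (rho_m(beta, m + h) - rho_m(beta, m - h)) / (2 * h)
lhs_m = m * drho_dm
```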
Since the strength of the coupling is suppressed by the relativistic increase of pressure ($\rho\sim 3 p$), as long as particles become relativistic ($T(a) = T_{0}/a \gg m(\phi(a))$) the matter fluid and the scalar field fluid tend to decouple and evolve adiabatically. The mass varying mechanism expressed by Eq. (\[gcg03\]) translates the dependence of $m$ on $\phi$ into a dynamical behaviour. In particular, for a DFG, the consistent analytical transition between UR and NR regimes and its effects on coupling dark matter ($m$) and dark energy ($\phi$) are evident from Eq. (\[gcg03\]). The mass thus depends on the value of a slowly varying classical scalar field [@Wet94; @Bea08] which evolves like a [*cosmon*]{} field. The cosmon-[*type*]{} equation of motion for the scalar field $\phi$ is given by $$\ddot{\phi} + 3 H \dot{\phi} + \frac{\mbox{d} V(\phi)}{\mbox{d} \phi} = Q(\phi), \label{gcg04}$$ where, in the mass varying scenario, one identifies $Q(\phi)$ with $- (\mbox{d} m/\mbox{d} \phi)\,(\partial \rho_m/\partial m)$. The corresponding equation for energy conservation can be written as $$\dot{\rho_{\phi}} + 3 H (\rho_{\phi} + p_{\phi}) + \dot{\phi}\frac{\mbox{d} m}{\mbox{d} \phi} \frac{\partial \rho_m}{\partial m} = 0, \label{gcg05}$$ which, when added to Eq. (\[gcg03\]), results in the equation for a unified fluid $(\rho, p)$ with a dark energy component and a mass varying dark matter component, $$\dot{\rho} + 3 H (\rho + p) = 0, \label{gcg06}$$ where $\rho = \rho_{\phi} + \rho_m$ and $p = p_{\phi} + p_m$. As we shall notice in the following, this unified fluid corresponds to an effective description of the universe parameterized by a GCG equation of state. Decoupling mass varying dark matter from the effective GCG ========================================================== Irrespective of its origin, several studies yield convincing evidence that the GCG scenario is phenomenologically consistent with the accelerated expansion of the universe.
This scenario is introduced by means of an exotic equation of state [@Ber02; @Kam02; @Ber03] given by $$p = - A_{s}\, \rho_{0} \left(\frac{\rho_{0}}{\rho}\right)^{\alpha}, \label{gcg20}$$ which can be obtained from a generalized Born-Infeld action [@Ber02]. The constants $A_{s}$ and $\alpha$ are positive and $0 < \alpha \leq 1$. Of course, $\alpha = 0$ corresponds to the $\Lambda$CDM model and we are assuming that the GCG model has an underlying scalar field, either real [@Kam02; @Ber04] or complex [@Bil02; @Ber02]. The case $\alpha = 1$ corresponds to the equation of state of the Chaplygin gas scenario [@Kam02] and is already ruled out by data [@Ber03]. Notice that for $A_s = 0$ the GCG always behaves as matter, whereas for $A_{s} = 1$ it always behaves as a cosmological constant. Hence, to use it as a unified candidate for dark matter and dark energy, one has to exclude these two possibilities, so that $A_s$ must lie in the range $0 < A_{s} < 1$. Inserting the above equation of state into the unperturbed energy conservation Eq. (\[gcg06\]), one obtains through a straightforward integration [@Kam02; @Ber02] $$\rho = \rho_{0} \left[A_{s} + \frac{(1-A_{s})}{a^{3(1+\alpha)}}\right]^{1/(1 + \alpha)}, \label{gcg21}$$ and $$p = - A_{s}\, \rho_{0} \left[A_{s} + \frac{(1-A_{s})}{a^{3(1+\alpha)}}\right]^{-\alpha/(1 + \alpha)}. \label{gcg22}$$ One of the most striking features of the GCG fluid is that its energy density interpolates between a dust dominated phase, $\rho \propto a^{-3}$, in the past, and a de Sitter phase, $\rho = -p$, at late times. This property makes the GCG model an interesting candidate for the unification of dark matter and dark energy. Indeed, it can be shown that the GCG model admits inhomogeneities and that, in particular, in the context of the Zeldovich approximation, these evolve in a qualitatively similar way as in the $\Lambda$CDM model [@Ber02]. Furthermore, this evolution is controlled by the model parameters, $\alpha$ and $A_{s}$.
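That Eq. (\[gcg21\]) indeed solves the conservation law $\dot{\rho} + 3H(\rho+p)=0$, i.e. $d\rho/da = -3(\rho+p)/a$, can be verified numerically; $\rho_0 = 1$ and the values of $A_s$ and $\alpha$ are illustrative choices within the allowed ranges.

```python
# Check that rho(a) of Eq. (gcg21) solves d(rho)/da = -3 (rho + p)/a
# with the GCG pressure of Eq. (gcg20).  rho_0 = 1; A_s and alpha are
# illustrative values inside the allowed ranges.
A_S, ALPHA = 0.8, 0.5

def rho(a):
    return (A_S + (1.0 - A_S) / a**(3 * (1 + ALPHA)))**(1.0 / (1 + ALPHA))

def p(a):
    return -A_S * rho(a)**(-ALPHA)

a, h = 0.7, 1e-6
drho_da = (rho(a + h) - rho(a - h)) / (2 * h)
residual = drho_da + 3 * (rho(a) + p(a)) / a

# late-time de Sitter limit: rho -> -p
de_sitter_gap = abs(rho(1e3) + p(1e3))
```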
Assuming the canonical parametrization of $\rho$ and $p$ in terms of a scalar field $\phi$, $$\begin{aligned} \rho &=& \frac{1}{2}\dot{\phi}^{2} + V,\nonumber\\ p &=& \frac{1}{2}\dot{\phi}^{2} - V, \label{pap01}\end{aligned}$$ allows for obtaining the effective dependence of the scalar field $\phi$ on the scale factor, $a$, and explicit expressions for $\rho$, $p$ and $V$ in terms of $\phi$. Following Ref. [@Ber04], one can obtain through Eq. (\[gcg05\]) the field dependence on $a$, $$\dot{\phi}^{2}(a) = \frac{\rho_{0}(1 - A_{s})}{a^{3(\alpha+1)}} \left[A_{s} + \frac{(1-A_{s})}{a^{3(\alpha+1)}}\right]^{-\alpha/(\alpha + 1)}, \label{pap02}$$ and assuming a flat evolving universe described by the Friedmann equation $H^{2} = \rho$ (with $H$ in units of $H_{0}$ and $\rho$ in units of $\rho_{\mbox{\tiny Crit}} = 3 H^{2}_{0}/ 8 \pi G$), one obtains $$\phi(a) = - \frac{1}{3(\alpha + 1)}\ln{\left[\frac{\sqrt{1 - A_{s}(1 - a^{3(\alpha + 1)})} - \sqrt{1 - A_{s}}}{\sqrt{1 - A_{s}(1 - a^{3(\alpha + 1)})} + \sqrt{1 - A_{s}}}\right]}, \label{pap03}$$ where it is assumed that $$\phi_{0} = \phi(a_{0} = 1) = - \frac{1}{3(\alpha + 1)}\ln{\left[\frac{1 - \sqrt{1 - A_{s}}}{1 + \sqrt{1 - A_{s}}}\right]}. \label{pap04}$$ One then readily finds the scalar field potential, $$V(\phi) = \frac{1}{2}A_{s}^{\frac{1}{1 + \alpha}}\rho_{0}\left\{ \left[\cosh{\left(3(\alpha + 1) \phi/2\right)}\right]^{\frac{2}{\alpha + 1}} + \left[\cosh{\left(3(\alpha + 1) \phi/2\right)}\right]^{-\frac{2\alpha}{\alpha + 1}} \right\}. \label{pap05}$$ If one supposes that the energy density, $\rho$, may be decomposed into a mass varying CDM component, $\rho_{m}$, and a dark energy component, $\rho_{\phi}$, connected by the scalar field equations (\[gcg04\])-(\[gcg05\]), the equation of state (\[gcg20\]) is just assumed as an effective description of the cosmological background fluid of the universe.
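The field profile of Eq. (\[pap03\]) and the potential of Eq. (\[pap05\]) must satisfy $V(\phi(a)) = (\rho - p)/2$, which can be verified over a range of scale factors; $\rho_0 = 1$ and $A_s$, $\alpha$ are illustrative values.

```python
import numpy as np

# Consistency check of Eqs. (pap03) and (pap05): the field profile phi(a)
# and potential V(phi) must satisfy V(phi(a)) = (rho - p)/2.
# rho_0 = 1; A_s and alpha are illustrative values.
A_S, ALPHA = 0.8, 0.5

def rho(a):
    return (A_S + (1.0 - A_S) / a**(3 * (1 + ALPHA)))**(1.0 / (1 + ALPHA))

def p(a):
    return -A_S * rho(a)**(-ALPHA)

def phi(a):
    x = np.sqrt(1.0 - A_S * (1.0 - a**(3 * (1 + ALPHA))))
    s = np.sqrt(1.0 - A_S)
    return -np.log((x - s) / (x + s)) / (3 * (ALPHA + 1))

def v_of_phi(ph):
    c = np.cosh(3 * (ALPHA + 1) * ph / 2)
    return 0.5 * A_S**(1.0 / (1 + ALPHA)) * (c**(2.0 / (ALPHA + 1))
                                             + c**(-2.0 * ALPHA / (ALPHA + 1)))

a_grid = np.linspace(0.3, 2.0, 50)
v_err = np.max(np.abs(v_of_phi(phi(a_grid)) - 0.5 * (rho(a_grid) - p(a_grid))))
```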
Since the CDM pressure, $p_m$, is null, the dark energy component of the pressure, $p_{\phi}$, results in the GCG pressure, $p = p_{\phi}$. Assuming that dark energy obeys a de Sitter-phase equation of state, that is, $\rho_{\phi}(\phi) = - p_{\phi}(\phi)$, the dark energy density can be parameterized by a generic quintessence potential, $\rho_{\phi}(\phi) = U(\phi)$, since its kinetic component has to be null for a canonical formulation. It results in $U(\phi) = - p_{\phi}(\phi) = -p$, where $p$ is the GCG pressure given by Eq. (\[gcg22\]). By substituting the result of Eq. (\[pap03\]) into Eq. (\[gcg22\]), and observing that $H^{2} = \rho$, with $\rho$ given by Eq. (\[gcg21\]), it is possible to rewrite the GCG pressure, $p$, in terms of $\phi$. It results in the following analytical expression for $U(\phi)$, $$U(\phi) = \rho_{\phi}(\phi) = - p_{\phi}(\phi) = A_{s}^{\frac{1}{1 + \alpha}}\,\rho_{0}\left[\cosh{\left(\frac{3(\alpha + 1)\phi}{2}\right)}\right]^{-\frac{2 \alpha}{1 + \alpha}}, \label{pap08}$$ which is consistent with the result for $V(\phi) = (1/2)(\rho(\phi) - p(\phi))$ from Eq. (\[pap05\]). Since $\rho_{\phi}(\phi) + p_{\phi}(\phi) = 0$, Eq. (\[gcg05\]) is thus reduced to $$\frac{\mbox{d} U(\phi)}{\mbox{d}{\phi}} + \frac{\partial \rho_{m}}{\partial m} \frac{\mbox{d} m(\phi)}{\mbox{d}\phi} = 0, \label{pap09}$$ and the problem is then reduced to finding a relation between the scalar potential $U(\phi)$ and the variable mass $m(\phi)$. From the above equation, the effective potential governing the evolution of the scalar field is naturally decomposed into a sum of two terms, one arising from the original quintessence potential $U(\phi)$, and the other from the dynamical mass $m(\phi)$. For appropriate choices of potentials and coupling functions satisfying Eq. (\[pap09\]), the competition between these terms leads to a minimum of the effective potential.
For [*quasi*]{}-static regimes, it is possible to adiabatically track the position of this minimum, in a kind of stationary condition. The timescale for $\phi$ to adjust itself to the dynamically modified minimum of the effective potential may be short compared to the timescale over which the background density is changing. In the adiabatic regime, the matter and the scalar field are tightly coupled together and evolve as one effective fluid. In our approach, once we have assumed the dark energy equation of state as $p_{\phi} = - \rho_{\phi}$, the stationary condition is a natural issue that emerges without any additional constraint on the cosmon-[*type*]{} equations. In the GCG cosmological scenario, the effective fluid description is valid for the background cosmology and for linear perturbations. The equation of state of perturbations is the same as that of the background cosmology, so that all the effective results of the GCG paradigm are maintained. Eq. (\[pap08\]) leads to $\rho + p = \rho_{m} + p_{m}$ which, in the CDM limit, gives $$\rho(a) + p(a) = m(a) \, n(a) + p_{m} (\equiv 0) = \frac{1}{3\pi^{2}}\,m(a)\, \beta^{3}(a). \label{pap10}$$ Since the dependence of $m$ on $a$ is exclusively intermediated by $\phi(a)$, i.e. $m(a) \equiv m(\phi(a))$, from Eqs. (\[gcg21\]), (\[gcg22\]) and (\[pap03\]), after some mathematical manipulations, one obtains $$m(\phi) = m_{0} \left[\frac{\tanh{\left(3(\alpha + 1)\frac{\phi}{2}\right)}}{\tanh{\left(3(\alpha + 1)\frac{\phi_{0}}{2}\right)}}\right]^{\frac{2 \alpha}{1 + \alpha}}, \label{pap11}$$ which is consistent with Eq. (\[pap09\]) once $n \propto a^{-3}$. One can thus infer that the adequacy of the adiabatic regime rests on the mass varying mechanism, which drives the cosmological evolution of the dark matter component. To give a correct impression of the time evolution of the abovementioned dynamical quantities driven by $\phi$, in Fig.
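The tanh law of Eq. (\[pap11\]) can be cross-checked against Eq. (\[pap10\]): with $n \propto a^{-3}$ and $p_m = 0$, the normalized combination $a^{3}(\rho + p)$ must reproduce $m(\phi(a))/m_0$; $\rho_0 = 1$ and $A_s$, $\alpha$ are illustrative values.

```python
import numpy as np

# Check of Eq. (pap11): in the CDM limit rho + p = m n with n ~ a^-3
# (Eq. (pap10)), so a^3 (rho + p), normalized at a = 1, must reproduce
# the tanh law.  rho_0 = 1; A_s and alpha are illustrative values.
A_S, ALPHA = 0.8, 0.5

def rho(a):
    return (A_S + (1.0 - A_S) / a**(3 * (1 + ALPHA)))**(1.0 / (1 + ALPHA))

def p(a):
    return -A_S * rho(a)**(-ALPHA)

def phi(a):
    x = np.sqrt(1.0 - A_S * (1.0 - a**(3 * (1 + ALPHA))))
    s = np.sqrt(1.0 - A_S)
    return -np.log((x - s) / (x + s)) / (3 * (ALPHA + 1))

def m_tanh(a):  # Eq. (pap11) with m_0 = 1
    th = np.tanh(3 * (ALPHA + 1) * phi(a) / 2)
    th0 = np.tanh(3 * (ALPHA + 1) * phi(1.0) / 2)
    return (th / th0)**(2 * ALPHA / (1 + ALPHA))

a_grid = np.linspace(0.3, 2.0, 50)
m_fluid = a_grid**3 * (rho(a_grid) + p(a_grid)) / (rho(1.0) + p(1.0))
m_err = np.max(np.abs(m_fluid - m_tanh(a_grid)))
```

The dark matter mass decreases monotonically as the universe expands, which is the dynamical behaviour that drives the interpolation toward the de Sitter era.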
\[Fpap-01\] we observe the behaviour of $m\bb{a}$ and $U\bb{a}$ in comparison with $\phi\bb{a}$ and $V\bb{a}$ of the GCG. In Fig. \[Fpap-02\] we verify how the energy density $\rho$ and the corresponding equation of state $\omega$ for the unified fluid, $\rho_{m} + \rho_{\phi}$, which imitates the GCG, deviate from the exact GCG scenario. We assume that the mass varying dark matter behaves like a DFG in a relativistic regime (hot dark matter (HDM)) and in a non-relativistic regime (CDM). For mass varying CDM coupled with dark energy with $p_{\phi} = -\rho_{\phi}$, the effective GCG leads to similar predictions for $\omega$, independently of the scale parameter $a$. The same is not true for HDM which, in the DFG approach, when weakly coupled with dark energy, reproduces the behaviour of the GCG only for late-time values of $a$ ($a \sim 1$). As one can observe, the mass varying mechanism allows the GCG scenario to be reconstituted in terms of non-exotic primitive entities: CDM and dark energy. Furthermore, as we shall see, the dynamical mass expressed by Eq. (\[pap11\]) overcomes the problematic issue of stability. It is also important to emphasize that, as in the case of the Chaplygin gas, where $\alpha = 1$, the GCG model admits a d-brane connection, as its Lagrangian density corresponds to the Born-Infeld action plus some soft logarithmic corrections. Space-time is shown to evolve from a phase that is initially dominated, in the absence of other degrees of freedom on the brane, by non-relativistic matter to a phase that is asymptotically de Sitter. The Chaplygin gas reproduces such a behaviour. In this context, the explicit dependence of the mass on a scalar field is relevant in suggesting that the mass varying mechanism, when obtained from an exactly soluble field theory, can be responsible for the cosmological dynamics.
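For reference, the DFG number density entering Eq. (\[pap10\]), $n = \beta^{\3}/(3\pi^{\2})$, follows from integrating over the Fermi sphere with two spin degrees of freedom in natural units; a quick numerical cross-check (with an illustrative Fermi momentum):

```python
import math

# Numeric cross-check of the degenerate Fermi gas (DFG) number density in
# Eq. (pap10): n = (1/pi^2) * integral_0^beta k^2 dk = beta^3 / (3 pi^2),
# for two spin degrees of freedom in natural units.  beta is illustrative.
beta = 0.7  # Fermi momentum

N = 100_000
h = beta / N
f = lambda k: k ** 2 / math.pi ** 2
# Trapezoidal rule over the Fermi sphere integrand.
integral = h * (sum(f(i * h) for i in range(1, N)) + 0.5 * (f(0.0) + f(beta)))

analytic = beta ** 3 / (3.0 * math.pi ** 2)
assert abs(integral - analytic) < 1e-9
```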
Stability and accelerated expansion =================================== Adiabatic instabilities in cosmological scenarios were predicted [@Afs05] in the context of a mass varying neutrino (MaVaN) model of dark energy. The dynamical dark energy, in this approach, is obtained by coupling a light scalar field to neutrinos but not to dark matter. The consequent effects have been extensively discussed in the context of mass varying neutrinos, in which the light mass of the neutrino and the recent accelerative era are twinned together through a scalar field coupling. In the adiabatic regime, these models face catastrophic instabilities on small scales, characterized by a negative squared speed of sound for the effectively coupled fluid. Starting with a uniform fluid, such instabilities would give rise to exponential growth of small perturbations. The natural interpretation is that the Universe becomes inhomogeneous, with neutrino overdensities subject to nonlinear fluctuations [@Mot08] which eventually collapse into compact localized regions. In contrast, in the usual treatment where dark matter is directly coupled to dark energy, cosmic expansion together with the gravitational drag due to CDM has a major impact on the stability of the cosmological background fluid. Usually, for a general fluid for which we know the equation of state, the dominant effect on the sound speed squared $c_{s}^{\2}$ arises from the dark sector component and not from the neutrino component. For the models where the stationary condition (cf. Eq. (\[pap09\])) implies a cosmological constant type equation of state, $ p_{\phi} = - \rho_{\phi}$, one obtains $c_{s}^{\2} = -1$ from the very start of the analysis. The effective GCG is free from this inconsistency. The coupling of the dark energy component with dynamical dark matter is responsible for removing such inconsistency by setting $c_{s}^{\2} \simeq \frac{d p_{\phi}}{d\rho_{\phi}} > 0$.
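The positivity of the GCG sound speed can be checked directly from the background equation of state $p = -A/\rho^{\alpha}$; the sketch below uses illustrative values of $A$, $\rho$ and $\alpha$, and cross-checks the analytic derivative numerically:

```python
# Sanity check: for the GCG background equation of state p = -A / rho**alpha,
# the adiabatic sound speed squared is c_s^2 = dp/drho = alpha*A/rho**(alpha+1),
# which is non-negative for 0 <= alpha <= 1.  A and the sampled densities
# are illustrative values only.
A = 0.8

def p(rho, alpha):
    return -A / rho ** alpha

def cs2(rho, alpha):
    return alpha * A / rho ** (alpha + 1.0)

for alpha in (0.0, 0.2, 0.5, 1.0):
    for rho in (0.9, 1.5, 3.0):
        assert cs2(rho, alpha) >= 0.0

# Cross-check the analytic derivative with a central finite difference.
rho, alpha, eps = 1.5, 0.5, 1e-6
fd = (p(rho + eps, alpha) - p(rho - eps, alpha)) / (2.0 * eps)
assert abs(fd - cs2(rho, alpha)) < 1e-6
```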
The exact behavior of the dark energy plus mass varying dark matter fluid in correspondence with the GCG is exhibited in Fig. \[Fpap-03\] for different GCG $\alpha$ parameters. A previous analysis of the stability conditions for the GCG in terms of the squared speed of sound was introduced in Ref. [@Ber04], from which positive $c_{s}^{\2}$ implies that $0 \leq \alpha \leq 1$. These results are consistent with the accelerated expansion of the universe ruled by the dynamical mass of Eq. (\[pap11\]), which sets positive values for $(1 + 3 (p_{\phi} + p_m)/(\rho_{\phi} + \rho_m))$, as we can notice in Fig. \[Fpap-03\]. For CDM ($p \ll m$) the unified fluid reproduces the GCG scenario. For HDM ($p \gg m$), in spite of not reproducing the GCG, the conditions for stability and cosmic acceleration are maintained. Fig. \[Fpap-03\] shows that the GCG can indeed be interpreted as the effective result of the coupling between mass varying dark matter and a kind of scalar field dark energy which is cosmologically driven by a $\Lambda$-type equation of state, $p_{\phi} = -\rho_{\phi}$. Conclusions =========== The dynamics of the cosmology of mass varying dark matter coupled with dark energy dynamically driven by cosmon-[*type*]{} equations was studied without introducing specific quintessence potentials, but assuming that the cosmological background unified fluid presents an effective behaviour similar to that of the GCG. We have comprehensively analyzed the stability, characterized by a positive squared speed of sound, and the cosmic acceleration conditions for such a dark matter coupled to dark energy fluid, which exists whenever such theories enter an adiabatic regime in which the scalar field faithfully tracks the minimum of the effective potential and the coupling strength is strong compared to the gravitational strength. The matter and scalar field are tightly coupled together and evolve as one effective fluid.
The effective potential governing the evolution of the scalar field is decomposed into a sum of two terms, one arising from the original scalar field potential $U\bb{\phi}$, and the other from the dynamical mass of the dark matter. The mass varying behaviour of the dark matter component was determined from the assumption of a kind of $\Lambda$-type dark energy dynamics embedded in an effective cosmological scenario which reproduces the cosmological effects of the GCG. It is equivalent to decoupling mass varying dark matter from the effective GCG concomitantly with assuming the dark energy equation of state as $p_{\phi} = - \rho_{\phi}$. The adiabatic regime naturally occurs without any additional constraint on the scalar field equations. The unified fluid description is valid for the background cosmology and for linear perturbations. The equation of state of perturbations is the same as that of the background cosmology, where all the effective results of the GCG paradigm are maintained. Unfortunately, we cannot provide a sharp criterion on the potential and on the mass varying dependence on the scalar field to discriminate between these two possibilities: the GCG scenario or an effective unified fluid imitating the GCG via scalar fields driven by cosmon-[*type*]{} equations. Many results found in the literature for specific quintessence potentials that reproduce stability and cosmic acceleration are recovered, and speculative predictions for new scenarios featuring other mass dependencies on scalar fields can be made. It remains open whether the present approach can lead to a natural solution of the cosmological constant problem. In the meantime, we take the cosmon model analogy as an interesting phenomenological approach, through which we can reproduce the main characteristics of the GCG.
Given the fundamental nature of the underlying physics behind the Chaplygin gas and its generalizations, it appears that it contains some of the key ingredients in the description of the Universe dynamics at early as well as late times. Our results suggest that the mass varying mechanism, when eventually derived from an exactly soluble field theory, which is by no means trivial, can be the effective agent for the stability issue and for the cosmic acceleration of the universe, once it can effectively reproduce the main characteristics of a GCG scenario. It also stimulates our subsequent investigation of the evolution of density perturbations, instabilities and the structure formation in such scenarios. To summarize, we expect that future precise data can provide stronger evidence to judge whether the dark energy is the cosmological constant and whether dark energy and dark matter can be unified into one cosmological background effective component. We would like to thank the Brazilian agencies FAPESP (grant 08/50671-0) and CNPq (grant 300627/2007-6) for financial support. [99]{} I. Zlatev, L. M. Wang and P. J. Steinhardt, Phys. Rev. Lett. [**82**]{}, 896 (1999). L. M. Wang, R. R. Caldwell, J. P. Ostriker and P. J. Steinhardt, Astrophys. J. [**530**]{}, 17 (2000). P. J. Steinhardt, L. M. Wang and I. Zlatev, Phys. Rev. [**D59**]{}, 123504 (1999). T. Barreiro, E. J. Copeland and N. J. Nunes, Phys. Rev. [**D61**]{}, 127301 (2000). O. Bertolami and P. J. Martins, Phys. Rev. [**D61**]{}, 064007 (2000). L. Amendola and D. Tocchini-Valentini, Phys. Rev. [**D66**]{}, 043528 (2002); [*ibidem*]{}, Phys. Rev. [**D65**]{}, 063508 (2002). A. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. [**B511**]{}, 265 (2001); N. Bilić, G. B. Tupper and R. D. Viollier, Phys. Lett. [**B535**]{}, 17 (2002). M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. [**D66**]{}, 043507 (2002); R. Caldwell, M. Doran, C. Mueller, G. Schaefer and C. Wetterich, Astrophys. J. [**591**]{}, L75 (2003).
D. F. Mota and C. van de Bruck, Astron. Astrophys. [**421**]{}, 71 (2004). A. W. Brookfield, C. van de Bruck, D. F. Mota and D. Tocchini-Valentini, Phys. Rev. Lett. [**96**]{}, 061301 (2006); A. W. Brookfield, C. van de Bruck, D. F. Mota and D. Tocchini-Valentini, Phys. Rev. [**D73**]{}, 083515 (2006). P. J. E. Peebles and B. Ratra, Astrophys. J. [**325**]{}, L17 (1988). B. Ratra and P. J. E. Peebles, Phys. Rev. [**D37**]{}, 3406 (1988). M. Bronstein, Phys. Z. Sowjetunion [**3**]{}, 73 (1933); O. Bertolami, Il Nuovo Cimento [**93B**]{}, 36 (1986); Fortschr. Physik [**34**]{}, 829 (1986); M. Ozer and O. Taha, Phys. Lett. [**A171**]{}, 363 (1986); Nucl. Phys. [**B287**]{}, 776 (1987). T. Chiba, T. Okabe and M. Yamaguchi, Phys. Rev. [**D62**]{}, 023511 (2000). C. Armendariz-Picon, V. Mukhanov and P. J. Steinhardt, Phys. Rev. [**D63**]{}, 103510 (2001). A. E. Schulz and M. J. White, Phys. Rev. [**D64**]{}, 043514 (2001). S. M. Carroll, M. Hoffman and M. Trodden, Phys. Rev. [**D68**]{}, 023509 (2003). C. Wetterich, Nucl. Phys. [**B302**]{}, 668 (1988). C. Deffayet, G. R. Dvali and G. Gabadadze, Phys. Rev. [**D65**]{}, 044023 (2002). S. M. Carroll, V. Duvvuri, M. Trodden and M. S. Turner, Phys. Rev. [**D70**]{}, 043528 (2004). M. Amarzguioui, O. Elgaroy, D. F. Mota and T. Multamaki, Astron. Astrophys. [**454**]{}, 707 (2006). P. Q. Hung, arXiv:hep-ph/0010126. P. Gu, X. Wang and X. Zhang, Phys. Rev. [**D68**]{}, 087301 (2003). R. Fardon, A. E. Nelson and N. Weiner, JCAP [**0410**]{}, 005 (2004). O. E. Bjaelde [*et al.*]{}, JCAP [**0801**]{}, 026 (2008). R. D. Peccei, Phys. Rev. [**D71**]{}, 023527 (2005). A. E. Bernardini and O. Bertolami, Phys. Lett. [**B662**]{}, 97 (2008). A. E. Bernardini and O. Bertolami, Phys. Rev. [**D77**]{}, 083506 (2008); M. C. Bento, A. E. Bernardini and O. Bertolami, JPCS [**174**]{}, 012060 (2009). M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. [**D67**]{}, 063003 (2003); Phys. Lett. [**B575**]{}, 172 (2003); L. Amendola, F. Finelli, C.
Burigana and D. Carturan, JCAP [**0307**]{}, 005 (2003). J. S. Fabris, S. V. Gonçalves and P. E. de Souza, astro-ph/0207430; A. Dev, J. S. Alcaniz and D. Jain, Phys. Rev. [**D67**]{}, 023515 (2003); V. Gorini, A. Kamenshchik and U. Moschella, Phys. Rev. [**D67**]{}, 063509 (2003); M. Makler, S. Q. de Oliveira and I. Waga, Phys. Lett. [**B555**]{}, 1 (2003); J. S. Alcaniz, D. Jain and A. Dev, Phys. Rev. [**D67**]{}, 043514 (2003). O. Bertolami, A. A. Sen, S. Sen and P. T. Silva, Mon. Not. Roy. Astron. Soc. [**353**]{}, 329 (2004). M. C. Bento, O. Bertolami, A. A. Sen and N. C. Santos, Phys. Rev. [**D71**]{}, 063501 (2005). P. T. Silva and O. Bertolami, Astrophys. J. [**599**]{}, 829 (2003); A. Dev, D. Jain and J. S. Alcaniz, Astron. Astrophys. [**417**]{}, 847 (2004). O. Bertolami and P. T. Silva, Mon. Not. Roy. Astron. Soc. [**365**]{}, 1149 (2006). S. Dodelson, [*Modern Cosmology: Anisotropies and Inhomogeneities in the Universe*]{} (Academic Press, New York, 2003). Ya. B. Zel’dovich and I. D. Novikov, [*Relativistic Astrophysics - Vol. I - Stars and Relativity*]{} (University of Chicago Press, Chicago, 1974). C. Wetterich, Astron. Astrophys. [**301**]{}, 321 (1995). R. Bean, E. E. Flanagan and M. Trodden, arXiv:0709.1128 \[astro-ph\]. N. Afshordi, M. Zaldarriaga and K. Kohri, Phys. Rev. [**D72**]{}, 065024 (2005). D. F. Mota, V. Pettorino, G. Robbers and C. Wetterich, Phys. Lett. [**B663**]{}, 160 (2008).
--- abstract: 'Detecting hypernymy relations is a key task in NLP, which is addressed in the literature using two complementary approaches. Distributional methods, whose supervised variants are the current best performers, and path-based methods, which received less research attention. We suggest an improved path-based algorithm, in which the dependency paths are encoded using a recurrent neural network, that achieves results comparable to distributional methods. We then extend the approach to integrate both path-based and distributional signals, significantly improving upon the state-of-the-art on this task.' author: - | Vered Shwartz     Yoav Goldberg     Ido Dagan\ Computer Science Department\ Bar-Ilan University\ Ramat-Gan, Israel\ [vered1986@gmail.com]{}  [yoav.goldberg@gmail.com]{}  [dagan@cs.biu.ac.il]{} bibliography: - 'hypernymy-detection.bib' title: | Improving Hypernymy Detection\ with an Integrated Path-based and Distributional Method --- Introduction ============ Hypernymy is an important lexical-semantic relation for NLP tasks. For instance, knowing that *Tom Cruise* is an *actor* can help a question answering system answer the question “which actors are involved in Scientology?”. While semantic taxonomies, like WordNet [@fellbaum1998wordnet], define hypernymy relations between word types, they are limited in scope and domain. Therefore, automated methods have been developed to determine, for a given term-pair $(x,y)$, whether $y$ is a hypernym of $x$, based on their occurrences in a large corpus. For a couple of decades, this task has been addressed by two types of approaches: distributional, and path-based. In distributional methods, the decision whether $y$ is a hypernym of $x$ is based on the distributional representations of these terms.
Lately, with the popularity of word embeddings [@mikolov2013distributed], most focus has shifted towards supervised distributional methods, in which each $(x,y)$ term-pair is represented using some combination of the terms’ embedding vectors. In contrast to distributional methods, in which the decision is based on the *separate* contexts of $x$ and $y$, path-based methods base the decision on the lexico-syntactic paths connecting the *joint* occurrences of $x$ and $y$ in a corpus. Hearst identified a small set of frequent paths that indicate hypernymy, e.g. *Y such as X*. Snow et al. represented each $(x, y)$ term-pair as the multiset of dependency paths connecting their co-occurrences in a corpus, and trained a classifier to predict hypernymy, based on these features. Using individual paths as features results in a huge, sparse feature space. While some paths are rare, they often differ only in unimportant components. For instance, “Spelt is a species of wheat” and “Fantasy is a genre of fiction” yield two different paths: *X be species of Y* and *X be genre of Y*, while both indicating that X is-a Y. A possible solution is to generalize paths by replacing words along the path with their part-of-speech tags or with wild cards, as done in the PATTY system [@nakashole2012patty]. Overall, the state-of-the-art path-based methods perform worse than the distributional ones. This stems from a major limitation of path-based methods: they require that the terms of the pair occur together in the corpus, limiting the recall of these methods. While distributional methods have no such requirement, they are usually less precise in detecting a specific semantic relation like hypernymy, and perform best on detecting broad semantic similarity between terms. Though these approaches seem complementary, there has been rather little work on integrating them [@mirkin2006integrating; @kaji2008using].
In this paper, we present HypeNET, an integrated path-based and distributional method for hypernymy detection. Inspired by recent progress in relation classification, we use a long short-term memory (LSTM) network [@hochreiter1997long] to encode dependency paths. In order to create enough training data for our network, we followed the previous methodology of constructing a dataset based on knowledge resources. We first show that our path-based approach, on its own, substantially improves performance over prior path-based methods, yielding performance comparable to state-of-the-art distributional methods. Our analysis suggests that the neural path representation enables better generalizations. While coarse-grained generalizations, such as replacing a word by its POS tag, capture mostly syntactic similarities between paths, HypeNET also captures semantic similarities. We then show that we can easily integrate distributional signals in the network. The integration results confirm that the distributional and path-based signals indeed provide complementary information, with the combined model yielding an improvement of up to 14 $F_1$ points over each individual model.[^1] Background {#sec:background} ========== We introduce the two main approaches for hypernymy detection: distributional (Section \[sec:distributional\]), and path-based (Section \[sec:path-based\]). We then discuss the recent use of recurrent neural networks in the related task of relation classification (Section \[sec:relation-classification\]). Distributional Methods {#sec:distributional} ---------------------- Hypernymy detection is commonly addressed using distributional methods. In these methods, the decision whether $y$ is a hypernym of $x$ is based on the distributional representations of the two terms, i.e., the contexts with which each term occurs *separately* in the corpus.
Earlier methods developed unsupervised measures for hypernymy, starting with symmetric similarity measures [@lin1998information], and followed by directional measures based on the distributional inclusion hypothesis [@weeds2003general; @kotlerman2010directional]. This hypothesis states that the contexts of a hyponym are expected to be largely included in those of its hypernym. More recent work [@santus2014chasing; @rimell2014distributional] introduces new measures, based on the assumption that the most typical linguistic contexts of a hypernym are less informative than those of its hyponyms. More recently, the focus of the distributional approach shifted to supervised methods. In these methods, the $(x,y)$ term-pair is represented by a feature vector, and a classifier is trained on these vectors to predict hypernymy. Several methods are used to represent term-pairs as a combination of each term’s embeddings vector: concatenation $\vec{x} \oplus \vec{y}$ [@baroni2012entailment], difference $\vec{y} - \vec{x}$ [@roller2014inclusive; @weeds2014learning], and dot-product $\vec{x} \cdot \vec{y}$. Using neural word embeddings [@mikolov2013distributed; @pennington2014glove], these methods are easy to apply, and show good results [@baroni2012entailment; @roller2014inclusive]. Path-based Methods {#sec:path-based} ------------------ A different approach to detecting hypernymy between a pair of terms $(x, y)$ considers the lexico-syntactic paths that connect the *joint* occurrences of $x$ and $y$ in a large corpus. Automatic acquisition of hypernyms from free text, based on such paths, was first proposed by Hearst, who identified a small set of lexico-syntactic paths that indicate hypernymy relations (e.g. *Y such as X*, *X and other Y*). In a later work, Snow et al. learned to detect hypernymy.
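The Hearst-style patterns above can be made concrete with a toy surface-form matcher. Real systems match such patterns over parsed corpora, so the regular expressions and the naive plural stripping below are purely illustrative:

```python
import re

# Toy surface-form matcher for two Hearst-style patterns.  The lazy
# quantifier plus optional trailing "s" is a crude singularizer; a real
# system would operate on lemmatized, parsed text instead.
PATTERNS = [
    # "Y such as X"  ->  (hyponym X, hypernym Y)
    (re.compile(r"(\w+?)s?\b such as (\w+?)s?\b"), lambda m: (m.group(2), m.group(1))),
    # "X and other Y"  ->  (hyponym X, hypernym Y)
    (re.compile(r"(\w+?)s?\b and other (\w+?)s?\b"), lambda m: (m.group(1), m.group(2))),
]

def extract_hypernym_pairs(sentence):
    """Return (hyponym, hypernym) pairs matched in the sentence."""
    pairs = []
    for pattern, to_pair in PATTERNS:
        for m in pattern.finditer(sentence):
            pairs.append(to_pair(m))
    return pairs

assert ("parrot", "animal") in extract_hypernym_pairs("animals such as parrots")
assert ("parrot", "bird") in extract_hypernym_pairs("parrots and other birds")
```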
Rather than searching for specific paths that indicate hypernymy, they represent each $(x, y)$ term-pair as the multiset of all dependency paths that connect $x$ and $y$ in the corpus, and train a logistic regression classifier to predict whether $y$ is a hypernym of $x$, based on these paths. Paths that indicate hypernymy are those that were assigned high weights by the classifier. The paths identified by this method were shown to subsume those found by Hearst, yielding improved performance. Variations of Snow et al.’s method were later used in tasks such as taxonomy construction [@snow2006semantic; @kozareva2010semi; @carlson2010toward; @riedel2013relation], analogy identification [@turney2006similarity], and definition extraction [@borg2009evolutionary; @navigli2010learning]. A major limitation in relying on lexico-syntactic paths is the sparsity of the feature space. Since similar paths may somewhat vary at the lexical level, generalizing such variations into more abstract paths can increase recall. The PATTY algorithm [@nakashole2012patty] applied such generalizations for the purpose of acquiring a taxonomy of term relations from free text. For each path, they added generalized versions in which a subset of words along the path were replaced by either their POS tags, their ontological types or wild-cards. This generalization increased recall while maintaining the same level of precision. *(Figure \[fig:x\_is\_y\]: an example dependency parse of “parrot is a bird”, with POS tags NOUN, VERB, DET, NOUN.)* RNNs for Relation Classification {#sec:relation-classification} -------------------------------- Relation classification is a related task whose goal is to classify the relation that is expressed between two target terms in a given sentence to one of predefined relation classes. To illustrate, consider the following sentence, from the SemEval-2010 relation classification task dataset [@hendrickx2009semeval]: “The $[$apples$]_{e_1}$ are in the $[$basket$]_{e_2}$".
Here, the relation expressed between the target entities is $Content-Container(e_1, e_2)$. The shortest dependency paths between the target entities were shown to be informative for this task [@fundel2007relex]. Recently, deep learning techniques showed good performance in capturing the indicative information in such paths. In particular, several papers show improved performance using recurrent neural networks (RNN) that process a dependency path edge-by-edge. Xu et al.  apply a separate long short-term memory (LSTM) network to each sequence of words, POS tags, dependency labels and WordNet hypernyms along the path. A max-pooling layer on the LSTM outputs is used as the input of a network that predicts the classification. Other papers suggest incorporating additional network architectures to further improve performance [@nguyen2015combining; @liu2015dependency]. While relation classification and hypernymy detection are both concerned with identifying semantic relations that hold for pairs of terms, they differ in a major respect. In relation classification the relation should be expressed in the given text, while in hypernymy detection, the goal is to recognize a generic lexical-semantic relation between terms that holds in many contexts. Accordingly, in relation classification a term-pair is represented by a single dependency path, while in hypernymy detection it is represented by the multiset of all dependency paths in which they co-occur in the corpus. LSTM-based Hypernymy Detection {#sec:our-method} ============================== We present HypeNET, an LSTM-based method for hypernymy detection. We first focus on improving path representation (Section \[sec:our-path-based\]), and then integrate distributional signals into our network, resulting in a combined method (Section \[sec:combined\]). 
Path-based Network {#sec:our-path-based} ------------------ Similarly to prior work, we represent each dependency path as a sequence of edges that leads from $x$ to $y$ in the dependency tree.[^2] Each edge contains the lemma and part-of-speech tag of the source node, the dependency label, and the edge direction between two subsequent nodes. We denote each edge as $lemma/POS/dep/dir$. See figure \[fig:x\_is\_y\] for an illustration. Rather than treating an entire dependency path as a single feature, we encode the sequence of edges using a long short-term memory (LSTM) network. The vectors obtained for the different paths of a given $(x,y)$ pair are pooled, and the resulting vector is used for classification. Figure \[fig:lstm\] depicts the overall network structure, which is described below. #### Edge Representation We represent each edge by the concatenation of its components’ vectors: $$\vec{v_e} = [\vec{v_l}, \vec{v_{pos}}, \vec{v_{dep}}, \vec{v_{dir}}]$$ where $\vec{v_l}, \vec{v_{pos}}, \vec{v_{dep}}, \vec{v_{dir}}$ represent the embedding vectors of the lemma, part-of-speech, dependency label and dependency direction (along the path from $x$ to $y$), respectively. #### Path Representation For a path $p$ composed of edges $e_1,...,e_k$, the edge vectors $\vec{v_{e_1}}, ..., \vec{v_{e_k}}$ are fed in order to an LSTM encoder, resulting in a vector $\vec{o_p}$ representing the entire path $p$. The LSTM architecture is effective at capturing temporal patterns in sequences. We expect the training procedure to drive the LSTM encoder to focus on parts of the path that are more informative for the classification task while ignoring others. #### Term-Pair Classification Each $(x, y)$ term-pair is represented by the multiset of lexico-syntactic paths that connected $x$ and $y$ in the corpus, denoted as $paths(x,y)$, while the supervision is given for the term pairs. 
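To make the $lemma/POS/dep/dir$ notation concrete, here is a small illustrative sketch in plain Python. The parse is hard-coded; a real pipeline would read these attributes off a dependency parser such as spaCy, and the dependency labels chosen for this toy sentence are assumptions:

```python
# Toy illustration of the lemma/POS/dep/dir edge encoding, using a
# hard-coded parse of "parrot is a bird" (x = parrot, y = bird).  The
# dependency labels are plausible assumptions, not actual parser output.
parse = [
    # (lemma, POS, dependency label)
    ("parrot", "NOUN", "nsubj"),
    ("be", "VERB", "ROOT"),
    ("bird", "NOUN", "attr"),
]
# '>' marks an edge traversed from dependent to head, '<' the reverse,
# and '-' the root, following the notation in the text.
dirs = [">", "-", "<"]

def encode_path(nodes, directions):
    """Serialize a dependency path as a sequence of lemma/POS/dep/dir edges."""
    return " ".join(f"{lemma}/{pos}/{dep}/{d}"
                    for (lemma, pos, dep), d in zip(nodes, directions))

path = encode_path(parse, dirs)
```

Each such edge string is then mapped to the concatenated embedding $\vec{v_e}$ before being fed to the LSTM.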
We represent each $(x, y)$ term-pair as the weighted-average of its path vectors, by applying average pooling on its path vectors, as follows: $$\resizebox{0.89\hsize}{!}{$\vec{v_{xy}} = \vec{v}_{paths(x,y)} = \frac{\sum_{p \in paths(x,y)} f_{p, (x,y)} \cdot \vec{o_p}}{\sum_{p \in paths(x,y)} f_{p, (x,y)}}$} \label{eq:fv1}$$ where $f_{p, (x,y)}$ is the frequency of $p$ in $paths(x,y)$. We then feed this path vector to a single-layer network that performs binary classification to decide whether $y$ is a hypernym of $x$. $$c = softmax(W \cdot \vec{v_{xy}}) \label{eq:network}$$ $c$ is a 2-dimensional vector whose components sum to 1, and we classify a pair as positive if $c[1] > 0.5$. #### Implementation Details To train the network, we used PyCNN.[^3] We minimize the cross entropy loss using gradient-based optimization, with mini-batches of size 10 and the Adam update rule [@kingma2014adam]. Regularization is applied by a dropout on each of the components’ embeddings. We tuned the hyper-parameters (learning rate and dropout rate) on the validation set (see the appendix for the hyper-parameters values). We initialized the lemma embeddings with the pre-trained GloVe word embeddings [@pennington2014glove], trained on Wikipedia. We tried both the 50-dimensional and 100-dimensional embedding vectors and selected the ones that yield better performance on the validation set.[^4] The other embeddings, as well as out-of-vocabulary lemmas, are initialized randomly. We update all embedding vectors during training. Integrated Network {#sec:combined} ------------------ The network presented in Section \[sec:our-path-based\] classifies each $(x, y)$ term-pair based on the paths that connect $x$ and $y$ in the corpus. Our goal was to improve upon previous path-based methods for hypernymy detection, and we show in Section \[sec:results\] that our network indeed outperforms them. 
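The weighted-average pooling and the single-layer classification step can be sketched in a few lines of numpy. Dimensions, path vectors and classifier weights below are random illustrative values; in the real model they are learned jointly with the LSTM encoder:

```python
import numpy as np

# Toy numpy sketch of the frequency-weighted average pooling (Eq. fv1)
# followed by softmax classification (Eq. network).  All values are
# illustrative; the real model learns W and the path vectors o_p jointly.
rng = np.random.default_rng(0)
path_vecs = {                                  # o_p for each distinct path
    "X be a Y": rng.normal(size=5),
    "Y such as X": rng.normal(size=5),
}
freqs = {"X be a Y": 3, "Y such as X": 1}      # f_{p,(x,y)} path frequencies

# Weighted average over the multiset of paths of the (x, y) term-pair.
v_xy = sum(freqs[p] * v for p, v in path_vecs.items()) / sum(freqs.values())

W = rng.normal(size=(2, 5))                    # single-layer classifier
logits = W @ v_xy
c = np.exp(logits) / np.exp(logits).sum()      # softmax over two classes

is_hypernym = c[1] > 0.5
```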
Yet, as path-based and distributional methods are considered complementary, we present a simple way to integrate distributional features in the network, yielding improved performance. We extended the network to take into account distributional information on each term. Inspired by the supervised distributional concatenation method [@baroni2012entailment], we simply concatenate $x$ and $y$ word embeddings to the $(x, y)$ feature vector, redefining $\vec{v_{xy}}$: $$\vec{v_{xy}} = [\vec{v_{w_x}}, \vec{v}_{paths(x,y)}, \vec{v_{w_y}}] \label{eq:fv2}$$ where $\vec{v_{w_x}}$ and $\vec{v_{w_y}}$ are $x$ and $y$’s word embeddings, respectively, and $\vec{v}_{paths(x,y)}$ is the averaged path vector defined in equation \[eq:fv1\]. This way, each $(x, y)$ pair is represented using both the distributional features of $x$ and $y$, and their path-based features. Dataset {#sec:dataset} ======= Creating Instances {#sec:instances} ------------------ Neural networks typically require a large amount of training data, whereas the existing hypernymy datasets, like BLESS [@baroni2011we], are relatively small. Therefore, we followed the common methodology of creating a dataset using distant supervision from knowledge resources [@snow2004learning; @riedel2013relation]. Following Snow et al., who constructed their dataset based on WordNet hypernymy, and aiming to create a larger dataset, we extract hypernymy relations from several resources: WordNet [@fellbaum1998wordnet], DBPedia [@auer2007dbpedia], Wikidata [@vrandevcic2012wikidata] and Yago [@suchanek2007yago].

  **resource**   **relations**
  -------------- -----------------------------
  WordNet        instance hypernym, hypernym
  DBPedia        type
  Wikidata       subclass of, instance of
  Yago           subclass of

  : Hypernymy relations in each resource.[]{data-label="tab:relations"}

All instances in our dataset, both positive and negative, are pairs of terms that are directly related in at least one of the resources.
These resources contain thousands of relations, some of which indicate hypernymy with varying degrees of certainty. To avoid including questionable relation types, we consider as denoting positive examples only indisputable hypernymy relations (Table \[tab:relations\]), which we manually selected from the set of hypernymy indicating relations in these resources. Term-pairs related by other relations (including hyponymy) are considered as negative instances. Using related rather than random term-pairs as negative instances tests our method’s ability to distinguish between hypernymy and other kinds of semantic relatedness. We maintain a ratio of 1:4 positive to negative pairs in the dataset. Like Snow et al., we include only term-pairs that have joint occurrences in the corpus, requiring at least two different dependency paths for each pair. Random and Lexical Dataset Splits {#sec:train-test-validation} --------------------------------- As our primary dataset, we perform standard random splitting, with 70% train, 25% test and 5% validation sets.

                  **train**   **test**   **validation**   **all**
  --------------- ----------- ---------- ---------------- ---------
  random split    49,475      17,670     3,534            70,679
  lexical split   20,335      6,610      1,350            28,295

  : The number of instances in each dataset.[]{data-label="tab:dataset-size"}

  **path**
  -------------------------------------------------------------------
  `X/NOUN/dobj/> establish/VERB/ROOT/- as/ADP/prep/< Y/NOUN/pobj/<`
  `X/NOUN/dobj/> VERB as/ADP/prep/< Y/NOUN/pobj/<`
  `X/NOUN/dobj/> * as/ADP/prep/< Y/NOUN/pobj/<`
  `X/NOUN/dobj/> establish/VERB/ROOT/- ADP Y/NOUN/pobj/<`

As pointed out by Levy et al., supervised distributional lexical inference methods tend to perform “lexical memorization”, i.e., instead of learning a relation between the two terms, they mostly learn an independent property of a single term in the pair: whether it is a “prototypical hypernym” or not.
For instance, if the training set contains term-pairs such as *(dog, animal)*, *(cat, animal)*, and *(cow, animal)*, all annotated as positive examples, the algorithm may learn that *animal* is a prototypical hypernym, classifying any new *(x, animal)* pair as positive, regardless of the relation between $x$ and *animal*. Levy et al. suggested splitting the train and test sets such that each will contain a distinct vocabulary (“lexical split”), in order to prevent the model from overfitting by lexical memorization. To investigate such behaviors, we present results also for a lexical split of our dataset. In this case, we split the train, test and validation sets such that each contains a distinct vocabulary. We note that this differs from Levy et al., who split only the train and the test sets, and dedicated a subset of the train set for validation. We chose to deviate from their setup because we noticed that when the validation set contains terms from the train set, the model is rewarded for lexical memorization when tuning the hyper-parameters, consequently yielding suboptimal performance on the lexically-distinct test set. When each set has a distinct vocabulary, the hyper-parameters are tuned to avoid lexical memorization and are likely to perform better on the test set. We tried to keep roughly the same 70/25/5 ratio in our lexical split.[^5] The sizes of the two datasets are shown in Table \[tab:dataset-size\]. Indeed, training a model on a lexically split dataset may result in a more general model, one that can better handle pairs whose terms were not seen during training. However, we argue that in the common applied scenario, the inference involves an unseen pair $(x, y)$, in which $x$ and/or $y$ have already been observed separately. Training on a random split may introduce the model to a term’s “prior probability” of being a hypernym or a hyponym, and this information can be exploited beneficially at inference time.
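A lexical split of this kind can be sketched as follows. The term-pairs below are toy examples; the actual dataset is built from the knowledge resources described above:

```python
import random

# Sketch of a "lexical split": partition the *vocabulary* into disjoint
# train/test/validation sets, then keep only the term-pairs whose two
# terms fall inside the same partition.  The pairs are toy examples.
pairs = [("dog", "animal"), ("cat", "animal"), ("spelt", "wheat"),
         ("fantasy", "genre"), ("parrot", "bird"), ("cow", "mammal")]

vocab = sorted({term for pair in pairs for term in pair})
random.seed(0)
random.shuffle(vocab)
n = len(vocab)
train_v = set(vocab[: int(0.70 * n)])
test_v = set(vocab[int(0.70 * n): int(0.95 * n)])
val_v = set(vocab[int(0.95 * n):])

def bucket(pair):
    """Assign a pair to the split holding both of its terms, else drop it."""
    for name, part in (("train", train_v), ("test", test_v), ("val", val_v)):
        if pair[0] in part and pair[1] in part:
            return name
    return None  # pairs straddling two partitions are discarded

split = {pair: bucket(pair) for pair in pairs}
```

Since no term appears in two partitions, a model cannot be rewarded for memorizing that a specific term is a prototypical hypernym.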
Baselines {#sec:baselines}
=========

We compare HypeNET with several state-of-the-art methods for hypernymy detection, as described in Section \[sec:background\]: path-based methods (Section \[sec:path-based-baseline\]) and distributional methods (Section \[sec:distributional-baselines\]). Since different prior works use different datasets and corpora, we replicated the baselines rather than comparing to the reported results. We use the Wikipedia dump from May 2015 as the underlying corpus of all the methods, and parse it using spaCy.[^6] We perform model selection on the validation set to tune the hyper-parameters of each method.[^7] The best hyper-parameters are reported in the appendix.

Path-based Methods {#sec:path-based-baseline}
------------------

#### Snow

We follow the original paper, and extract all shortest paths of four edges or less between terms in a dependency tree. Like , we add paths with “satellite edges”, i.e., single words not already contained in the dependency path, which are connected to either X or Y, allowing paths like *such Y as X*. The number of distinct paths was 324,578. We apply $\chi^2$ feature selection to keep only the 100,000 most informative paths and train a logistic regression classifier.

#### Generalization

We also compare our method to a baseline that uses generalized dependency paths. Following PATTY’s approach to generalizing paths [@nakashole2012patty], we replace edges with their part-of-speech tags as well as with wild cards. We generate the powerset of all possible generalizations, including the original paths. See Table \[tab:gen-example\] for examples. The number of features after generalization went up to 2,093,220.
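A minimal sketch of this powerset-style generalization: each edge of a path may be kept verbatim, replaced by its part-of-speech tag, or replaced by a wildcard, and every combination is emitted. The `(word, pos)` edge representation here is a simplification of the full edge encoding used in the paper.

```python
from itertools import product

def generalize_path(path):
    """path: a list of (word, pos) edge tokens.  Returns the set of all
    generalizations, where each edge is independently kept verbatim,
    replaced by its POS tag, or replaced by the wildcard '*'.
    The original path is included (all edges kept verbatim)."""
    options = [(word, pos, '*') for word, pos in path]
    return {' '.join(combo) for combo in product(*options)}
```

For a two-edge path like *establish as*, this yields nine variants, including `VERB as` and `* as`, mirroring the examples in Table \[tab:gen-example\].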
Similarly to the first baseline, we apply feature selection, this time keeping the 1,000,000 most informative paths, and train a logistic regression classifier over the generalized paths.[^8]

Distributional Methods {#sec:distributional-baselines}
----------------------

  -------------------- --------------- ------------ ----------- --------------- ------------ -----------
                        **precision**   **recall**     $F_1$     **precision**   **recall**     $F_1$
  Snow                      0.843          0.452        0.589         0.760          0.438        0.556
  Snow + Gen                0.852          0.561        0.676         0.759          0.530        0.624
  HypeNET Path-based        0.811          0.716        0.761         0.691        **0.632**      0.660
  SLQS                      0.491          0.737        0.589         0.375          0.610        0.464
  Best supervised           0.901          0.637        0.746         0.754          0.551        0.637
  HypeNET Integrated      **0.913**      **0.890**    **0.901**     **0.809**        0.617      **0.700**
  -------------------- --------------- ------------ ----------- --------------- ------------ -----------

  : Performance scores of HypeNET and the baselines on the random (left) and lexical (right) dataset splits.[]{data-label="tab:results"}

#### Unsupervised

SLQS [@santus2014chasing] is an entropy-based measure for hypernymy detection, reported to outperform previous state-of-the-art unsupervised methods [@weeds2003general; @kotlerman2010directional]. The original paper was evaluated on the BLESS dataset [@baroni2011we], which consists of mostly frequent words. Applying the vanilla settings of SLQS on our dataset, which also contains rare terms, resulted in low performance. Therefore, we received assistance from Enrico Santus, who kindly provided the results of SLQS on our dataset after tuning the system as follows. The validation set was used to tune the threshold for classifying a pair as positive, as well as the maximum number of each term’s most associated contexts ($N$). In contrast to the original paper, in which the number of each term’s contexts is fixed to $N$, in this adaptation it was set to the minimum between the number of contexts with LMI score above zero and $N$. In addition, the SLQS scores were not multiplied by the cosine similarity scores between terms, and terms were lemmatized prior to computing the SLQS scores, significantly improving recall.
As our results suggest, while this method is state-of-the-art for unsupervised hypernymy detection, it is basically designed to classify the specificity level of related terms, rather than hypernymy in particular.

#### Supervised

To represent term-pairs with distributional features, we tried several state-of-the-art methods: concatenation $\vec{x} \oplus \vec{y}$ [@baroni2012entailment], difference $\vec{y} - \vec{x}$ [@roller2014inclusive; @weeds2014learning], and dot-product $\vec{x} \cdot \vec{y}$. We downloaded several pre-trained embeddings [@mikolov2013distributed; @pennington2014glove] of different sizes, and trained a number of classifiers: logistic regression, SVM, and SVM with RBF kernel, which was reported by to perform best in this setting. We perform model selection on the validation set to select the best vectors, method and regularization factor (see the appendix).

Results {#sec:results}
=======

Table \[tab:results\] displays performance scores of HypeNET and the baselines. *HypeNET Path-based* is our path-based recurrent neural network model (Section \[sec:our-path-based\]) and *HypeNET Integrated* is our combined method (Section \[sec:combined\]).

Comparing the path-based methods shows that generalizing paths improves recall while maintaining similar levels of precision, confirming the behavior found in . *HypeNET Path-based* outperforms both path-based baselines with a significant improvement in recall and only slightly lower precision. The recall boost is due to better path generalization, as demonstrated in Section \[sec:qualitative\].

Among the distributional methods, the unsupervised SLQS baseline performed slightly worse on our dataset. Its low precision stems from its inability to distinguish between hypernyms and meronyms, which are common in our dataset, causing many false positive pairs such as *(zabrze, poland)* and *(kibbutz, israel)*.
We sampled 50 false positive pairs from each dataset split, and found that 38% of the false positive pairs in the random split and 48% of those in the lexical split were holonym-meronym pairs.

In accordance with previously reported results, the supervised embedding-based method is the best performing baseline on our dataset as well. *HypeNET Path-based* performs slightly better, achieving state-of-the-art results. Adding distributional features to our method shows that the two approaches are indeed complementary. On both dataset splits, the performance differences between *HypeNET Integrated* and *HypeNET Path-based*, as well as the supervised distributional method, are substantial and statistically significant, with a p-value of 1% (paired t-test).

We also reconfirm that supervised distributional methods indeed perform worse on a lexical split [@levy2015supervised]. We observe a similar reduction when using HypeNET; it is not a result of lexical memorization, but rather stems from over-generalization (Section \[sec:qualitative\]).
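The paired t-test used for the significance claim above compares two systems on matched samples; the following is a minimal sketch of the test statistic (the scores below are placeholders, not the paper’s actual per-sample scores).

```python
import math
from statistics import mean, stdev

def paired_t_statistic(scores_a, scores_b):
    """t statistic of the paired t-test: the mean of the per-sample score
    differences divided by its standard error.  |t| is then compared
    against the critical value for n-1 degrees of freedom."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))
```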
[ | c | c | c | ]{} **method** & **path** & **example text**\
& `X/NOUN/nsubj/> be/VERB/ROOT/- Y/NOUN/attr/< direct/VERB/acl/>` &\
& `X/NOUN/nsubj/> be/VERB/ROOT/- Y/NOUN/attr/< publish/VERB/acl/>` &\
& `X/NOUN/compound/> NOUN^* be/VERB/ROOT/- Y/NOUN/attr/< base/VERB/acl/>` &\
& `X/NOUN/compound/> NOUN Y/NOUN/compound/<` &\
& `X/NOUN/nsubj/> be/VERB/ROOT/- Y/NOUN/attr/< (release|direct|produce|write)/VERB/acl/>` &\
& `X/NOUN/compound/> (association|co.|company|corporation|foundation|group|inc.|international|limited|ltd.)/NOUN/nsubj/> be/VERB/ROOT/- Y/NOUN/attr/< ((create|found|headquarter|own|specialize)/VERB/acl/>)?` &\

Analysis {#sec:analysis}
========

Qualitative Analysis of Learned Paths {#sec:qualitative}
-------------------------------------

We analyze HypeNET’s ability to generalize over path structures by comparing prominent indicative paths learned by each of the path-based methods. We do so by finding high-scoring paths that contributed to the classification of true-positive pairs in the dataset. In the path-based baselines, these are the highest-weighted features as learned by the logistic regression classifier. In the LSTM-based method, it is less straightforward to identify the most indicative paths.
We assess the contribution of a certain path $p$ to classification by regarding it as the only path that appeared for the term-pair, and compute its <span style="font-variant:small-caps;">true</span> label score from the class distribution: $softmax(W \cdot \vec{v_{xy}})[1]$, setting $\vec{v_{xy}} = [\vec{0}, \vec{o_p}, \vec{0}]$.

A notable pattern is that Snow’s method learns specific paths, like *X is Y from* (e.g. ***Megadeth** is an American thrash metal **band** from Los Angeles*). While Snow’s method can only rely on verbatim paths, limiting its recall, the generalized version of Snow often makes coarse generalizations, such as *X VERB Y from*. Clearly, such a path is too general, and almost any verb assigned to it results in a non-indicative path (e.g. *X take Y from*). Efforts by the learning method to avoid such generalization, again, lower the recall. HypeNET provides a better midpoint, making fine-grained generalizations by learning additional semantically similar paths such as *X become Y from* and *X remain Y from*. See Table \[tab:prominent-paths\] for additional example paths which illustrate these behaviors.

We also noticed that while on the random split our model learns a range of specific paths such as *X is Y published* (learned for e.g. *Y=magazine*) and *X is Y produced* (*Y=film*), on the lexical split it only learns the general *X is Y* path for these relations. We note that *X is Y* is a rather “noisy” path, which may occur in ad-hoc contexts without indicating a generic hypernymy relation (e.g. ***chocolate** is a big **problem*** in the context of children’s health). While such a model may identify hypernymy relations between unseen terms, based on general paths, it is prone to over-generalization, hurting its performance, as seen in Table \[tab:results\]. As discussed in § \[sec:train-test-validation\], we suspect that this scenario, in which both terms are unseen, is not common enough to justify this limiting training setup.
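The single-path scoring described above can be sketched with plain NumPy; the shapes of `W` and the path vector `o_p` follow the paper’s notation, and the zero word-embedding slots stand in for the absent $x$ and $y$ vectors.

```python
import numpy as np

def path_contribution(W, o_p, emb_dim):
    """Score a single path p as if it were the only evidence for the
    term-pair: set v_xy = [0, o_p, 0], then return softmax(W @ v_xy)[1],
    the probability assigned to the TRUE class."""
    v_xy = np.concatenate([np.zeros(emb_dim), o_p, np.zeros(emb_dim)])
    logits = W @ v_xy
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return (exp / exp.sum())[1]
```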
Error Analysis {#sec:error_analysis}
--------------

  **Relation**               **%**
  -------------------------- -----------
  synonymy                   $21.37\%$
  hyponymy                   $29.45\%$
  holonymy / meronymy        $9.36\%$
  hypernymy-like relations   $21.03\%$
  other relations            $18.77\%$

  : Distribution of relations holding between each pair of terms in the resources among false positive pairs.[]{data-label="tab:false-positives"}

#### False Positives

We categorized the false positive pairs on the random split according to the relation holding between each pair of terms in the resources used to construct the dataset. We grouped several semantic relations from different resources into broad categories, e.g. *synonym* also includes *alias* and *Wikipedia redirection*. Table \[tab:false-positives\] displays the distribution of semantic relations among false positive pairs.

More than 20% of the errors stem from confusing synonymy with hypernymy, which are known to be difficult to distinguish. An additional 30% of the term-pairs are reversed hypernym-hyponym pairs ($y$ is a hyponym of $x$). Examining a sample of these pairs suggests that they are usually near-synonyms, i.e., it is not clear whether one term is truly more general than the other. For instance, *fiction* is annotated in WordNet as a hypernym of *story*, while our method classified *fiction* as its hyponym. A possible future research direction might be to extend our network to classify term-pairs simultaneously into multiple semantic relations, as in . Such a multiclass model could hopefully better distinguish between these similar semantic relations. Another notable category is hypernymy-like relations: these are other relations in the resources that could also be considered hypernymy, but were annotated as negative due to our restrictive selection of only indisputable hypernymy relations from the resources (see Section \[sec:instances\]).
These include instances like *(Goethe, occupation, novelist)* and *(Homo, subdivisionRanks, species)*. Lastly, other errors made by the model often correspond to term-pairs that co-occur very few times in the corpus, e.g. *Xebec*, a studio producing Anime, was falsely classified as a hyponym of *anime*.

      **Error Type**       **%**
  --- -------------------- --------
  1   low statistics       $80\%$
  2   infrequent term      $36\%$
  3   rare hyponym sense   $16\%$
  4   annotation error     $8\%$

  : (Overlapping) categories of false negative pairs: (1) $x$ and $y$ co-occurred fewer than 25 times (the average number of co-occurrences for true positive pairs is 99.7). (2) Either $x$ or $y$ is infrequent. (3) The hypernymy relation holds for a rare sense of $x$. (4) $(x,y)$ was incorrectly annotated as positive.[]{data-label="tab:false-negatives"}

#### False Negatives

We sampled 50 term-pairs that were falsely classified as negative, and analyzed the major (overlapping) types of errors (Table \[tab:false-negatives\]). Most of these pairs had only a few co-occurrences in the corpus. This is often due either to infrequent terms (e.g. *cbc.ca*) or to a rare sense of $x$ in which the hypernymy relation holds (e.g. *(night, play)*, holding for “Night”, a dramatic sketch by Harold Pinter). Such a term-pair may have too few hypernymy-indicating paths, leading to classifying it as negative.

Conclusion
==========

We presented HypeNET, a neural-network-based method for hypernymy detection. First, we focused on improving path representation using an LSTM, resulting in a path-based model that performs significantly better than prior path-based methods, and matches the performance of the previously superior distributional methods. In particular, we demonstrated that the increase in recall is a result of generalizing semantically similar paths, in contrast to prior methods, which either make no generalizations or over-generalize paths.
We then extended our network by integrating distributional signals, yielding an improvement of an additional 14 $F_1$ points, and demonstrating that the path-based and distributional approaches are indeed complementary. Finally, our architecture is straightforwardly applicable to multi-class classification, which, in future work, could be used to classify term-pairs into multiple semantic relations.

Acknowledgments {#acknowledgments .unnumbered}
===============

We would like to thank Omer Levy for his involvement and assistance in the early stage of this project and Enrico Santus for helping us by computing the results of SLQS [@santus2014chasing] on our dataset. This work was partially supported by an Intel ICRI-CI grant, the Israel Science Foundation grant 880/12, and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).

Best Hyper-parameters {#sec:hyper-params}
=====================

Table \[tab:features\] displays the chosen hyper-parameters of each method, yielding the highest $F_1$ score on the validation set.
  **split**   **method**        **values**
  ----------- ----------------- ---------------------------------------------------------------------------------
  random      Snow              regularization: $L_2$
              Snow + Gen        regularization: $L_1$
              LSTM              embeddings: GloVe-100-Wiki, learning rate: $\alpha=0.001$, dropout: $d=0.5$
              SLQS              N=100, threshold = 0.000464
              Best Supervised   method: concatenation, classifier: SVM, embeddings: GloVe-300-Wiki
              LSTM-Integrated   embeddings: GloVe-50-Wiki, learning rate: $\alpha=0.001$, word dropout: $d=0.3$
  lexical     Snow              regularization: $L_2$
              Snow + Gen        regularization: $L_2$
              LSTM              embeddings: GloVe-50-Wiki, learning rate: $\alpha=0.001$, dropout: $d=0.5$
              SLQS              N=100, threshold = 0.007629
              Best Supervised   method: concatenation, classifier: SVM, embeddings: GloVe-100-Wikipedia
              LSTM-Integrated   embeddings: GloVe-50-Wiki, learning rate: $\alpha=0.001$, word dropout: $d=0.3$

  : The best hyper-parameters in every model.[]{data-label="tab:features"}

[^1]: Our code and data are available at:\
<https://github.com/vered1986/HypeNET>

[^2]: Like , we added, for each path, additional paths containing single daughters of $x$ or $y$ not already contained in the path, referred to by as “satellite edges”. This enables including paths like *Such Y as X*, in which the word “such” is not on the path between $x$ and $y$.

[^3]: <https://github.com/clab/cnn>

[^4]: Higher-dimensional embeddings did not seem to improve performance, while hurting the training runtime.

[^5]: The lexical split discards many pairs consisting of cross-set terms.

[^6]: <https://spacy.io/>

[^7]: We applied grid search over a range of values, and picked the ones that yield the highest $F_1$ score on the validation set.

[^8]: We also tried keeping the 100,000 most informative paths, but the performance was worse.
---
author:
- 'F. Przygodda'
- 'R. van Boekel'
- 'P. Ábrahám'
- 'S. Y. Melnikov'
- 'L. B. F. M. Waters'
- 'Ch. Leinert'
date: 'Received 10 September 2003 / Accepted 6 November 2003'
title: 'Evidence for grain growth in T Tauri disks[^1]'
---

Introduction
============

T Tauri stars are low-mass young stellar objects with ages of $10^6-10^7$ yr. One of the characteristics of so-called classical T Tauri stars is a distinctive IR excess. The origin of the IR radiation is the accretion disk, which plays a crucial role in star formation by transporting material to the central star while angular momentum is transferred outwards. Different models have been proposed to bring the observed facts into a general framework ([@pringle74]; [@adams87]; [@chiang97]). Yet many open questions remain concerning the details of the configuration, chemistry and evolution of disks.

A particular feature of the infrared spectra of such disks is the silicate band in the region from 8 to 12 $\mu$m. The band originates from the stretching mode of the Si-O bond of silicate minerals like olivine (\[Mg,Fe\]$_2$SiO$_4$), forsterite (Mg$_2$SiO$_4$), enstatite (MgSiO$_3$) and silica (SiO$_2$). The silicate feature observed in T Tauri stars appears in emission as well as in absorption ([@cohen85]). Silicate emission is assumed to emerge from a warm, optically thin disk layer (the disk atmosphere), which is heated by the radiation of the central star ([@chiang97], [@natta00]). Honda et al. (2003) showed that crystalline silicates are present in T Tauri stars, indicating substantial grain processing.

In this paper we present studies of the silicate emission from a sample of 14 T Tauri stars, based on mid-infrared spectroscopy performed during three observing campaigns in February, June and December 2002 at the ESO 3.6 m telescope at La Silla. We analyse the spectra in particular with respect to a possible correlation between the shape and strength of the silicate feature.
Such a correlation has recently been observed for Herbig Ae/Be stars ([@boekel03]) and has been interpreted as evidence of grain processing in the circumstellar disks of those stars.

Observations and data reduction {#redu}
===============================

The observations of the T Tauri stars were performed with TIMMI2, the Thermal Infrared Multi Mode Instrument 2 ([@reimann98]), installed at the 3.6 m telescope at ESO’s observatory at La Silla, Chile. We observed each object in two modes: first we used the imaging mode to obtain photometry at 11.9 $\mu$m (band width 1.2 $\mu$m); in a number of cases we additionally used the 8.9 $\mu$m filter (band width 0.8 $\mu$m). Second, we performed spectroscopy using the low-resolution grism mode (R=160) with a slit width of 1.2 arcsec. In both modes the system was working with chopping and nodding with an amplitude of 10 arcsec.

In our data reduction, the measurements of the spectrophotometric standards were used to determine the shape of the spectra, and the photometric data at 11.9 $\mu$m were used to establish the absolute flux level. The spectrum of the atmospheric extinction, which is needed to perform the first of these steps, was determined from observations of standard stars at different airmass. We regularly observed standard stars to monitor variations in atmospheric transmission. The flux data for the spectrophotometric standard stars (based on models by [@cohen98]) were taken from ESO’s TIMMI2 webpage.

In total we observed 21 T Tauri stars with the silicate feature in emission. We selected the 14 objects for which the silicate feature was measured with a signal-to-noise ratio better than 2 (see Tab. \[tab1\]). Several of these are known to be binary or multiple systems. In two cases we were able to obtain separate spectra of the components. The remaining 7 sources (DO Tau, DQ Tau, FU Ori, BBW 76, VZ Cha, VW Cha, Sz 82) were degraded by noise to the extent that a meaningful spectral analysis was not possible.
For some stars the observations were repeated to verify the results.

  Object      Flux at 11.9 $\mu$m (Jy)   Observed at Run$^{\mathrm{a}}$   Multiple Source$^{\mathrm{b}}$   ISOPHOT spectrum available
  ----------- -------------------------- -------------------------------- -------------------------------- ----------------------------
  AK Sco      2.1                        B                                \*                               \*
  AS 205      8.4                        B                                \*
  CR Cha      0.8                        C                                                                 \*
  DR Tau      2.0                        C                                                                 \*
  GG Tau      0.8                        C                                \*
  GW Ori      6.3                        C                                \*
  Glass I     8.8                        A,B,C                            \*                               \*
  Haro 1-16   0.9                        B
  HBC 639     2.3                        A
  RU Lup      2.4                        B
  S CrA       5.1                        B                                \*                               \*
  SU Aur      3.6                        C                                                                 \*
  TW Hya      0.6                        C
  WW Cha      5.3                        B,C                                                               \*

  : List of observed sources with the silicate feature in emission.[]{data-label="tab1"}

a: A: 5.+6. Feb. 2002, B: 7.+8. Jun. 2002, C: 24.-27. Dec. 2002\
b: for AS 205 and S CrA we were able to obtain separate spectra of the components

For our studies we assumed that the continuum outside of the silicate band is actually reached at the edges of our measured spectral range (at 8.2 $\mu$m and at 13.0 $\mu$m). This assumption is supported by a comparison of our spectra with ISOPHOT spectra (Ábrahám et al. 2003, in prep.; see also [@natta00]). Figure \[plot12\] illustrates this for two examples. Here, the continuum was estimated by fitting a second-order polynomial to all data points shortwards of 8 $\mu$m and beyond 12.8 $\mu$m. Note that the fitted curve matches the ends of the TIMMI2 spectra and is almost linear over the range of the silicate feature. Since we were not able to find such additional data of good quality for all objects of our sample, we estimated the continuum level within the silicate feature by a linear interpolation between the end points of the TIMMI2 spectra at 8.2 $\mu$m and at 13.0 $\mu$m, where the values for these end points were obtained by averaging the data over an interval of 0.2 $\mu$m. The continuum level determined this way agreed with the second-order fit, where available, to typically $\pm$10%.
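The continuum estimate and the normalisation defined in the footnote can be sketched as follows; this is a simplified reconstruction (wavelengths in microns), not the authors’ reduction code.

```python
import numpy as np

def continuum_normalise(wl, flux, lo=8.2, hi=13.0, win=0.2):
    """Estimate the continuum as a straight line between the end points
    of the spectrum, each averaged over a 0.2 micron interval, and
    return F_norm = 1 + (F_total - F_continuum) / <F_continuum>."""
    f_lo = flux[(wl >= lo) & (wl <= lo + win)].mean()
    f_hi = flux[(wl >= hi - win) & (wl <= hi)].mean()
    continuum = f_lo + (wl - lo) * (f_hi - f_lo) / (hi - lo)
    return 1.0 + (flux - continuum) / continuum.mean()
```

A featureless (flat) spectrum maps to a constant level of 1, while silicate emission appears as an excess above 1.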
For all sources we consistently used the derived linearly varying continuum level to determine the continuum-normalised[^2] spectra; these are shown in Fig. \[plot3\].

Analysis of the silicate feature
================================

The spectra in Fig. \[plot3\] show a variety of strengths and shapes. In some spectra (AK Sco, Haro 1-16, CR Cha) a strong silicate emission with a peak near 9.8 $\mu$m is visible. In other cases (RU Lup, DR Tau, HBC 639), the silicate emission is weaker and the profile looks more like a plateau. It is known that the shape of the band is strongly affected by the chemical composition and in particular by the size of the dust grains ([@henning95], [@bouwman01]). A correlation between the strength and shape of the silicate emission could be expected if there are systematic changes in particle size within the group of objects.

To estimate the strength of the silicate emission, we use the peak of the continuum-normalised spectrum. For the characterisation of the band shape we measured the continuum-subtracted flux at two wavelengths, 11.3 and 9.8 $\mu$m. The flux ratio at these wavelengths can be interpreted as an indicator of the grain size composition following Bouwman et al. (2001). They point out that (see Fig. \[plot4\]):

-   the mid-infrared absorption coefficient of small amorphous olivine dust grains (with a size of 0.1 $\mu$m) has a triangular shape with the maximum at 9.8 $\mu$m. This spectrum is compatible with observations of the silicate emission of unprocessed dust of the ISM (see also [@bowey98]);

-   the mid-infrared absorption coefficient of larger amorphous olivine dust grains shows a plateau-like shape in the range from 9.5 to 12 $\mu$m.

Assuming a dust temperature of around 300 K, the blackbody thermal emission will not vary much over our wavelength band, and the silicate emission profiles would have a shape quite close to the shape of the absorption coefficients.
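The two quantities used in this analysis, the peak of the continuum-normalised spectrum and the continuum-subtracted 11.3/9.8 $\mu$m flux ratio, can be sketched as follows; an illustrative reconstruction, where `continuum` is assumed to be the linearly interpolated level from the previous section.

```python
import numpy as np

def feature_strength_and_ratio(wl, flux, continuum):
    """Return the strength of the silicate feature (the peak of the
    continuum-normalised spectrum) and the ratio of the
    continuum-subtracted fluxes at 11.3 and 9.8 micron."""
    normalised = 1.0 + (flux - continuum) / continuum.mean()
    strength = normalised.max()
    f_113 = np.interp(11.3, wl, flux - continuum)
    f_98 = np.interp(9.8, wl, flux - continuum)
    return strength, f_113 / f_98
```

A triangular, small-grain-like feature peaking at 9.8 $\mu$m yields a ratio well below 1, whereas a plateau-like, large-grain profile pushes the ratio towards 1.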
Since the absorption cross section per unit mass of large grains is in general smaller than that of small grains of similar shape, the emission of large grains is in general expected to be weaker than that of small grains. Our observations suggest that most observed mid-infrared spectra of young low- or intermediate-mass stars can be characterised simply by emission corresponding to the two grain sizes covered in Fig. \[plot4\]. We note that the ratio of the absorption coefficients at the wavelengths 11.3 $\mu$m and 9.8 $\mu$m varies as a function of grain size from 0.43 (for 0.1 $\mu$m grains) to about 1.0 (for 2.0 $\mu$m grains). Therefore, the ratio of the (continuum-subtracted) flux at 11.3 $\mu$m to that at 9.8 $\mu$m should be a measure of the grain size.

In Fig. \[plot5\] we plotted this ratio against the strength of the silicate feature. The distribution of the data points shows a clear correlation. Strong silicate features have 11.3/9.8 flux ratios down to 0.53 (for CR Cha), while weak features exhibit higher ratios up to about 1.0 (in the case of the S CrA NW component even 1.42). The remaining points are distributed over the range between these two extremes. The error bars quoted in Fig. \[plot5\] take into account the signal-to-noise of the observations and the uncertainty in the continuum estimate. A weighted linear fit to the data points with the relation $y = a + bx$, where $x$ is the strength of the feature and $y$ the flux ratio, gives the coefficients $a=1.11\pm0.11$ and $b=-0.16\pm0.04$. A very similar correlation between strength and shape of the silicate emission was found by [@boekel03] for isolated Herbig Ae/Be stars, with coefficients of $a=1.48\pm0.09$, $b=-0.28\pm0.04$.

Emission by Polycyclic Aromatic Hydrocarbons (PAHs) at 11.3 $\mu$m does not appear to be a problem for our data set. There was no evidence for PAH emission in our observed spectra nor in the available ISOPHOT-S spectra.
This absence of PAH emission is not unexpected, since one can assume that the UV radiation of T Tauri stars is normally too weak to cause a significant excitation of PAHs ([@natta95]).

Discussion
==========

The large difference in geometric cross section per unit mass between large and small dust grains implies that, if the silicate emission band is dominated by small grains, a substantial amount of large grains could be present without being noticed. On the other hand, the spectra with plateau-like emission are in general weak, likely due to a much lower number of small grains present in the upper disk regions. In some spectra (both components of S CrA, RU Lup) the flux at 11.3 $\mu$m is even higher than the flux at 9.8 $\mu$m, which results in a ratio greater than 1. This cannot be explained by the assumption of amorphous olivine dust with the absorption spectra shown above. It may be that crystalline forsterite (Mg$_2$SiO$_4$) is responsible for this effect, since its absorption spectrum shows a peak at 11.3 $\mu$m (see [@bouwman01], [@honda03]). Possible indications of crystalline forsterite are also seen in some other objects with stronger features (SU Aur, GG Tau).

We also note that the silicate emission emerges from warm, optically thin material, assumed here to be located in the upper layer of a flared accretion disk which is irradiated by the central star. The dust in these upper layers of different T Tauri disks appears to vary continuously between two extreme types: a) small amorphous dust grains, or a mixture of small and large grains, with a spectrum very similar to the dust of the ISM; and b) large amorphous dust grains with a possible addition of crystalline silicate. If we think of these differences in terms of evolution, then the reason for the difference between these two dust types could be either the removal of the small grains from the upper disk layers or the coagulation of small grains into larger ones.
A removal of small dust grains by pure settling to the mid-plane of the disk is unlikely to explain the observed kind of differentiation since the settling time for larger grains is shorter than for the smaller ones. A further mechanism which could be responsible for the vanishing of small grains is by the radiation pressure of the central star. A star of solar luminosity would be quite effective in removing tenth micron sized particles. But in this case, a permanent diffusion of particles from the mid-plane on shorter time scales should avoid a significant decrease of the amount of small particles in the upper layer ([@takeuchi03]). The effect of coagulation is a possible explanation, if constructive collisions to larger grains dominate over destructive collisions which produce again small grains. This should be the case in TTauri disks where the SED can be well fit by gas-rich disks in hydrostatic equilibrium. An interesting detail of our studies is the availability of separate spectra from the components of two binary TTauri systems (AS205 and SCrA). We observed significant differences in strength and also differences in the shape of the silicate band of the components, with the secondary having the - relatively - stronger emission. Since the components presumably have the same age, the difference must have other reasons than the evolutionary state of the stars. This work is partly supported by the Hungarian Research Fund (OTKA No. T037508) and by the Bolyai Fellowship (P.Àbrahàm). S.Y. Melnikov thanks the German Academic Exchange Service (DAAD) for the DAAD fellowship grant. Adams, F. C., Lada, C. J., & Shu, F. H. 1987, ApJ, 312, 788 van den Ancker, M. E., Wesselius, P. R., Tielens, A. G. G. M., van Dishoeck, E. F., & Spinoglio, L. 1999, A&A, 348, 877 van Boekel, R., Waters, L. B. F. M., Dominik, C., Bouwman, J., de Koter, A., Dullemond, C. P., & Paresce, F. 2003, A&A, 400, L21 Bouwman, J., Meeus, G., de Koter, A., Hony, S., Dominik, C., & Waters, L. B. F. M. 
2001, A&A, 375, 950 Bowey, J. E., Adamson, A. J., & Whittet, D. C. B. 1998, MNRAS, 298, 131 Chiang, E. I. & Goldreich, P. 1997, ApJ, 490, 368 Cohen, M. & Witteborn, F. C. 1985, ApJ, 294, 345 Cohen, M. 1998, AJ, 115, 2092 Henning, T., Begemann, B., Mutschke, H., & Dorschner, J. 1995, A&AS, 112, 143 Honda, M., Kataza, H., Okamoto, Y. K., Miyata, T., Yamashita, T., Sako, S., Takubo, S., & Onaka, T. 2003, ApJ, 585, L59 Lynden-Bell, D. & Pringle, J. E. 1974, MNRAS, 168, 603 Natta, A. & Kruegel, E. 1995, A&A, 302, 849 Natta, A., Meyer, M. R., & Beckwith, S. V. W. 2000, ApJ, 534, 838 Reimann, H., Weinert, U., & Wagner, S. 1998, Proc. SPIE, 3354, 865 Takeuchi, T. & Lin, D. N. C. 2003, ApJ, 593, 524 [^1]: Based on observations made with the ESO 3.6m Telescope at the La Silla Observatory under program ID 68.D-0537(A), 69.C-0268 and 70.C-0544. [^2]: To preserve the shape of the emission feature we define:\ $ F_{\mbox{\tiny norm}}(\lambda) = 1+[F_{\mbox{\tiny total}}(\lambda)-F_{\mbox{\tiny continuum}}(\lambda)]/ \langle F_{\mbox{\tiny continuum}} \rangle$
--- abstract: 'Work on generalizations of the Cohen-Lenstra [@CL] and Cohen-Martinet [@CM] heuristics has drawn attention to probability measures on the space of isomorphism classes of profinite groups. As is common in probability theory, it would be desirable to know that these measures are determined by their moments, which in this context are the expected number of surjections to a fixed finite group. We show that a wide class of measures, including those appearing in a recent paper of Liu, Wood, and Zureick-Brown [@LWZ], has this property. The method is to work “locally” with groups that are extensions of a fixed group by a product of finite simple groups. This eventually reduces the problem to the case of powers of a fixed finite simple group, which can be handled by a simple explicit calculation. We can also prove a similar theorem for random modules over an algebra.' author: - Will Sawin title: 'Identifying measures on non-abelian groups and modules by their moments via reduction to a local problem' --- Introduction ============ The primary application of the methods of this paper is a new result in function field number theory, which builds heavily on prior work in [@LWZ]. Thus, we begin by reviewing some notation from [@LWZ]. Let $\Gamma$ be a finite group. A $\Gamma$-group is a profinite group with a continuous action of $\Gamma$. Let $\mathbb F_q$ be a finite field of order $q$ prime to $|\Gamma|$. A totally real $\Gamma$-extension $K/ \mathbb F_q(t)$ is a Galois extension $K/\mathbb F_q(t)$, totally split over $\infty$, together with an isomorphism $\operatorname{Gal}(K/\mathbb F_q(t)) \cong \Gamma$. For such a $K$, define $K^{\#}$ to be the maximal everywhere unramified extension of $K$ that is totally split over $\infty$ and of order relatively prime to $q(q-1)|\Gamma|$. Then $\Gal (K^\#/K)$ is a $\Gamma$-group, with $\Gamma$ acting by conjugation [@LWZ Definition 2.1]. 
Let $n_K$ be the sum of the degrees of the primes in $\mathbb F_q(t)$ that ramify in $K$, and let $E_\Gamma(d,q) $ be the set of totally real $\Gamma$-extensions $K/ \mathbb F_q(t)$ with $n_K=d$. We would like to study the distribution of $\Gal (K^\#/K)$ as a $\Gamma$-group. (This provides a model for the distribution of the Galois groups of the maximal unramified extensions of totally real $\Gamma$-extensions of $\mathbb Q$; see [@LWZ] for more on this.) To do this, following [@LWZ], we consider quotients of $\Gal (K^\#/K)$ that embed as a subquotient into a product of finite groups from a fixed list. This is the analogue of studying the distribution of the class group by first studying the distribution of its $n$-torsion part for fixed $n$: it simplifies the structure of the individual groups under consideration and prevents escape of mass. For $\mathcal C$ a finite set of $\Gamma$-groups, we say a finite $\Gamma$-group is level-$\mathcal C$ if it is a $\Gamma$-invariant quotient of a $\Gamma$-invariant subgroup of a product of $\Gamma$-groups in $\mathcal C$. For $G$ a $\Gamma$-group, we define $G^{\mathcal C}$ to be the inverse limit of all level-$\mathcal C$ quotients of $G$. Let $|\mathcal C|$ be the least common multiple of the orders of the elements of $\mathcal C$. Our main application calculates the probability that $ \Gal(K^{\#} /K)^{\mathcal C} $ is a given finite level-$\mathcal C$ $\Gamma$-group $H$, in the limit as $q$ goes to $\infty$ first and $d$ goes to $\infty$ second, subject to congruence conditions on $q$. (This is generally the easiest kind of limit studied in number theory over function fields.) \[main-function-field-theorem\] Let $\mathcal C$ be a finite set of $\Gamma$-groups and let $H$ be a finite level-$\mathcal C$ $\Gamma$-group. Assume $\gcd( |\Gamma|, |\mathcal C|)=1$. 
Then $$\lim_{n \to \infty} \lim_{\substack{ q \to \infty \\ \gcd(q, |\Gamma| |\mathcal C|)=1 \\ \gcd(q-1, |\mathcal C|)=1 }} \frac{ \sum_{ d =0}^{n} \left|\left \{ K \in E_{\Gamma}(d,q)\mid \Gal(K^{\#} /K)^{\mathcal C} \cong H \right\} \right| } { \sum_{ d=0}^{n} \left| E_{\Gamma}(d,q) \right| } = \mu_{\Gamma} (U_{\mathcal C, H})$$ where $\mu_{\Gamma}(U_{\mathcal C,H})$ is defined in [@LWZ (3.15)] and given by an explicit formula in [@LWZ Theorem 5.14] (taking $u=1$ in both cases). Notably, it follows from [@LWZ Theorem 5.12] that $$\sum_{ \substack{ H \textrm{ a level-}\mathcal C\textrm{ }\Gamma\textrm{-group} }} \mu_{\Gamma} ( U_{\mathcal C,H})=1.$$ In other words, there is no escape of mass in this limit. This is the main reason that we needed to consider the level-$\mathcal C$ quotients of the Galois group. (We could equivalently define, as [@LWZ] does, a topological space with topology generated by the sets of $G$ such that $G^{\mathcal C}= H$ for all pairs $\mathcal C,H$ and then obtain a statement on convergence of Borel probability measures on this topological space, but since our arguments proceed entirely with level-$\mathcal C$ $\Gamma$-groups, we avoid this.) This verifies the function field case of [@LWZ Conjecture 1.3] with an additional $q \to \infty$ limit. To prove this, we use the analogous limiting statement [@LWZ Theorem 1.4] for the moments of this distribution, in other words for the sums $$\frac{ \sum_{ d =0}^{n} \sum_{ K \in E_{\Gamma}(d,q) } \Surj_\Gamma ( \Gal(K^{\#} /K)^\mathcal C, H) } { \sum_{ d=0}^{n} \left| E_{\Gamma}(d,q) \right| }$$ where $\Surj_\Gamma$ denotes the number of $\Gamma$-equivariant surjections between two $\Gamma$-groups. To make this deduction, we need to know that the probability distribution assigning measure $\mu_{\Gamma} (U_{\mathcal C, H})$ to $H$ is determined by its moments. 
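For orientation, recall the classical abelian prototype of such a moment problem; the following is a standard fact about the Cohen-Lenstra measure [@CL], included purely as an illustration and not needed in the sequel:

```latex
% Cohen-Lenstra measure on finite abelian p-groups H:
%   \mu(H) = \Big( \prod_{k \ge 1} (1 - p^{-k}) \Big) / |\Aut(H)| .
% Its moments are identically 1: for every finite abelian p-group A,
\int \#\mathrm{Sur}(X, A) \, d\mu(X)
  \;=\; \sum_{H} \frac{\prod_{k \ge 1} \bigl( 1 - p^{-k} \bigr)}{\lvert \Aut(H) \rvert} \, \#\mathrm{Sur}(H, A)
  \;=\; 1 ,
% so a moment bound of the shape O(|A|^{O(1)}) holds trivially.
```

The theorems below ask for exactly this kind of polynomial control on the moments, but for non-abelian $\Gamma$-groups.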
In fact, we prove a statement involving a much more general class of measures on the set of isomorphism classes of finite level-$\mathcal C$ $\Gamma$-groups. (Whenever we discuss a measure on a set, we take that set to have the discrete topology unless otherwise specified.) \[main-bound-group-intro\] Let $\Gamma$ be a finite group. Let $\mathcal C$ be a finite set of finite $\Gamma$-groups. Assume $\gcd(|\Gamma|,|\mathcal C|)=1$. Let $\mu$ be a measure on the set of isomorphism classes of finite level-$\mathcal C$ $\Gamma$-groups. Let $\mu_t$ be a sequence of measures on the same set. Assume that, for each finite level-$\mathcal C$ $\Gamma$-group $H$, we have $$\label{assumption-limit-intro} \lim_{t \to \infty} \int \Surj_\Gamma (X, H) d\mu_t(X) = \int \Surj_\Gamma(X, H) d\mu(X)$$ and $$\label{assumption-bound-intro} \int \Surj_\Gamma(X, H) d\mu(X) = O ( |H|^{O(1) }).$$ Then for every finite level-$\mathcal C$ $\Gamma$-group $H$ we have $$\lim_{t \to \infty} \mu_t(H) = \mu(H) .$$ This generalizes [@BW Theorem 1.4], which proved that a specific measure on $p$-groups is determined by its moments. It also generalizes [@WW], which proved that a measure on abelian $\Gamma$-groups is determined by its moments. (In each case, checking that every measure that agrees with the fixed one on $G^{\mathcal C}$ for all finite sets $\mathcal C$ agrees on the nose is straightforward.) [@WW] itself generalizes much earlier work on the abelian case, including [@EVW]. The idea of the proof is: to determine the probability of a random group being a fixed group $H$, we zoom in to a measure supported on extensions of $H$, then zoom in further to extensions of $H$ by products of finite simple groups, then forget the extension data and only look at the product of simple groups. This reduces the measure-from-moments problem to a simpler problem for a measure on products of finite simple groups. Moreover, it suffices to calculate the measure of the group $1$. 
This we can do by an explicit inclusion-exclusion using $q$-binomial series identities. This strategy is a more general analogue of that used in [@HB-i] to deduce [@HB-i Lemma 18] from [@HB-i Lemma 17]. We expect a similar strategy, which involves first passing to a measure supported on extensions of $H$, then further to a measure supported on extensions of $H$ by abelian groups, to be able to answer questions raised in [@LWZ p. 6] on whether the measure $\mu_{\Gamma}$ is supported on groups whose finite index subgroups have finite abelianization. We can also prove the analogous statement for finite modules over an algebra. (Jacob Tsimerman alerted me to the importance of this case.) \[main-bound-algebra-intro\] Let $R$ be an associative algebra. Assume that there are finitely many isomorphism classes of finite simple $R$-modules, and that $\operatorname{Ext}^1_R$ between two finite $R$-modules is finite. (For example, $R$ could be finite over $\mathbb Z_p$ for some prime $p$.) Let $\mu$ be a measure on the set of isomorphism classes of finite $R$-modules. Let $\mu_t$ be a sequence of measures on the same set. Assume that, for $M$ a finite $R$-module, $$\lim_{t \to \infty} \int \Surj_R (X, M) d\mu_t(X) = \int \Surj_R(X, M) d\mu(X)$$ and $$\int \Surj_R(X, M) d\mu(X) = O ( |M|^{O(1) }).$$ Then for all finite $R$-modules $M$ we have $$\lim_{t \to \infty} \mu_t(M) = \mu(M) .$$ I would like to thank Melanie Wood and Jacob Tsimerman for helpful conversations. While working on this project, I served as a Clay Research Fellow. $\Gamma$-groups ================ Given a $\Gamma$-group $H$, we will understand those $\Gamma$-groups $G$ that map to $H$ by understanding the kernels of maps $\pi : G \to H$. These kernels carry extra structure, because $H$ and $\Gamma$ both act on them by outer automorphisms, and we will need the following ad-hoc definition to keep track of the extra structure: For a $\Gamma$-group $H$, let $H'= H \rtimes \Gamma$. 
We say an $[H']$-group is a group $G$ together with a map from $H'$ to $\operatorname{Out}(G)$. Note that $\Gamma$ also acts on $\ker \pi$ by honest automorphisms instead of outer automorphisms. We ignore this extra structure to simplify the definition and to simplify certain proofs, as it is not necessary for our arguments. We say a homomorphism $f: G_1\to G_2$ of $[H']$-groups is an $[H']$-homomorphism if for each element $h\in H'$ and each lift $\sigma_1$ of $h$ from $\Out(G_1)$ to $\Aut(G_1)$, there is a lift $\sigma_2$ of $h$ from $\Out(G_2)$ to $\Aut(G_2)$ such that $\sigma_2 \circ f = f \circ \sigma_1$. We write $\Surj_{[H']} (G_1, G_2)$ for the number of surjective $[H']$-homomorphisms from $G_1$ to $G_2$. $\Out(G)$ acts on the set of normal subgroups of $G$, so $H'$ acts on the set of normal subgroups of an $[H']$-group $G$. We say a nontrivial $[H']$-group is simple if there are no nontrivial proper fixed points of this action. It is easy to see that the composition of two $[H']$-homomorphisms is an $[H']$-homomorphism, so $[H']$-groups form a category. We will calculate the probability that a random $\Gamma$-group $X$ is isomorphic to $H$ by studying the probability that the kernel of a random $\Gamma$-surjection $\pi: X\to H$ is trivial. Thus, our first few lemmas will focus on identifying the probability that an $[H']$-group is trivial from the moments of a distribution on $[H']$-groups. By quotienting by the intersection of all maximal proper $H\rtimes \Gamma$-invariant normal subgroups, we will be able to restrict attention to products of finite simple $[H']$-groups. These lemmas will therefore involve finite simple $[H']$-groups and their products. We begin with a series of lemmas that let us calculate $\Surj_{[H']} (G_1, G_2)$ for products of finite simple $[H']$-groups. \[abelian-surjection-count\] Let $G$ be an abelian finite simple $[H']$-group, and let $h$ be the number of $[H']$-homomorphisms from $G$ to $G$. 
Then $$\label{eq-abelian-surjection-count} \Surj_{[H']}(G^e,G^k) = (h^{e}-1) (h^e- h) \dots (h^{e}- h^{k-1} ) .$$ Because $G$ is abelian, its outer automorphism group is equal to its automorphism group, so we may view $G$ as simply an abelian group with an action of $H'$, or as a $\mathbb Z[ H']$-module. In this view, $[H']$-homomorphisms between $G^e$ and $G^k$ are module homomorphisms. Because $G$ is finite simple as a $[H']$-group, it is a simple module with finite cardinality, so its endomorphism algebra as a module is a finite field $\mathbb F_h$. Then maps from $G^e$ to $G^k$ are $k \times e$ matrices over $\mathbb F_h$, and such a map is surjective if and only if the corresponding matrix is surjective. The result then follows because $(h^{e}-1) (h^e- h) \dots (h^{e}- h^{k-1} )$ is the number of surjective $k \times e$ matrices. \[non-abelian-surjection-count\] Let $G$ be a non-abelian finite simple $[H']$-group, and let $| \Aut_{[H']}(G)|$ be the number of $[H']$-automorphisms of $G$. Then $$\label{eq-non-abelian-surjection-count} \Surj_{[H']}{(G^e,G^k)} = e (e-1) \dots (e+1-k) | \Aut_{[H']}(G)|^k .$$ We first prove that any $[H']$-surjection from $G^e$ to $G$ is the composition of projection onto one factor with an $[H']$-automorphism of $G$. To do this, note when $f: G^e \to G$ is restricted to each copy of $G$, it defines an $[H']$-homomorphism from $G$ to $G$, which because $G$ is $[H']$-simple must be either an $[H']$-automorphism or the trivial map. These restrictions cannot all be trivial, or else $f$ would fail to be surjective; and no two of them can be automorphisms, or else the images of two different copies of $G$ would each be all of $G$, and since distinct copies commute inside $G^e$, their images would commute elementwise, forcing the non-abelian group $G$ to be abelian. So one restriction is an automorphism and the rest are trivial. Thus $f$ is the composition of a projection with an $[H']$-automorphism. Now consider an $[H']$-surjection from $G^e$ to $G^k$. 
Its composition with each of the $k$ projections remains an $[H']$-surjection, so each of these is a projection onto one factor composed with an automorphism. Conversely, given $k$ such projections onto one of $e$ factors and $k$ automorphisms, the induced map $G^e \to G^k$ is an $[H']$-homomorphism, because the $[H']$-automorphism condition for each of the $k$ factors gives a lift from $\Out(G) $ to $\Aut(G)$, hence a lift from $\Out(G)^k \subseteq \Out(G^k)$ to $\Aut(G)^k \subseteq \Aut(G^k)$. Furthermore, the induced map $G^e \to G^k$ is surjective if and only if the same projection never appears twice. (We can check this on the level of groups, where it is obvious.) Because the number of ordered $k$-tuples of choices of one out of $e$ projections, never repeating, is $ e (e-1) \dots (e+1-k) $, and the number of $k$-tuples of $[H']$-automorphisms is $| \Aut_{[H']}(G)|^k$, we obtain the desired formula. \[surjection-count-splitting\] Let $G_1,\dots, G_m$ be finite simple $[H']$-groups that are pairwise non-isomorphic as $[H']$-groups. Then $$\label{eq-surjection-count-splitting} \Surj_{[H']} \Bigl( \prod_{i=1}^m G_i^{e_i}, \prod_{i=1}^m G_i^{k_i}\Bigr) = \prod_{i=1}^m \Surj_{[H']} \left(G_i^{e_i}, G_i^{k_i} \right) .$$ First observe that, given a tuple $f_1,\dots, f_m$ of $[H']$-surjections $f_i: G_i^{e_i}\to G_i^{k_i}$, the product $$\prod_{i=1}^m f_i\hspace{5pt} :\hspace{5pt} \prod_{i=1}^m G_i^{e_i}\to \prod_{i=1}^m G_i^{k_i}$$ is an $[H']$-surjection. Certainly $ \prod_{i=1}^m f_i$ is a surjective group homomorphism, so it suffices to check that for any $\sigma_1\in \Aut ( \prod_{i=1}^m G_i^{e_i}) $ lifting $h \in H'$, we can find $\sigma_2 \in \Aut( \prod_{i=1}^m G_i^{k_i} )$ lifting $h$ such that $\sigma_2 \circ \prod_{i=1}^m f_i = \prod_{i=1}^m f_i \circ \sigma_1$. Because $\sigma_1$ lifts an outer automorphism that stabilizes the individual factors $G_i^{e_i}$, $\sigma_1$ stabilizes the factors $G_i^{e_i}$, so in fact $\sigma_1 \in \prod_{i=1}^m \Aut( G_i^{e_i})$. 
For each $i$ we can apply the $[H']$-homomorphism property of $f_i$ to the restriction of $\sigma_1$ to $\Aut( G_i^{e_i})$ to find a suitable element of $\Aut(G_i^{k_i})$, and then define $\sigma_2$ to be the product of these elements. This verifies the $[H']$-homomorphism condition. Thus, it suffices to prove that every $[H']$-surjection $f: \prod_{i=1}^m G_i^{e_i}\to \prod_{i=1}^m G_i^{k_i}$ arises this way. Let us first check that for any $i \neq j$ and any $1 \leq a \leq e_i$, $1 \leq b \leq k_j$, the restriction of $f$ to a map from the $a$th copy of $G_i$ to the $b$th copy of $G_j$ is zero. Because the $a$th copy of $G_i$ is a normal subgroup of $\prod_{i=1}^m G_i^{e_i}$, and the image of a normal subgroup under any surjection is normal, the image of the restricted map $G_i \to G_j$ is a normal subgroup of $G_j$. By the homomorphism condition, this image is also $H'$-invariant, so because $G_j$ is $[H']$-simple the image must be all of $G_j$ or trivial. Similarly, the kernel of this map from $G_i $ to $ G_j$ is $[H']$-invariant, so must be $G_i$ or trivial. Thus the map from $G_i$ to $G_j$ is either a bijection or zero. It cannot be a bijection as, by assumption, $G_i$ and $G_j$ are not isomorphic, so it must be zero, as desired. It follows that, as a group homomorphism, $f$ is the product of maps $f_i: G_i^{e_i}\to G_i^{k_i}$. Because $f$ is surjective, these maps $f_i$ are surjective. Because $f$ is an $[H']$-homomorphism, and $f_i$ is the composition of $f$ with the inclusion $G_i^{e_i } \to \prod_{i=1}^m G_i^{e_i}$ and projection $\prod_{i=1}^m G_i^{k_i} \to G_i^{k_i}$, which are both $[H']$-homomorphisms, $f_i$ must be an $[H']$-homomorphism, finishing the proof. The next two lemmas solve a basic version of the problem of reconstructing measures from moments: we look at a measure on powers of a single group, and reconstruct from the moments only the measure of the trivial group. 
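Before stating them, here is the mechanism in the non-abelian case, as a one-line sketch (writing $a = |\Aut_{[H']}(G)|$ and taking the coefficients $c_k = (-1)^k/(k!\,a^k)$ constructed in Lemma \[c-k-exists\] below):

```latex
% Surj(G^e, G^k) = e(e-1)...(e+1-k) a^k vanishes for k > e, so the sum is finite:
\sum_{k=0}^{\infty} c_k \, \Surj_{[H']}(G^e, G^k)
  = \sum_{k=0}^{e} \frac{(-1)^k}{k!\, a^k} \cdot \frac{e!}{(e-k)!} \, a^k
  = \sum_{k=0}^{e} (-1)^k \binom{e}{k}
  = (1-1)^e
  = \begin{cases} 1 & \text{if } e = 0, \\ 0 & \text{if } e > 0, \end{cases}
% so summing the k-th moment of a measure m against c_k isolates m(0).
```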
We define in Lemma \[c-k-exists\] a sequence of coefficients, and describe its useful properties. In Lemma \[one-variable\] we use these properties to prove the reconstruction statement, in a uniform way in both the abelian and non-abelian cases. \[c-k-exists\] Let $H'$ be a group and let $G$ be a finite simple $[H']$-group. Then there exist constants $c_k \in \mathbb R$, depending on $G$, such that 1. $c_0=1$, 2. for any $e>0$ we have $$\sum_{k=0}^r c_k \Surj_{[H']}( G^{e}, G^k) \geq 0$$ if $r$ is even and $$\sum_{k=0}^r c_k \Surj_{[H']} ( G^{e}, G^k) \leq 0$$ if $r$ is odd, and 3. $|c_k|$ converges to $0$ superexponentially in $k$. To interpret these conditions, note that (1) and (2) together imply the identity $$\sum_{k=0}^{\infty} c_k \Surj_{[H']}( G^{e}, G^k) = \begin{cases} 1 &\textrm{if }e=0\\ 0&\textrm{otherwise}\end{cases}.$$ Thus, for a measure $\mu$ on the natural numbers, we can attempt to reconstruct $\mu(0)$ from the moments $\sum_{e=0}^{\infty}\Surj_{[H']}( G^{e}, G^k)\, \mu(e) $ by summing the $k$th moment against $c_k$. Using all three conditions, we can prove that this attempt works. The formula for $c_k$ in the case $G$ abelian is essentially due to Heath-Brown [@HB-o Equation (22)], who used this identity and part (3), but not part (2), to prove a weaker uniqueness statement. If $G$ is not abelian we have $\Surj_{[H']}( G^{e}, G^k) = e (e-1) \dots (e+1-k) | \Aut_{[H']}(G)|^k$ by Lemma \[non-abelian-surjection-count\] and we take $$c_k = \frac{ (-1)^k }{ k! | \Aut_{[H']}(G)|^k }.$$ Here (1) is clear, (2) follows from the identity $$\sum_{k=0}^{r} (-1)^k {e\choose k} = (-1)^r { e-1 \choose r}$$ which is $\geq 0$ if $r$ is even and $\leq 0 $ if $r$ is odd, and (3) is clear. If $G$ is abelian, let $h= |\Hom_{[H']}(G, G)| $. The number of surjections is given by Lemma \[abelian-surjection-count\], and we take $c_k = \frac{(-1)^ k } { (h-1) \dots (h^k -1 ) }$. 
Then (1) is clear, (2) follows from the identity $$\label{q-binomial-identity} \sum_{k=0}^r (-1)^k { e \choose k}_h h^{ k \choose 2} = (-1)^r { e-1 \choose r }_h h^{ r+1 \choose 2}$$ which is $\geq 0$ if $r$ is even and $\leq 0$ if $r$ is odd, and (3) is clear. In this identity, the binomial coefficients are interpreted as $q$-binomial coefficients, except with $q=h$. The identity follows from the more standard identity $$\binom{e}{k}_h = h^k \binom{e-1}{k}_h + \binom{e-1}{k-1}_h$$ by a telescoping sum. \[one-variable\] Let $H$ be a $\Gamma$-group and let $G$ be a finite simple $[H']$-group. Let $m$ and $m_t$ for $t \in \mathbb N$ be functions from $\mathbb N$ to the nonnegative real numbers. Assume that $$\label{one-var-bound} \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m(e) = O \left( O(1)^k \right)$$ and $$\label{one-var-limit} \lim_{t\to\infty} \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m_t (e)= \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m(e)$$ Then $$\label{one-var-conc} \lim_{t\to \infty} m_t(0) = m(0) .$$ By Lemma \[c-k-exists\], part (1) and Lemma \[c-k-exists\], part (2), we have $$\sum_{k=0}^r c_k \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m(e) = \sum_{e=0}^{\infty} \left(\sum_{k=0}^rc_k \Surj_{[H']} ( G^e, G^k) \right) m(e)$$$$= m(0) + \sum_{e=1}^{\infty} \left(\sum_{k=0}^rc_k \Surj_{[H']} ( G^e, G^k) \right) m(e)$$$$\geq m(0)$$ if $r$ is even and $$\leq m(0)$$ if $r$ is odd. Moreover, the analogous inequalities hold for $m_t$ for all $t$. 
By Lemma \[c-k-exists\](3) and our assumption \[one-var-bound\], $\sum_{k=0}^r c_k \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m(e) $ is a convergent series, so $$\lim_{r\to \infty} \sum_{k=0}^r c_k \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m(e) = m(0).$$ Therefore for all $r$ even we have $$\lim\sup_{t\to\infty} m_t(0) \leq \lim\sup_{t\to \infty} \sum_{k=0}^r c_k \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m_t(e)= \sum_{k=0}^r c_k \lim_{t\to\infty} \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m_t(e)$$ $$= \sum_{k=0}^r c_k \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m(e)$$ and thus $$\label{one-var-upper} \lim\sup_{t\to\infty} m_t(0) \leq \lim_{ r\to\infty} \sum_{k=0}^r c_k \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m(e) = m(0)$$ and for $r$ odd we have $$\lim\inf_{t\to\infty} m_t(0) \geq \lim\inf_{t\to \infty} \sum_{k=0}^r c_k \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m_t(e)= \sum_{k=0}^r c_k \lim_{t\to\infty} \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m_t(e)$$ $$= \sum_{k=0}^r c_k \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m(e)$$ and thus $$\label{one-var-lower} \lim\inf_{t\to\infty} m_t(0) \geq \lim_{ r\to\infty} \sum_{k=0}^r c_k \sum_{e=0}^{\infty} \Surj_{[H']} ( G^e, G^k) m(e) = m(0).$$ Combining \[one-var-upper\] and \[one-var-lower\], we get our desired conclusion \[one-var-conc\]. The next two lemmas improve the statement of Lemma \[one-variable\] from powers of a single group to products of powers of a finite list of groups. This is based on an inductive strategy where we handle one group at a time. Lemma \[inductive-step\] will give the inductive step and Lemma \[flat-case\] will complete the argument. \[inductive-step\] Let $H$ be a $\Gamma$-group and let $G_1,\dots, G_m$ be finite simple $[H']$-groups. Let $\tilde{\mu}$ be a measure on the set of isomorphism classes of groups $\prod_{i=1}^m G_i ^{e_i}$ and let $\tilde{\mu}_t$ be a sequence of such measures. Let $j$ be a natural number from $1$ to $m$. 
Assume that for all $k_j,\dots, k_m \in \mathbb N$ we have $$\label{ind-bound} \sum_{e_{j}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=j}^m G_i^{e_i} , \prod_{i=j}^m G_i^{k_i}\right) \tilde{\mu} \left( \prod_{i=j}^m G_i^{e_i} \right) = O \left( O(1)^{ \sum_{i=j}^m k_i } \right)$$ and $$\label{ind-limit} \lim_{t\to\infty} \sum_{e_{j}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=j}^m G_i^{e_i} , \prod_{i=j}^m G_i^{k_i}\right) \tilde{\mu}_t \left( \prod_{i=j}^m G_i^{e_i} \right) = \sum_{e_{j}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=j}^m G_i^{e_i} , \prod_{i=j}^m G_i^{k_i}\right) \tilde{\mu} \left( \prod_{i=j}^m G_i^{e_i} \right).$$ Then for all $k_{j+1},\dots, k_m \in \mathbb N$ we have $$\label{ind-conc} \begin{split} \lim_{t \to \infty} \sum_{e_{j+1}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \Bigl( \prod_{i=j+1}^m G_i^{e_i} , \prod_{i=j+1}^m G_i^{k_i}\Bigr) \tilde{\mu}_t \Bigl( \prod_{i=j+1}^m G_i^{e_i} \Bigr)\\ = \sum_{e_{j+1}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \Bigl( \prod_{i=j+1}^m G_i^{e_i} , \prod_{i=j+1}^m G_i^{k_i}\Bigr) \tilde{\mu} \Bigl( \prod_{i=j+1}^m G_i^{e_i} \Bigr). \end{split}$$ Fix $k_{j+1},\dots, k_m$. 
Define $$m(e) = \sum_{e_{j+1}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=j+1}^m G_i^{e_i} , \prod_{i=j+1}^m G_i^{k_i}\right) \tilde{\mu} \left( G_j^ e \times \prod_{i=j+1}^m G_i^{e_i} \right)$$ and $$m_t(e) = \sum_{e_{j+1}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=j+1}^m G_i^{e_i} , \prod_{i=j+1}^m G_i^{k_i}\right) \tilde{\mu}_t \left( G_j^ e \times \prod_{i=j+1}^m G_i^{e_i} \right).$$ Thus, for any natural number $k$, defining $k_j=k$, we have by Lemma \[surjection-count-splitting\], $$\label{mu-m-conversion} \begin{split} & \sum_{e=0}^{\infty} m(e) \Surj_{[H'] } (G_j^e, G_j^k) \\ =& \sum_{e=0}^{\infty} \sum_{e_{j+1}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=j+1}^m G_i^{e_i} , \prod_{i=j+1}^m G_i^{k_i}\right) \Surj_{[H'] } (G_j^e, G_j^k)\tilde{\mu} \left( G_j^ e \times \prod_{i=j+1}^m G_i^{e_i} \right)\\ = &\sum_{e=0}^{\infty} \sum_{e_{j+1}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left(G_j^e \times \prod_{i=j+1}^m G_i^{e_i} , G_j^k \times \prod_{i=j+1}^m G_i^{k_i}\right) \tilde{\mu} \left( G_j^ e \times \prod_{i=j+1}^m G_i^{e_i} \right)\\=& \sum_{e_{j}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=j}^m G_i^{e_i} , \prod_{i=j}^m G_i^{k_i}\right) \tilde{\mu} \left( \prod_{i=j}^m G_i^{e_i} \right), \end{split}$$ and a similar identity holds with $m_t$ and $\tilde{\mu}_t$. We now apply Lemma \[one-variable\] to $m$ and $m_t$. Because the number of surjections from any group to another is nonnegative, $m$ and $m_t$ are nonnegative. Applying \[mu-m-conversion\] for $m$ and $m_t$, we see that the hypothesis \[one-var-limit\] of Lemma \[one-variable\] is exactly our assumption \[ind-limit\]. Applying \[mu-m-conversion\], the hypothesis \[one-var-bound\] of Lemma \[one-variable\] follows from our assumption \[ind-bound\] since with $k_{j+1},\dots, k_m$ fixed, $O \left( O(1)^{ \sum_{i=j}^m k_i } \right) = O \left( O(1)^{k_j} \right) =O \left( O(1)^{k} \right) $. Because we have verified both hypotheses, we can apply Lemma \[one-variable\]. We use the definition of $m$ and $m_t$ to see that the conclusion \[one-var-conc\] is exactly our desired \[ind-conc\]. 
\[flat-case\] Let $H'$ be a finite group and let $G_1,\dots, G_m$ be finite simple $[H']$-groups. Let $\tilde{\mu}$ be a measure on the set of isomorphism classes of groups $\prod_{i=1}^m G_i ^{e_i}$ and let $\tilde{\mu}_t$ be a sequence of such measures. Assume that for all $k_1,\dots, k_m \in \mathbb N$ we have $$\label{flat-bound} \sum_{e_{1}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=1}^m G_i^{e_i} , \prod_{i=1}^m G_i^{k_i}\right) \tilde{\mu} \left( \prod_{i=1}^m G_i^{e_i} \right) = O \left( O(1)^{ \sum_{i=1}^m k_i } \right)$$ and $$\label{flat-limit} \lim_{t \to \infty} \sum_{e_{1}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=1}^m G_i^{e_i} , \prod_{i=1}^m G_i^{k_i}\right) \tilde{\mu}_t \left( \prod_{i=1}^m G_i^{e_i} \right) = \sum_{e_{1}, \dots ,e_m=0}^{\infty} \Surj_{[H']} \left( \prod_{i=1}^m G_i^{e_i} , \prod_{i=1}^m G_i^{k_i}\right) \tilde{\mu} \left( \prod_{i=1}^m G_i^{e_i} \right).$$ Then we have $$\label{flat-conc} \lim_{t \to \infty} \tilde{\mu}_t ( 1 ) = \tilde{\mu} ( 1) .$$ The hypothesis and conclusion of Lemma \[inductive-step\] are identical, except that the conclusion has $j+1$ where the hypothesis has $j$. This is exactly what we need for an inductive argument. Based on this idea, we will prove that \[ind-limit\] holds for all $j$ from $1$ to $m+1$ by induction on $j$. The base case of this induction is the $j=1$ case of \[ind-limit\], which is exactly our assumption \[flat-limit\]. The induction step follows from Lemma \[inductive-step\] once we check the other hypothesis of Lemma \[inductive-step\]. This hypothesis follows from our assumption \[flat-bound\] because \[ind-bound\] requires us to bound the sum in \[flat-bound\], restricted to the case when $k_1,\dots, k_{j-1}=0$. Because all terms in \[flat-bound\] are nonnegative, the restricted sum is bounded by the original sum. This verifies the induction, and finally we observe that the $j=m+1$ case of \[ind-limit\] is our desired conclusion \[flat-conc\]. 
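As a purely illustrative sanity check on the sign pattern in Lemma \[c-k-exists\], one can evaluate the $q$-binomial identity there at $h=2$, $e=2$, using $\binom{2}{1}_2 = (2^2-1)/(2-1) = 3$:

```latex
% r = 1 (odd): the alternating sum is <= 0,
\sum_{k=0}^{1} (-1)^k \binom{2}{k}_2 \, 2^{\binom{k}{2}} = 1 - 3 = -2
  = (-1)^1 \binom{1}{1}_2 \, 2^{\binom{2}{2}} ;
% r = 2 (even): the alternating sum is >= 0,
\sum_{k=0}^{2} (-1)^k \binom{2}{k}_2 \, 2^{\binom{k}{2}} = 1 - 3 + 2 = 0
  = (-1)^2 \binom{1}{2}_2 \, 2^{\binom{3}{2}} ,
% in agreement with property (2), since binom(1,2)_2 = 0.
```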
We have now obtained a special case of Theorem \[main-bound-group-intro\], where the measure is supported on the products of elements from a specific finite list of groups and we only reconstruct the measure of the identity from the moments, rather than the measure of an arbitrary element. We now reduce the probability that a random group $X$ is $H$ to the probability that a random surjection $X \to H$ is an isomorphism, which we reduce to the probability that the kernel of the random surjection is trivial, which we reduce to the probability that the quotient of the kernel by the intersection of all its maximal proper $H'$-invariant normal subgroups is trivial. This strategy was already used in [@LWZ], and less explicitly in other works, to compute a measure on random groups. The key additional observation is that we can calculate the moments of this transformed random model straightforwardly from our original moments, which then allows us to apply Lemma \[inductive-step\]. Our first lemmas study the quotient of an $[H']$-group by the intersection of its maximal proper $H'$-invariant normal subgroups: For $T$ an $[H']$-group, we define $Q(T)$ to be the quotient of $T$ by the intersection of all its maximal proper $H'$-invariant normal subgroups. Because this intersection is $H'$-invariant, $Q(T)$ carries a natural $[H']$-structure. \[Q-finite-simple\] For $T$ a finite $[H']$-group, $Q(T)$ is a product of finite simple $[H']$-groups. Let us check that the quotient by any intersection of $n$ maximal proper $H'$-invariant normal subgroups is a product of finite simple $[H']$-groups, by induction on $n$. For the induction step, let $Z$ be the intersection of $n$ such subgroups, and let $W$ be another such subgroup. If $W$ contains $Z$, then $W \cap Z=Z$ and we are done. Otherwise, $WZ$ is an $H'$-invariant normal subgroup containing $W$, but not equal to $W$, and so $WZ=T$. Because $WZ=T$, the natural map $T/ (W \cap Z) \to (T/W) \times (T/Z)$ is an isomorphism. 
Because $W$ and $Z$ are $ H'$-invariant, this map is an $[H']$-homomorphism. By the induction hypothesis, $T/Z$ is a product of finite simple $[H']$-groups. Finally, $T/W$ is a finite simple $[H']$-group because the inverse image in $T$ of any nontrivial proper $[H']$-invariant normal subgroup of $T/W$ would properly contain $W$, contradicting the maximality of $W$. It follows that $T/ (W \cap Z)$ is a product of finite simple $[H']$-groups, so this verifies the induction step. Because the base case $n=0$ is trivial, the induction is complete. \[finitely-many-isomorphism-classes\] There exist finitely many finite simple $[H']$-groups $G_i$, up to isomorphism, such that there exists an extension $1 \to G_i \to G \to H \to 1$ of $\Gamma$-groups compatible with the actions of $H $ and $ \Gamma$ on $G_i$ by outer automorphisms, where $G$ is a level-$\mathcal C$ $\Gamma$-group. Consider such a $G_i$. Let $G_i'$ be a finite simple group that is a quotient of $G_i$. We must have $G_i$ isomorphic as a group to a power of $G_i'$, since otherwise the intersection of the kernels of all surjections $G_i \to G_i'$ would be a nontrivial proper characteristic subgroup, hence a nontrivial proper $H \rtimes \Gamma$-invariant normal subgroup. Because $G_i'$ is a Jordan-Hölder factor of a level-$\mathcal C$ $\Gamma$-group, it is a Jordan-Hölder factor of a subquotient of a product of elements of $\mathcal C$, and thus it must be a Jordan-Hölder factor of an element of $\mathcal C$. Let us fix one such Jordan-Hölder factor $G_i'$, and show that powers of it give finitely many isomorphism classes. Because there are only finitely many Jordan-Hölder factors of the finitely many groups in $\mathcal C$, this means finitely many isomorphism classes overall, as desired. First consider the case where $G_i'$ is abelian simple, thus isomorphic to $\mathbb F_p$ for some prime $p$. Then any power $G_i$ of $G_i'$ is a vector space over $\mathbb F_p$. 
Because $G_i$ is abelian, its outer automorphism group is its automorphism group. Thus, we can describe $G_i$ as a vector space over $\mathbb F_p$ with an action of $H'$, or in other words an $\mathbb F_p[H']$-module. Because there are finitely many isomorphism classes of simple $\mathbb F_p[H']$-modules, there are finitely many such $G_i$. Next consider the case when $G_i'$ is non-abelian simple. In this case, $\Out ((G_i')^n) = \Out(G_i')^n \rtimes S_n$. Given a homomorphism from $H'$ to $\Out(G_i')^n \rtimes S_n$, if the associated $[H']$-group is simple then the image of $H'$ in $S_n$ must be transitive, because otherwise we could split $G_i$ into a product of two $[H']$-groups corresponding to two orbits. Hence $n$ is at most $|H'|$, the maximal size of a transitive $H'$-action. Because $n$ is bounded, there are only finitely many homomorphisms, thus only finitely many isomorphism classes of $G_i$, as desired. Let $G_1,\dots, G_m$ be pairwise non-isomorphic representatives of the finitely many isomorphism classes discussed in Lemma \[finitely-many-isomorphism-classes\]. \[Q-nice-description\] For $G$ a finite level-$\mathcal C$ $\Gamma$-group and $\pi: G \to H$ a homomorphism, we have an isomorphism of $[H']$-groups $$Q( \ker \pi) \cong \prod_{i=1}^m G_i ^{e_i}$$ for some $e_1,\dots, e_m \in \mathbb N$. By Lemma \[Q-finite-simple\], $Q(\ker \pi)$ is a product of finite simple $[H']$-groups, so it suffices by definition to show that for each factor $G'$ in the product, there exists an extension $1 \to G' \to G^*\to H \to 1$ of $\Gamma$-groups compatible with the actions of $H$ and $\Gamma$, where $G^*$ is a level-$\mathcal C$ $\Gamma$-group. To do this, observe that $Q(\ker\pi)$ is the product of $G'$ with some other groups, so $G'$ is a quotient of $Q(\ker \pi)$ by an $H'$-invariant normal subgroup $Z$. Because $Z$ is $H' = H \rtimes \Gamma$-invariant, its inverse image in $G$ is a normal, $\Gamma$-invariant subgroup, and the quotient of $G$ by this inverse image is the desired $G^*$. 
The quotient $G^*$ is level-$\mathcal C$ because the class of level-$\mathcal C$ $\Gamma$-groups is closed under quotients. Now that we understand the image of the map $Q$, we can define for each measure $\mu$ on level-$\mathcal C$ $\Gamma$-groups a localized measure $\mu^H$. For $\mu$ a measure on the set of isomorphism classes of finite level-$\mathcal C$ $\Gamma$-groups, and $H$ a finite level-$\mathcal C$ $\Gamma$-group, define a measure $\mu^H$ on the set of isomorphism classes of $[H']$-groups of the form $ \prod_{i=1}^m G_i ^{e_i} $ by $$\mu^H (E) = \int \left| \{ \pi : X \to H \textrm{ surjective} \mid Q( \ker \pi) \cong E \} \right| d\mu(X) .$$ \[mu-H-formula\] $$\mu^H(1) = \left| \Aut(H)\right| \mu(H)$$ A surjection $X \to H$ is an isomorphism if and only if its kernel is trivial, so $$\left|\{ \pi: X \to H \textrm{ surjective} \mid Q( \ker \pi) \cong 1 \} \right|$$ is the number of isomorphisms of $X$ with $H$, and thus vanishes for $X \neq H$ and equals $|\Aut(H)|$ for $X= H$. Next, we will prove three lemmas to compare the moments of $\mu^H$ and the moments of $\mu$. The first will count surjections from $Q(\ker \pi)$ to $F$ using a sum over exact sequences, the second will bound the number of exact sequences, and the third will use the count of surjections to compare the moments. \[surjection-formula\] Let $X$ and $H$ be finite level-$\mathcal C$ $\Gamma$-groups. Fix $k_1,\dots, k_m\in \mathbb N$ and let $F = \prod_{i=1}^m G_i^{k_i}$. The number of pairs of a surjection $\pi: X\to H$ of $\Gamma$-groups and a surjection $f: Q(\ker \pi) \to F$ of $[H']$-groups is equal to $$\sum_{ \substack{1 \to F \to G \to H\to 1} }\frac{ \Surj_\Gamma(X, G) }{ \Aut_{F,H}(G)}$$ where the sum is over exact sequences of $\Gamma$-groups compatible with the actions of $H$ and $\Gamma$ on $F$ by outer automorphisms. 
First note that we can equivalently take $f: \ker \pi \to F$, because every map $f: \ker \pi \to F$ of $[H']$-groups factors uniquely through the projection $\ker \pi \to Q(\ker \pi)$. Indeed, the kernel of any surjection to a finite simple $[H']$-group is a maximal proper $H \rtimes \Gamma$-invariant normal subgroup. Hence, the kernel of the surjection $f$ to a product of finite simple $[H']$-groups contains the intersection of all maximal proper $H'$-invariant normal subgroups. Thus, $f$ factors uniquely through the projection. Given $X$ and a surjection $u: X\to G$, we obtain a surjection $\pi: X \to H$ by composition and a surjection $f: \ker \pi \to K$, where $K$ denotes the kernel of the natural map $G \to H$ (so $K \cong F$), by restricting $u$ to $\ker \pi$ and noting that its image is $K$. This is described by the following commutative diagram, where the square is Cartesian. $$\begin{tikzcd} \ker \pi \arrow[d,"f"] \arrow[r] & X \arrow[d,"u"] \arrow[dr,"\pi"] \\ K \arrow[r] & G \arrow[r] & H \\ \end{tikzcd}$$ Because $\pi$ is a composition of two $\Gamma$-invariant maps, it is $\Gamma$-invariant. Because any lift of an element of $H \rtimes \Gamma$ to an automorphism of $\ker \pi$ is the action by conjugation of an element $x \in X$ times the action of an element $\gamma \in \Gamma$, we can find a corresponding automorphism of $K$ by applying $\gamma$ and then conjugating by $u(x)$, so $f$ is an $[H']$-homomorphism. Composing $u$ with an automorphism of $G$ fixing the inclusion of $K$ and the projection onto $H$ preserves $\pi$ and $f$. Thus, we have defined a map from the set of $\Aut_{F,H}(G)$-orbits on pairs of an exact sequence $1 \to F \to G \to H\to 1$ of $\Gamma$-groups, compatible with the actions of $H$ and $\Gamma$ by outer automorphisms on $F$, together with a surjection of $\Gamma$-groups from $X$ to $G$, to pairs of a surjection $\pi: X\to H$ of $\Gamma$-groups and a surjection $f: Q(\ker \pi) \to F$ of $[H']$-groups. 
Because automorphisms of $G$ act freely on surjections $X \to G$, the number of orbits is $\sum_{ \substack{1 \to F \to G \to H\to 1} }\frac{ \Surj_\Gamma(X, G) }{ \Aut_{F,H}(G)}$, so it suffices to find an inverse map. To do this, define $G = X/ \ker f $. Take $G \to H$ to be the projection $X/\ker f \to X/\ker \pi $. Take $F \to G$ to be the inclusion $\ker \pi/\ker f \to X /\ker f $. Take $X \to G$ to be the quotient map $X \to X/\ker f = G$. These are all maps of $\Gamma$-groups, they form an exact sequence, and it is not hard to check that these operations are inverses. \[extension-counting-bound\] For $F = \prod_{i=1}^m G_i^{k_i}$, the number of isomorphism classes of extensions $1 \to F \to G \to H \to 1$ compatible with the actions of $H$ and $\Gamma$ on $F$ by outer automorphisms is $O \left( O(1)^{\sum_{i=1}^m k_i}\right) $. An extension of $\Gamma$-groups $1 \to F \to G \to H \to 1$ is equivalent to an extension of groups $1 \to F \to G \rtimes \Gamma \to H \rtimes \Gamma \to 1$. To classify such extensions, fix for each element $h\in H \rtimes \Gamma$ a lift $\sigma_h$ of the associated outer automorphism of $F$ to an automorphism of $F$. Then, given such an extension, choose for each $h$ an element $\alpha_h \in G\rtimes \Gamma$ whose image in $H \rtimes \Gamma$ is $h$ and whose action by conjugation on $F$ is by $\sigma_h$. This is possible as we can adjust the conjugation action by an inner automorphism by multiplying by an appropriate element of $F$. Because projection to $H \rtimes \Gamma $ is a group homomorphism with kernel $F$, there exists for each $h_1,h_2 \in H \rtimes \Gamma$ an element $f_{h_1,h_2} \in F$ such that $\alpha_{h_1} \alpha_{h_2}= f_{h_1,h_2} \alpha_{h_1h_2}$. 
We can express each element of $G\rtimes \Gamma$ as $f \alpha_h$ for some $f\in F$, and we have $$f_1 \alpha_{h_1} f_2 \alpha_{h_2} = f_1 (\alpha_{h_1} f_2 \alpha_{h_1}^{-1}) \alpha_{h_1} \alpha_{h_2} = f_1 \sigma_{h_1}(f_2) f_{h_1 ,h_2} \alpha_{h_1h_2}$$ so the multiplication table of $G \rtimes \Gamma$, expressed this way, is determined by the elements $f_{h_1,h_2}$. To describe the group with this multiplication table, which comes equipped with a projection to $\Gamma$, as a semidirect product $G \rtimes \Gamma$, we need to fix a subgroup that maps isomorphically to $\Gamma$ under the projection to $H \rtimes \Gamma$. This requires fixing a lift of each element of $\Gamma$, which represents at most $|F|^{|\Gamma|}$ additional choices. Thus, the number of possible isomorphism classes of exact sequences is at most $$|F|^{ ( |H \rtimes \Gamma|^2 + |\Gamma| ) } \leq \prod_{i=1}^m |G_i|^{ k_i ( |H \rtimes \Gamma|^2 + |\Gamma| ) } = O(1)^{ \sum_{i=1}^m k_i}.$$ We could likely express the data $f_{h_1,h_2}$ in this proof using group cohomology to get a more precise count, but this isn’t necessary for the bound. \[surjection-integral-formula\] For $\mu$ a measure on the set of isomorphism classes of finite level-$\mathcal C$ $\Gamma$-groups, $H$ a finite level-$\mathcal C$ $\Gamma$-group, and $F = \prod_{i=1}^m G_i^{k_i}$, we have $$\int \Surj_{[H']} (E, F) d\mu^H (E) = \sum_{ 1 \to F \to G \to H \to 1} \int \Surj_{\Gamma} (X, G) d\mu(X) / \Aut_{F,H}(G).$$ We have $$\int \Surj_{[H']} (E, F) d\mu^H (E) = \int \sum_ { \pi : X \to H \textrm{ surjective}} \Surj_{[H']}( Q(\ker \pi), F) d\mu(X)$$ $$= \int \sum_{ 1 \to F \to G \to H \to 1} \frac{ \Surj_{\Gamma} (X, G) }{ \Aut_{F,H}(G)} d\mu(X) = \sum_{ 1 \to F \to G \to H \to 1} \int \Surj_{\Gamma} (X, G) d\mu(X) / \Aut_{F,H}(G).$$ where the first identity is by definition, the second is Lemma \[surjection-formula\], and the third exchanges the integral with a sum which, by Lemma \[extension-counting-bound\], is finite. 
Now we are finally ready to prove the main theorem: \[main-bound-group\] Let $\Gamma$ be a finite group. Let $\mathcal C$ be a finite set of finite $\Gamma$-groups. Let $\mu$ be a measure on the set of isomorphism classes of finite level-$\mathcal C$ $\Gamma$-groups. Let $\mu_t$ be a sequence of measures on the same set. Assume that, for $H$ a finite level-$\mathcal C$ $\Gamma$-group, we have $$\label{assumption-limit} \lim_{t \to \infty} \int \Surj(X, H) d\mu_t(X) = \int \Surj(X, H) d\mu(X)$$ and $$\label{assumption-bound} \int \Surj(X, H) d\mu(X) = O ( |H|^{O(1) }).$$ Then for all finite level-$\mathcal C$ $\Gamma$-groups $H$ we have $$\label{groups-conclusion} \lim_{t \to \infty} \mu_t(H) = \mu(H) .$$ First, we will check that the hypotheses of Lemma \[flat-case\] apply to the measures $\mu^H$ and $\mu^H_t$. Let $F = \prod_{i=1}^m G_i^{k_i}$. By applying Lemma \[surjection-integral-formula\] to $\mu$ and $\mu_t$, we have $$\lim_{t \to \infty} \int \Surj_{[H']} (E, F) d\mu_t^H (E) = \lim_{t\to \infty} \sum_{ 1 \to F \to G \to H \to 1} \int \Surj_\Gamma (X, G) d\mu_t(X) / \Aut_{F,H}(G)$$$$= \sum_{ 1 \to F \to G \to H \to 1} \lim_{t\to \infty} \int \Surj_\Gamma (X, G) d\mu_t(X) / \Aut_{F,H}(G)$$$$= \sum_{ 1 \to F \to G \to H \to 1} \int \Surj_\Gamma(X, G) d\mu(X) / \Aut_{F,H}(G) = \int \Surj_{[H']} (E, F) d\mu^H (E)$$ where we use the finiteness of the sum, from Lemma \[extension-counting-bound\], to exchange it with a limit and also our assumption (\[assumption-limit\]) to calculate the limit. This verifies the assumption of Lemma \[flat-case\]. 
By Lemma \[surjection-integral-formula\] and assumption (\[assumption-bound\]) we have$$\int \Surj_{[H']} (E, F) d\mu^H (E) = \sum_{ 1 \to F \to G \to H \to 1} \int \Surj_{\Gamma} (X, G) d\mu(X) / \Aut_{F,H}(G)$$ $$= \sum_{ 1 \to F \to G \to H \to 1} O ( |G|^{O(1)} ).$$ Because $|G| = |F| |H| =|H| \prod_{i=1}^{m} |G_i|^{k_i} = O \left( O(1)^{\sum_{i=1}^m k_i}\right)$, and by Lemma \[extension-counting-bound\] the number of terms is $O \left( O(1)^{\sum_{i=1}^m k_i}\right)$, the total sum is $O \left( O(1)^{\sum_{i=1}^m k_i}\right)$, giving the remaining assumption of Lemma \[flat-case\]. So we may apply Lemma \[flat-case\], obtaining $$\lim_{t \to \infty} \mu^H_t(1) = \mu^H(1),$$ which by Lemma \[mu-H-formula\], applied to both $\mu$ and $\mu_t$, gives our desired (\[groups-conclusion\]). Fix $\Gamma$ and $\mathcal C$, and for $X$ any level-$\mathcal C$ $\Gamma$-group, let $$\mu_{q, n} ( X) = \frac{ \sum_{ d=0}^{n} \left|\left \{ K \in E_{\Gamma}(d,q)\mid \Gal(K^{\#} /K)^{\mathcal C} \cong X \right\} \right| } { \sum_{ d=0}^{n} \left| E_{\Gamma}(d,q)\right| }$$ and $$\mu(X) = \mu_{\Gamma} (U_{\mathcal C, X})$$ for $\mu_{\Gamma}$ the measure defined in [@LWZ] and $U_{\mathcal C,X}$ the inverse image of $X$ under the map $G \mapsto G^{\mathcal C}$. 
We have $$\label{ff-moment-unfolding}\begin{split} \int \Surj(X, H) d\mu_{q, n} (X) & = \sum_{X \textrm{ level-}\mathcal C\textrm{ }\Gamma\textrm{-group}} \frac{ \sum_{ d=0}^{n} \left|\left \{ K \in E_{\Gamma}(d,q)\mid \Gal(K^{\#} /K)^{\mathcal C} \cong X \right\} \right| \Surj_\Gamma(X, H) } { \sum_{ d=0}^{n} \left| E_{\Gamma}(d,q)\right| } \\ & = \sum_{X \textrm{ level-}\mathcal C\textrm{ }\Gamma\textrm{-group}} \frac{ \sum_{ d=0}^{n} \sum_{\substack{ K \in E_{\Gamma}(d,q)\\ \Gal(K^{\#} /K)^{\mathcal C} \cong X }} \Surj_\Gamma(\Gal(K^{\#} /K)^{\mathcal C} , H) } { \sum_{ d=0}^{n} \left| E_{\Gamma}(d,q)\right| } \\ & = \frac{ \sum_{ d=0}^{n} \sum_{\substack{ K \in E_{\Gamma}(d,q) }} \Surj_\Gamma(\Gal(K^{\#} /K)^{\mathcal C} , H) } { \sum_{ d=0}^{n} \left| E_{\Gamma}(d,q)\right| } = \frac{ \sum_{ d =0}^{n} \sum_{\substack{ K \in E_{\Gamma}(d,q) }} \Surj_\Gamma(\Gal(K^{\#} /K) , H) } { \sum_{ d=0}^{n} \left| E_{\Gamma}(d,q)\right| } \end{split}$$ so $$\label{ff-case-limit} \begin{split} &\lim_{n \to \infty} \lim_{\substack{ q \to \infty \\ \gcd(q, |\Gamma| |\mathcal C|)=1 \\ \gcd(q-1, |\mathcal C|)=1 }} \int \Surj_\Gamma(X, H) d\mu_{q, n} (X) \\ =&\lim_{n \to \infty} \lim_{\substack{ q \to \infty \\ \gcd(q, |\Gamma| |\mathcal C|)=1 \\ \gcd(q-1, |\mathcal C|)=1 }} \frac{ \sum_{d=0}^{n} \sum_{\substack{ K \in E_{\Gamma}(d,q) }} \Surj_\Gamma(\Gal(K^{\#} /K) , H) } { \sum_{ d=0}^{n} \left| E_{\Gamma}(d,q)\right| } =\int \Surj_{\Gamma}(G, H) d\mu_{\Gamma}(G) \\ & = \int \Surj_\Gamma (G^{\mathcal C}, H) d \mu_{\Gamma }(G) = \int \Surj_{\Gamma} (X, H) d\mu (X) , \end{split}$$ where the first identity is (\[ff-moment-unfolding\]), the second identity is [@LWZ Theorem 1.4], the third identity is because $H$ is level-$\mathcal C$, and the last identity follows from the definition of $\mu$ in terms of $\mu_{\Gamma}$. 
Furthermore, we have by [@LWZ Theorem 6.2] $$\label{ff-case-bound} \int \Surj_{\Gamma} (X, H) d\mu (X) = \int \Surj_{\Gamma}(G, H) d\mu_{\Gamma}(G) = \frac{1}{[H : H^\Gamma] }= O ( |H|^{O(1)} ).$$ Hence, taking any sequence of $n$ and $q$ going to $\infty$, with $q$ satisfying the congruence conditions and growing sufficiently fast with respect to $n$, we verify the assumptions (\[assumption-limit\]) and (\[assumption-bound\]) of Theorem \[main-bound-group\]. Hence we can apply Theorem \[main-bound-group\], and its conclusion is exactly our desired statement. Modules ======= In this section, we assume that $R$ is an associative algebra such that there are finitely many finite simple $R$-modules up to isomorphism, and we let $M_1,\dots, M_m$ be pairwise non-isomorphic representatives of these isomorphism classes. Let $I$ be the intersection in $R$ of the annihilators of the finite simple $R$-modules. Because $R/I$ is a product of finite simple algebras, every finite $R/I$-module is a product of finite simple $R$-modules, and so has the form $\prod_{i=1}^m M_i^{e_i}$ for $e_1,\dots, e_m \in \mathbb N$. \[module-flat-case\] Let $\tilde{\mu}$ be a measure on the set of isomorphism classes of modules $\prod_{i=1}^m M_i^{e_i}$ and let $\tilde{\mu}_t$ be a sequence of such measures. 
Assume that for all $k_1,\dots, k_m \in \mathbb N$ we have $$\label{module-flat-bound} \sum_{e_{1}, \dots ,e_m=0}^{\infty} \Surj_{R} \left( \prod_{i=1}^m M_i^{e_i} , \prod_{i=1}^m M_i^{k_i}\right) \tilde{\mu} \left( \prod_{i=1}^m M_i^{e_i} \right) = O \left( O(1)^{ \sum_{i=1}^m k_i } \right)$$ and $$\label{module-flat-limit} \lim_{t \to \infty} \sum_{e_{1}, \dots ,e_m=0}^{\infty} \Surj_{R} \left( \prod_{i=1}^m M_i^{e_i} , \prod_{i=1}^m M_i^{k_i}\right) \tilde{\mu}_t \left( \prod_{i=1}^m M_i^{e_i} \right)$$$$= \sum_{e_{1}, \dots ,e_m=0}^{\infty} \Surj_{R} \left( \prod_{i=1}^m M_i^{e_i} , \prod_{i=1}^m M_i^{k_i}\right) \tilde{\mu} \left( \prod_{i=1}^m M_i^{e_i} \right).$$ Then we have $$\label{module-flat-conc} \lim_{t \to \infty} \tilde{\mu}_t ( 0 ) = \tilde{\mu} ( 0) .$$ The proof is almost word-for-word identical to the combination of the proofs of Lemmas \[c-k-exists\], \[one-variable\], \[inductive-step\], and \[flat-case\], except that we replace every occurrence of “finite simple $[H']$-group” in those lemmas with “finite simple $R$-module”, we replace $\Surj_{[H']}$ with $\Surj_R$, and in Lemma \[c-k-exists\] we ignore the non-abelian case. (The analogues of Lemmas \[abelian-surjection-count\] and \[surjection-count-splitting\] are immediate in this setting.) Hence we do not repeat the proof. 
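To see concretely why finitely many moments pin down such a measure, here is a small numerical sketch of ours (not part of the proof) for the simplest case $R = \mathbb F_2$: the only finite simple module is $\mathbb F_2$ itself, the surjection count $\Surj(\mathbb F_2^e, \mathbb F_2^k)$ vanishes for $k > e$, so the moment matrix is triangular with nonzero diagonal $|\mathrm{GL}_k(\mathbb F_2)|$, and a measure with bounded support is recovered from finitely many moments by back-substitution.

```python
# Toy sketch (ours): moments determine the measure for R = F_2.
# Surj(F_2^e, F_2^k) = prod_{i<k} (2^e - 2^i), which vanishes for k > e,
# so the moment matrix is triangular with nonzero diagonal |GL_k(F_2)|.

def surj_count(e, k, p=2):
    """Number of surjective linear maps F_p^e -> F_p^k."""
    if k > e:
        return 0
    n = 1
    for i in range(k):
        n *= p**e - p**i
    return n

E = 4                                          # support bound for the toy measure
mu = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1, 4: 0.0}  # arbitrary toy measure
moments = [sum(mu[e] * surj_count(e, k) for e in mu) for k in range(E + 1)]

# Back-substitution on moment_k = sum_{e >= k} mu(e) * Surj(F_2^e, F_2^k):
recovered = {}
for e in range(E, -1, -1):
    s = sum(recovered[d] * surj_count(d, e) for d in recovered)
    recovered[e] = (moments[e] - s) / surj_count(e, e)

assert all(abs(recovered[e] - mu[e]) < 1e-9 for e in mu)
```

With unbounded support this inversion is no longer automatic, which is exactly why a growth bound on the moments, like the one assumed above, is needed.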
For $M$ a finite $R$-module, and $\mu$ a measure on the set of isomorphism classes of finite $R$-modules, we define a measure $\mu^M$ on the set of $R/I$-modules $N$ by $$\mu^M(N) = \int \left| \{ \pi: X \to M \textrm{ surjective} \mid (\ker \pi)/I \cong N \} \right| d\mu(X) .$$ \[module-measure-zero\] For $\mu$ a measure on the set of isomorphism classes of finite $R$-modules, we have $$\mu^M(0)= | \Aut(M)| \mu(M).$$ A surjection $X \to M$ is an isomorphism if and only if its kernel is zero, so $$\left|\{ \pi: X \to M \textrm{ surjective} \mid (\ker \pi)/I \cong 0 \} \right|$$ is the number of isomorphisms of $X$ with $M$, and thus vanishes for $X \neq M$ and equals $|\Aut(M)|$ for $X= M$. \[module-surjection-formula\] Let $X$ and $M$ be finite $R$-modules and fix $k_1,\dots, k_m \in \mathbb N$. Let $N = \prod_{i=1}^m M_i^{k_i}$. The number of pairs of a surjection $\pi: X\to M$ and a surjection $f: (\ker \pi)/I \to N$ is equal to $$\sum_{ \substack{0 \to N \to M' \to M\to 0} }\frac{ \Surj_R(X, M') }{ | \operatorname{Hom} (M, N )| }$$ where the sum is over isomorphism classes of exact sequences of $R$-modules. Because $N$ is an $R/I$-module, surjections $f: (\ker \pi)/I \to N$ are in bijection with surjections $f: \ker \pi \to N$, which we will use for the remainder of the proof. We check that there is a bijection between (isomorphism classes of) pairs of a surjection $\pi: X\to M$ and $f: \ker \pi \to N $ and (isomorphism classes of) pairs of an exact sequence $0 \to N \to M' \to M\to 0$ with a surjection $X\to M'$. We will then deduce the counting formula. Given surjections $\pi: X\to M$ and $f: \ker \pi \to N $, we can define $M' = X/ (\ker f)$. The filtration $(\ker f) \subseteq (\ker \pi) \subseteq X$ gives an exact sequence $0 \to N \to M' \to M \to 0$ where we use $M = X/(\ker \pi)$ and $N = (\ker \pi)/(\ker f)$. We then have a surjection $X \to M' $ because $M'$ is a quotient of $X$. 
Conversely, given an exact sequence $0 \to N \to M' \to M\to 0$ and a surjection $X \to M'$, the composition $X \to M' \to M$ is a surjection, and its kernel is $X \times_M 0 = X\times_{M'} (M' \times_M 0) = X \times_{M'} N $ and thus surjects onto $N$. It is not too hard to check that these operations are inverse. Finally, note that the number of isomorphism classes of pairs of an exact sequence $0 \to N \to M' \to M\to 0$ with a surjection $X\to M'$ is equal to the sum over isomorphism classes of exact sequences of the number of surjections $X \to M'$, up to automorphisms of that exact sequence. The automorphisms of the exact sequence are $\Hom(M, N)$, and they act freely on surjections, so we simply divide the count by $|\Hom(M, N)|$. \[module-surjection-integral-formula\] For $\mu$ a measure on the set of isomorphism classes of finite $R$-modules, $M$ a finite $R$-module, and $N = \prod_{i=1}^m M_i^{k_i}$, we have $$\int \Surj_{R} (S,N ) d\mu^M (S) = \sum_{ 0 \to N \to M' \to M \to 0} \int \Surj_{R} (X, M') d\mu(X) / |\Hom (M,N)| .$$ We have $$\int \Surj_{R} (S,N ) d\mu^M (S)= \int \sum_ { \pi : X \to M \textrm{ surjective}} \Surj_R( (\ker \pi)/I, N ) d\mu(X)$$ $$= \int \sum_{ 0 \to N \to M'\to M\to 0 } \frac{ \Surj_{R} (X, M') }{ |\Hom(M,N)| } d\mu(X) = \sum_{ 0 \to N \to M'\to M\to 0 } \int \Surj_{R} (X, M') d\mu(X) /|\Hom(M,N)|$$ where the first identity is by definition, the second is Lemma \[module-surjection-formula\], and the third exchanges the integral with a sum of nonnegative functions. \[main-bound-algebra\] Let $R$ be an associative algebra. Assume that there are finitely many isomorphism classes of finite simple $R$-modules and that $\operatorname{Ext}^1$-groups between finite $R$-modules are finite. Let $\mu$ be a measure on the set of isomorphism classes of finite $R$-modules. Let $\mu_t$ be a sequence of measures on the same set. 
Assume that, for $M$ a finite $R$-module, $$\label{algebra-limit}\lim_{t \to \infty} \int \Surj_R (X, M) d\mu_t(X) = \int \Surj_R(X, M) d\mu(X)$$ and $$\label{algebra-bound} \int \Surj_R(X, M) d\mu(X) = O ( |M|^{O(1) }).$$ Then for all finite $R$-modules $M$ we have $$\label{algebra-conclusion} \lim_{t \to \infty} \mu_t(M) = \mu(M) .$$ Let us check that the hypotheses of Lemma \[module-flat-case\] apply to the measures $\mu^M$ and $\mu^M_t$. By applying Lemma \[module-surjection-integral-formula\] to $\mu$ and $\mu_t$, we have $$\lim_{t \to \infty} \int \Surj_{R} (S, N) d\mu_t^M (S) = \lim_{t\to \infty} \sum_{ 0 \to N \to M'\to M \to 0} \int \Surj_R (X, M') d\mu_t(X) /|\Hom(M,N)|$$$$= \sum_{ 0 \to N \to M'\to M \to 0} \lim_{t\to \infty} \int \Surj_R (X, M') d\mu_t(X) /|\Hom(M,N)|$$$$= \sum_{ 0 \to N \to M'\to M \to 0} \int \Surj_R (X, M') d\mu(X) /|\Hom(M,N)| = \int \Surj_{R} (S, N) d\mu^M (S)$$ where we use the assumed finiteness of $\operatorname{Ext}^1$ to exchange the sum with a limit and also the assumption (\[algebra-limit\]) to calculate the limit. This verifies the assumption (\[module-flat-limit\]) of Lemma \[module-flat-case\]. By Lemma \[module-surjection-integral-formula\] and assumption (\[algebra-bound\]) we have$$\int \Surj_{R} (S, N) d \mu^M (S) = \sum_{ 0 \to N \to M'\to M \to 0} \int \Surj_R (X, M') d\mu(X) /|\Hom(M,N)|$$ $$\label{module-sum-to-bound}= \sum_{ 0 \to N \to M'\to M \to 0} O (|M'|^{O(1)}) /|\Hom(M,N)| .$$ For $N = \prod_{i=1}^m M_i^{k_i}$, $$|M'| = |M| |N| =|M| \prod_{i=1}^{m} |M_i|^{k_i} = O \left( O(1)^{\sum_{i=1}^m k_i}\right),$$ and the number of terms is $$| \Ext^1( M,N) | = \prod_{i=1}^m |\Ext^1 (M, M_i)|^{k_i} = O \left( O(1)^{\sum_{i=1}^m k_i }\right),$$ so the total sum is $O \left( O(1)^{\sum_{i=1}^m k_i}\right)$, giving the assumption (\[module-flat-bound\]) of Lemma \[module-flat-case\]. So we may apply Lemma \[module-flat-case\], obtaining $$\lim_{t \to \infty} \mu^M_t(0) = \mu^M(0),$$ which by Lemma \[module-measure-zero\], applied to both $\mu$ and $\mu_t$, gives our desired (\[algebra-conclusion\]). 
[9]{} Nigel Boston and Melanie Matchett Wood, Nonabelian Cohen-Lenstra Heuristics over Function Fields, [arXiv:1604.03433](https://arxiv.org/abs/1604.03433). H. Cohen and H. W. Lenstra, Jr. Heuristics on class groups of number fields. In [*Number Theory, Noordwijkerhout 1983*]{}, Lecture Notes in Mathematics [**1068**]{} (1984), 33-62. H. Cohen and J. Martinet. Class groups of number fields: numerical heuristics. [*Mathematics of Computation*]{}, [**48**]{} (1987), 123-137. Jordan S. Ellenberg, Akshay Venkatesh, and Craig Westerland. Homological stability for Hurwitz spaces and the Cohen-Lenstra conjecture over function fields. [*Annals of Mathematics*]{}, [**183**]{} (2016), 729-786. D.R. Heath-Brown, The size of the Selmer groups for the congruent number problem, II, [preprint version](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.538.9147&rep=rep1&type=pdf). D.R. Heath-Brown, The size of the Selmer groups for the congruent number problem, II, [*Inventiones Mathematicae*]{} [**118**]{} (1994), 331-370. Yuan Liu, Melanie Matchett Wood, and David Zureick-Brown, A predicted distribution for Galois groups of maximal unramified extensions, [arXiv:1907.05002](https://arxiv.org/abs/1907.05002). Weitong Wang and Melanie Matchett Wood, Moments and interpretations of the Cohen-Lenstra-Martinet heuristics, [arXiv:1907.11201](https://arxiv.org/abs/1907.11201).
--- abstract: 'This article analyzes the design of a low-noise amplifier intended as the input front-end for the measurement of the low-frequency components (below 10 Hz) of a 50 Ω source. Low residual flicker is the main desired performance. This feature can only be appreciated if white noise is sufficiently low, and if an appropriate design ensures dc stability. An optimal solution is proposed, in which the low-noise and dc-stability features are achieved at a reasonable complexity. Gain is accurate to more than 100 kHz, which makes the amplifier an appealing external front-end for fast Fourier transform (FFT) analyzers.' author: - 'Enrico Rubiola[^1]' - 'Franck Lardet-Vieudrin[^2]' bibliography: - '\\bibfile{ref-short}.bib' date: | Cite this article as:\ E. Rubiola, F. Lardet-Vieudrin, “Low flicker-noise amplifier for 50 Ω sources”, *Review of Scientific Instruments* vol. 75 no. 5 pp. 1323–1326, May 2004 title: 'Low flicker-noise DC amplifier for 50 Ω sources' --- Introduction {#sec:intro} ============ Often the experimentalist needs a low-noise preamplifier for the analysis of low-frequency components (below 10 Hz) from a 50 Ω source. The desired amplifier chiefly exhibits low residual flicker and high thermal stability, besides low white noise. Thermal stability without need for temperature control is a desirable feature. In fact the problem with temperature control, worse than complexity, is that in a nonstabilized environment thermal gradients fluctuate, and in turn low-frequency noise is taken in. A low-noise amplifier may be regarded as an old subject; nonetheless, innovation in analysis methods and in available parts provides insight and new design. The application we initially had in mind is the postdetection preamplifier for phase noise measurements [@rubiola02rsi]. Yet the result is a versatile general-purpose scheme useful in experimental electronics and physics. 
Design Strategy {#sec:strategy} =============== The choice of the input stage determines the success of a precision amplifier. This issue involves the choice of appropriate devices and of the topology. Available low-noise devices are the junction field-effect transistor (JFET) and the bipolar transistor (BJT), either as part of an operational amplifier or as a stand-alone component. The white noise of these devices is well understood [@van.der.ziel:noise-ssdc; @van.der.ziel:fluctuations; @netzer81pieee; @erdi81jssc]. Conversely, flicker noise is still elusive and relies upon models, the most accredited of which are due to McWhorter [@mcwhorter57] and Hooge [@hooge69pla], or on smart narrow-domain analyses, like [@green85jpd-1; @green85jpd-2; @jamaldeen99jap], rather than on a unified theory. Even worse, aging and thermal drift chiefly depend on proprietary technologies, thus scientific literature turns out to be of little use. The JFET is appealing because of the inherently low white noise. The noise temperature can be as low as a fraction of a kelvin. Unfortunately, the low noise of the JFET derives from low input current, hence a high input resistance (some MΩ) is necessary. The JFET noise voltage is hardly lower than 5 nV/√Hz, some five to six times higher than the thermal noise of a 50 Ω resistor ($\sqrt{4kTR}=0.89$ nV/√Hz). The JFET is therefore discarded in favor of the BJT. A feedback scheme, in which the gain is determined by a resistive network, is necessary for gain accuracy and flatness over frequency. Besides the well known differential stage, a single-transistor configuration is possible (Ref. [@motchenbacher:low-noise:1ed], page 123), in which the input is connected to the base and the feedback to the emitter. This configuration was popular in early audio hi-fi amplifiers. The advantage of the single-transistor scheme is that noise power is half the noise of a differential stage. 
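The thermal-noise figures used in this comparison are quickly reproduced; the snippet below is our own back-of-envelope check, assuming $T=290$ K (the value for which $\sqrt{4kTR}$ comes out at 0.89 nV/√Hz for 50 Ω).

```python
# Back-of-envelope check (ours; T = 290 K assumed) of the JFET vs 50-ohm
# thermal-noise comparison made in the text.
from math import sqrt

k = 1.380649e-23      # Boltzmann constant, J/K
T = 290.0             # assumed room temperature, K
R = 50.0              # source resistance, ohm

e_thermal = sqrt(4 * k * T * R)       # thermal noise voltage, V/sqrt(Hz)
print(e_thermal * 1e9)                # ~0.89 nV/sqrt(Hz)

e_jfet = 5e-9                         # typical JFET noise voltage, V/sqrt(Hz)
print(e_jfet / e_thermal)             # ~5.6, the "five to six times" of the text

# Noise temperature of a 5 nV/sqrt(Hz) amplifier driven by a 50 ohm source:
T_n = e_jfet**2 / (4 * k * R)
print(T_n)                            # ~9000 K
```

The last figure shows why a 5 nV/√Hz device is a poor match for a 50 Ω source, even though its noise temperature at its own (high) optimum source resistance can be far lower.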
On the other hand, in a dc-coupled circuit thermal effects are difficult to compensate without reintroducing noise, while thermal compensation of the differential stage is guaranteed by the symmetry of the base-emitter junctions. Hence we opt for the differential pair.

|  | OP27 | LT1028 | MAT02 | MAT03 | unit | measured |
|---|---|---|---|---|---|---|
| **white noise** | | | | | | |
| noise voltage $\sqrt{h_{0,v}}$ | 3 | 0.9 | 0.9 | 0.7 | nV/√Hz | 0.8 |
| noise current $\sqrt{h_{0,i}}$ | 0.4 | 1 | 0.9 | 1.4 | pA/√Hz | 1.2 |
| noise power $2\sqrt{h_{0,v}h_{0,i}}$ | $2.4{\times}10^{-21}$ | $1.8{\times}10^{-21}$ | $1.6{\times}10^{-21}$ | $2.0{\times}10^{-21}$ | W/Hz | $1.9{\times}10^{-21}$ |
| noise temperature $T_w$ | 174 | 130 | 117 | 142 | K | 139 |
| optimum resistance $R_{b,w}$ | 7500 | 900 | 1000 | 500 | Ω | 667 |
| $2{\times}50$ Ω-input noise | 3.3 | 1.55 | 1.55 | 1.5 | nV/√Hz | 1.5 |
| **flicker noise** | | | | | | |
| noise voltage $\sqrt{h_{-1,v}}$ | 4.3 | 1.7 | 1.6 | 1.2 | nV | ($0.4$) |
| noise current $\sqrt{h_{-1,i}}$ | 4.7 | 16 | 1.6 | n.a. | pA | 11 |
| noise power $2\sqrt{h_{-1,v}h_{-1,i}}$ | $4.1{\times}10^{-20}$ | $5.3{\times}10^{-20}$ | $5.1{\times}10^{-21}$ | – | W | (…) |
| 1-Hz noise temperature $T_f$ | 2950 | 3850 | 370 | – | K | (…) |
| optimum resistance $R_{b,f}$ | 910 | 106 | 1000 | – | Ω | (…) |
| $2{\times}50$ Ω-input noise | 4.3 | 2.3 | 1.6 | – | nV | 1.1 |
| **thermal drift** | 200 | 250 | 100 | 300 | nV/K | – |

: \[tab:opa\] Selection of some low-noise BJT amplifiers. (Units inferred from the surrounding text; values in parentheses are indirect estimates.)

Table \[tab:opa\] compares a selection of low-noise bipolar amplifiers. The first columns are based on the specifications available on the web sites [@www.analog-devices; @www.linear-technology]. The right-hand column derives from our measurements, discussed in Secs. \[sec:frontend\] and \[sec:results\]. Noise is described in terms of a pair of random sources, voltage and current, which are assumed independent. This refers to the Rothe-Dahlke model [@rothe56ire]. 
Nonetheless, a correlation factor arises in measurements, due to the distributed base resistance $r_{bb'}$. Whether and how $r_{bb'}$ is accounted for in the specifications is often unclear. The noise spectra are approximated with the power law $S(f)=\sum_{\alpha}h_\alpha f^\alpha$. This model, commonly used in the domain of time and frequency, fits the observations and provides simple rules of transformation of spectra into two-sample (Allan) variance $\sigma_y(\tau)$. This variance is an effective way to describe the stability of a quantity $y$ as a function of the measurement time $\tau$, avoiding the divergence problem of the $f^\alpha$ processes in which $\alpha\le-1$. References [@rutman78pieee] and [@rubiola01im] provide the background on this subject, and application to operational amplifiers. The noise power spectrum $2\sqrt{h_vh_i}$ is the minimum noise of the device, i.e., the noise that we expect when the input is connected to a cold (0 K) resistor of value $R_b=\sqrt{h_{v}/h_{i}}$, still under the assumption that voltage and current are uncorrelated. When the input resistance takes the optimum value $R_b$, voltage and current contributions to noise are equal. The optimum resistance is $R_{b,w}$ for white noise and $R_{b,f}$ for flicker. Denoting by $f_{c}$ the corner frequency at which flicker noise is equal to white noise, thus $f_{c,v}$ for voltage and $f_{c,i}$ for current, it holds that $R_{b,w}/R_{b,f}=\sqrt{f_{c,i}/f_{c,v}}$. Interestingly, with most bipolar operational amplifiers we find $f_{c,i}/f_{c,v}\approx50{-}80$, hence $R_{b,w}/R_{b,f}\approx7{-}9$. Whereas we have no explanation for this result, the lower value of the flicker optimum resistance is a fortunate outcome. The equivalent temperature is the noise power spectrum divided by the Boltzmann constant $k=1.38{\times}10^{-23}$ J/K. A crucial parameter of Table \[tab:opa\] is the total noise when each input is connected to a 50 Ω resistor at room temperature. 
This calculated value includes noise voltage and current, and the thermal noise of the two resistors. In a complete amplifier two resistors are needed, at the input and in the feedback circuit. Still from Table \[tab:opa\], the transistor pairs show lower noise than the operational amplifiers, although the PNP pair is only partially documented. Experience indicates that PNP transistors are not as good as NPN ones in most respects, but exhibit lower noise. In other domains, frequency multipliers and radio-frequency oscillators make use of PNP transistors for critical applications because of the lower flicker noise. Encouraged by this fact, we tried a differential amplifier design based on the MAT03, after independent measurement of some samples. Input Stage {#sec:frontend} =========== ![Noise measurement of a transistor pair. For clarity, the distributed base resistance $r_{bb'}$ is extracted from the transistors.[]{data-label="fig:measure-mat"}](measure-mat) The typical noise spectrum of the MAT03, reported in the data sheet, shows an anomalous slope at low frequencies (0.1–1 Hz), significantly different from $f^{-1}$. This is particularly visible at low collector current (10–100 $\mu$A), but also noticeable at $I_C=1$ mA. We suspect that the typical spectrum reflects the temperature fluctuation of the environment through the temperature coefficient of the offset voltage $V_{OS}$ rather than providing information on the flicker noise inherent in the transistor pair. The measurement of a spectrum from 0.1 Hz takes some 5 min. At that time scale, in a normal laboratory environment the dominant fluctuation is a drift. If the drift is linear, $v(t)=ct$ starting at $t=0$, the Fourier transform is $V(\omega)=j\pi c\delta(\omega)-c/\omega^2$. 
Dropping the term $\delta(\omega)$, which is a dc term not visible in a log-log scale, the power spectral density, i.e., the squared Fourier transform, is $$\label{eq:f-drift} S_v(\omega)=\frac{c^2}{\omega^4} \qquad\mbox{or}\qquad S_v(f)=\frac{c^2}{(2\pi)^4f^4}~~.$$ A parabolic drift—seldom encountered in practice—has a spectrum proportional to $f^{-6}$, while a smoothly walking drift tends to be of the $f^{-5}$ type. As a consequence, a thermal drift can be mistaken for a random process of slope $f^{-4}$ to $f^{-6}$, which may hide the inherent $f^{-1}$ noise of the device. For this reason, the test circuit (Fig. \[fig:measure-mat\]) must be enclosed in an appropriate environment. We used, with similar results, a Dewar flask coupled to the environment via a heat exchanger, and a metal box mounted on a heat sink that has a mass of 1 kg and a thermal resistance of 0.6 K/W. These odd layouts provide passive temperature stabilization through a time constant and by eliminating convection, and evacuate the small amount of heat (200 mW) dissipated by the circuit. ![Typical spectrum of the noise voltage.[]{data-label="fig:f695"}](f695) Due to the low value of $r_{bb'}$ (15–20 $\Omega$) the current measurement can be made independent of voltage noise, but not vice versa. Thus, we first measure the noise current by setting $R_B=8$ k$\Omega$, which is limited by the offset current; then we measure the noise voltage by setting $R_B=10$ $\Omega$. A technical difficulty is that at 1 Hz and below most spectrum analyzers—including ours—must be coupled in dc, hence high offset stability is needed in order to prevent saturation of the analyzer. The measured spectra are $S_i(f)=1.45{\times}10^{-24}+1.2{\times}10^{-22}f^{-1}$ A$^2$/Hz (i.e., 1.2 pA$/\sqrt{\rm Hz}$ white, and 11 pA$/\sqrt{\rm Hz}$ flicker at 1 Hz), and $S_v(f)=10^{-18}+1.8{\times}10^{-19}f^{-1}$ V$^2$/Hz (i.e., 1 nV$/\sqrt{\rm Hz}$ white, and 425 pV$/\sqrt{\rm Hz}$ flicker at 1 Hz). The current spectrum is the inherent noise current of the differential pair. Conversely, with the voltage spectrum (Fig. 
\[fig:f695\]) we must account for the effect of $R_B$ and $r_{bb'}$. With our test circuit, the expected white noise is $h_{0,v}=4kTR+2qI_BR\simeq1.7{\times}10^{-20}R$ V$^2$/Hz, which is the sum of thermal noise and the shot noise of the base current $I_B$. $R=2(R_B+r_{bb'})$ is the equivalent base resistance, while the shot noise of the collector current is neglected. Assuming $r_{bb'}=16$ $\Omega$ (from the data sheet), the estimated noise is $h_{0,v}\simeq9{\times}10^{-19}$ V$^2$/Hz. This is in agreement with the measured value of $10^{-18}$ V$^2$/Hz. Then we observe that the effect of the current flickering on the test circuit is $R^2h_{-1,i}\simeq1.6{\times}10^{-19}$ V$^2$/Hz. The latter is close to the measured value $1.8{\times}10^{-19}$ V$^2$/Hz. Hence, the observed voltage flickering derives from the current noise through the external resistors $R_B$ and the internal distributed resistance $r_{bb'}$ of the transistors. Voltage and current are therefore highly correlated. As a further consequence, the product $2\sqrt{h_{-1,v}h_{-1,i}}$ is not the minimum noise power, and the ratio $\sqrt{h_{-1,v}/h_{-1,i}}$ is not the optimum resistance. The corresponding places in Table \[tab:opa\] are left blank. Due to the measurement uncertainty, we can only state that a true independent voltage flickering, if any, is not greater than $4{\times}10^{-20}$ V$^2$/Hz. The same uncertainty affects the optimum resistance $R_{b,f}$, which is close to zero. The measured white noise is in agreement with the data sheet. On the other hand, our measurements of flicker noise are made in such unusual conditions that the results should not be considered in contradiction with the specifications, as the specifications reflect the low-frequency behavior of the device in a normal environment. 
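This noise budget can be re-checked in a few lines. The sketch below assumes the two inputs contribute incoherently to the flicker term, i.e., $2\left(R_B+r_{bb'}\right)^2h_{-1,i}$ (an assumption on the bookkeeping, chosen because it reproduces the quoted $1.6{\times}10^{-19}$):

```python
R_B  = 10.0              # external base resistor, ohm
r_bb = 16.0              # distributed base resistance, ohm (data sheet)
R = 2.0 * (R_B + r_bb)   # equivalent base resistance, 52 ohm

# White noise: the 1.7e-20 V^2/Hz-per-ohm coefficient quoted in the text
# (thermal noise 4kT plus a small base shot-noise contribution)
h0_v = 1.7e-20 * R
assert abs(h0_v - 9e-19) < 0.5e-19       # ~9e-19, vs. 1e-18 measured

# Flicker: the measured current noise h_{-1,i} seen through the two input
# resistances, assumed to contribute incoherently
hm1_i = 1.2e-22                          # A^2 at 1 Hz, measured
flicker_v = 2.0 * (R_B + r_bb)**2 * hm1_i
assert abs(flicker_v - 1.6e-19) < 0.1e-19    # vs. 1.8e-19 measured
```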
Implementation and Results {#sec:results} ========================== ![Scheme of the low-noise amplifier.[]{data-label="fig:scheme"}](scheme) Figure \[fig:scheme\] shows the scheme of the complete amplifier, inspired by the “super low-noise amplifier” proposed in Fig. 3a of the MAT03 data sheet. The NPN version is also discussed in Ref. [@franco:opa] (p. 344). The original circuit makes use of three differential pairs connected in parallel, as it is designed for the lowest white noise with low impedance sources ($\ll50$ $\Omega$), such as coil microphones. In our case, using more than one differential pair would increase the flicker because of current noise. The collector current $I_C=1.05$ mA results from a trade-off between white noise, which is lower at high $I_C$, dc stability, which is better at low dissipated power, flicker, and practical convenience. The gain of the differential pair is $g_mR_C=205$, where $g_m=I_C/V_T=41$ mA/V is the transistor transconductance, and $R_C=5$ k$\Omega$ is the collector resistance. The overall gain is $1+R_G/R_B\simeq500$. Hence the gain of the OP27 is 2.5, which guarantees the closed-loop stability (here, oscillation-free operation). If a lower gain is needed, the gain of the differential stage must be lowered by inserting $R_A$. The trick is that the midpoint of $R_A$ is a ground for the dynamic signal, hence the equivalent collector resistance that sets the gain is $R_C$ in parallel with $\frac{1}{2}R_A$. The bias current source is a cascode Wilson scheme, which includes a light emitting diode (LED) that provides some temperature compensation. The stability of the collector resistors $R_C$ is a crucial point because the voltage across them is 5 V. If each of these resistors has a temperature coefficient of $10^{-6}$/K, in the worst case the result is a temperature coefficient of 10 $\mu$V/K at the differential output, which is equivalent to an input thermal drift of 50 nV/K. This is 1/6 of the thermal coefficient of the differential pair. 
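The gain arithmetic above can be verified directly; the thermal voltage $V_T\simeq25.6$ mV is an assumption, implied by the quoted 41 mA/V:

```python
I_C = 1.05e-3       # collector current, A
V_T = 25.6e-3       # thermal voltage kT/q, V (assumed, ~297 K)
R_C = 5.0e3         # collector resistance, ohm

g_m = I_C / V_T                # transconductance, ~41 mA/V
gain_pair = g_m * R_C          # differential-pair gain, ~205
assert abs(gain_pair - 205) < 5

gain_total = 500.0             # overall gain, 1 + R_G/R_B
gain_op27 = gain_total / gain_pair
assert 2.3 < gain_op27 < 2.6   # the ~2.5 left to the OP27
```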
In addition, absolute accuracy is important in order to match the collector currents. This is necessary to take the full benefit from the symmetry of the transistor pair. ![Prototype of the low-noise amplifier.[]{data-label="fig:prototype"}](franck-ampli-small){width="\textwidth"} Two equal amplifiers are assembled on a printed circuit board, and inserted in a $10{\times}10{\times}2.8$ cm, 4 mm thick aluminum box (Fig. \[fig:prototype\]). The box provides thermal coupling to the environment with a suitable time constant, and prevents fluctuations due to convection. $LC$ filters, of the type commonly used in HF/VHF circuits, are inserted in series with the power supply, in addition to the usual bypass capacitors. For best stability, and also for mechanical compatibility with our equipment, input and output connectors are of the SMA type. Input cables should not be PTFE-insulated because of piezoelectricity (see the review paper [@fukada00uffc]). ![Residual noise of the complete amplifier, input terminated to a 50 $\Omega$ resistor.[]{data-label="fig:f691"}](f691) Figure \[fig:f691\] shows the noise spectrum of one prototype input terminated to a 50 $\Omega$ resistor. The measured noise is $\sqrt{h_0}=1.5$ nV$/\sqrt{\rm Hz}$ (white) and $\sqrt{h_{-1}}=1.1$ nV (flicker). The corner frequency at which the white and flicker noise are equal is $f_c=0.5$ Hz. Converting the flicker noise into two-sample (Allan) deviation, we get $\sigma_v(\tau)=1.3$ nV, independent of the measurement time $\tau$. Finally, we made a simple experiment aimed at showing in practical terms the importance of a proper mechanical assembly. We first removed the Al cover, exposing the circuit to the air flow of the room, yet in a quiet environment, far from doors, fans, etc., and then we replaced the cover with a sheet of plain paper (80 g/m$^2$). The low-frequency spectrum (Fig. \[fig:f694\]) is $5{\times}10^{-19}f^{-5}$ in the first case, and about $1.6{\times}10^{-19}f^{-4}$ in the second case. 
This indicates the presence of an irregular drift, smoothed by the paper protection. Interestingly, Hashiguchi [@sikula03arw] reports on thermal effects with the same slope and similar cutoff frequencies, observed on a low-noise JFET amplifier for high impedance sources. ![Thermal effects on the amplifier.[]{data-label="fig:f694"}](f694) [^1]: Université Henri Poincaré, Nancy, France, `www.rubiola.org`, e-mail `enrico@rubiola.org` [^2]: Dept. <span style="font-variant:small-caps;">lpmo</span>, <span style="font-variant:small-caps;">femto-st</span> Besançon, France, e-mail `lardet@lpmo.edu`
--- abstract: 'Gaussian sum-rules, which are related to a two-parameter Gaussian-weighted integral of a hadronic spectral function, are able to examine the possibility that more than one resonance makes a significant contribution to the spectral function. The Gaussian sum-rules, including instanton effects, for scalar gluonic and non-strange scalar quark currents clearly indicate a distribution of the resonance strength in their respective spectral functions. Furthermore, analysis of a two narrow resonance model leads to excellent agreement between theory and phenomenology in both channels. The scalar quark and gluonic sum-rules are remarkably consistent in their prediction of masses of approximately $ 1\;{\rm GeV}$ and $1.4\;{\rm GeV}$ within this model. Such a similarity would be expected from hadronic states which are mixtures of gluonium and quark mesons.' author: - | T.G. Steele\ [*Department of Physics & Engineering Physics, University of Saskatchewan*]{}\ [*Saskatoon, SK  S7N 5E2, Canada*]{}\ D. Harnett\ [*Department of Physics, University College of the Fraser Valley*]{}\ [*Abbotsford, BC  V2S 7M8, Canada*]{}\ G. Orlandini\ [*Dipartimento di Fisica and INFN Gruppo Collegato di Trento*]{}\ [*Università di Trento, I-38050 Povo, Italy*]{} title: ' Gaussian Sum-Rule Analysis of Scalar Gluonium and Quark Mesons ' --- Introduction ============ The multitude of scalar hadronic states with masses above $1\;{\rm GeV}$ [@pdg] is frequently noted as evidence that a $q\bar q$ nonet is insufficient to accommodate these states, as would be expected from the existence of a (scalar) gluonium state. Gaussian QCD sum-rules have been shown to be sensitive to the hadronic spectral function over a broad energy range, and analysis techniques have been developed to exploit this dependence to determine how resonance strength is distributed in the spectral function [@orl00; @har01; @utica02]. 
Thus Gaussian sum-rules provide a valuable technique for determining how the quark and gluonium content is distributed amongst scalar hadronic states. The simplest Gaussian sum-rule (GSR) has the form [@gauss] $$G_0\left(\hat s,\tau\right)=\frac{1}{\sqrt{4{\ensuremath{\mathrm{\pi}}}\tau}} \int\limits_{t_0}^\infty \exp\left[\frac{-\left(t-\hat{s}\right)^2}{4\tau}\right]\,\frac{1}{{\ensuremath{\mathrm{\pi}}}}\rho(t)\;{\ensuremath{ \mathrm{d}\,t }} \quad,\quad\tau>0 \label{basic_gauss}$$ and relates the QCD prediction $G_0\left(\hat s,\tau\right)$ to an integral of its associated hadronic spectral function $\rho(t)$. The smearing of the spectral function by the Gaussian kernel peaked at $t=\hat s$ through the (approximate) region $\hat s-2\sqrt{\tau}\le t\le \hat s+2\sqrt{\tau}$ provides a clear conceptual implementation of quark-hadron duality. The width of this duality interval is constrained by QCD since renormalization-group improvement of the QCD (left-hand) side of  results in identifying the renormalization scale $\nu$ through $\nu^2=\sqrt{\tau}$ [@orl00; @gauss]; therefore it is not possible to achieve the formal $\tau\to 0$ limit where complete knowledge of the spectral function could be obtained through $$\lim_{\tau\to 0}G_0\left(\hat s,\tau\right)=\frac{1}{{\ensuremath{\mathrm{\pi}}}}\rho\left(\hat s\right)\quad ,\quad \hat s>t_0 \quad . \label{gauss_limit}$$ The variable $\hat{s}$ in , on the other hand, is unconstrained by QCD, and so the $\hat{s}$ dependence of $G_0\left(\hat s,\tau\right)$ can be used to probe the behaviour of the smeared spectral function, and hence the essential features of $\rho(t)$. An interesting feature of the GSR  is its ability to study excited and ground states with similar sensitivity. For example, as $\hat s$ passes through $t$ values corresponding to resonance peaks, the Gaussian kernel reaches its maximum value. 
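For a single narrow resonance, $\rho(t)/\pi=f^2\,\delta\left(t-m^2\right)$, the smearing integral reduces to a Gaussian in $\hat s$ centred on $m^2$; a short numeric sketch (with illustrative values of $\tau$ and $m^2$) confirms this against direct quadrature:

```python
import math

tau = 3.0        # illustrative, GeV^4
m2 = 2.25        # resonance at m^2 = 2.25 GeV^2 (illustrative)
f2 = 1.0         # resonance strength, (1/pi) * integral of rho

def G0(s_hat):
    # Analytic k = 0 GSR for rho(t)/pi = f2 * delta(t - m2)
    return f2 / math.sqrt(4*math.pi*tau) * math.exp(-(s_hat - m2)**2 / (4*tau))

def G0_numeric(s_hat, width=1e-3, n=4001):
    # Model the delta function as a very narrow Gaussian and integrate
    total, dt = 0.0, 20*width/(n - 1)
    for i in range(n):
        t = m2 - 10*width + i*dt
        rho_over_pi = f2 * math.exp(-(t - m2)**2/(2*width**2)) / (width*math.sqrt(2*math.pi))
        total += math.exp(-(s_hat - t)**2/(4*tau)) * rho_over_pi * dt
    return total / math.sqrt(4*math.pi*tau)

assert abs(G0(2.25) - G0_numeric(2.25)) < 1e-6   # at the resonance peak
assert abs(G0(5.0) - G0_numeric(5.0)) < 1e-6     # well away from the peak
```

The kernel is maximal at $\hat s=m^2$ and falls off symmetrically, so the ground state and any excited state enter with the same peak sensitivity.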
Thus any features of the spectral function strong enough to be isolated from the continuum will be revealed through the GSR. In this regard, GSRs should be contrasted with Laplace sum-rules $$R\left(\Delta^2\right)=\frac{1}{\pi}\int\limits_{t_0}^\infty\exp{\left(-\frac{t}{\Delta^2}\right)}\rho(t)\, {\ensuremath{ \mathrm{d}\,t }} \quad , \label{basic_laplace}$$ which exponentially suppress excited states in comparison to the ground state. In what follows, the original formulation of GSRs [@gauss] will be reviewed along with corresponding GSR analysis techniques [@orl00; @har01]. A GSR moments analysis for scalar gluonic and quark correlation functions will be presented in order to demonstrate that the associated spectral functions have a distribution of resonance strength. In addition, mass and relative coupling predictions specific to a double narrow resonance model will be determined [@orl00; @har01]. The major emphasis of this paper is to highlight the remarkable consistency between GSR mass predictions extracted from the scalar gluonic channel and from the scalar-isoscalar quark channel, a result indicative of the existence of hadronic states which are mixtures of quark mesons and gluonium. Formulation and Analysis Techniques for Gaussian Sum-Rules ========================================================== Gaussian sum-rules are based on QCD correlation functions of renormalization-group invariant composite operators $J(x)$ $$\Pi\left(Q^2\right)={\ensuremath{\mathrm{i}}}\int\mathrm{d}^4x\; {\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}q\cdot x}\left\langle O \vert T\left[ J(x) J(0)\right] \vert O \right\rangle \quad,\quad Q^2\equiv -q^2 \label{basic_corr_fn}$$ which, in turn, satisfy dispersion relations appropriate to the asymptotic form of the correlator in question. 
For example, the scalar gluonic correlation function $\Pi\left(Q^2\right)$ (see  and ) satisfies the following dispersion relation with three subtraction constants $$\Pi\left(Q^2\right)-\Pi(0)-Q^2\Pi'(0)-\frac{1}{2}Q^4\Pi''(0) =-\frac{Q^6}{\pi}\int\limits_{t_0}^\infty \frac{\rho(t)}{t^3\left(t+Q^2\right)}\;{\ensuremath{ \mathrm{d}\,t }} \quad. \label{disp_rel}$$ In general, the quantity $\rho(t)$ is the spectral function appropriate to the quantum numbers of the given current, and it should be noted that in certain situations (such as the scalar gluonic correlation function) the subtraction constant $\Pi(0)$ is determined by a low-energy theorem [@let] (see ). Undetermined dispersion-relation constants and field-theoretical divergences are eliminated through the GSR [@har01][^1] $$\label{srdef} G_k(\hat{s},\tau)\equiv \sqrt{\frac{\tau}{{\ensuremath{\mathrm{\pi}}}}}{\ensuremath{\mathcal{B}}}\left\{ \frac{(\hat{s}+{\ensuremath{\mathrm{i}}}\Delta)^k \Pi(-\hat{s}-{\ensuremath{\mathrm{i}}}\Delta) - (\hat{s}-{\ensuremath{\mathrm{i}}}\Delta)^k \Pi(-\hat{s}+{\ensuremath{\mathrm{i}}}\Delta) }{{\ensuremath{\mathrm{i}}}\Delta} \right\}$$ where $k= -1,0,1, \ldots$ and where the Borel transform ${\ensuremath{\mathcal{B}}}$ is defined by $$\label{borel} {\ensuremath{\mathcal{B}}}\equiv \lim_{\stackrel{N,\Delta^2\rightarrow\infty}{\Delta^2/N\equiv 4\tau}} \frac{(-\Delta^2)^N}{\Gamma(N)}\left( \frac{\mathrm{d}}{\mathrm{d}\Delta^2}\right)^N \quad.$$ The dispersion relation  in conjunction with definition  together yield the following family of GSRs $$G_{k}(\hat{s},\tau)+ \delta_{k\,-1}\frac{1}{\sqrt{4\pi\tau}} \exp\left( \frac{-\hat{s}^2}{4\tau}\right)\Pi(0) = \frac{1}{\sqrt{4\pi\tau}}\int_{t_0}^{\infty} t^k \exp\left[ \frac{-(\hat{s}-t)^2}{4\tau}\right]\frac{1}{\pi}\rho(t)\;{\ensuremath{ \mathrm{d}\,t }} \label{gauss_family}$$ where the identity $${\ensuremath{\mathcal{B}}}\left[\frac{\left(\Delta^2\right)^n}{\Delta^2+a}\right] =\frac{(-a)^n}{4\tau}\exp\left(\frac{-a}{4\tau}\right) \quad,\quad 
n=0,1,2,\ldots \label{borelDerek}$$ has been used to simplify the phenomenological (right-hand) side of  as well as the term on the QCD side proportional to $\Pi(0)$. Evidently the $k=-1$ sum-rule can only be defined in cases where there exists an appropriate low-energy theorem. Calculation of $G_k(\hat{s},\tau)$ is achieved through an identity relating (\[borel\]) to the inverse Laplace transform [@gauss] $$\label{bor_to_lap} {\ensuremath{\mathcal{B}}}[f(\Delta^2)] = \frac{1}{4\tau}{\ensuremath{\mathcal{L}^{-1}}}[f(\Delta^2)]$$ where, in our notation, $${\ensuremath{\mathcal{L}^{-1}}}[f(\Delta^2)] = \frac{1}{2{\ensuremath{\mathrm{\pi}}}{\ensuremath{\mathrm{i}}}} \int\limits_{a-{\ensuremath{\mathrm{i}}}\infty}^{a+{\ensuremath{\mathrm{i}}}\infty} f(\Delta^2) \exp\left( \frac{\Delta^2}{4\tau} \right) {\ensuremath{ \mathrm{d}\,\Delta^2 }}$$ with $a$ chosen such that all singularities of $f$ lie to the left of $a$ in the complex $\Delta^2$-plane. Through a simple change of variables, the calculation of the GSR reduces to [@har01] $$G_k(\hat{s},\tau) = \frac{1}{\sqrt{4{\ensuremath{\mathrm{\pi}}}\tau}}\frac{1}{2{\ensuremath{\mathrm{\pi}}}{\ensuremath{\mathrm{i}}}} \int_{\Gamma_1 +\Gamma_2} (-w)^k \exp \left[ \frac{-(\hat{s}+w)^2}{4\tau}\right] \Pi(w)\;{\ensuremath{ \mathrm{d}\,w }} \label{finish}$$ which, by suitably deforming the complex contours $\Gamma_1$ and $\Gamma_2$, can be recast in the form $$G_k(\hat{s},\tau) = -\frac{1}{\sqrt{4{\ensuremath{\mathrm{\pi}}}\tau}}\frac{1}{2{\ensuremath{\mathrm{\pi}}}{\ensuremath{\mathrm{i}}}} \int_{\Gamma_c +\Gamma_{\epsilon}} (-w)^k \exp \left[ \frac{-(\hat{s}+w)^2}{4\tau}\right]\Pi(w)\;{\ensuremath{ \mathrm{d}\,w }} \label{finishDerek}$$ where the trajectories $\Gamma_1$, $\Gamma_2$, $\Gamma_c$, and $\Gamma_{\epsilon}$ are all depicted in Figure \[cont\_fig\]. Further simplification of  requires a specific correlator $\Pi$ *i.e.*  or . 
![[Contours of integration $\Gamma_1+\Gamma_2$ and $\Gamma_c+\Gamma_\epsilon$ defining the Gaussian sum-rule (see  and  respectively). The wavy line on the negative real axis denotes the branch cut of $\Pi(w)$. ]{}[]{data-label="cont_fig"}](complexContours.eps) Next, we impose a fairly general resonance(s) plus continuum model $$\rho(t)=\rho^{{\rm had}}(t)+\theta\left(t-s_0\right){\rm Im}\Pi^{{\ensuremath{\mathrm{QCD}}}}(t) \label{respcont}$$ where $s_0$ represents the onset of the QCD continuum. The continuum contribution of  to the right-hand side of  is $$\label{continuum} G_k^{{\rm cont}} (\hat{s},\tau,s_0) = \frac{1}{\sqrt{4\pi\tau}} \int_{s_0}^{\infty} t^k \exp \left[ \frac{-(\hat{s}-t)^2}{4\tau} \right] \frac{1}{\pi} {\rm Im} \Pi^{{\ensuremath{\mathrm{QCD}}}}(t)\; {\ensuremath{ \mathrm{d}\,t }}\quad ,$$ and is combined with $G_k\left(\hat s,\tau\right)$ to obtain the total QCD contribution $$G_k^{{\ensuremath{\mathrm{QCD}}}}\left(\hat{s},\tau,s_0\right) \equiv G_k\left(\hat{s},\tau\right) - G_k^{{\rm cont}} \left(\hat{s},\tau,s_0\right) \quad , \label{blah}$$ resulting in the final relation between the QCD and hadronic sides of the GSRs $$G_{k}^{\ensuremath{\mathrm{QCD}}}\left(\hat{s},\tau,s_0\right)+ \delta_{k\,-1}\frac{1}{\sqrt{4\pi\tau}} \exp\left( \frac{-\hat{s}^2}{4\tau}\right)\Pi(0) =\frac{1}{\sqrt{4\pi\tau}}\int_{t_0}^{\infty} t^k \exp\left[ \frac{-(\hat{s}-t)^2}{4\tau}\right] \frac{1}{\pi} \rho^{\rm had}(t)\; {\ensuremath{ \mathrm{d}\,t }} \quad. \label{final_gauss}$$ Original studies involving the GSRs exploited the diffusion equation $$\frac{\partial^2 G_0\left( \hat s,\tau\right)}{\partial\hat s^2}= \frac{\partial G_0\left( \hat s,\tau\right)}{\partial \tau} \label{diffusion}$$ which follows from  for $k=0$. 
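For the single narrow resonance model, $G_0$ is an exact heat kernel in $\hat s$ with "time" $\tau$, so the diffusion property can be verified directly by finite differences (illustrative values):

```python
import math

tau, m2 = 2.0, 1.5          # illustrative values, GeV^4 and GeV^2

def G0(s_hat, tau):
    # k = 0 GSR of a single narrow resonance with unit strength
    return math.exp(-(s_hat - m2)**2/(4*tau)) / math.sqrt(4*math.pi*tau)

# Finite-difference check of the diffusion property at an arbitrary point
s, h = 2.3, 1e-4
d2_ds2 = (G0(s + h, tau) - 2*G0(s, tau) + G0(s - h, tau)) / h**2
d_dtau = (G0(s, tau + h) - G0(s, tau - h)) / (2*h)
assert abs(d2_ds2 - d_dtau) < 1e-6
```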
In particular, when $\rho^{{\rm had}}(t)$ (see ) is evolved through the diffusion equation , it only reproduces the QCD prediction at large energies ($\tau$ large) if the resonance and continuum contributions are balanced through the $k=0$ member of the finite-energy sum-rule family [@gauss] $$F_k\left(s_0\right)=\frac{1}{\pi}\int\limits_{t_0}^{s_0} t^k\rho^{\rm had}(t)\;{\ensuremath{ \mathrm{d}\,t }}\quad . \label{basic_fesr}$$ An additional connection between the finite-energy sum-rules and the GSRs can be found by integrating both sides of  with respect to $\hat{s}$ to obtain $$\int\limits_{-\infty}^\infty G_k^{{\ensuremath{\mathrm{QCD}}}}(\hat{s},\tau,s_0)\;{\ensuremath{ \mathrm{d}\,\hat{s} }}+\delta_{k\,-1}\Pi(0) =\frac{1}{\pi}\int\limits_{t_0}^{\infty} t^k\rho^{{\rm had}}(t)\;{\ensuremath{ \mathrm{d}\,t }}\quad , \label{tom_norm_2}$$ indicating that the finite-energy sum-rules are related to the normalization of the GSRs. Thus the information independent of the finite-energy sum-rule constraint following from the diffusion equation analysis [@gauss] is contained in the [*normalized*]{} Gaussian sum-rules (NGSRs) [@orl00] $$\begin{gathered} N^{{\ensuremath{\mathrm{QCD}}}}_k (\hat{s}, \tau, s_0) = \frac{G^{{\ensuremath{\mathrm{QCD}}}}_{k} (\hat s, \tau, s_0) + \delta_{k\,-1}\frac{1}{\sqrt{4\pi\tau}} \exp\left(\frac{-\hat{s}^2}{4\tau}\right) \Pi(0) }{M^{{\ensuremath{\mathrm{QCD}}}}_{k,0} (\tau, s_0)+\delta_{k\,-1}\Pi(0)} \label{tom_norm_srk} \\ M_{k,n}(\tau, s_0) =\int\limits_{-\infty}^\infty \hat{s}^n G_k (\hat s,\tau, s_0)\;{\ensuremath{ \mathrm{d}\,\hat{s} }} \quad,\quad n=0,1,2,\ldots\quad, \label{moments}\end{gathered}$$ which are related to the hadronic spectral function via $$\label{ngsr} N_k^{{\ensuremath{\mathrm{QCD}}}}(\hat{s},\tau,s_0) = \frac{ \frac{1}{\sqrt{4\pi\tau}} \int_{t_0}^{\infty} t^k \exp\left[\frac{-(\hat{s}-t)^2}{4\tau} \right] \rho^{{\rm had}}(t)\;{\ensuremath{ \mathrm{d}\,t }}}{\int_{t_0}^{\infty} t^k \rho^{{\rm had}}(t)\;{\ensuremath{ 
\mathrm{d}\,t }}} \quad.$$ Gaussian Sum-Rules for Scalar Gluonic and Non-Strange Scalar Quark Currents =========================================================================== At leading order in the quark mass for $n_f$ flavours, scalar hadronic states can either be probed through the correlation function of the (renormalization-group invariant) gluonic current $$\begin{gathered} \Pi_g\left(Q^2\right)={\ensuremath{\mathrm{i}}}\int\,\mathrm{d}^4x\,{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}q\cdot x}\left\langle O \vert T\left[ J_g(x)J_g(0)\right] \vert O \right\rangle \quad,\quad Q^2\equiv -q^2 \label{glue_corr} \\ J_g(x)=-\frac{\pi^2}{\alpha\beta_0}\beta\left(\alpha \right)G^a_{\mu\nu}(x)G^a_{\mu\nu}(x) \label{glue_curr}\end{gathered}$$ where $$\begin{gathered} \beta\left(\alpha\right) =\nu^2\frac{\mathrm{d}}{\mathrm{d}\nu^2}\left(\frac{\alpha}{\pi}\right)= -\beta_0\left(\frac{\alpha}{\pi}\right)^2-\beta_1\left(\frac{\alpha}{\pi}\right)^3+\ldots \\ \beta_0 = \frac{11}{4}-\frac{1}{6} n_f\quad ,\quad \beta_1=\frac{51}{8}-\frac{19}{24}n_f \end{gathered} \label{betaDerek}$$ or through the correlation function of $I=0,1$ (non-strange) quark currents $$\begin{gathered} \Pi_q\left(Q^2\right) = {\ensuremath{\mathrm{i}}}\,\int\,\mathrm{d}^4x\;{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}q\cdot x} \left\langle 0 | T \left[J_q (x) J_q (0)\right] |0\right\rangle \quad,\quad Q^2\equiv -q^2 \label{quark_corr} \\ J_q(x) = m_q\left[\overline{u}(x)u(x) + (-1)^I\, \overline{d}(x)d(x)\right]/{2} \quad. \label{quark_curr}\end{gathered}$$ The quark mass factor $m_q = \left(m_u + m_d\right)/2$ in results in a renormalization-group invariant current. Although both the gluonic and ($I=0$) quark correlation functions are probes of scalar hadronic states, those which have a more significant overlap with the gluonic current will predominate in , while those states which are dominantly of a quark nature are more significant in . 
A mixed state with substantial gluonic and quark components ([*i.e.*]{} a state that overlaps with both the gluonic and quark currents) should self-consistently appear in an analysis of both correlation functions. In particular, prediction of mass-degenerate states from the QCD sum-rule analysis associated with these two correlation functions is evidence for interpretation as a mixed state. In the scalar gluonic channel, the low-energy theorem (LET) [@let] $$\Pi_g(0)\equiv\lim_{Q^2\rightarrow 0} \Pi_g(Q^2) = \frac{8\pi}{\beta_0} \langle J_g\rangle \label{letDerek}$$ allows construction of the $k=-1$ GSR. The significance of instanton contributions in the overall consistency of the LET-sensitive $k=-1$ sum-rule and the LET-insensitive $k\ge 0$ sum-rules was first demonstrated for Laplace sum-rules [@shuryak; @gluelet]. A similar consistency is observed for the Gaussian sum-rules, but theoretical uncertainties are better controlled in the $k\ge 0$ GSR [@har01], and hence this paper will focus on the $k=0$ GSRs for both the quark and gluonic channels. The QCD correlation functions  and  contain perturbative, condensate, and instanton contributions. 
For the scalar gluonic case, substituting  into  and performing the relevant integrals yields the QCD prediction $G_0^{(g)}\left(\hat s,\tau,s_0\right)$ for the $k=0$ GSR to leading order in the quark mass $$\begin{split} G_0^{(g)}(\hat{s},\tau,s_0) =& - \frac{1}{\sqrt{4\pi\tau}} \int\limits_0^{s_0} t^2 \exp\left[\frac{-(\hat{s}-t)^2}{4\tau}\right] \Biggl[ (a_0-\pi^2 a_2) + 2a_1\log\left(\frac{t}{\nu^2}\right) + 3a_2 \log^2\left(\frac{t}{\nu^2}\right)\Biggr]{\ensuremath{ \mathrm{d}\,t }} \\ &-\frac{1}{\sqrt{4\pi\tau}} b_1\langle J_g\rangle \int\limits_0^{s_0} \exp\left[ \frac{-(\hat{s}-t)^2}{4\tau}\right] {\ensuremath{ \mathrm{d}\,t }} +\frac{1}{\sqrt{4\pi\tau}} \exp\left( \frac{-\hat{s}^2}{4\tau}\right) \left[ c_0 \left\langle {\cal O}_6\right\rangle - \frac{d_0 \hat{s}}{2\tau} \left\langle {\cal O}_8\right\rangle \right] \\ &-\frac{16\pi^3}{\sqrt{4\pi\tau}} n_c\rho^4 \int\limits_0^{s_0} t^2 \exp\left[ \frac{-(\hat{s}-t)^2}{4\tau}\right] J_2\left(\rho\sqrt{t} \right) Y_2\left(\rho\sqrt{t} \right) {\ensuremath{ \mathrm{d}\,t }} \quad. \end{split} \label{G0}$$ The perturbative coefficients in  are given by $$\begin{gathered} a_0 = -2\left(\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}\right)^2\left[1+\frac{659}{36}\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}+ 247.480\left( \frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}\right)^2\right] \\ a_1 = 2\left(\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}\right)^3\left[ \frac{9}{4} +65.781\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}\right] \quad ,\quad a_2 = -10.1250\left(\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}\right)^4 \end{gathered} \label{pertDerek}$$ as obtained from the three-loop $\overline{\rm{MS}}$ calculation of the correlation function in the chiral limit of $n_f=3$ flavours [@che98]. 
The condensate contributions in  involve next-to-leading order [@bag90] contributions[^2] from the dimension four gluon condensate $\langle J_g\rangle$ and leading order [@NSVZ_glue] contributions from gluonic condensates of dimension six and eight $$\begin{gathered} \left\langle {\cal O}_6\right\rangle = \left\langle g f_{abc}G^a_{\mu\nu}G^b_{\nu\rho}G^c_{\rho\mu}\right\rangle \label{dimsix} \\ \left\langle {\cal O}_8\right\rangle = 14\left\langle\left(\alpha f_{abc}G^a_{\mu\rho} G^b_{\nu\rho}\right)^2\right\rangle -\left\langle\left(\alpha f_{abc}G^a_{\mu\nu}G^b_{\rho\lambda}\right)^2\right\rangle \label{dimeight} \\ \begin{gathered} b_0 = 4{\ensuremath{\mathrm{\pi}}}\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}\left[ 1+ \frac{175}{36}\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}\right] \quad,\quad b_1 = -9{\ensuremath{\mathrm{\pi}}}\left(\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}\right)^2 \quad, \\ c_0 = 8{\ensuremath{\mathrm{\pi}}}^2\left(\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}}\right)^2 \quad,\quad d_0 = 8{\ensuremath{\mathrm{\pi}}}^2\frac{\alpha}{{\ensuremath{\mathrm{\pi}}}} \quad. \end{gathered} \label{cond_coefficients}\end{gathered}$$ The remaining term in the GSR  represents instanton contributions obtained from single instanton and anti-instanton [@basic_instanton] ([*i.e.*]{} assuming that multi-instanton effects are negligible [@schaefer_shuryak]) contributions to the scalar gluonic correlator [@shuryak; @gluelet; @NSVZ_glue; @inst_K2] within the liquid instanton model [@DIL] parameterized by the instanton size $\rho$ and the instanton density $n_c$. The quantities $J_2$ and $Y_2$ are Bessel functions in the notation of [@abr]. 
As a result of renormalization-group scaling of the GSRs [@orl00; @gauss], the coupling in the perturbative and condensate coefficients ( and ) is implicitly the running coupling at the scale $\nu^2=\sqrt{\tau}$ for $n_f=3$ in the $\overline{\rm{MS}}$ scheme $$\begin{gathered} \frac{\alpha (\nu^2)}{\pi} = \frac{1}{\beta_0 L}-\frac{\bar\beta_1\log L}{\left(\beta_0L\right)^2}+ \frac{1}{\left(\beta_0 L\right)^3}\left[ \bar\beta_1^2\left(\log^2 L-\log L -1\right) +\bar\beta_2\right] \\ L=\log\left(\frac{\nu^2}{\Lambda^2}\right)\quad ,\quad \bar\beta_i=\frac{\beta_i}{\beta_0} \quad ,\quad \beta_0=\frac{9}{4}\quad ,\quad \beta_1=4\quad ,\quad \beta_2=\frac{3863}{384} \end{gathered} \label{run_coupling}$$ with $\Lambda_{\overline{MS}}\approx 300\,{\rm MeV}$ for three active flavours, consistent with current estimates of $\alpha(M_\tau)$ [@pdg]. Analogous to the scalar gluonic case, substitution of  into  provides the QCD prediction of $G_0^{(q)}\left(\hat s,\tau,s_0\right)$ for the $I=0,1$ scalar quark currents. To leading order in the quark mass, we find [@orl00] $$\begin{split} G_0^{(q)}\left(\hat s, \tau, s_0\right)=& \frac{1}{\sqrt{4\pi\tau}}\frac{3m_q^2}{16\pi^2} \int\limits_0^{s_0} \exp\left[\frac{-\left(t-\hat s\right)^2}{4\tau}\right] \left[t\left(1+\frac{17}{3}\frac{\alpha}{\pi}\right) -2\frac{\alpha}{\pi}t\log{\left(\frac{t}{\nu^2}\right)} \right]\;{\ensuremath{ \mathrm{d}\,t }} \\ &+m_q^2\exp{\left(\frac{-\hat s^2}{4\tau}\right)}\left[ \frac{1}{2\sqrt{\pi\tau}}\left\langle C^s_4{\cal O}^s_4\right\rangle- \frac{\hat s}{4\tau\sqrt{\pi\tau}}\left\langle C^s_6{\cal O}^s_6\right\rangle \right] \\ &-\left(-1\right)^I\frac{3 m_q^2}{8\pi} \frac{1}{\sqrt{4\pi\tau}}\int\limits_0^{s_0} t\exp\left[\frac{-\left(t-\hat s\right)^2}{4\tau}\right] J_1\left(\rho\sqrt{t}\right) Y_1\left(\rho\sqrt{t}\right)\;{\ensuremath{ \mathrm{d}\,t }} \quad. 
\end{split} \label{gauss_scalar_QCD}$$ Again, renormalization-group improvement implies that both $m_q$ and $\alpha$ are implicitly running quantities at the scale $\nu^2=\sqrt{\tau}$ as given by and the (two-loop, $n_f=3$, $\overline{\rm MS}$) expression $$m_q\left(\nu^2\right)= \frac{\hat m_q}{\left(\frac{1}{2}L\right)^{\frac{4}{9}}}\left( 1+\frac{290}{729}\frac{1}{L}-\frac{256}{729}\frac{\log{ L}}{L} \right) \quad ,\quad L=\log\left(\frac{\nu^2}{\Lambda^2}\right)\quad , \label{run_mass}$$ where $\hat m_q$ is the renormalization-group invariant quark mass parameter. The perturbative contributions in are the $n_f=3$ two-loop results obtained from [@chetyrkin], and the instanton expressions are obtained from [@SVZ]. The condensate contributions are leading-order results obtained from [@SVZ], and are defined by the quantities $$\left\langle C_4^s {\cal O}_4^s\right\rangle = \frac{3}{2} \left\langle m_q \overline{q}q\right\rangle + \frac{1}{16\pi} \left\langle\alpha_s G^2 \right\rangle \label{c4_scalar}$$ and $$\langle C^s_6{\cal O}^s_6\rangle = \pi\alpha_s \Biggl[ \frac{1}{4}\left\langle\left(\bar u\sigma_{\mu\nu}\lambda^a u-\bar d\sigma_{\mu\nu}\lambda^a d\right)^2\right\rangle +\frac{1}{6} \left\langle \left( \bar u \gamma_\mu \lambda^a u+\bar d \gamma_\mu \lambda^a d \right) \sum_{u,d,s}\bar q \gamma^\mu \lambda^a q \right\rangle\Biggr]\quad . \label{o6}$$ The vacuum saturation hypothesis [@SVZ] in the $SU(2)$ limit $\langle \bar u u\rangle=\langle\bar d d\rangle\equiv\langle\bar q q\rangle$ provides a reference value for $\langle {\cal O}^s_6\rangle$ $$\left\langle C_6^s{\cal O}^s_6\right\rangle=-f_{vs}\frac{88}{27}\alpha_s \left\langle (\bar q q)^2\right\rangle =-f_{vs}1.8\times 10^{-4} {\rm GeV}^6\quad , \label{o61}$$ where the quantity $f_{vs}$ parameterizes deviations from vacuum saturation where $f_{vs}=1$. The GSRs and exhibit some interesting qualitative features. 
For example, the condensate contributions decay exponentially with the Gaussian peak-position $\hat s$, emphasizing that these contributions have a low-energy origin. Also, the explicit factor of $(-1)^I$ appearing in the instanton contributions in the quark scalar channel is a non-perturbative source of isospin symmetry-breaking. Before proceeding with an analysis of the GSRs, the QCD input parameters must be specified. We assume that $\langle J_g\rangle\approx\langle\alpha G^2 \rangle$ and then employ the (central) value from [@nar97] $$\label{dimfour} \langle\alpha G^a_{\ \mu\nu}G^{a\mu\nu}\rangle=\langle\alpha G^2\rangle = (0.07\pm 0.01)\, {\rm GeV^4} \quad.$$ The dimension six gluon condensate  can be related to this value of $\langle\alpha G^2\rangle$ using instanton techniques (see [@NSVZ_glue; @SVZ]) $$\langle \mathcal{O}_6 \rangle = (0.27\, {\rm GeV}^2) \langle \alpha G^2\rangle\quad.$$ Further, by invoking vacuum saturation in conjunction with the heavy quark expansion, the authors of [@bag85] have also related the dimension eight gluon condensate  to $\langle\alpha G^2\rangle$ through $$\langle\mathcal{O}_8\rangle = \frac{9}{16} \left(\langle\alpha G^2\rangle\right)^2\quad.$$ In addition, the dilute instanton liquid (DIL) model [@DIL] parameters $$\label{DILparams} n_{{c}} = 8.0\times 10^{-4}\ {\rm GeV^4}\quad,\quad\ \rho = \frac{1}{0.6}\ {\rm GeV}^{-1}$$ will be employed. Finally, we use $f_{vs}=1.5$ as a central value to accommodate the observed deviations from vacuum saturation in the dimension six quark condensates  [@bordes]. Note that knowledge of the quark mass parameter is not needed for an analysis based on the normalized GSRs since $m_q$ provides a common prefactor in . 
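As a quick check on this input, the three-loop running coupling with $\Lambda\approx300$ MeV indeed reproduces the accepted $\alpha\left(M_\tau\right)\approx0.3$; the numeric value $M_\tau\simeq1.777$ GeV is taken from the PDG:

```python
import math

Lambda2 = 0.300**2          # Lambda_MSbar^2 for n_f = 3, GeV^2
beta0, beta1, beta2 = 9/4, 4.0, 3863/384
b1, b2 = beta1/beta0, beta2/beta0

def alpha_over_pi(nu2):
    # Three-loop running coupling, MSbar scheme, n_f = 3
    L = math.log(nu2 / Lambda2)
    return (1/(beta0*L)
            - b1*math.log(L)/(beta0*L)**2
            + (b1**2*(math.log(L)**2 - math.log(L) - 1) + b2)/(beta0*L)**3)

# Evaluate at nu^2 = M_tau^2, with M_tau ~ 1.777 GeV
alpha_Mtau = math.pi * alpha_over_pi(1.777**2)
assert 0.25 < alpha_Mtau < 0.37   # consistent with alpha(M_tau) ~ 0.3
```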
Analysis of the Gaussian Sum-Rules for Scalar Gluonic and Quark Currents ======================================================================== In the single narrow resonance model $$\rho^{\rm had}(t)=f^2\delta\left(t-m^2\right)$$ the $k=0$ NGSR becomes $$\label{phenom_single} N_0^{{\ensuremath{\mathrm{QCD}}}}\left(\hat{s},\tau,s_0\right) = \frac{1}{\sqrt{4{\ensuremath{\mathrm{\pi}}}\tau}} \exp\left[\frac{-(\hat{s}-m^2)^2}{4\tau}\right] \quad.$$ The prediction of the mass $m$ is obtained by optimizing the parameter $s_0$ so that the left-hand side of  has a maximum as a function of $\hat s$ ($\hat s$ peak position) independent of $\tau$ as required by the properties of the right-hand side of  [@orl00]. The $\tau$-stable $\hat s$ peak for this optimized $s_0$ then provides the prediction of the resonance mass $m$. The integrity of this procedure has been demonstrated for the vector-isovector currents which probe the $\rho$ meson, resulting in a predicted $\rho$ mass of $750\,{\rm MeV}$ and superb agreement between the phenomenological and QCD sides of the NGSR [@orl00]. However, the same procedure applied to the scalar gluonic and scalar quark channels leads to significant disagreement between the two sides of the NGSR as illustrated in Figure \[nar\_res\_fig\] [@orl00; @har01].[^3] In both the scalar gluonic and scalar quark sum-rules, the single resonance model is larger than the theoretical contribution at the peak and underestimates the theoretical contribution in the tails. Since both the theoretical and phenomenological contributions are normalized, this implies that the QCD prediction is broader than the phenomenological model. ![ [Comparison of the theoretical prediction for the normalized GSR $N_0^{(g)}\left(\hat s, \tau,s_0\right)$ with the single narrow resonance phenomenological model for the optimized value of the continuum $s_0$. 
The $\tau$ values used for the three pairs of curves, from top to bottom in the figure, are respectively $\tau=2.0\, {\rm GeV}^4$, $\tau=3.0\,{\rm GeV}^4$, and $\tau=4.0\,{\rm GeV}^4$. A qualitatively similar agreement between the single narrow resonance model and the QCD prediction exists for the $I=0,1$ scalar quark channels. ]{}[]{data-label="nar_res_fig"}](sres_plot.eps) The following second-order moment combination $$\label{sigma_combo} \sigma^2 \equiv \frac{M_{0,2}}{M_{0,0}} -\left(\frac{M_{0,1}}{M_{0,0}}\right)^2$$ provides a quantitative measure of the width of the GSRs. In the single narrow resonance model we find $\sigma^2=2\tau$, and hence a significant deviation of the QCD prediction from this result indicates a failure of the phenomenological model to adequately describe a particular channel’s hadronic content. Figure \[sigma\] illustrates that $\sigma^2>2\tau$, indicating that a phenomenological model with distributed resonance strength is necessary [@orl00; @har01]. ![Plot of $\sigma^2$ for the theoretical prediction (dotted curve) for the scalar gluonic GSR compared with $\sigma^2=2\tau$ for the single-resonance model (solid curve) using the optimized value of the continuum. A qualitatively similar result exists for the $I=0,1$ scalar quark channels.[]{data-label="sigma"}](glue_m2.eps) A phenomenological model with two narrow resonances of mass $m_1$ and $m_2$ results in the NGSR $$N_0\left(\hat s, \tau,s_0\right)=\frac{r_1}{\sqrt{4\pi\tau}} \exp{\left[ \frac{-\left(\hat s-m_1^2\right)^2}{4\tau}\right]} +\frac{r_2}{\sqrt{4\pi\tau}}\exp{\left[ \frac{-\left(\hat s-m_2^2\right)^2}{4\tau}\right]} \label{norm_gauss_2res}$$ where $r_1+r_2=1$ describes the relative strength of the two resonances contributing to the spectral function. 
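The second-order moment identity $\sigma^2=2\tau$ for a single narrow resonance can be checked directly by numerical integration of the Gaussian kernel; the sketch below uses arbitrary illustrative values $m^2=1\,{\rm GeV^2}$, $\tau=3\,{\rm GeV^4}$.

```python
import math

def ngsr_sigma2(m2, tau, half_width=60.0, n=20000):
    """sigma^2 = M2/M0 - (M1/M0)^2 for the single narrow resonance NGSR,
    a Gaussian in s_hat of variance 2*tau centred at s_hat = m^2,
    integrated by a simple Riemann sum over a wide window."""
    ds = 2.0 * half_width / n
    M0 = M1 = M2 = 0.0
    for k in range(n + 1):
        s = m2 - half_width + k * ds
        w = math.exp(-(s - m2) ** 2 / (4.0 * tau)) / math.sqrt(4.0 * math.pi * tau)
        M0 += w * ds
        M1 += s * w * ds
        M2 += s * s * w * ds
    return M2 / M0 - (M1 / M0) ** 2

sig2 = ngsr_sigma2(m2=1.0, tau=3.0)
```

Since the QCD prediction yields $\sigma^2 > 2\tau$, any numerically significant excess over this benchmark signals resonance strength that a single narrow resonance cannot accommodate.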
In terms of the parameters $\{ r,y,z\}$ defined by $$r=r_1-r_2\,,~y=m_1^2-m_2^2\,,~z=m_1^2+m_2^2\quad,$$ the second-order moment combination  resulting from the left-hand side of  $$\sigma^2-2\tau=\frac{1}{4}y^2\left(1-r^2\right)>0$$ naturally results in a broader distribution than the single resonance model as suggested by the QCD result. Analysis of the double narrow resonance model is substantially more complicated than the single narrow resonance case. In particular, the $\hat s$ peak position (denoted by $\hat s_{peak}$) develops $\tau$ dependence which is well-described by [@orl00] $$\hat s_{peak}\left(\tau,s_0\right)= A + \frac{B}{\tau} + \frac{C}{\tau^2}$$ and hence $s_0$ is optimized to obtain the best description of this $\tau$ dependence. After optimization of $s_0$, the parameters in the phenomenological model can be determined from the moment combinations $$\begin{aligned} & & z = 2\frac{M_{0,1}}{M_{0,0}} + \frac{A_2}{\sigma^2-2\tau} \label{z_moms}\\ & & y = \frac{ -\sqrt{A_2^2 + 4(\sigma^2-2\tau)^3}}{\sigma^2-2\tau} \label{y_moms}\\ & & r = \frac{A_2}{\sqrt{A_2^2 + 4(\sigma^2-2\tau)^3}} \label{r_moms}\end{aligned}$$ where the third-order moment combination $A_2$, representing the asymmetry of the distribution, is defined by $$A_2=\frac{M_{0,3}}{M_{0,0}}-3\frac{M_{0,2}}{M_{0,0}}\frac{M_{0,1}}{M_{0,0}} +2\left(\frac{M_{0,1}}{M_{0,0}}\right)^3 \quad . \label{dist_asymm}$$ This procedure for optimizing $s_0$ and determining the resonance parameters has been confirmed by a more numerically-intensive multi-parameter fit of $s_0$ and the resonance parameters [@orl00; @ps_glue]. The resonance parameters resulting from this analysis shown in Table \[doubres\_tab\] [@orl00; @har01] illustrate a remarkably consistent scenario of a $1\,{\rm GeV}$ and a $1.4\,{\rm GeV}$ state coupled to both the gluonic and non-strange $I=0$ quark currents, with the heavier state more strongly coupled to the gluonic operators. 
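The moment formulas (z\_moms)-(r\_moms) can be verified by generating the exact moments of a two-narrow-resonance NGSR (two Gaussians of variance $2\tau$ at $\hat s=m_1^2,m_2^2$ with weights $r_1,r_2$) and confirming that the inversion recovers the input parameters. The sketch below uses the gluonic-channel values of Table \[doubres\_tab\] with an arbitrary $\tau$:

```python
import math

def invert_two_resonance(r1, m1, m2, tau):
    """Forward-compute the normalized moments of a two-narrow-resonance GSR,
    then recover {z, y, r} via the moment combinations (z_moms)-(r_moms)."""
    r2 = 1.0 - r1
    mu1, mu2 = m1 ** 2, m2 ** 2
    # raw moments of a mixture of two Gaussians, each of variance 2*tau
    E1 = r1 * mu1 + r2 * mu2
    E2 = r1 * (mu1 ** 2 + 2 * tau) + r2 * (mu2 ** 2 + 2 * tau)
    E3 = r1 * (mu1 ** 3 + 6 * tau * mu1) + r2 * (mu2 ** 3 + 6 * tau * mu2)
    sigma2 = E2 - E1 ** 2                       # second-order combination
    A2 = E3 - 3 * E2 * E1 + 2 * E1 ** 3         # third-order asymmetry
    D = sigma2 - 2 * tau
    z = 2 * E1 + A2 / D
    y = -math.sqrt(A2 ** 2 + 4 * D ** 3) / D
    r = A2 / math.sqrt(A2 ** 2 + 4 * D ** 3)
    return z, y, r

# gluonic-channel parameters from Table [doubres_tab]; tau is illustrative
z, y, r = invert_two_resonance(r1=0.28, m1=0.98, m2=1.4, tau=3.0)
```

The recovered values reproduce $z=m_1^2+m_2^2$, $y=m_1^2-m_2^2$ and $r=r_1-r_2$ exactly (up to rounding), independently of $\tau$, which is the consistency property exploited in the optimization of $s_0$.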
This consistency of the mass predictions in the two channels is precisely what is expected for hadronic states which are mixtures of gluonium and quark mesons. The results in the $I=1$ scalar quark channel support the interpretation of the $a_0(1450)$ as the lightest state with a dominant coupling to the scalar quark currents.

  Sum-Rule      $m_1$ (GeV)   $m_2$ (GeV)   $r_1$    $r_2$    $s_0$ (${\rm GeV^2}$)
  ------------- ------------- ------------- -------- -------- -----------------------
  gluonic       $0.98$        $1.4$         $0.28$   $0.72$   2.3
  quark $I=0$   $0.97$        $1.4$         $0.63$   $0.37$   2.6
  quark $I=1$   $1.4$         $1.8$         $0.57$   $0.43$   3.9

  : Analysis results from scalar quark and gluonic Gaussian sum-rules in the double narrow resonance model.[]{data-label="doubres_tab"}

The double narrow resonance model results in excellent agreement with QCD as illustrated in Figure \[twores\_fig\]. More complicated resonance models which extend the double narrow resonance model to introduce resonance widths do not improve the quantitative agreement with QCD exhibited in the figure. ![ Comparison of the theoretical prediction $N_0^{(g)}\left(\hat s, \tau,s_0\right)$ with the double narrow resonance phenomenological model using the parameters in Table \[doubres\_tab\]. The $\tau$ values used for the three pairs of curves, from top to bottom in the figure, are respectively $\tau=2.0\,{\rm GeV}^4$, $\tau=3.0\, {\rm GeV}^4$, and $\tau=4.0\, {\rm GeV}^4$. Note the almost perfect overlap between the theoretical prediction and the phenomenological models. A qualitatively similar agreement between the double narrow resonance model and the QCD prediction exists for the $I=0,1$ scalar quark channels.
[]{data-label="twores_fig"}](g_plot.eps) In summary, analysis of the GSRs for the scalar gluonic and $I=0$ non-strange quark scalar currents exhibits a remarkable similarity in their mass predictions within a double narrow resonance model, a result indicative of the existence of hadronic states which are mixtures of quark mesons and gluonium. Although the effect of QCD uncertainties arising from a 15% variation in the DIL parameters  and the $d=4$ gluon condensate , as well as variations of the vacuum saturation parameter within the range $1<f_{vs}<2$, corresponds to an uncertainty in the Table \[doubres\_tab\] mass parameters of approximately $0.2\,{\rm GeV}$, we note that the mass splitting of $0.4\,{\rm GeV}$ between the states is remarkably stable [@har01]. This QCD evidence for mixed states with a mass splitting of $0.4\,{\rm GeV}$ provides valuable information for interpretation of the known $f_0$ resonances. Acknowledgements ================ TGS is grateful for research support from the Natural Sciences & Engineering Research Council of Canada (NSERC). DH is thankful for support from the Department of Research at the University College of the Fraser Valley (UCFV). Many thanks to Amir Fariborz for his efforts in organizing the Utica Workshop on Scalar Mesons which resulted in a tremendously enjoyable and valuable workshop. [99]{} K. Hagiwara [*et al.*]{}, Phys. Rev. . G. Orlandini, T.G. Steele and D. Harnett, Nucl. Phys. . D. Harnett, T.G. Steele, Nucl. Phys.  . T.G. Steele, D. Harnett, G. Orlandini, to appear in the Proceedings of the 2002 Utica Workshop on High-Energy Physics (hep-ph/0210013). R.A. Bertlmann, G. Launer, E. de Rafael, Nucl. Phys. . V.A. Novikov, M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. . E.V. Shuryak, Nucl. Phys. ;\ H. Forkel, Phys. Rev. . D. Harnett, T.G. Steele, V. Elias, Nucl. Phys. . K.G. Chetyrkin, B.A. Kniehl and M. Steinhauser, Nucl. Phys. . E. Bagan and T.G. Steele, Phys. Lett. . V.A. Novikov, M.A. Shifman, A.I.
Vainshtein and V.I. Zakharov, Nucl. Phys. . A. Belavin, A. Polyakov, A. Schwartz and Y. Tyupkin, Phys. Lett. ;\ G. ’t Hooft, Phys. Rev. . T. Schäfer and E.V. Shuryak, Phys. Rev. Lett. . B.V. Geshkenbein and B.L. Ioffe, Nucl. Phys. ;\ B.L. Ioffe and A.V. Samsonov, Phys. of Atom. Nucl. . E.V. Shuryak, Nucl. Phys. . M. Abramowitz and I.E. Stegun, [*Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*]{} (National Bureau of Standards Applied Mathematics Series, Washington) 1972. K.G. Chetyrkin, Phys. Lett.  ;\ S.G. Gorishny, A.L. Kataev, S.A. Larin, L.R. Surguladze, Phys. Rev.  and Mod. Phys. Lett.  . M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. . S. Narison, Nucl. Phys. B (Proc. Supp.) . E. Bagan, J.I. Latorre, P. Pascual and R. Tarrach, Nucl. Phys. . V. Giménez, J. Bordes, J. Peñarrocha, Nucl. Phys. ;\ C.A. Dominguez, J. Sola, Z. Phys. . Ailin Zhang, T.G. Steele, hep-ph/0304208. [^1]: This definition is a natural generalization of that given in [@gauss]. To recover the original GSR, we simply let $k=0$ in (\[srdef\]). [^2]: The calculation of next-to-leading contributions in [@bag90] has been extended non-trivially to $n_f=3$ from $n_f=0$, and the operator basis has been changed from $\left\langle \alpha G^2\right\rangle$ to $\langle J_g\rangle$. [^3]: The $\tau$-stability analysis of the $\hat s$ peak is examined in the range $2\,{\rm GeV^4}\le\tau\le 4\,{\rm GeV^4}$ where the perturbative series is reasonably convergent while maintaining a Gaussian resolution of typical hadronic scales.
--- abstract: '[We study a continuum model of an extensile active nematic to show that mesoscale turbulence develops in two stages: (i) ordered regions undergo an intrinsic hydrodynamic instability generating walls, lines of strong bend deformations, (ii) the walls relax by forming oppositely charged pairs of defects. Both creation and annihilation of defect pairs reinstate nematic regions which undergo further instabilities, leading to a dynamic steady state. We compare this with the development of active turbulence in a contractile active nematic.]{}' author: - 'Sumesh P. Thampi' - Ramin Golestanian - 'Julia M. Yeomans' bibliography: - 'refe.bib' title: Instabilities and Topological Defects in Active Nematics --- [*Introduction:*]{} Dense active systems have generated much interest due to the novel properties that arise because they operate out of thermodynamic equilibrium [@Sriram2010; @Ganesh2011; @Marchetti2013]. There are many different active suspensions, comprising individual components that can differ widely in their characteristic length scales. Examples include mixtures of molecular motors and microtubules or actin, cells and bacteria, vibrating granular rods, and schools of fish [@Sriram2010; @Ganesh2011; @Marchetti2013; @Bausch2010; @Chate2012; @Dogic2012; @Julia2012; @Narayan2007]. At high densities, active systems often exist in a state where the velocity field is highly disordered, with a continually changing pattern of vortices, see fig. \[fig:fullfledge\]. The turbulent appearance of the flow is, at first sight, surprising because the active suspensions usually correspond to a low Reynolds number regime. However, detailed properties, such as scaling laws, are very different to inertial turbulence [@Julia2012; @Jorn2013]. For active materials with hydrodynamic interactions, a linear stability analysis shows that the nematic state is unstable to fluctuations [@Sriram2002; @Madan2007; @Scott2009; @Joanny2005].
However, the path to well-developed mesoscale turbulence is not yet clear [@Mahadevan2011]. Moreover, the extent to which the evolution of the turbulent state is generic between different active systems is still to be understood. Here we contribute to answering these questions by demonstrating the route through which the turbulent state is reached in a 2D active nematic. In addition to hydrodynamic instabilities there is now considerable evidence that topological defects play a role in determining the dynamics of active systems. The presence of topological defects in an active system with nematic symmetry has been demonstrated in experiments using 2D suspensions of microtubule bundles and kinesin molecular motors [@Dogic2012]. Simulations show that such defects are strongly associated with vorticity generation in extensile active nematics [@ourprl2013]. Moreover, defects have been identified in dry active matter experiments [@Kremkemer2000; @Narayan2007] and in active systems with polar symmetry [@Kruse2004; @Bausch2013]. Generally, in passive systems, oppositely charged defects formed, say by a quench, attract and annihilate each other and thus defects continuously disappear from a system as it approaches equilibrium. By contrast, because of the continuous input of energy, defects in active systems can be formed in pairs and subsequently move apart [@Giomi2013; @ourprl2013], giving rise to a steady state where topological defects are continually being created and destroyed. In this letter, we perform simulations of active nematics to study the onset of active turbulence in detail. Two physical mechanisms are shown to be relevant to setting up and maintaining the flow field. The first is the formation of lines of kinks in the director field, which we shall term *walls*, and which arise directly from the hydrodynamic instability of the active nematic.
The second is the ‘unzipping’ of the walls by the formation of topological defects, a process that we show can be driven either by flow or by relaxation of the excess elastic free energy in the wall. We shall first introduce the model and then describe the onset of active turbulence. Wall formation and defect formation will each be discussed in more detail. We concentrate primarily on extensile systems, but discuss contractile nematics later in the paper. [*Equations of motion:*]{} We consider an active nematic suspension. The evolution equation for the order parameter tensor $\mathbf{Q}$ is a standard equation in liquid crystal hydrodynamics [@Berisbook; @DeGennesBook]: $$(\partial_t + u_k \partial_k) Q_{ij} - S_{ij} = \Gamma H_{ij},\label{eqn:lc}$$ where $\mathbf{u}$ is the velocity field and, because the nematic order can respond to shear flow, the advection term is generalised to $$\begin{aligned} S_{ij} =& (\lambda E_{ik} + \Omega_{ik})(Q_{kj} + \delta_{kj}/3) + (Q_{ik} + \delta_{ik}/3)\nonumber\\ & (\lambda E_{kj} - \Omega_{kj}) - 2 \lambda (Q_{ij} + \delta_{ij}/3)(Q_{kl}\partial_k u_l),\nonumber\end{aligned}$$ where the strain rate tensor, $E_{ij} = (\partial_i u_j + \partial_j u_i)/2$ and the vorticity tensor, $\Omega_{ij} = (\partial_j u_i - \partial_i u_j)/2$. The alignment parameter is chosen as $\lambda=0.7$. The relaxation of $\mathbf{Q}$ is related to the molecular field $H_{ij} = -\delta \mathcal{F}/ \delta Q_{ij} + (\delta_{ij}/3) {\rm Tr} (\delta \mathcal{F}/ \delta Q_{kl})$ by the constant of proportionality $\Gamma$, the rotational diffusivity. The standard Landau-de Gennes free energy functional, within the single elastic constant, $K$, approximation, $$\begin{aligned} \mathcal{F} &= \frac{K}{2} (\partial_k Q_{ij})^2 \nonumber\\ &+ \frac{A}{2} Q_{ij} Q_{ji} + \frac{B}{3} Q_{ij} Q_{jk} Q_{ki} + \frac{C}{4} (Q_{ij} Q_{ji})^2 \nonumber\end{aligned}$$ is used to determine the molecular field, $\mathbf{H}$. 
Here, $A, B$ and $C$ are material constants. The velocity field obeys the equations of motion $$\begin{aligned} \nabla \cdot \mathbf{u} &= 0\,; & \rho (\partial_t + u_k \partial_k) u_i &= \partial_j \Pi_{ij}. \label{eqn:ns} \end{aligned}$$ where $\rho$ is the fluid density. The stress tensor $\mathbf{\Pi}$ incorporates two terms which appear in the hydrodynamic equations describing passive liquid crystals:

- the viscous stress, $\Pi_{ij}^{viscous} = 2 \mu E_{ij}$,

- the passive stress, $\Pi_{ij}^{passive}=-P\delta_{ij} + 2 \lambda(Q_{ij} + \delta_{ij}/3) (Q_{kl} H_{lk}) -\lambda H_{ik} (Q_{kj} + \delta_{kj}/3) - \lambda (Q_{ik} + \delta_{ik}/3) H_{kj} -\partial_i Q_{kl} \frac{\delta \mathcal{F}}{\delta \partial_j Q_{lk}} + Q_{ik}H_{kj} - H_{ik} Q_{kj}$

where $P$ is the pressure and $\mu$ is the viscosity. The activity is imparted by incorporating

- the active stress, $\Pi_{ij}^{active} = -\zeta Q_{ij}$

introduced in [@Sriram2002], where $\zeta$ is the coefficient controlling the strength of the activity. This term implies that any gradient in $\mathbf{Q}$ will produce a flow field, which is extensile for $\zeta>0$ and contractile for $\zeta<0$. More details of the model can be found in [@Berisbook; @DeGennesBook; @Davide2007; @Henrich2010]. The governing equations (\[eqn:lc\]) and (\[eqn:ns\]) form a coupled system which we solve using a hybrid lattice Boltzmann algorithm [@Davide2007; @Suzanne2011]. The parameters used are $\Gamma=0.34$, $A=0.0$, $B=-0.3$, $C=0.3$, $K=0.02$, $\mu=2/3$ and $\zeta=0.0125$ unless specified otherwise. These parameters are non-dimensionalised in lattice units where discrete space and time steps are chosen as unity. Depending on the material of interest (cytoskeletal filaments or bacterial suspensions) appropriate scales can be chosen to convert them to physical units [@Cates2008; @Henrich2010; @ourprl2013].
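From the quoted lattice parameters one can form the two dimensionless combinations that control the dynamics: the active length scale $\sqrt{K/\zeta}$, set by the competition between elasticity and activity, and the activity ratio $\zeta/(\mu\Gamma K)$ that appears in the analysis of the flow-driven instability. The sketch below simply evaluates both (illustrative only; lattice units):

```python
import math

# Simulation parameters quoted in the text (lattice units)
K, zeta, mu, Gamma = 0.02, 0.0125, 2.0 / 3.0, 0.34

# Active length scale from the competition between elasticity (K) and activity (zeta)
l_active = math.sqrt(K / zeta)

# Dimensionless ratio zeta / (mu * Gamma * K) controlling the flow-driven instability
activity_ratio = zeta / (mu * Gamma * K)
```

For these parameters the active length scale is of order one lattice spacing and the activity ratio is of order a few, so both activity-driven mechanisms are dynamically relevant at the default parameter set.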
[*The onset of active turbulence:*]{} Figure \[fig:startup\] illustrates how an active suspension with extensile stress undergoes the transition from a nematic to a turbulent state which generates and sustains vortical structures in the flow field. The nematic initial condition is shown in fig. \[fig:startupt1\]. Any small bend fluctuation in an ordered active nematic is reinforced by local shear, leading to a hydrodynamic instability [@Sriram2002; @Madan2007; @Scott2009]. The linear stability analysis predicts that long wavelength modes are unstable, and the waves of bend deformations shown in fig. \[fig:startupt2\] are a consequence of the dominance of the most unstable mode. The bends then sharpen (fig. \[fig:startupt3\]) [@Madan2007; @Scott2009] because the shear flow associated with gradients in the director field acts to further tilt the director to form approximately equispaced lines of kinks similar to the observations in [@Mahadevan2011]. We shall refer to these as walls. Similar structures are observed in passive liquid crystals where they form due to the imposed boundary conditions or the application of external forces [@Rey2002; @Lozar2005]. By contrast, in active matter the wall formation is internally driven by the flow field generated by the active stress. These structures are similar to those formed by an active nematic confined to a bounded channel where, as the activity is increased, there is a spontaneous symmetry breaking to a state where a kinked director field produces a net flow [@Joanny2005; @Davide2007]. However, there the solid boundaries impose no-slip boundary conditions or fix the anchoring of the director field. This is not the case for our fully 2D system and consequently no steady state flow or director field is obtained. Instead, the director continues to tilt until pairs of oppositely charged defects are created as shown in fig. \[fig:startupt4\].
This spontaneous formation of defects occurs at the points where the bend is strongest due to local noise or interactions with neighbouring walls. The defects are strong sources of vorticity and move in their own flow field or in the ambient flow [@Julia2002; @Giomi2013; @Pismen2013]. This further destroys the striated structure in the director field, which soon looks disordered as shown in fig. \[fig:startupt5\] and on a larger scale in fig. \[fig:fullfledge\]. The system reaches a dynamic steady state where walls are continually formed and then decay through defect formation. The defect number also saturates because defects are advected by the flow field and annihilate if they encounter a defect of the opposite charge. Thus two distinct processes contribute to the dynamics of active turbulence in extensile nematics: wall formation, and defect formation and annihilation, which acts to remove the walls. We next analyse each of these in more detail. \ [*Wall formation*]{}: The wall formation can be thought of as a direct competition between activity trying to establish a flow field by distorting the director field and elasticity trying to prevent this deformation. This competition leads to a dominant length scale $\sqrt{K/\zeta}$ for the instability [@Sriram2010]. We study the wall formation process in active nematics as a function of the parameters $K$ and $\zeta$. The results are illustrated in fig. \[fig:startupzetaK\]. In each of these simulations, the director field was recorded just before the onset of defect pair generation. The extent of the director deformation at any point was quantified by calculating the total of the angular deviation of the director at that point from its neighbours. This quantity will be zero in perfectly ordered regions, while it will be nonzero and large in the regions of bends. It is thus possible to identify walls quantitatively as shown in fig. \[fig:startupzetaK\].
Note that it is visually apparent that the distance between the walls decreases as $\zeta$ increases and as $K$ decreases, as expected. To obtain a quantitative measure of the dominant length scale we take a 1D Fourier transform in the direction normal to the walls. The resulting length scale is plotted as a function of $\zeta$ and $K$ in figs. \[fig:startupzetascale\] and \[fig:startupKscale\] confirming a characteristic length scale separating the walls $\sim \sqrt{K/\zeta}$. This is strong evidence that wall formation in active nematics is indeed a consequence of the inherent hydrodynamic instability. \ \ [*Defect formation and annihilation:*]{} Domain walls are known to be unstable in passive liquid crystals, giving rise to a pair of defects. However, the literature indicates that this happens due to a difference in elastic constants and that the known mechanisms operate in three dimensions [@Rey2002; @Lozar2005]. In active nematics, a local perturbation of the wall can nucleate pairs of oppositely charged $\pm 1/2$ defects. Such fluctuations are enhanced by the vortical flow field. Before we analyse the details of defect formation mechanisms, we describe an overall picture of the defect dynamics. Snapshots of the nucleation process, and the subsequent defect motion leading to annihilation events, are shown in fig. \[fig:defformn\]. At time $t_1$, three different walls, labelled as [*w1*]{}, [*w2*]{} and [*w3*]{}, can be identified. Nucleation of a pair of defects [*m1-p1*]{} from [*w1*]{} occurs at $t=t_1$. Two other pairs, [*m2-p2*]{} and [*m3-p3*]{}, are formed in wall [*w2*]{} at time $t=t_2$. Arrows in the figure show the trajectories of the defects. When the defects form, they have a propensity to move along the walls, driven by elastic forces and flow. This causes the walls to ‘unzip’, i.e., to relax back to the nematic state.
For example, [*w1*]{} is unzipped by the defect pair [*m1-p1*]{} during $t_1 \le t \le t_2$ and [*w2*]{} by both [*m2-p2*]{} and [*m3-p3*]{} during $t_2 \le t \le t_3$. A defect [*p5*]{} unzipping wall [*w3*]{} for $t \ge t_1$ is also visible. When a defect encounters an oppositely charged defect, they annihilate each other. For example, [*m2*]{} and [*p3*]{}, [*m3*]{} and [*p0*]{}, [*m6*]{} and [*p7*]{} meet at $t \approx t_3$, resulting in the annihilation of each of these pairs. Formation of more than one closely spaced pair in a wall (e.g. [*m2-p2*]{} and [*m3-p3*]{}) tends to result in fast annihilation as the defects move easily along the walls (e.g. [*m2*]{} with [*p3*]{}), and the wall disappears. As the activity increases, the defects are more likely to be driven away from the wall in which they have formed by the ambient flow. This tends to lead to longer times between creation and finding an oppositely charged defect with the correct orientation for annihilation (e.g. [*m6*]{} and [*p7*]{}). Indeed, in general, the route to defect annihilation depends on the local order and flows. For example, [*m4*]{} is weakly associated with two different walls [*w1*]{} and [*w3*]{} at $t=t_1$ and hence there is the possibility of [*m4*]{} annihilating with [*p5*]{} or [*p1*]{}. As time proceeds, [*m4*]{}–[*p5*]{} annihilation occurs ($t>t_4$, not shown) while [*p1*]{} moves away from the original wall. One might ask why oppositely charged defects do not annihilate each other immediately after they form [@Giomi2013]. This is because creation and annihilation occur for different orientations of defects as illustrated in figs. \[fig:defcre\]-\[fig:defann\]. Fig. \[fig:defcre\] shows that, as the two defects move apart, the length of wall between them regains nematic order (horizontal alignment in fig. \[fig:defcre\]). Thus the process of defect pair creation is a very natural way to relieve the bending energy of the wall. Similarly fig.
\[fig:defann\] shows that, as defects annihilate, the stretch of wall between them is removed. Realigned nematic regions then undergo further hydrodynamic instabilities and the system reaches a dynamical steady state. Similar defect creation and annihilation events are observed in experiments on microtubule bundles driven by kinesin molecular motors [@Dogic2012]. [*Flow-driven defect formation*]{}: We next consider in more detail how the topological defects are created. As in passive systems, their formation is driven by minimisation of elastic free energy, but in active systems this is not the only driving force. Activity, which generates flow, can also play an important role. To illustrate this we consider, as a model system, a channel in which gradients occur only in the direction normal to the boundary. The spontaneous symmetry breaking which results in a unidirectional steady state flow in these channels above a threshold activity is well established [@Joanny2005; @Davide2007]. For homeotropic anchoring of the director field on the boundaries, the instability is to a state where the director field adopts a bend configuration similar to one of the active walls in fig. \[fig:startupzetaK\]. As activity increases, the shear increases, and the bend configuration becomes unstable to a splay configuration as shown in fig. \[fig:defect1d\]. In figs. \[fig:chanactq\] and \[fig:chanactu\] we show the corresponding changes in the director and velocity profiles across the channel. For this simple model the bend to splay transition occurs uniformly throughout the length of the channel. Returning to the full 2D geometry, and unbounded walls, it is apparent that a similar bend to splay transition occurs at the point of nucleation of a defect pair in the wall. The difference is that, because the 1D symmetry is lost due to, for example, local flows, the bend to splay transition takes place at one point.
The splay region then expands in both directions along the wall, corresponding to the defects moving apart. Parameters influencing the instability of the bend configurations can be identified from an analysis of eqs. (\[eqn:lc\]) and (\[eqn:ns\]) assuming a 1D geometry and an order parameter of constant magnitude. With these simplifying assumptions eq. (\[eqn:ns\]) reduces, in the steady state, to $$\begin{aligned} 0 = \frac{1}{2} \frac{du}{dy} \left(\lambda_1 \cos{2\theta} -1 \right) + \Gamma K \frac{d^2\theta}{dy^2} \label{eqn:leslieq}\end{aligned}$$ where $y$ is the transverse direction and $\theta$ is the director angle to the unidirectional flow field, $u$. $\lambda_1 = (3q+4)\lambda/(9q)$ is related to the flow alignment parameter $\lambda$ of eq. (\[eqn:lc\]) and $q$ is the magnitude of the nematic order, the largest eigenvalue of $\mathbf{Q}$ [@Davide2007]. If the active stress balances the viscous stress then, from eq. (\[eqn:ns\]), we also obtain $$\begin{aligned} \frac{du}{dy} = \frac{\zeta}{2 \mu} \sin{2\theta} . \label{eqn:lesliens}\end{aligned}$$ For homeotropic boundary conditions, one obtains the director profiles shown in fig. \[fig:chanactq\] [@Davide2007]. However, $\theta_c$, the value of $\theta$ at the centre of the channel, may be $\theta_c=0^{\circ}$ or $\theta_c=90^{\circ}$, corresponding to a bend configuration or a splay configuration respectively, depending upon the activity. If the director profile is slightly perturbed around the steady state, the variations $\hat{\theta}$ of $\theta_c$ at the centre of the channel evolve as $$\begin{aligned} \frac{d\hat{\theta}}{dt} &= -\frac{ \lambda_1 \zeta \hat{\theta}}{2 \mu} \sin^2{2\theta_c} \nonumber\\ &+ \frac{\zeta}{2\mu} \hat{\theta} \cos{2\theta_c} \left(\lambda_1 \cos{2\theta_c} -1 \right) + \Gamma K \frac{d^2\hat{\theta}}{dy^2} . \label{eqn:thetacap}\end{aligned}$$ The growth of $\hat{\theta}$ depends upon the sign of the terms on the right hand side of eq. (\[eqn:thetacap\]).
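The flow-coupling part of eq. (\[eqn:thetacap\]) (the first two terms, with diffusion neglected) can be evaluated numerically at the two fixed points $\theta_c=0^{\circ}$ and $\theta_c=90^{\circ}$; the sketch below uses the parameters quoted in the text, with $q=0.5$ an assumed value for the order-parameter magnitude:

```python
import math

def lambda1(lam, q):
    # lambda_1 = (3q + 4) * lambda / (9q), q = magnitude of the nematic order
    return (3 * q + 4) * lam / (9 * q)

def flow_growth_coeff(theta_c, zeta, mu, lam, q):
    """Coefficient of theta_hat from the flow-coupling terms of eq. (thetacap),
    with the diffusive Gamma*K term neglected."""
    l1 = lambda1(lam, q)
    c2, s2 = math.cos(2 * theta_c), math.sin(2 * theta_c)
    return -(l1 * zeta / (2 * mu)) * s2 ** 2 + (zeta / (2 * mu)) * c2 * (l1 * c2 - 1)

# parameters from the text; q = 0.5 is an illustrative assumption
zeta, mu, lam, q = 0.0125, 2 / 3, 0.7, 0.5
bend = flow_growth_coeff(0.0, zeta, mu, lam, q)           # theta_c = 0 (bend)
splay = flow_growth_coeff(math.pi / 2, zeta, mu, lam, q)  # theta_c = 90 deg (splay)
```

For a flow-aligning parameter with $\lambda_1<1$, the coefficient at $\theta_c\approx 0^{\circ}$ is negative while the one at $\theta_c\approx 90^{\circ}$ is positive for any extensile $\zeta>0$, consistent with the bend to splay transition described in the text.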
If $({\zeta}/{\mu \Gamma K})$ is sufficiently large, the coupling between the flow and the director field (the first two terms) will dominate over the diffusive mechanism. For $\theta_c \approx 0^{\circ}$ and $\theta_c \approx 90^{\circ}$ these terms reduce to $\frac{\zeta}{2\mu}\hat{\theta}(\lambda_1 - 1)$ and $\frac{\zeta}{2\mu}\hat{\theta}(\lambda_1 + 1)$ respectively. The second expression is always positive and gives rise to exponentially growing modes of $\hat{\theta}$. Thus the coupling of the director field to the flow will tend to drive a bend to splay transition at high activities, in agreement with the simulations. [*Elastic defect formation*]{}: Flow is, however, not essential to create defects. To demonstrate that defect pair formation can be driven purely by elastic energy we solved eq. (\[eqn:lc\]) with $\mathbf{u}=0$. Using as initial condition a wall in a nematic domain (fig. \[fig:2ddef0\]), the wall either relaxed continuously to a vertically aligned nematic (fig. \[fig:2dsimp\]) or a pair of defects was created (fig. \[fig:2ddef1\]) leading to a horizontally aligned nematic, depending upon $K$. The defect formation is faster when $K$ increases, as illustrated in fig. \[fig:defKdep\]. This figure also shows the balance between the elasticity-driven and flow-driven defect formation processes. For example, for the particular configuration and parameters used, there is a critical $K_c$ below which no defects form for $\zeta=0$. However, non-zero activity allows defects to form even below $K_c$. For $K \gg K_c$ the time to defect formation is unaffected by the active flow. \ [*Contractile active nematics*]{}: We now comment briefly on contractile systems ($\zeta<0$). Here the dominant hydrodynamic instability is to splay deformations. As a result, details of both the initial patterning and of the formation/annihilation of topological defects differ from those observed in the extensile case. Starting from an ordered configuration as in fig. \[fig:startupt1\], fig. \[fig:startupcon\] illustrates the instability of a contractile active nematic. Instead of the bands formed in the extensile case, two dimensional nematic regions of varying orientations appear (figs. \[fig:startupcont2\]-\[fig:startupcont3\]). The borders of these regions are marked by large splay deformations. The splay is connected to neighbouring nematic regions through a bend distortion. The bends become more pronounced with time, resulting in the formation of pairs of defects in the border regions as shown in figs. \[fig:startupcont3\]-\[fig:startupcont4\]. The director field immediately after the creation of a defect pair in a contractile nematic is shown in fig. \[fig:condefects\]; compare fig. \[fig:defcre\] for the extensile case. [*Summary:*]{} To conclude, the nematic regions in an active system are hydrodynamically unstable. This results in the formation of walls, local lines of high distortion. The elastic energy stored in the walls is released with the creation and annihilation of pairs of defects. Flow both helps to localise the walls and to aid the formation of defects. Defects preferentially move along the walls, but can escape from them at higher activities, and when oppositely charged defects meet they annihilate. Both creation and annihilation events remove walls and help to reinstate regions of nematic order which then undergo further hydrodynamic instabilities. The time scale for the instability is usually much shorter than the typical time scale of defect dynamics, suggesting that it is the creation, motion and annihilation of the defects that primarily control the structure of the director and flow fields. Details of the defect formation differ in contractile suspensions, and further investigations are required to understand the implications of this for the properties of the fully-developed active turbulence. [*Acknowledgements:*]{} We thank Z. Dogic, D. Chen and D. Pushkin for helpful discussions. This work was supported by the E.R.C.
Advanced Grant MiCE.
--- abstract: 'As an application of the Combinatorial Nullstellensatz, we give a short polynomial proof of the $q$-analogue of Dyson’s conjecture formulated by Andrews and first proved by Zeilberger and Bressoud.' address: - 'School of Mathematics and Physics, The University of Queensland, Brisbane, QLD 4072, Australia' - 'Alfréd Rényi Institute of Mathematics, Reáltanoda utca 13–15, Budapest, 1053 Hungary' author: - Gyula Károlyi - Zoltán Lóránt Nagy title: 'A simple proof of the Zeilberger–Bressoud $q$-Dyson theorem' --- [^1] Introduction ============ Let $x_1,\ldots,x_n$ denote independent variables, each associated with a nonnegative integer $a_i$. Motivated by a problem in statistical physics Dyson [@Dyson] in 1962 formulated the hypothesis that the constant term of the Laurent polynomial $$\prod_{1\leq i \neq j \leq n}\left(1-\frac{x_i}{x_j}\right)^{a_i}$$ is equal to the multinomial coefficient $(a_1+a_2+\dots +a_n)!/(a_1!a_2! \dots a_n!)$. Independently Gunson \[unpublished\] and Wilson [@Wilson] confirmed the statement in the same year, then Good gave an elegant proof [@Good] using Lagrange interpolation. Let $q$ denote yet another independent variable. In 1975 Andrews [@Andrews] suggested the following $q$-analogue of Dyson’s conjecture: The constant term of the Laurent polynomial $$f_q(\mathbf{x}):= f_q(x_1, x_2, \dots, x_n)=\prod_{1\leq i < j \leq n} \left(\frac{x_i}{x_j}\right)_{a_i}\left(\frac{qx_j}{x_i}\right)_{a_j} \in \mathbb{Q}(q)[\mathbf{x},\mathbf{x}^{-1}]$$ must be $$\frac{\left(q\right)_{a_1+a_2+\dots +a_n}} {\left(q\right)_{a_1}\left(q\right)_{a_2}\dots\left(q\right)_{a_n}},$$ where $\big(t\big)_{k}= (1-t)(1-tq)\dots(1-tq^{k-1})$ with $\big(t\big)_{0}$ defined to be $1$. Specializing at $q=1$, Andrews’ conjecture gives back that of Dyson. Despite several attempts [@Kadell; @Stanley; @Stanley2] the problem remained unsolved until 1985, when Zeilberger and Bressoud [@Zeilberger2] found a combinatorial proof. 
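Though not part of the paper, the identity (a theorem since Zeilberger and Bressoud's proof) is easy to test for small parameters by brute-force expansion of the Laurent polynomial $f_q$ at a fixed rational value of $q$. The following sketch is ours; all function names are our own choices:

```python
from fractions import Fraction
from itertools import product
from math import prod

def q_dyson_constant_term(a, q):
    """Constant term (in the x_i) of Andrews' product, with q a fixed rational.

    Laurent polynomials are stored as dicts: exponent tuple -> coefficient."""
    n = len(a)
    poly = {(0,) * n: Fraction(1)}

    def times_factor(poly, i, j, power):
        # multiply poly by (1 - q^power * x_i / x_j)
        out = {}
        for e, c in poly.items():
            out[e] = out.get(e, Fraction(0)) + c
            e2 = list(e); e2[i] += 1; e2[j] -= 1; e2 = tuple(e2)
            out[e2] = out.get(e2, Fraction(0)) - c * q ** power
        return out

    for i, j in product(range(n), repeat=2):
        if i < j:
            for t in range(a[i]):            # (x_i/x_j)_{a_i}
                poly = times_factor(poly, i, j, t)
            for t in range(1, a[j] + 1):     # (q x_j/x_i)_{a_j}
                poly = times_factor(poly, j, i, t)
    return poly.get((0,) * n, Fraction(0))

def q_poch(q, k):
    """(q)_k = (1-q)(1-q^2)...(1-q^k)."""
    return prod(Fraction(1) - q ** m for m in range(1, k + 1))
```

For instance, for $n=2$ and $a=(1,1)$ the constant term of $(1-x_1/x_2)(1-qx_2/x_1)$ is $1+q$, in agreement with $(q)_2/(q)_1^2$; setting $q=1$ recovers the multinomial coefficient of Dyson's original conjecture.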
Shorter proofs for the equal parameter case $a_1=a_2=\ldots=a_n$ are due to Habsieger [@Habsieger], Kadell [@Kadell2] and Stembridge [@Stembridge]; they cover the special case $A_{n-1}$ of a problem of Macdonald [@Macdonald] concerning root systems, which was solved in full generality by Cherednik [@Cherednik]. A shorter proof of the Zeilberger–Bressoud theorem, manipulating formal Laurent series, was given by Gessel and Xin [@Gessel]. Following up a recent idea of Karasev and Petrov we present a very short combinatorial proof using polynomial techniques. We find that their proof of the Dyson conjecture in [@Karasev] naturally extends for Andrews’ $q$-Dyson conjecture. We note that built on the same basic principles but with more sophisticated details it is possible to prove a whole family of constant term identities for Laurent polynomials, including the Bressoud–Goulden theorems [@Bressoud], conjectures of Kadell [@Kadell3; @Kadell4], the $q$-Morris constant term identity [@Habsieger; @Kadell2; @Morris; @Zeilberger] and its far reaching generalizations conjectured by Forrester [@Baker; @Forrester]; see [@Karolyi; @Karolyi2; @Karolyi3]. We decided to publish this proof separately because of its sheer simplicity. The proof ========= Note that if $a_i=0$, then we may omit all factors that include the variable $x_i$ without affecting the constant term of $f_q$. Accordingly, we may assume that each $a_i$ is a positive integer. Consider the homogeneous polynomial $$F(x_1, x_2, \dots, x_n)= \prod_{1\leq i < j \leq n} {\left( \prod_{t=0}^{a_i-1}{(x_j-x_iq^t)}\cdot \prod_{t=1}^{a_j}{(x_i-x_jq^t)} \right)}\in \mathbb{Q}(q)[\mathbf{x}].$$ Clearly, the constant term of $f_q(\mathbf{x})$ is equal to the coefficient of $\prod_i{x_i^{\sigma-a_i}}$ in $F(\mathbf{x})$, where $\sigma=\sum_ia_i$. 
To express this coefficient we apply the following effective version of the Combinatorial Nullstellensatz [@Alon] observed independently by Lasoń [@Lason] and by Karasev and Petrov [@Karasev]. A sketch of the proof is included for the sake of completeness. \[interpol\] Let $\mathbb{F}$ be an arbitrary field and $F\in \mathbb{F}[x_1, x_2, \dots, x_n]$ a polynomial of degree $\deg(F)\leq d_1+d_2+\dots+d_n$. For arbitrary subsets $A_1, A_2, \dots, A_n$ of $\mathbb{F}$ with $|A_i|=d_i+1$, the coefficient of $\prod x_i^{d_i}$ in $F$ is $$\sum_{c_1\in A_1} \sum_{c_2\in A_2} \dots \sum_{c_n\in A_n} \frac{F(c_1, c_2, \dots, c_n)}{\phi_1'(c_1)\phi_2'(c_2)\dots \phi_n'(c_n)},$$ where $\phi_i(z)= \prod_{a\in A_i}(z-a)$. Construct a sequence of polynomials $F_0:=F, F_1,\ldots,F_n\in \mathbb{F}[\mathbf{x}]$ recursively as follows. For $i=1,\ldots,n$, let $F_i=F_i(\mathbf{x})$ denote the remainder obtained after dividing $F_{i-1}(\mathbf{x})$ by $\phi_i(x_i)$ over the ring $\mathbb{F}[x_1,\ldots,x_{i-1},x_{i+1},\dots,x_n]$. This process does not affect the coefficient of $\prod x_i^{d_i}$. The polynomial $F_n$ satisfies $F_n(\mathbf{c})=F(\mathbf{c})$ for all $\mathbf{c}\in A_1\times \dots \times A_n$ and its degree in $x_i$ is at most $d_i$ for every $i$. The unique polynomial with that property is expressed in the form $$F_n(\mathbf{x})=\sum_{\mathbf{c}\in A_1\times\dots\times A_n}F(\mathbf{c}) \prod_{i=1}^n\prod_{\substack{\gamma\in A_i\\\gamma\ne c_i}}\frac{x_i-\gamma}{c_i-\gamma}$$ by the Lagrange interpolation formula, hence the result. The idea is to apply this lemma taking $\mathbb{F}=\mathbb{Q}(q)$ with a suitable choice of the sets $A_i$ such that $F(\mathbf{c})=0$ for all but one element $\mathbf{c}\in A_1\times\dots\times A_n$. Put $A_i=\{1, q, \dots, q^{\sigma-a_i}\}$, then $|A_i|=\sigma-a_i+1$; and introduce $\sigma_i=\sum_{j=1}^{i-1}{a_j}$. Thus, $\sigma_1=0$ and $\sigma_{n+1}=\sigma$. 
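Before continuing, note that the lemma itself is easy to test numerically. The sketch below (ours, not from the paper, and over $\mathbb{Q}$ rather than $\mathbb{Q}(q)$) evaluates the interpolation sum for a polynomial given as a callable:

```python
from fractions import Fraction
from itertools import product
from math import prod

def top_coefficient(F, degs, sets):
    """Coefficient of prod x_i^{d_i} in F via the interpolation formula
    of the lemma; requires deg F <= sum(degs) and len(sets[i]) == degs[i] + 1."""
    def phi_prime(A, c):
        # derivative of phi(z) = prod_{a in A} (z - a), evaluated at c in A
        return prod(c - a for a in A if a != c)
    return sum(
        Fraction(F(*c), prod(phi_prime(A, ci) for A, ci in zip(sets, c)))
        for c in product(*sets)
    )
```

For example, the coefficient of $xy$ in $(x+y)^2$ computed from the sets $A_1=A_2=\{0,1\}$ comes out as $0-1-1+4=2$, as it should.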
For $\mathbf{c}\in A_1\times\dots\times A_n$ we have $F(\mathbf{c})=0$, unless $c_i=q^{\sigma_i}$ for all $i$. Suppose that $F(\mathbf{c})\ne 0$ for the numbers $c_i=q^{\alpha_i}\in A_i$. Here $\alpha_i$ is an integer satisfying $0\le \alpha_i\le \sigma-a_i$. Then for each pair $j>i$, either $\alpha_j-\alpha_i\ge a_i$, or $\alpha_i-\alpha_j\ge a_j+1$. In other words, $\alpha_j-\alpha_i\ge a_i$ holds for every pair $j\ne i$, with strict inequality if $j<i$. In particular, all of the $\alpha_i$ are distinct. Consider the unique permutation $\pi$ satisfying $\alpha_{\pi(1)}< \alpha_{\pi(2)}<\dots< \alpha_{\pi(n)}$. Adding up the inequalities $\alpha_{\pi(i+1)}-\alpha_{\pi(i)}\ge a_{\pi(i)}$ for $i=1,2\ldots,n-1$ we obtain $$\alpha_{\pi(n)}-\alpha_{\pi(1)}\ge \sum_{i=1}^{n-1}a_{\pi(i)}=\sigma-a_{\pi(n)}.$$ Given that $\alpha_{\pi(1)}\ge 0$ and $\alpha_{\pi(n)}\le \sigma-a_{\pi(n)}$, strict inequality is excluded in all of these inequalities. It follows that $\pi$ must be the identity permutation and $\alpha_i=\alpha_{\pi(i)}=\sum_{j=1}^{i-1}a_{\pi(j)}=\sigma_i$ must hold for every $i=1,2,\dots,n$. This proves the claim. This way finding the constant term of $f_q$ is reduced to the evaluation of $$\frac{F(q^{\sigma_1}, q^{\sigma_2}, \dots, q^{\sigma_n})} {\phi_1'(q^{\sigma_1})\phi_2'(q^{\sigma_2})\dots \phi_n'(q^{\sigma_n})},$$ where $\phi_i(z)=(z-1)(z-q)\dots(z-q^{\sigma-a_i})$. 
Here $$\begin{aligned} \phi_i'(q^{\sigma_i})&= \prod_{t=0}^{\sigma_i-1}{(q^{\sigma_i}-q^t)}\cdot \prod_{t=\sigma_i+1}^{\sigma-a_i}{(q^{\sigma_i}-q^t)}\\ &=\prod_{t=0}^{\sigma_i-1}{q^t(q^{\sigma_i-t}-1)}\cdot \prod_{t=1}^{\sigma-\sigma_{i+1}}{q^{\sigma_i}(1-q^t)}\\ &=(-1)^{\sigma_i}q^{\tau_i} \left(q\right)_{\sigma_i}\left(q\right)_{\sigma-\sigma_{i+1}}\end{aligned}$$ with $\tau_i=\binom{\sigma_i}{2}+\sigma_i(\sigma-\sigma_{i+1})$, whereas $$\begin{aligned} {F(q^{\sigma_1}, q^{\sigma_2}, \dots, q^{\sigma_n})}&= \prod_{1\leq i < j \leq n}\left( \prod_{t=0}^{a_i-1}{q^{\sigma_i+t}(q^{\sigma_j-\sigma_i-t}-1)} \cdot \prod_{t=1}^{a_j}{q^{\sigma_i}(1-q^{\sigma_j-\sigma_i+t})} \right)\\ &=(-1)^uq^v\prod_{1\leq i < j \leq n}\left( \frac{\left(q\right)_{\sigma_j-\sigma_i}} {\left(q\right)_{\sigma_j-\sigma_{i+1}}} \cdot \frac{\left(q\right)_{\sigma_{j+1}-\sigma_i}} {\left(q\right)_{\sigma_j-\sigma_{i}}}\right)\\ &=(-1)^uq^v \prod_{i=1}^n \frac{\left(q\right)_{\sigma_i}\left(q\right)_{\sigma-\sigma_i}} {\left(q\right)_{\sigma_{i+1}-\sigma_i}}\end{aligned}$$ with $u=\sum_i(n-i)a_i$ and $v=\sum_i\left((n-i)a_i\sigma_i+ (n-i)\binom{a_i}{2}+\sigma_i(\sigma-\sigma_{i+1})\right)$. In view of the simple identity $\sum_i(n-i)a_i=\sum_i\sigma_i$, we have $u=\sum_i\sigma_i$, thus the powers of $-1$ cancel out. The same happens with the powers of $q$ due to the following observation, which implies $v=\sum_i\tau_i$. $\sum_i{(n-i)\left(a_i\sigma_i+\binom{a_i}{2}\right)}= \sum_i\binom{\sigma_i}{2}.$ We proceed by a routine induction on $n$. When $n=0$, both expressions are 0, and one readily checks the relation $$\sum_{i=1}^n\left(a_i\sigma_i+\binom{a_i}{2}\right) =\binom{\sigma_{n+1}}{2},$$ which completes the induction.
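The observation $\sum_i(n-i)\big(a_i\sigma_i+\binom{a_i}{2}\big)=\sum_i\binom{\sigma_i}{2}$ used above is also easy to confirm numerically; the following checker is our own sketch, not from the paper:

```python
from itertools import accumulate
from math import comb

def identity_holds(a):
    """Check sum_i (n-i)*(a_i*sigma_i + C(a_i,2)) == sum_i C(sigma_i,2),
    where sigma_i = a_1 + ... + a_{i-1} (so sigma_1 = 0)."""
    n = len(a)
    sigma = [0] + list(accumulate(a))  # sigma[i-1] == sigma_i for i = 1..n
    lhs = sum((n - i) * (a[i - 1] * sigma[i - 1] + comb(a[i - 1], 2))
              for i in range(1, n + 1))
    rhs = sum(comb(sigma[i - 1], 2) for i in range(1, n + 1))
    return lhs == rhs
```

For $a=(2,3)$ both sides equal $1$, and for $a=(2,3,1,4)$ both sides equal $26$, matching the inductive proof.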
Putting everything together we obtain that the constant term of $f_q$ is indeed $$\begin{aligned} \frac{F(q^{\sigma_1}, q^{\sigma_2}, \dots, q^{\sigma_n})} {\phi_1'(q^{\sigma_1})\phi_2'(q^{\sigma_2})\dots \phi_n'(q^{\sigma_n})}&= \prod_{i=1}^n \frac{\left(q\right)_{\sigma_i}\left(q\right)_{\sigma-\sigma_i}} {\left(q\right)_{\sigma_i}\left(q\right)_{\sigma-\sigma_{i+1}} \left(q\right)_{\sigma_{i+1}-\sigma_i}}\\ &=\frac{\left(q\right)_{\sigma}} {\displaystyle{\prod_{i=1}^n\left(q\right)_{\sigma_{i+1}-\sigma_i}}}\\ &=\frac{\left(q\right)_{a_1+a_2+\dots +a_n}} {\left(q\right)_{a_1}\left(q\right)_{a_2}\dots\left(q\right)_{a_n}}.\end{aligned}$$ [10]{} N. Alon, Combinatorial Nullstellensatz, Combin. Probab. Comput. **8** (1999) 7–29. G. E. Andrews, Problems and prospects for basic hypergeometric functions, in: Theory and Application of Special Functions, R. A. Askey, ed., Academic Press, New York (1975), pp. 191–224. T. H. Baker and P. J. Forrester, Generalizations of the $q$-Morris constant term identity, J. Combin. Th. A **81** (1998) 69–87. D. M. Bressoud and I. P. Goulden, Constant term identities extending the $q$-Dyson theorem, Trans. Amer. Math. Soc. **291** (1985) 203–228. I. Cherednik, Double affine Hecke algebras and Macdonald’s conjectures, Annals of Math. **141** (1995) 191–216. F. J. Dyson, Statistical theory of energy levels of complex systems, J. Math. Phys. **3** (1962) 140–156. P. J. Forrester, Normalization of the wavefunction for the Calogero–Sutherland model with internal degrees of freedom, Int. J. Mod. Phys. B **9** (1995) 1243–1261. I. M. Gessel and G. Xin, A short proof of the Zeilberger–Bressoud $q$-Dyson theorem, Proc. Amer. Math. Soc. **134** (2006) 2179–2187. I. J. Good, Short proof of a conjecture by Dyson, J. Math. Phys. **11** (1970) 1884. L. Habsieger, Une $q$-intégrale de Selberg–Askey, SIAM J. Math. Anal. **19** (1988) 1475–1489. K. W. J. Kadell, A proof of Andrews’s $q$-Dyson conjecture for $n=4$, Trans. Amer. Math. Soc.
**290** (1985) 127–144. K. W. J. Kadell, A proof of Askey’s conjectured $q$-analogue of Selberg’s integral and a conjecture of Morris, SIAM J. Math. Anal. **19** (1988) 969–986. K. W. J. Kadell, Aomoto’s machine and the Dyson constant term identity, Methods Appl. Anal. **5** (1998) 335–350. K. W. J. Kadell, A Dyson constant term orthogonality relation, J. Combin. Th. A **89** (2000) 291–297. R. N. Karasev and F. V. Petrov, Partitions of nonzero elements of a finite field into pairs, Israel J. Math., to appear. Gy. Károlyi, Note on a problem of Kadell, manuscript. Gy. Károlyi, A. Lascoux, and S. O. Warnaar, Constant term identities and Poincaré polynomials, submitted. Gy. Károlyi and Z. L. Nagy, Proof of a $q$-Aomoto integral and a conjecture of Forrester, manuscript. M. Lasoń, A generalization of Combinatorial Nullstellensatz, Electron. J. Combin. **17** (2010) \#N32, 6 pages. I. G. Macdonald, Some conjectures for root systems, SIAM J. Math. Anal. **13** (1982) 988–1007. W. G. Morris, Constant Term Identities for Finite and Affine Root Systems, Ph.D. Thesis, University of Wisconsin, Madison, 1982. R. P. Stanley, The $q$-Dyson conjecture, generalized exponents, and the internal product of Schur functions, Combinatorics and Algebra (Boulder, 1983), Contemp. Math. **34**, Amer. Math. Soc., Providence, 1984, pp. 81–94. R. P. Stanley, The stable behavior of some characters of $\mathrm{SL}(n,\mathbb{C})$, Lin. Multilin. Alg. **16** (1984) 3–27. J. R. Stembridge, A short proof of Macdonald’s conjecture for the root systems of type $A$, Proc. Amer. Math. Soc. **102** (1988) 777–786. K. G. Wilson, Proof of a conjecture by Dyson, J. Math. Phys. **3** (1962) 1040–1043. D. Zeilberger, A Stembridge–Stanton style elementary proof of the Habsieger–Kadell $q$-Morris identity, Discrete Math. **79** (1989) 313–322. D. Zeilberger and D. Bressoud, A proof of Andrews’s $q$-Dyson conjecture, Discrete Math. **54** (1985) 201–224. 
[^1]: This research was supported by the Australian Research Council, by ERC Advanced Research Grant No. 267165, and by Hungarian National Scientific Research Funds (OTKA) Grants 67676 and 81310.
--- author: - 'D. Kaledin' title: 'Motivic structures in non-commutative geometry' --- Generalities on mixed motives. {#mm.sec} ============================== The conjectural category ${\operatorname{{\mathcal M}{\mathcal M}}}$ of [*mixed motives*]{}, as described by Deligne, Beilinson and others, unifies and connects various cohomology theories which appear in modern algebraic geometry. Recall that one expects ${\operatorname{{\mathcal M}{\mathcal M}}}$ to be a symmetric tensor abelian category with a distinguished invertible object ${{\mathbb Z}}(1)$ called the [*Tate motive*]{}. One expects that for any smooth projective algebraic variety $X$ defined over ${{\mathbb Q}}$, there exists a functorial [*motivic cohomology complex*]{} $H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(X) \in {{\cal D}}^b({\operatorname{{\mathcal M}{\mathcal M}}})$ with values in the derived category ${{\cal D}}^b({\operatorname{{\mathcal M}{\mathcal M}}})$, whose cohomology groups $$H^i(X) \in {\operatorname{{\mathcal M}{\mathcal M}}}$$ are called [*motivic cohomology groups*]{}. If $X$ is the projective space ${{\mathbb P}}^n$, $n \geq 1$, then one expects to have $$H^{2i}({{\mathbb P}}^n) \cong {{\mathbb Z}}(i)$$ for $0 \leq i \leq n$, and $0$ otherwise. For a general $X$ and any integer $j$, one defines the [*absolute cohomology complex*]{} by $$H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{abs}(X,{{\mathbb Z}}(j)) = {\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{{\operatorname{{\mathcal M}{\mathcal M}}}}({{\mathbb Z}}(-j),H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(X)),$$ with its cohomology groups $H^i_{abs}(X,{{\mathbb Z}}(j))$ known as [*absolute cohomology groups*]{}.
It is expected that the absolute cohomology groups are related to the algebraic $K$-theory groups $K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(X)$ by means of a functorial [*regulator map*]{} $$\label{reg.abs} r:K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(X) \to \bigoplus_j H^{2j-{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}}_{abs}(X,{{\mathbb Z}}(j)),$$ and it is expected that the regulator map is “not far from an isomorphism” (for example, it ought to be an isomorphism modulo torsion). The above picture, with its many refinements which we will not need, is, unfortunately, still conjectural. In practice, one has to be content with [*categories of realizations*]{}. These follow the same general pattern, but the hypothetical category ${\operatorname{{\mathcal M}{\mathcal M}}}$ is replaced with a known category ${\operatorname{\sf Real}}$ whose definition axiomatizes the features of a particular known cohomology theory. The prototype example is that of $l$-adic cohomology. Recall that for any algebraic variety $X/{{\mathbb Q}}$, its $l$-adic étale cohomology groups $$H_{et}^i(X,{{\mathbb Q}}_l)$$ are ${{\mathbb Q}}_l$-vector spaces equipped with an additional structure of an [*$l$-adic representation*]{} of the Galois group ${\operatorname{Gal}}(\overline{{{\mathbb Q}}}/{{\mathbb Q}})$. These representations form a tensor symmetric abelian category ${\operatorname{Rep}}_l({\operatorname{Gal}}(\overline{{{\mathbb Q}}}/{{\mathbb Q}}))$ with a distinguished Tate module ${{\mathbb Q}}_l(1)$, and one can treat $l$-adic cohomology as taking values in this category. One can then define a double-graded absolute cohomology theory $$H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{abs}(X,{{\mathbb Q}}_l(j)) = H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({\operatorname{Gal}}(\overline{{{\mathbb Q}}}/{{\mathbb Q}}),H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{et}(X,{{\mathbb Q}}_l(j))),$$ known as [*absolute $l$-adic cohomology*]{}, and construct a regulator map of the form (\[reg.abs\]).
Conjecturally, we have an exact tensor “realization functor” ${\operatorname{{\mathcal M}{\mathcal M}}}\to {\operatorname{Rep}}_l({\operatorname{Gal}}(\overline{{{\mathbb Q}}}/{{\mathbb Q}}))$, $l$-adic cohomology is obtained by applying realization to motivic cohomology, and the étale regulator map factors through the motivic one. In practice, one can treat ${\operatorname{Rep}}_l({\operatorname{Gal}}(\overline{{{\mathbb Q}}}/{{\mathbb Q}}))$ as a replacement for ${\operatorname{{\mathcal M}{\mathcal M}}}$, and hope that the regulator map still captures essential information about $K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(X)$. In this paper, we will be concerned with another family of cohomology theories and realizations which appear as refinements of [*de Rham cohomology*]{}. By its very nature, de Rham cohomology of a smooth algebraic variety $X$ has coefficients in the field or ring of definition of $X$. Thus it is not necessary to require that $X$ is defined over ${{\mathbb Q}}$, and it is convenient to classify de Rham-type cohomology theories by their rings of definition. There are two main examples. 1. The ring of definition is either ${{\mathbb R}}$ or ${{\mathbb C}}$; the corresponding category of realizations is Deligne’s category of mixed ${{\mathbb R}}$-Hodge structures, and the absolute cohomology theory is Hodge-Deligne cohomology (with a refinement by Beilinson). The regulator map is the subject of the famous Beilinson Conjectures. 2. The ring of definition is ${{\mathbb Z}}_p$; the corresponding category of realizations is the category of [*filtered Dieudonné modules*]{} of Fontaine-Lafaille [@FL], and the absolute cohomology theory is [*syntomic cohomology*]{} of Fontaine and Messing [@FM].
The goal of this paper is to report on recent discoveries and conjectures which state, roughly speaking, that all these additional “motivic” structures on de Rham cohomology of an algebraic variety should exist in a much more general setting of periodic cyclic homology of properly understood [*non-commutative*]{} algebraic varieties. As opposed to the usual commutative setting, the “classical” case is more difficult and largely conjectural; in the $p$-adic case, most of the statements have been proved. Moreover, the $p$-adic story shows an unexpected relation to algebraic topology which we will also explain. Before we start, however, we should define exactly what we mean by a “non-commutative algebraic variety”, and recall basic facts on cyclic homology. Non-commutative setting. {#hc.sec} ======================== We start with a brief recollection on cyclic homology; a very good overview can be found in J.-L. Loday’s book [@Lo], and an old overview [@FT] is also quite useful. [*Hochschild homology*]{} $HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A/k)$ of an associative unital algebra $A$ flat over a commutative ring $k$ is given by $$HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) = HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A/k) = {\operatorname{Tor}}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}^{A^{opp} \otimes_k A}(A,A),$$ where $A^{opp}$ is $A$ with multiplication written in the opposite direction. It has been discovered by Hochschild, Kostant and Rosenberg [@HKR] that if $A$ is commutative and $X = {\operatorname{Spec}}A$ is a smooth algebraic variety over $k$, then $$HH_i(A) \cong H^0(X,\Omega^i(X)),$$ the space of $i$-forms on $X$ over $k$. [*Cyclic homology*]{} $HC_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ is a refinement of Hochschild homology discovered independently by A. Connes and B. Tsygan.
It is functorial in $A$, and related to $HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ by the [*Connes’ long exact sequence*]{} $$\begin{CD} HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) @>>> HC_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) @>{u}>> HC_{{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}-2}(A) @>>>, \end{CD}$$ where $u$ is a canonical [*periodicity map*]{} of degree $2$. Both $HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ and $HC_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ can be represented by functorial complexes $CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$, $CC_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$, and the Connes’ exact sequence then becomes a short exact sequence of complexes. The complex $CC_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ is the total complex of a bicomplex $$\label{hc.eq} \begin{CD} \dots @>>> A @>{{\operatorname{\sf id}}}>> A @>{0}>> A\\ @. @AA{b}A @AA{b'}A @AA{b}A \\ \dots @>>> A \otimes A @>{{\operatorname{\sf id}}+ \tau}>> A \otimes A @>{{\operatorname{\sf id}}- \tau}>> A \otimes A\\ @. @AA{b}A @AA{b'}A @AA{b}A\\ \dots @. \dots @. \dots @. \dots\\ @. @AA{b}A @AA{b'}A @AA{b}A \\ \dots @>>> A^{\otimes n} @>{{\operatorname{\sf id}}+ \tau + \dots + \tau^{n-1}}>> A^{\otimes n} @>{{\operatorname{\sf id}}- \tau}>> A^{\otimes n}\\ @. @AA{b}A @AA{b'}A @AA{b}A \end{CD}$$ Here it is understood that the whole thing extends indefinitely to the left, all the even-numbered columns are the same, all odd-numbered columns are the same, and the bicomplex is invariant with respect to the horizontal shift by $2$ columns which gives the periodicity map $u$. The map $\tau:A^{\otimes i} \to A^{\otimes i}$ is the cyclic permutation of order $i$ multiplied by $(-1)^{i+1}$, and $b$, $b'$ are certain explicit differentials expressed in terms of the multiplication map $m:A^{\otimes 2} \to A$.
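For a finite-dimensional algebra these differentials are small integer matrices, so the homology of the Hochschild complex $C_n = A^{\otimes(n+1)}$ with the differential $b$ can be computed directly. The following sketch is ours, not from the paper; it treats $A=\mathbb{Q}[x]/(x^2)$, for which the classical answer in characteristic zero is $\dim HH_0=2$ and $\dim HH_n=1$ for $n\ge 1$:

```python
from fractions import Fraction
from itertools import product

# A = Q[x]/(x^2) with basis {1, x}, indexed 0 and 1.
# mult[(i, j)] = basis index of e_i * e_j, or None when the product vanishes.
mult = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): None}

def boundary(n):
    """Matrix (list of rows) of the Hochschild differential b: C_n -> C_{n-1},
    where C_n = A^{(n+1)} has basis indexed by tuples in {0,1}^{n+1}."""
    rows = list(product((0, 1), repeat=n))
    cols = list(product((0, 1), repeat=n + 1))
    row_index = {r: i for i, r in enumerate(rows)}
    M = [[0] * len(cols) for _ in rows]
    for j, c in enumerate(cols):
        for i in range(n):                       # faces d_0, ..., d_{n-1}
            p = mult[(c[i], c[i + 1])]
            if p is not None:
                M[row_index[c[:i] + (p,) + c[i + 2:]]][j] += (-1) ** i
        p = mult[(c[n], c[0])]                   # wrap-around face d_n
        if p is not None:
            M[row_index[(p,) + c[1:n]]][j] += (-1) ** n
    return M

def rank(M):
    """Exact rank over Q by Gaussian elimination with fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            if M[i][col]:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def hh_dim(n):
    """dim HH_n(A) = dim C_n - rank(b_n) - rank(b_{n+1})."""
    return 2 ** (n + 1) - (rank(boundary(n)) if n >= 1 else 0) - rank(boundary(n + 1))
```

Running `[hh_dim(n) for n in range(4)]` should return `[2, 1, 1, 1]`, in line with the classical computation for truncated polynomial algebras in characteristic zero.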
The complex $CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ is the rightmost column of (\[hc.eq\]), and also any odd-numbered column when counting from the right; the even-numbered columns are acyclic. [*Periodic cyclic homology*]{} $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ is obtained by inverting the map $u$, namely, $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ is the homology of the complex $$CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) = \lim_{\overset{u}{\gets}}CC_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$$ (explicitly, this is the total complex of a bicomplex obtained by extending (\[hc.eq\]) to the right as well as to the left). [*Negative cyclic homology*]{} $HC^-_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ is the homology of the complex $CC^-_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ obtained as the third term in a short exact sequence $$\begin{CD} 0 @>>> CC^-(A) @>>> CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) @>>> CC_{{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}-2}(A) @>>> 0 \end{CD}$$ (equivalently, one extends to the right but not to the left). The reason cyclic homology is interesting in algebraic geometry is the following comparison theorem. In the situation of the Hochschild-Kostant-Rosenberg Theorem, let $d$ be the dimension of $X = {\operatorname{Spec}}A$, and assume in addition that $d!$ is invertible in the base ring $k$. Then there exists a canonical isomorphism $$\label{hp.dr} HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) \cong H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{DR}(X)((u)),$$ where the right-hand side is a shorthand for “formal Laurent power series in one variable $u$ of degree $2$ with coefficients in de Rham cohomology $H_{DR}(X)$”. By (\[hp.dr\]), periodic cyclic homology classes can be thought of as non-commutative generalizations of de Rham cohomology classes.
Some information is lost in this generalization: because of the presence of $u$ in the right-hand side of (\[hp.dr\]), what we recover from $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ is not the de Rham cohomology of $X$ but rather, the de Rham cohomology of the product $X \times {{\mathbb P}}^\infty$ of $X$ and the infinite projective space ${{\mathbb P}}^\infty$, where we moreover invert the generator $u \in H^2_{DR}({{\mathbb P}}^\infty)$. Thus given a category of realizations ${\operatorname{\sf Real}}$ and a ${\operatorname{\sf Real}}$-valued refinement of de Rham cohomology, the appropriate target for its non-commutative generalization is not the derived category ${{\cal D}}({\operatorname{\sf Real}})$ but the [*twisted $2$-periodic derived category*]{} ${{\cal D}}^{per}({\operatorname{\sf Real}})$ obtained by inverting quasiisomorphisms in the category of complexes $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ of objects in ${\operatorname{\sf Real}}$ equipped with an isomorphism $u:M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\cong M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(-1)[2]$, where we denote $M(n) = M \otimes {{\mathbb Z}}(n)$, $n \in {{\mathbb Z}}$. We note, however, that this causes no problem with the regulator map, since the summation over $j$ in the right-hand side of (\[reg.abs\]) is the same as the summation over powers of $u$ in the right-hand side of (\[hp.dr\]).
Thus for a ${\operatorname{\sf Real}}$-valued refinement $H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{{\operatorname{\sf Real}}}(-)$ of de Rham cohomology and any smooth affine algebraic variety $X = {\operatorname{Spec}}A$, the regulator map takes the form $$K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) \to {\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{{{\cal D}}^{per}({\operatorname{\sf Real}})}(k,HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)) = {\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{{{\cal D}}^{per}({\operatorname{\sf Real}})}(k,H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{{\operatorname{\sf Real}}}(X)((u))),$$ where $k$ in the right-hand side is the unit object of ${\operatorname{\sf Real}}$. Somewhat surprisingly, non-affine algebraic varieties can be included in the above picture with very little additional effort. To do it, it is convenient to use the machinery of differential graded (DG) algebras and DG categories. An excellent overview can be found in [@kel]; for the convenience of the reader, let us summarize the relevant points. Roughly speaking, a $k$-linear DG category is a category $C^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ whose ${\operatorname{Hom}}$-sets $C^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(-,-)$ are equipped with a structure of complexes of $k$-modules in such a way that composition maps are $k$-linear and compatible with the differentials (for precise definitions, see [@kel Section 2]). For any small $k$-linear DG category $C^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$, one defines a triangulated [*derived category of DG modules*]{} ${{\cal D}}(C^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ ([@kel Section 3]). 
Any $k$-linear DG functor $\gamma:C_1^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}\to C_2^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ induces a triangulated functor $\gamma^*:{{\cal D}}(C_2^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \to {{\cal D}}(C_1^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$. The functor $\gamma$ is a [*derived Morita equivalence*]{} if the induced functor $\gamma^*$ is an equivalence of triangulated categories. It turns out – this is mostly due to the work of G. Tabuada and B. Toën, see [@kel Section 4] and references therein – that there is a closed model structure on the category of small $k$-linear DG categories whose weak equivalences are exactly derived Morita equivalences. Denote by ${\operatorname{\sf Morita}}(k)$ the corresponding homotopy category, that is, the category of “small $k$-linear DG categories up to a derived Morita equivalence”. Any $k$-algebra $A$ is a $k$-linear DG category with one object ${{\sf pt}}$ and ${\operatorname{Hom}}({{\sf pt}},{{\sf pt}})=A$ placed in degree $0$, so that we have an embedding ${\operatorname{\sf Alg}}(k) \to {\operatorname{\sf Morita}}(k)$ from the category ${\operatorname{\sf Alg}}(k)$ of associative $k$-algebras to ${\operatorname{\sf Morita}}(k)$. Then, as explained in [@kel Section 5], Hochschild homology, cyclic homology, periodic cyclic homology and negative cyclic homology extend to functors $${\operatorname{\sf Morita}}(k) \to {{\cal D}}(k).$$ Moreover, so does the algebraic $K$-theory functor $K^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(-)$, and other “additive invariants” in the sense of [@kel Section 5]. In general, a DG category with one object ${{\sf pt}}$ is the same thing as an associative unital DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}= {\operatorname{Hom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\sf pt}},{{\sf pt}})$.
The category of DG algebras over $k$ has a natural closed model structure whose weak equivalences are quasiisomorphisms, and whose fibrations are surjective maps. The corresponding homotopy category ${\operatorname{\sf DG-Alg}}(k)$ is the category of DG algebras “up to a quasiisomorphism”. One shows that a quasiisomorphism between DG algebras is in particular a derived Morita equivalence, so that we have a natural functor $$\label{dg.mor} {\operatorname{\sf DG-Alg}}(k) \to {\operatorname{\sf Morita}}(k).$$ It is not difficult to show that for every cofibrant DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$, the individual terms of the complex $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ are flat $k$-modules. In this case, the Hochschild, cyclic etc. homology of $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ are especially simple – they are given by exactly the same bicomplex and its versions as in the case of ordinary algebras. This is manifestly invariant under quasiisomorphisms, so that the Hochschild, cyclic etc. homology obviously descend to functors from ${\operatorname{\sf DG-Alg}}(k)$ to the derived category ${{\cal D}}(k)$. The DG category approach shows that there is even more invariance: even if two DG algebras $A_1^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$, $A_2^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ are not quasiisomorphic but only have isomorphic images in ${\operatorname{\sf Morita}}(k)$, their Hochschild, cyclic etc. homology is naturally identified. This statement is already non-trivial in the case of usual algebras, see [@Lo Section 1.2]. A DG category $T_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf Morita}}(k)$ is [*derived-affine*]{} if it lies in the essential image of the functor (\[dg.mor\]). A small $k$-linear DG category $C^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ with a finite number of objects is automatically derived-Morita equivalent to a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$, thus derived-affine.
For example, one can take $$A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}= \bigoplus_{c,c'}C^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(c,c'),$$ where the sum is taken over all pairs of objects in $C^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$. Now, it has been proved ([@BV] combined with [@kelold]) that for any quasiseparated quasicompact scheme $X$ over $k$, there exists a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}/k$ such that the derived category ${{\cal D}}(X)$ of quasicoherent sheaves on $X$ is equivalent to the derived category ${{\cal D}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$, $${{\cal D}}(X) \cong {{\cal D}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}),$$ and such a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is unique up to a derived Morita equivalence, so that we have a canonical functor from the category of algebraic varieties over $k$ to the category ${\operatorname{\sf Morita}}(k)$. Roughly speaking, any algebraic variety is derived Morita-equivalent to a DG algebra, or, in a succinct formulation of [@BV], “every algebraic variety is derived-affine”. Moreover, it turns out that the properties of $X$ which are relevant for the present paper are reflected in the properties of a Morita-equivalent DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$. For example, one introduces the following (see e.g. [@KS]). 1. A DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}/k$ is [*proper*]{} if $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is perfect as an object in the derived category ${{\cal D}}(k)$ of complexes of $k$-modules. 2. A DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}/k$ is [*smooth*]{} if $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is perfect as an object in the derived category ${{\cal D}}(A^{{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}opp}\otimes A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ of $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$-bimodules.
Then $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is proper, resp. smooth if and only if $X$ is proper, resp. smooth (in the affine case $X = {\operatorname{Spec}}A$, the second claim is the famous Serre regularity criterion). Moreover, the correspondence $X \mapsto A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is compatible with algebraic $K$-theory, $K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(X) \cong K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$, and if the variety $X/k$ is smooth of dimension $d$, and $d!$ is invertible in $k$, then the Hochschild homology of such a Morita-equivalent DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is canonically isomorphic to $$HH_i(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \cong \bigoplus_j H^j(X,\Omega^{i+j}(X)),$$ the so-called “Hodge cohomology” of $X$, while the periodic cyclic homology is given by de Rham cohomology, $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \cong H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{DR}(X)((u))$. Thus as far as homological invariants are concerned, one can treat DG algebras “up to a derived Morita-equivalence” as non-commutative generalizations of algebraic varieties: - A non-commutative algebraic variety over $k$ is a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ over $k$ considered as an object of the Tabuada-Toën category ${\operatorname{\sf Morita}}(k)$. This is the point of view we will adopt. Hodge-to-de Rham spectral sequence. {#hdr.sec} =================================== A convenient way to pack all the structures related to Hochschild homology $HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ of a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}/k$ is by considering the equivariant derived category ${{\cal D}}_{S^1}(k)$ of $S^1$-equivariant constructible sheaves of $k$-modules on the point ${{\sf pt}}$.
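As a concrete instance of this dictionary (a standard example going back to Beilinson, not spelled out in the text): $X = {{\mathbb P}}^1$ is derived-affine, with ${{\cal D}}({{\mathbb P}}^1) \cong {{\cal D}}(A)$ for the finite-dimensional algebra $A = {\operatorname{End}}({{\cal O}}\oplus {{\cal O}}(1))$, the path algebra of the quiver with two vertices and two parallel arrows. This $A$ is smooth and proper, and the Hodge cohomology formula gives $$HH_0(A) \cong H^0({{\mathbb P}}^1,{{\cal O}}) \oplus H^1({{\mathbb P}}^1,\Omega^1) \cong k^2, \qquad HH_i(A) = 0 \quad\text{for } i \neq 0,$$ in agreement with the direct computation of $HH_0(A) = A/[A,A]$, which for the path algebra of an acyclic quiver is spanned by the vertex idempotents.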
Then the claim is that the Hochschild homology complex $CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$, while [*a priori*]{} simply a complex of $k$-modules, in fact underlies a canonical object ${\widetilde}{CH}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \in {{\cal D}}_{S^1}(k)$ (loosely speaking, “$CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ carries a canonical $S^1$-action”). The negative cyclic homology appears as $S^1$-equivariant cohomology $$H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{S^1}({{\sf pt}},{\widetilde}{CH}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})),$$ the periodicity map $u$ is the generator of $H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{S^1}({{\sf pt}}) \cong H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(BS^1)$, and $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ is the localization $HC_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}^-(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})(u^{-1})$. Another way to pack the same data is by considering the [ *filtered derived category*]{} ${{{\cal D}}{\mathcal F}}(k)$ of $k$-modules of [@BBD] – that is, the triangulated category obtained by considering complexes $V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ of $k$-modules equipped with a decreasing filtration $F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ numbered by all integers, and inverting those maps which induce quasiisomorphisms on the associated graded quotients ${\operatorname{\sf gr}}^F$.
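In the simplest example $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}= k$ (a standard computation, stated here for orientation), the complex $CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(k)$ is just $k$ placed in degree $0$ with the trivial $S^1$-action, so that $$HC^-_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(k) \cong H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{S^1}({{\sf pt}},k) \cong k[u], \qquad HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(k) \cong k[u,u^{-1}],$$ while $HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(k) = k$, and the localization inverting the periodicity generator $u$ is exactly the passage from $k[u]$ to $k[u,u^{-1}]$.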
This has a “twisted $2$-periodic” version ${{{\cal D}}{\mathcal F}}^{per}(k)$, obtained from filtered complexes $V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ equipped with an isomorphism $V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\cong V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}[2](-1)$, where $(-1)$ means renumbering the filtration: $F^iV(-1) = F^{i+1}V$. \[trivial.lemma\] We have $${{\cal D}}_{S^1}(k) \cong {{{\cal D}}{\mathcal F}}^{per}(k).$$ [[*Sketch of the proof.*]{}]{} Let us just indicate the equivalence: it sends $V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {{\cal D}}_{S^1}(k)$ to the equivariant cohomology complex $C^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{S^1}({{\sf pt}},V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})(u^{-1})$, with the (generalized) filtration given by $$F^iC^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{S^1}({{\sf pt}},V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})(u^{-1}) = u^iC^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{S^1}({{\sf pt}},V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}),$$ where $u \in C^2_{S^1}({{\sf pt}},k)$ represents the generator of the equivariant cohomology ring $H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{S^1}({{\sf pt}},k) \cong k[u]$.
In the case of the Hochschild homology complex ${\widetilde}{CH}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$, the corresponding periodic filtered complex is $CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$, with the filtration given by $$F^iCP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) = u^iCC^-_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \subset CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}).$$ One can treat ${{{\cal D}}{\mathcal F}}^{per}(k)$ as a very crude “category of realization” ${\operatorname{\sf Real}}$ in the sense of Section \[mm.sec\], or rather, of its periodic derived category ${{\cal D}}^{per}({\operatorname{\sf Real}})$. The expected regulator map then takes the form $$\label{non-comm.reg} K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \to HC^-_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) = {\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{{{{\cal D}}{\mathcal F}}^{per}(k)}(k,HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)).$$ Such a map does indeed exist, see [@Lo Chapter 8]. In general, it is very far from being an isomorphism. The only general result is a theorem of T. Goodwillie [@good] which shows that at least the tangent spaces to both sides are the same. Namely, given an algebra $A$ with an ideal $I \subset A$, one defines the relative $K$-theory $K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A,I)$ spectrum as the cone of the natural map $K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) \to K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A/I)$, and analogously for the cyclic homology functors.
Then it has been proved in [@good] that if $k$ is a field of characteristic $0$ and $I \subset A$ is a nilpotent ideal, then the map $$K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A,I) \to HC^-(A,I)$$ induced by the regulator map is a quasiisomorphism. An analogous statement also holds for DG algebras over $k$. While filtered complexes are a very crude approximation to mixed motives, already on this level the smoothness and properness of a DG algebra leads to non-trivial consequences. Namely, a filtered complex gives rise to a spectral sequence. In the case of cyclic homology, it takes the form $$\label{h.d.r} HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})((u)) \Rightarrow HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}),$$ where we use the same $((u))$ shorthand as before. When the DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}/k$ is Morita-equivalent to a smooth algebraic variety $X/k$, the filtration $F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ on $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \cong H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{DR}(X)((u))$ is just the Hodge filtration on de Rham cohomology, and the spectral sequence is the usual Hodge-to-de Rham spectral sequence $$H^p(X,\Omega^q(X))\Rightarrow H^{p+q}_{DR}(X)$$ tensored with $k((u))$. Because of this, the general spectral sequence is also called the “Hodge-to-de Rham spectral sequence”. Then the following result partially proves a general conjecture of M. Kontsevich and Ya. Soibelman [@KS1]. \[hdr.thm\] Assume that $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is a smooth and proper DG algebra over a field $k$ of characteristic ${\operatorname{\sf char}}k = 0$. Assume further that $A^i = 0$ for $i < 0$. Then the Hodge-to-de Rham spectral sequence degenerates.
The assumption $A^i = 0$, $i < 0$ is technical (note, however, that it can always be achieved for a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ corresponding to a smooth and proper algebraic variety $X/k$, see e.g. [@O Theorem 4]). In the usual commutative case, the Hodge-to-de Rham degeneration statement is well-known and has two proofs. Classically, it follows from the general complex-analytic package of Hodge theory and harmonic forms. An alternative proof by Deligne and Illusie [@DI] uses reduction to positive characteristic and $p$-adic methods. So far, it is only the second technique that has been generalized to the non-commutative case. We will now explain this. Review of Filtered Dieudonné modules. {#fdm.1.sec} ===================================== A $p$-adic analog of the notion of a mixed Hodge structure has been introduced in 1982 by Fontaine and Laffaille [@FL]. Here is the definition. \[fdm.defn\] Let $k$ be a finite field of characteristic $p$, with its Frobenius map, and let $W$ be its ring of Witt vectors, with its canonical lifting ${\varphi}$ of the Frobenius map. A [*filtered Dieudonné module*]{} over $W$ is a finitely generated $W$-module $M$ equipped with a decreasing filtration $F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}M$, indexed by all integers and such that $\cap F^iM=0$, $\cup F^iM=M$, and a collection of Frobenius-semilinear maps ${\varphi}_i:F^iM \to M$, one for each integer $i$, such that 1. ${\varphi}_i|_{F^{i+1}M} = p {\varphi}_{i+1}$, and 2. the map $$\sum {\varphi}_i:\bigoplus_iF^iM \to M$$ is surjective. We will denote by ${{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}(W)$ the category of filtered Dieudonné modules over $W$. It is an abelian category.
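To unwind Definition \[fdm.defn\] in the simplest case, consider the unit object $W = W(0)$ (a standard example, included here for illustration): set $F^iW = W$ for $i \leq 0$, $F^iW = 0$ for $i > 0$, and ${\varphi}_i = p^{-i}{\varphi}$ for $i \leq 0$. Condition 1 holds since for $i < 0$ $${\varphi}_i|_{F^{i+1}W} = p^{-i}{\varphi}= p \cdot p^{-(i+1)}{\varphi}= p\,{\varphi}_{i+1},$$ and vacuously for $i \geq 0$; condition 2 holds since ${\varphi}_0 = {\varphi}$ is already bijective on $W$.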
A symmetric tensor product in ${{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}(W)$ is defined in the obvious way, and we have the Tate object $W(1)$ given by: $W(1) = W$ as a $W$-module, $F^1W(1) = W(1)$, $F^2W(1)=0$, ${\varphi}_1:F^1W(1) \to W(1)$ equal to ${\varphi}$. We also have the derived category ${{\cal D}}({{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}(W))$. If a filtered Dieudonné module $M \in {{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}(W)$ is annihilated by $p$, then condition 1 of Definition \[fdm.defn\] insures that the map in condition 2 factors through a surjective map $${\widetilde}{{\varphi}}:{\operatorname{\sf gr}}_F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}M \to M.$$ Since both sides are $k$-vector spaces of the same dimension, ${\widetilde}{{\varphi}}$ must be an isomorphism. For a general filtered $W$-module $\langle M,F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}\rangle$, one lets ${\widetilde}{M}$ be the cokernel of the map $$\label{wt.M} \begin{CD} \bigoplus_i F^iM @>{t - p{\operatorname{\sf id}}}>> \bigoplus_i F^iM, \end{CD}$$ where $t:F^{i+1}M \to F^iM$ is the tautological embedding. Then again, condition 1 insures that the map $\sum_i{\varphi}_i$ factors through a map $$\label{wt.phi} {\widetilde}{{\varphi}}:{\widetilde}{M} \to M$$ and this map must be an isomorphism if condition 2 were to be satisfied. This allows one to generalize the definition of a filtered Dieudonné module: instead of a finitely generated filtered $W$-module, one can consider a filtered $W$-module $\langle M,F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}\rangle$ such that $M$ is $p$-adically complete and complete with respect to the topology induced by $F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ (these conditions together with the non-degeneracy conditions $\cap F^iM=0$, $\cup F^iM=M$ insure that the map $t - p{\operatorname{\sf id}}$ is injective).
Then an [*unbounded Dieudonné module*]{} structure on $M$ is given by a Frobenius-semilinear isomorphism ${\widetilde}{{\varphi}}:{\widetilde}{M} \to M$. I do not know whether the category of unbounded filtered Dieudonné modules is still abelian. However, complexes of unbounded filtered Dieudonné modules can be defined in the obvious way, and the correspondence $M \mapsto {\widetilde}{M}$ sends filtered quasiisomorphisms into quasiisomorphisms, so that we obtain a triangulated derived category ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}(W) \supset {{\cal D}}({{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}(W))$ and its twisted $2$-periodic version ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}^{per}$. Moreover, one can drop the requirement that the map ${\widetilde}{{\varphi}}$ is an isomorphism and allow it to be an arbitrary map. Let us call the resulting objects “weak filtered Dieudonné modules”. The category of weak filtered Dieudonné modules is definitely not abelian, but the above procedure still applies: we can invert filtered quasiisomorphisms and obtain triangulated categories ${\widetilde}{{{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}}(W)$, ${\widetilde}{{{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}}^{per}(W)$. We then have fully faithful inclusions ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}(W) \subset {\widetilde}{{{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}}(W)$, ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}^{per}(W) \subset {\widetilde}{{{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}}^{per}(W)$, and one can show that their essential images consist of those $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ in ${\widetilde}{{{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}}(W)$, resp.
${\widetilde}{{{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}}^{per}(W)$ for which the map ${\widetilde}{{\varphi}}$ is a quasiisomorphism. Assume given an algebraic variety $X$ smooth over $W$, of dimension $d < p$. Then the de Rham cohomology $H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{DR}(X/W)$ equipped with the filtration induced by the stupid filtration on the de Rham complex has the structure of a complex of generalized filtered Dieudonné modules. If $X/W$ is proper, the groups $H^i_{DR}(X/W)$ are finitely generated, so that they are filtered Dieudonné modules in the strict sense (and the filtration is then the Hodge filtration). This Dieudonné module structure can be seen explicitly under the following strong additional assumption: - the Frobenius endomorphism ${{\sf Fr}}$ of the special fiber $X_k = X \otimes_W k$ of $X/W$ lifts to a Frobenius-semilinear endomorphism ${\widetilde}{{{\sf Fr}}}:X \to X$. Then one checks easily that for any $i \geq 0$, the natural map ${\widetilde}{{{\sf Fr}}}^*:\Omega^i(X/W) \to \Omega^i(X/W)$ is divisible by $p^i$. The Dieudonné module structure maps ${\varphi}_i$ are induced by the corresponding maps $\frac{1}{p^i}{\widetilde}{{{\sf Fr}}}^*$. We note that in this special case, the map ${\varphi}_i$ sends $F^i$ into $F^i$. In the general case, the construction is due to G. Faltings [@falt Theorem 4.1]; roughly speaking, it uses a comparison theorem which gives a quasiisomorphism $$H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{cris}(X_k) \cong H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{DR}(X),$$ where in the left-hand side, we have the crystalline cohomology of the special fiber $X_k$. The Frobenius endomorphism of $X_k$ induces an endomorphism on crystalline cohomology, and this gives the structure map ${\varphi}_0$.
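In the simplest lifted-Frobenius situation (a standard illustration, not a case treated in the text): take $X = {{\mathbb A}}^1 = {\operatorname{Spec}}W[x]$ with the lifting ${\widetilde}{{{\sf Fr}}}$ acting by ${\varphi}$ on $W$ and by $x \mapsto x^p$. Then on $1$-forms $${\widetilde}{{{\sf Fr}}}^*(dx) = d(x^p) = p\,x^{p-1}\,dx,$$ which is visibly divisible by $p$, so that ${\varphi}_1(dx) = x^{p-1}\,dx$; reduced $\mod p$, this is exactly the inverse Cartier map on $\Omega^1$ of the special fiber.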
By an additional argument, one shows that ${\varphi}_0|_{F^i}$ is canonically divisible by $p^i$, and this gives the other structure maps ${\varphi}_i$ (in general, they do not preserve the Hodge filtration $F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$). In particular, for any smooth $X/W$, one has the isomorphism ${\widetilde}{{\varphi}}$. Its reduction $\mod p$ is an isomorphism $$\label{cartier} {\operatorname{\sf gr}}_F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{DR}(X_k) \cong \bigoplus_i H^{{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}-i}(X_k,\Omega^i(X_k)) \cong H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{DR}(X_k)$$ between Hodge and de Rham cohomology of the special fiber $X_k$. If $X$ is affine, this is nothing but the inverse to the Cartier isomorphism, discovered by P. Cartier back in the 1950s; as such, it depends only on the special fiber $X_k$ and not on the lifting $X/W$. In the general case, it has been shown by Deligne and Illusie in [@DI] that this isomorphism depends on the lifting $X \otimes_W W_2(k)$ of $X_k$ to the second Witt vectors ring $W_2(k) = W(k)/p^2$ (but not on the lifting to higher orders, nor even on the existence of such a lifting). The absolute cohomology theory associated to the ${{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}$-valued refinement of de Rham cohomology is the [*syntomic cohomology*]{} of Fontaine and Messing. As it happens, the functors ${\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(W(-j),-)$ in the category ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}$ are easy to compute explicitly — for any complex $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}$, ${\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(W(-j),M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ is the cone of the natural map $$\begin{CD} F^jM_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}@>{{\operatorname{\sf id}}- {\varphi}_j}>> M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}.
\end{CD}$$ When applied to a smooth proper variety $X/W$, this gives syntomic cohomology groups $H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{synt}(X,{{\mathbb Z}}_p(j))$. The construction can even be localized with respect to the Zariski topology on $X_k$, so that the syntomic cohomology is expressed as hypercohomology of $X_k$ with coefficients in certain canonical complexes of Zariski sheaves, as in [@FM]. The existence and properties of the regulator map for the syntomic cohomology have been studied by M. Gros [@gros1; @gros2]. In principle, one can construct the regulator by the standard procedure for “twisted cohomology theories” in the sense of [@BlO], but there is one serious problem: the filtered Dieudonné module structure on $H^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{DR}(X)$ only exists if $p > \dim X$. Since the standard procedure works by considering infinite projective spaces and Grassmann varieties, this condition is inevitably broken no matter what $p$ we start with. To circumvent this, Gros had to modify (in [@gros2]) the definition of syntomic cohomology by including additional structures such as the rigid analytic space associated to $X/W$. The resulting picture becomes extremely complex, and at present, it is not clear whether it can be generalized to non-commutative varieties. FDM in the non-commutative case. {#fdm.2.sec} ================================ What we do have for non-commutative varieties is the following result. 
The [*Hochschild cohomology*]{} $HH^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}/R)$ of a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ over a ring $R$ is given by $$HH^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}/R) = {\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{A^{{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}opp} \otimes_R A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}},A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}).$$ \[car\] Assume given an associative DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ over a finite field $k$. Assume that $A^i = 0$ for $i < 0$. Assume also that $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is smooth, that it can be lifted to a flat DG algebra ${\widetilde}{A}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ over $W_2(k)$, and that $HH^i(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) = 0$ for $i \geq 2p-1$. Then there exists a canonical Cartier-type isomorphism $$HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})((u)) \cong HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}).$$ If a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is derived Morita-equivalent to a smooth algebraic variety $X/k$, then we have $HH^i(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) = 0$ automatically for $i > 2 \dim X$, so that the last condition on $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ in Theorem \[car\] reduces to the condition $p > \dim X$ already mentioned in Section \[fdm.1.sec\]. Theorem \[hdr.thm\] easily follows from Theorem \[car\] by the same dimension argument as in the original proof of Deligne and Illusie in [@DI]. The only non-trivial additional input is a beautiful recent theorem of B.
Toën [@To] which claims that a smooth and proper DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ over a field $K$ comes from a smooth and proper DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_R$ over a finitely generated subring $R \subset K$, $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}\cong A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_R \otimes_R K$. This allows one to reduce problems from ${\operatorname{\sf char}}0$ to ${\operatorname{\sf char}}p$. Let us give a very rough sketch of how Theorem \[car\] is proved (for more details, see [@goet], and the complete proof in a slightly different language is in [@K]). As in the commutative story, there are two cases for Theorem \[car\]: the easy case when one can construct the Cartier map explicitly, and the general case. The easy case is when $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}= A$ is concentrated in degree $0$, and the algebra $A$ admits a so-called [*quasi-Frobenius map*]{}. \[tt.le\] For any vector space $V$ over the finite field $k$ of characteristic ${\operatorname{\sf char}}k = p > 0$, there is a canonical Frobenius-semilinear isomorphism $${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}/p{{\mathbb Z}},V) \cong {\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}/p{{\mathbb Z}},V^{\otimes p}),$$ where ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}/p{{\mathbb Z}},-)$ means the Tate (co)homology of the group ${{\mathbb Z}}/p{{\mathbb Z}}$, the action of ${{\mathbb Z}}/p{{\mathbb Z}}$ on $V$ is trivial, and the action on $V^{\otimes p}$ is by the longest cycle permutation $\sigma:V^{\otimes p} \to V^{\otimes p}$. A [*quasi-Frobenius map*]{} for an algebra $A/k$ is a ${{\mathbb Z}}/p{{\mathbb Z}}$-equivariant algebra map $$\Phi:A \to A^{\otimes p}$$ which induces the standard isomorphism of Lemma \[tt.le\] on Tate cohomology ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}/p{{\mathbb Z}},-)$. 
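The mechanism behind Lemma \[tt.le\] can be seen on basis elements (a sketch only, with no claim of completeness): fix a basis $\{e_s\}$ of $V$, so that $V^{\otimes p}$ acquires a basis of tensor words permuted by $\sigma$. Since $p$ is prime, every non-constant word lies in a free ${{\mathbb Z}}/p{{\mathbb Z}}$-orbit, and free modules have vanishing Tate cohomology; only the fixed vectors $e_s^{\otimes p}$ survive, giving $${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}/p{{\mathbb Z}},V^{\otimes p}) \cong {\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}/p{{\mathbb Z}},k) \otimes \langle e_s^{\otimes p} \rangle.$$ The resulting identification with ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}/p{{\mathbb Z}},V)$ is Frobenius-semilinear because $(\lambda e_s)^{\otimes p} = \lambda^p e_s^{\otimes p}$ for $\lambda \in k$.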
If the algebra $A$ admits a quasi-Frobenius map $\Phi$, then the construction of the Cartier isomorphism proceeds as follows. First, recall that for any algebra $B$ equipped with an action of a group $G$, the [*smash product algebra*]{} $B \# G$ is the group algebra $B[G]$ but with the twisted product given by $$(b_1 \cdot g_1)(b_2 \cdot g_2) = b_1b_2^{g_1} \cdot g_1g_2,$$ and one has a canonical decomposition $$\label{burgh} HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(B \# G) = \bigoplus_{\langle g \rangle} HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(B \# G)_g$$ into components numbered by conjugacy classes of elements in $G$ (these components are sometimes called [*twisted sectors*]{}). Next, let $G$ be the cyclic group ${{\mathbb Z}}/p{{\mathbb Z}}$, and let $\sigma \in G$ be the generator. Then one can show that if the $G$-action on $B$ is trivial, then $$\label{tw.1} HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(B \# G)_\sigma \cong {\widetilde}{HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(B)},$$ where $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(B)$ in the right-hand side is equipped with the Hodge filtration, and ${\widetilde}{M}$ for a filtered group $M$ means the cokernel of the map $t - p{\operatorname{\sf id}}$, as in Section \[fdm.1.sec\]. On the other hand, if we take the $p$-th power $B^{\otimes p}$ with $\sigma$ acting by the longest cycle permutation, then one can show that $$\label{tw.2} HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(B^{\otimes p} \# G)_\sigma \cong HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(B).$$ Both of these isomorphisms are completely general and valid for algebras over any ring. So is the decomposition into twisted sectors, which is moreover functorial with respect to $G$-equivariant maps. We now apply this to our algebras $A$ and $A^{\otimes p}$ over $k$, with the $G$-action as in Lemma \[tt.le\].
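A minimal example of the smash product and its twisted sectors (standard, and independent of the present argument): for $B = k$ with the trivial action of a finite group $G$, the smash product $k \# G = k[G]$ is the usual group algebra, and already in degree $0$ $$HH_0(k[G]) = k[G]/[k[G],k[G]] \cong \bigoplus_{\langle g \rangle} k,$$ with one summand for each conjugacy class $\langle g \rangle \subset G$; the decomposition of the full periodic cyclic homology refines this splitting.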
The quasi-Frobenius map $\Phi$ induces a map $${\varphi}:{\widetilde}{HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)} \cong HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A \# G)_\sigma \to HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{\otimes p} \# G)_\sigma \cong HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A),$$ and since $p$ annihilates $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$, we have $${\widetilde}{HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)} \cong {\operatorname{\sf gr}}_F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) \cong HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)((u)).$$ The map ${\varphi}$ is the Cartier map of Theorem \[car\]. One then shows that it is an isomorphism; this requires one to assume that $A$ is smooth. The general case of Theorem \[car\] is handled by finding a replacement for a quasi-Frobenius map; as far as the cyclic homology is concerned, the argument stays the same. One first shows that for any unital associative algebra $A/k$, there exists a completely canonical diagram $$\begin{CD} A @<{\alpha}<< Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) @>{\Phi}>> P_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) @<{\beta}<< A^{\otimes p} \end{CD}$$ of DG algebras equipped with an action of $G={{\mathbb Z}}/p{{\mathbb Z}}$ and $G$-equivariant maps between them. The $G$-action on $A$ and $Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ is trivial. In addition, if $A$ is smooth, the map $$HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{\otimes p} \# G)_\sigma \to HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(P_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) \# G)_\sigma$$ induced by the map $\beta$ is an isomorphism (although in general, this isomorphism does not preserve the Hodge filtration).
Thus as before, $\Phi$ induces a canonical map $$\overline{{\varphi}}:HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A))((u)) \cong {\widetilde}{HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A))} \to HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A).$$ To construct the Cartier map for the algebra $A$, it remains to construct a map $$HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) \to HH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)).$$ To do this, one applies obstruction theory and shows that the map $\alpha:Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) \to A$ admits a splitting in the category ${\operatorname{\sf DG-Alg}}(k)$. The homology of the DG algebra $Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ is given by $$\label{Q.st} {{\mathcal{H}}}_i(Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)) = A \otimes {\operatorname{\sf St}}_i(k),$$ where ${\operatorname{\sf St}}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(k)$ is the dual $k$-Steenrod algebra — that is, the dual to the algebra of $k$-linear cohomological operations in cohomology with coefficients in $k$. We have ${\operatorname{\sf St}}_0(k) \cong {\operatorname{\sf St}}_1(k) \cong k$, and ${\operatorname{\sf St}}_i(k) = 0$ for $1 < i \leq 2p$. The map $\alpha:Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) \to A$ is an isomorphism in degree $0$. The splitting is constructed degree-by-degree. In degree $1$, the obstruction to splitting is exactly the same as the obstruction to lifting the algebra $A/k$ to the ring $W_2(k)$. In any higher degree $i > 1$, the obstruction lies in the Hochschild cohomology group $HH^{2+i}(A \otimes {\operatorname{\sf St}}_i(k))$, and this vanishes in the relevant range of degrees by the assumption $HH^i(A) = 0$, $i \geq 2p-1$. In the DG algebra case, the construction breaks down since Lemma \[tt.le\] does not have a DG version.
Thus one first has to replace a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ with a cosimplicial algebra ${{\cal A}}$ by the Dold-Kan equivalence, and then apply the above construction to ${{\cal A}}$ “pointwise”. It is at this point that one has to require $A^i=0$ for $i < 0$. Although [@K] only provides a Cartier map for DG algebras defined over a finite field $k$, the same technology should apply to DG algebras over $W=W(k)$ with very few changes, so that for any smooth DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}/W(k)$ with $HH^i(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})=0$ for $i \geq 2p-1$, one should be able to construct a canonical isomorphism $${\widetilde}{{\varphi}}:{\widetilde}{HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})} \cong HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}).$$ Equivalently, $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ should carry a filtered Dieudonné module structure (in other words, underlie a canonical object of the periodic derived category ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}^{per}(W)$). One also should be able to check that if $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is Morita-equivalent to a smooth variety $X/W$, the comparison isomorphism is compatible with the filtered Dieudonné module structures on both sides. However, at present, none of this has been done. We note that the problem with the regulator map in the $p$-adic setting mentioned at the end of Section \[fdm.1.sec\] survives in the non-commutative situation. Namely, the standard technology for constructing the regulator map ([@Lo Section 8.4]) involves considering the group algebras $k[G]$ for $G = GL_n(A)$, for all $n \geq 1$.
As $n$ goes to infinity, the homological dimension of these group algebras becomes arbitrarily large, and the conditions of Theorem \[car\] cannot be satisfied. Generalities on stable homotopy. ================================ The appearance of the Steenrod algebra in the homology of the DG algebras $Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ suggests that the whole story should be related to algebraic topology. This is indeed so. To explain the relation, we need to recall some standard facts on stable homotopy theory. Stable homotopy category and homology. -------------------------------------- Roughly speaking, the [*stable homotopy category*]{} ${\operatorname{\sf StHom}}$ is obtained by inverting the suspension functor $\Sigma$ in the category ${\operatorname{\sf Hom}}$ of pointed CW complexes and homotopy classes of maps between them. Objects of ${\operatorname{\sf StHom}}$ are called [*spectra*]{}. A spectrum consists of a collection of pointed CW complexes $X_i$, $i \geq 0$, and maps $\Sigma X_i \to X_{i+1}$ for all $i$ (in some treatments, these data are required to satisfy additional technical conditions). For the definitions of maps between spectra and homotopies between such maps, we refer the reader to a number of standard references, for example [@adams]. Any CW complex $X \in {\operatorname{\sf Hom}}$ defines its [*suspension spectrum*]{} $\Sigma^\infty X \in {\operatorname{\sf StHom}}$ consisting of the suspensions $\Sigma^i X$. For any two CW complexes $X$, $Y$, we have $${\operatorname{Hom}}_{{\operatorname{\sf StHom}}}(\Sigma^\infty X,\Sigma^\infty Y) = \lim_{\overset{i}{\to}}[\Sigma^iX,\Sigma^iY],$$ where $[-,-]$ denotes the set of homotopy classes of maps. Any complex of abelian groups $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ defines a spectrum ${\operatorname{\sf EM}}(M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ called the [*Eilenberg-Maclane spectrum of $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$*]{}.
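For orientation, two standard examples: the suspension spectrum of $S^0$ is the sphere spectrum ${{\mathbb S}}$, for which the formula above gives $${\operatorname{Hom}}_{{\operatorname{\sf StHom}}}({{\mathbb S}},{{\mathbb S}}) = \lim_{\overset{i}{\to}}[S^i,S^i] \cong {{\mathbb Z}},$$ computed by the degree of a map of spheres; and for a single abelian group $M$ placed in degree $0$, ${\operatorname{\sf EM}}(M)$ is the classical Eilenberg-Maclane spectrum whose terms are the spaces $K(M,i)$.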
This is functorial in $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$, so that for any commutative ring $R$, we have a functor $${\operatorname{\sf EM}}:{{\cal D}}(R) \to {{\cal D}}({\operatorname{Ab}}) \to {\operatorname{\sf StHom}},$$ where ${{\cal D}}(R)$ is the derived category of the category of $R$-modules. This functor has a left-adjoint $H(R):{\operatorname{\sf StHom}}\to {{\cal D}}(R)$, known as [*homology with coefficients in $R$*]{}. The category ${\operatorname{\sf StHom}}$ is a tensor triangulated category. Both functors ${\operatorname{\sf EM}}$ and $H(R)$ are triangulated. Moreover, the homology functor $H(R)$ is a tensor functor – for any two spectra $X,Y \in {\operatorname{\sf StHom}}$ with smash-product $X \wedge Y$, there exists a functorial isomorphism $$H(R)(X) {\overset{\sf\scriptscriptstyle L}{\otimes}}_R H(R)(Y) \cong H(R)(X \wedge Y).$$ The adjoint Eilenberg-Maclane functor ${\operatorname{\sf EM}}$ is pseudotensor – we have a natural map $${\operatorname{\sf EM}}(V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}) \wedge {\operatorname{\sf EM}}(W_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}) \to {\operatorname{\sf EM}}(V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}{\overset{\sf\scriptscriptstyle L}{\otimes}}_R W_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$$ for any two objects $V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}},W_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {{\cal D}}(R)$. Thus for any associative ring object ${{\cal A}}$ in ${\operatorname{\sf StHom}}$, its homology $H(R)({{\cal A}})$ is a ring object in ${{\cal D}}(R)$, and conversely, for any associative ring object $A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {{\cal D}}(R)$, the Eilenberg-Maclane spectrum ${\operatorname{\sf EM}}(A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ is a ring object in ${\operatorname{\sf StHom}}$. 
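As an illustration (a standard fact, not specific to the present setting), for a suspension spectrum the homology functor returns the reduced chains, $H(R)(\Sigma^\infty X) \cong {\widetilde}{C}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(X,R)$, so that the tensor isomorphism above specializes to the derived Künneth quasiisomorphism $${\widetilde}{C}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(X,R) {\overset{\sf\scriptscriptstyle L}{\otimes}}_R {\widetilde}{C}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(Y,R) \cong {\widetilde}{C}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(X \wedge Y,R)$$ for pointed CW complexes $X$, $Y$.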
In the homological setting, we know that the structure of a “ring object in ${{\cal D}}(R)$” is too weak, and the right objects to consider are DG algebras over $R$. To define an analogous notion for spectra is non-trivial, since the traditional topological interpretation of spectra does not behave too well as far as the products are concerned. Fortunately, new models for ${\operatorname{\sf StHom}}$ have appeared more recently, such as for example [*$S$-modules*]{} of [@EMM], [*orthogonal spectra*]{} of [@MM], or [*symmetric spectra*]{} of [@HSS]. All these approaches give equivalent results; to be precise, let us choose for example the last one. As shown in [@HSS], symmetric spectra form a symmetric monoidal category; denote it by ${\operatorname{{\sf Sym}}}$. Then in this paper, a [*ring spectrum*]{} will denote a monoidal object in ${\operatorname{{\sf Sym}}}$, and ${\operatorname{\sf StAlg}}$ will denote the category of ring spectra considered up to a homotopy equivalence (formally, this is defined by putting a closed model structure on the category of ring monoidal objects in ${\operatorname{{\sf Sym}}}$ whose weak equivalences are homotopy equivalences of the underlying symmetric spectra). The homology functor $H(R)$ and the Eilenberg-Maclane functor ${\operatorname{\sf EM}}$ extend to functors $$H(R):{\operatorname{\sf StAlg}}\to {\operatorname{\sf DG-Alg}}(R),\qquad {\operatorname{\sf EM}}:{\operatorname{\sf DG-Alg}}(R) \to {\operatorname{\sf StAlg}},$$ where as in Section \[hc.sec\], ${\operatorname{\sf DG-Alg}}(R)$ is the category of DG algebras over $R$ considered up to a quasiisomorphism. Equivariant categories. {#equiv.subs} ----------------------- For any compact group $G$, a pointed “$G$-CW complex” is a pointed CW complex $X$ equipped with a continuous action of $G$ such that the fixed-point subset $X^g \subset X$ is a pointed subcomplex for any $g \in G$.
We will denote by ${\operatorname{\sf Hom}}(G)$ the category of pointed $G$-CW complexes and $G$-equivariant homotopy classes of $G$-equivariant maps between them. We note that for any closed subgroup $H \subset G$, sending $X$ to the fixed-point subspace $X^H \subset X$ gives a well-defined functor $${\operatorname{\sf Hom}}(G) \to {\operatorname{\sf Hom}}.$$ This functor is representable in the following sense: for any $X \in {\operatorname{\sf Hom}}(G)$, we have a homotopy equivalence $$\label{fxp.sp} X^H \cong {\operatorname{Maps}}_G([G/H]_+,X),$$ where ${\operatorname{Maps}}_G(-,-)$ means the space of $G$-equivariant maps with its natural topology, and $[G/H]_+$ is the pointed $G$-CW complex obtained by adding a (disjoint) marked point to the quotient $G/H$ with the induced topology and $G$-action. To define a stable version of the category ${\operatorname{\sf Hom}}(G)$, one could again simply invert the suspension functor. However, there is a more interesting alternative: by definition, $n$-fold suspension $\Sigma^n$ is the smash-product with an $n$-sphere, and in the equivariant setting, one can allow the sphere to carry a non-trivial $G$-action. The corresponding equivariant stable category has been constructed in [@may1]; it is known as the [*genuine $G$-equivariant stable homotopy category*]{} ${\operatorname{\sf StHom}}(G)$. To define it, one needs to fix a real representation $U$ of the group $G$ which is equipped with a $G$-invariant inner product and contains every finite-dimensional inner-product representation countably many times; this is called a “complete $G$-universe”. 
Then a genuine $G$-equivariant spectrum is a collection of $G$-CW complexes $X(V)$, one for each finite-dimensional $G$-invariant inner-product subspace $V \subset U$, and maps $S^W \wedge X(V) \to X(V \oplus W)$, one for each inner-product $G$-invariant subspace $V \oplus W \subset U$, where $S^V$ is the one-point compactification of the underlying topological space of the representation $V$, with its natural $G$-action. As in the non-equivariant case, ${\operatorname{\sf StHom}}(G)$ is a tensor triangulated category. We have a natural suspension spectrum functor $\Sigma^\infty:{\operatorname{\sf Hom}}(G) \to {\operatorname{\sf StHom}}(G)$, and for any two objects $X,Y \in {\operatorname{\sf Hom}}(G)$, we have $${\operatorname{Hom}}_{{\operatorname{\sf StHom}}(G)}(\Sigma^\infty X,\Sigma^\infty Y) = \lim_{\overset{V \subset U}{\to}}[S^V \wedge X, S^V \wedge Y]_G,$$ where $[-,-]_G$ is the set of $G$-homotopy classes of $G$-equivariant maps, and the limit is over all the finite-dimensional $G$-invariant inner-product subspaces $V \subset U$. The category ${\operatorname{\sf StHom}}(G)$ does depend on $U$, but this is not too drastic: all complete $G$-universes are isomorphic, and for any isomorphism $U \cong U'$ between complete $G$-universes, there is a “change of universe” functor which is an equivalence between the corresponding versions of ${\operatorname{\sf StHom}}(G)$. Forgetting the $G$-action gives a natural forgetful functor ${\operatorname{\sf StHom}}(G) \to {\operatorname{\sf StHom}}$, and equipping a spectrum with a trivial $G$-action gives an embedding ${\operatorname{\sf StHom}}\to {\operatorname{\sf StHom}}(G)$. Thus for any $X \in {\operatorname{\sf StHom}}$ and $Y \in {\operatorname{\sf StHom}}(G)$, we have a functorial smash product $X \wedge Y \in {\operatorname{\sf StHom}}(G)$. 
This has an adjoint: for any $X,Y \in {\operatorname{\sf StHom}}(G)$, we have a natural spectrum ${\operatorname{Maps}}_G(X,Y) \in {\operatorname{\sf StHom}}$ such that for any $Z \in {\operatorname{\sf StHom}}$, there is a functorial isomorphism $${\operatorname{Hom}}_{{\operatorname{\sf StHom}}(G)}(Z \wedge X,Y) \cong {\operatorname{Hom}}_{{\operatorname{\sf StHom}}}(Z,{\operatorname{Maps}}_G(X,Y)).$$ For any closed subgroup $H \subset G$ and any $X \in {\operatorname{\sf StHom}}(G)$, one can extend and define the fixed point spectrum $X^H$ by the same formula, $$\label{psi} X^H = {\operatorname{Maps}}_G(\Sigma^\infty[G/H]_+,X).$$ However, this does not commute with the suspension spectrum functor $\Sigma^\infty$. In [@may1], a second fixed-points functor is introduced, called the [*geometric fixed points functor*]{} and denoted $\Phi^H$. It does commute with $\Sigma^\infty$, and also commutes with smash products, so that there are functorial isomorphisms $$\Phi^H(\Sigma^\infty X) \cong \Sigma^\infty X^H, \qquad \Phi^H(X \wedge Y) \cong \Phi^H(X) \wedge \Phi^H(Y)$$ for any $X,Y \in {\operatorname{\sf StHom}}(G)$. For any $X \in {\operatorname{\sf StHom}}(G)$, there exists a canonical map $$\label{can.phi} {\operatorname{\sf can}}:X^H \to \Phi^H(X),$$ functorial in $X$. Moreover, let $N_H \subset G$ be the normalizer of the subgroup $H \subset G$, and let $W_H = N_H/H$ be the quotient. Then $\Phi^H$ can be extended to a functor $${\widehat}{\Phi}^H:{\operatorname{\sf StHom}}(G) \to {\operatorname{\sf StHom}}(W_H),$$ and the same is true for the usual fixed-points functor $X \mapsto X^H$ of . The map ${\operatorname{\sf can}}$ of then lifts to a map of $W_H$-equivariant spectra. Here if ${\operatorname{\sf StHom}}(G)$ is defined on a complete $G$-universe $U$, then ${\operatorname{\sf StHom}}(W_H)$ should be defined on the complete $W_H$-universe $U^H$. 
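The difference between the two fixed-points functors is visible already on suspension spectra. For a finite group $G$, while $\Phi^G(\Sigma^\infty X) \cong \Sigma^\infty X^G$ by the above, the spectrum $(\Sigma^\infty X)^G$ is computed by the tom Dieck splitting $$(\Sigma^\infty X)^G \cong \bigvee_{(H)} \Sigma^\infty \left((EW_H)_+ \wedge_{W_H} X^H\right),$$ where the sum is over conjugacy classes of subgroups $H \subset G$, and $EW_H$ is a contractible CW complex with a free $W_H$-action; the canonical map ${\operatorname{\sf can}}$ is then the projection onto the wedge summand corresponding to $H = G$.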
The functor ${\widehat}{\Phi}^H$ has a right-adjoint which is a fully faithful embedding ${\operatorname{\sf StHom}}(W_H) \to {\operatorname{\sf StHom}}(G)$ (for example, if $H=G$, then this is the trivial embedding ${\operatorname{\sf StHom}}\to {\operatorname{\sf StHom}}(G)$). Mackey functors. {#mackey.subs} ---------------- Assume from now on that the compact group $G$ is a finite group with the discrete topology. It is not difficult to extend the homology functor $H(R)$ to a functor $$H(R):{\operatorname{\sf StHom}}(G) \to {{\cal D}}(G,R)$$ with values in the derived category of $R[G]$-modules. However, this version of equivariant homology loses a lot of information such as fixed points. A more natural target for equivariant homology is the category of the so-called [*Mackey functors*]{}. To define them, one considers an additive category ${{\cal B}}_G$ whose objects are $G$-orbits $G/H$ for all subgroups $H \subset G$, and whose ${\operatorname{Hom}}$-groups are given by $$\label{b.g} \begin{aligned} {{\cal B}}^G([G/H_1],[G/H_2]) &= {\operatorname{Hom}}_{{\operatorname{\sf StHom}}(G)}(\Sigma^\infty[G/H_1]_+,\Sigma^\infty[G/H_2]_+)=\\ &= \pi_0({\operatorname{Maps}}_G(\Sigma^\infty[G/H_1]_+,\Sigma^\infty[G/H_2]_+)). \end{aligned}$$ An [*$R$-valued $G$-Mackey functor*]{} ([@dress], [@lind], [@tD], [@may2]) is an additive functor from ${{\cal B}}_G$ to the category of $R$-modules. The category of such functors is an abelian category, denoted ${\operatorname{\mathcal M}}(G,R)$. More explicitly, for any subgroups $H_1,H_2 \subset G$, one can consider the groupoid ${{\cal Q}}([G/H_1],[G/H_2])$ of diagrams $[G/H_1] \gets S \to [G/H_2]$ of finite sets equipped with a $G$-action, and isomorphisms between such diagrams.
Then disjoint union turns these groupoids into symmetric monoidal categories, the Cartesian product turns the collection ${{\cal Q}}(-,-)$ into a $2$-category with objects $[G/H]$, and it seems very likely that the mapping spectra ${\operatorname{Maps}}_G(\Sigma^\infty[G/H_1]_+,\Sigma^\infty[G/H_2]_+)$ are in fact obtained from the classifying spaces $|{{\cal Q}}([G/H_1],[G/H_2])|$ of symmetric monoidal groupoids ${{\cal Q}}([G/H_1],[G/H_2])$ by group completion. At present, this has not been proved ([@M0]); however, the corresponding isomorphism is well-known at the level of $\pi_0$: we have $$\pi_0({\operatorname{Maps}}_G(\Sigma^\infty[G/H_1]_+,\Sigma^\infty[G/H_2]_+)) \cong \pi_0(\Omega B |{{\cal Q}}([G/H_1],[G/H_2])|),$$ so that the groups ${{\cal B}}_G(-,-)$ are given by $$\label{b.g.bis} {{\cal B}}^G([G/H_1],[G/H_2]) = {{\mathbb Z}}[{{\operatorname{Iso}}}({{\cal Q}}([G/H_1],[G/H_2]))]/\{[S_1 \coprod S_2] - [S_1] - [S_2]\},$$ where ${{\operatorname{Iso}}}$ means the set of isomorphism classes of objects. For any $X \in {\operatorname{\sf StHom}}(G)$, individual homology groups $H_i(R)(X)$ can be equipped with a natural structure of a Mackey functor in such a way that $H_i(R)(X)([G/H]) \cong H_i(R)(X^H)$, $H \subset G$ (for more details, see [@may2]). To collect these into a single homology functor $H(R)$, one has to work out a natural derived version of the abelian category ${\operatorname{\mathcal M}}(G,R)$. This has been done recently in [@Ka-ma]. 
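In the simplest case $H_1 = H_2 = G$, the diagrams in question are just finite $G$-sets $S$ mapped to the point, composition is given by the product of $G$-sets, and the endomorphism ring ${{\cal B}}^G([G/G],[G/G])$ is the classical Burnside ring $A(G)$ of finite $G$-sets. For instance, for $G = {{\mathbb Z}}/p{{\mathbb Z}}$, $A(G)$ is generated by the classes $1 = [G/G]$ and $t = [G]$, and since $G \times G$ with the diagonal action is a disjoint union of $p$ copies of $G$, we have $$A({{\mathbb Z}}/p{{\mathbb Z}}) \cong {{\mathbb Z}}[t]/(t^2 - pt).$$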
Roughly speaking, instead of $\pi_0$ in , one should take the chain homology complexes $C_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(-,{{\mathbb Z}})$ of the corresponding spectra, and one should set $${{\cal B}}^G_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}([G/H_1],[G/H_2]) = C_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}({\operatorname{Maps}}_G(\Sigma^\infty[G/H_1]_+,\Sigma^\infty[G/H_2]_+),{{\mathbb Z}}).$$ In practice, one replaces this with complexes which compute the homology of the spectra obtained by group completion from the symmetric monoidal groupoids ${{\cal Q}}([G/H_1],[G/H_2])$. This can be computed explicitly, so that the complexes ${{\cal B}}^G_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(-,-)$ introduced in [@Ka-ma Section 3] are given by an explicit formula, and spectra are not mentioned at all. One then shows that the collection ${{\cal B}}^G_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(-,-)$ is an $A_\infty$-category in a natural way, and one defines the triangulated category ${{\cal D}{\cal M}}(G,R)$ of [*derived $R$-valued $G$-Mackey functors*]{} as the derived category of $A_\infty$-functors from ${{\cal B}}^G_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ to the category of complexes of $R$-modules. In general, the category ${{\cal D}{\cal M}}(G,R)$ turns out to be different from the derived category ${{\cal D}}({\operatorname{\mathcal M}}(G,R))$ (although both contain the abelian category ${\operatorname{\mathcal M}}(G,R)$ as a full subcategory). On the level of slogans, one can hope that the category ${{\cal D}{\cal M}}(G,R)$ is the “brave new product” of the category ${\operatorname{\sf StHom}}(G)$ and the derived category ${{\cal D}}(R)$ of $R$-modules, taken over the non-equivariant stable homotopy category ${\operatorname{\sf StHom}}$, so that we have a diagram $$\begin{CD} {{\cal D}}({\operatorname{\mathcal M}}(G,R)) @>>> {{\cal D}{\cal M}}(G,R) @>>> {\operatorname{\sf StHom}}(G)\\ @. @VVV @VVV\\ @.
{{\cal D}}(R) @>>> {\operatorname{\sf StHom}}, \end{CD}$$ where the square is Cartesian in some “brave new” sense. On a more mundane level, it is expected that the triangulated category ${{\cal D}{\cal M}}(G,R)$ reflects the structure of the category ${\operatorname{\sf StHom}}(G)$ in the following way. 1. There exists a symmetric tensor product $- \otimes -$ on the triangulated category ${{\cal D}{\cal M}}(G,R)$, and for any subgroup $H \subset G$, we have natural triangulated fixed-point functors $\Phi^H,\Psi^H:{{\cal D}{\cal M}}(G,R) \to {{\cal D}}(R)$. 2. There exists a natural triangulated equivariant homology functor $$H_G(R):{\operatorname{\sf StHom}}(G) \to {{\cal D}{\cal M}}(G,R)$$ and natural functorial isomorphisms $$\begin{aligned} \Phi^H(H_G(R)(X)) &\cong H(R)(\Phi^H(X)),\\ \Psi^H(H_G(R)(X)) &\cong H(R)(X^H),\\ H_G(X \wedge Y) &\cong H_G(X) \otimes H_G(Y) \end{aligned}$$ for any $X,Y \in {\operatorname{\sf StHom}}(G)$, $H \subset G$. In fact, most of these statements have been proved in [@Ka-ma], although only for the so-called “Spanier-Whitehead category”, the full triangulated subcategory in ${\operatorname{\sf StHom}}(G)$ spanned by the suspension spectra of finite CW complexes (the only thing not proved is the compatibility $\Psi^H(H_G(R)X) \cong H(R)(X^H)$ which requires one to leave the Spanier-Whitehead category). It has also been shown in [@Ka-ma] that as in the case of spectra, the fixed point functor $\Phi^H$ extends to a functor $$\label{phi.h} {\widehat}{\Phi}^H:{{\cal D}{\cal M}}(G,R) \to {{\cal D}{\cal M}}(W_H,R)$$ with a fully faithful right-adjoint. These fixed-points functors allow one to give a very explicit description of the category ${{\cal D}{\cal M}}(G,R)$.
Namely, let $I(G)$ be the set of conjugacy classes of subgroups in $G$, and for any $c \in I(G)$, let $${{\cal D}{\cal M}}_c(G,R) \subset {{\cal D}{\cal M}}(G,R)$$ be the full subcategory of such $M \in {{\cal D}{\cal M}}(G,R)$ that $\Phi^H(M) = 0$ unless $H \subset G$ is in the class $c$. For any $c \in I(G)$, ${{\cal D}{\cal M}}_c(G,R) \subset {{\cal D}{\cal M}}(G,R)$ is an admissible triangulated subcategory, and for any subgroup $H \subset G$ in the class $c$, the functor ${\widehat}{\Phi}^H$ of induces an equivalence $${\widehat}{{\varphi}}^H:{{\cal D}{\cal M}}_c(G,R) \cong {{\cal D}}(W_H,R).$$ Moreover, equip $I(G)$ with the partial order given by inclusion. Then it has been shown in [@Ka-ma] that unless $c \leq c'$, ${{\cal D}{\cal M}}_c(G,R)$ is left-orthogonal to ${{\cal D}{\cal M}}_{c'}(G,R)$, so that the subcategories ${{\cal D}{\cal M}}_c(G,R)$, $c \in I(G)$, form a semiorthogonal decomposition of the triangulated category ${{\cal D}{\cal M}}(G,R)$ indexed by the partially ordered set $I(G)$ (for generalities on semiorthogonal decompositions, see [@boka]). To describe the gluing data between the pieces of this semiorthogonal decomposition, one introduces the following. Assume given a finite group $G$ and a module $V$ over $R[G]$. The [*maximal Tate cohomology*]{} ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{max}(G,V)$ is given by $${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{max}(G,V) = {\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{{{\cal D}}^b(G,R)/{\operatorname{\sf Ind}}}(R,V),$$ where ${\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is computed in the quotient ${{\cal D}}^b(G,R)/{\operatorname{\sf Ind}}$ of the bounded derived category ${{\cal D}}^b(G,R)$ by the full saturated triangulated subcategory ${\operatorname{\sf Ind}}\subset {{\cal D}}^b(G,R)$ spanned by representations ${\operatorname{\sf Ind}}_G^H(W)$ induced from a representation $W$ of a subgroup $H \subset G$, $H \neq G$.
Then for any two subgroups $H \subset H' \subset G$ with conjugacy classes $c,c' \in I$, $c \leq c'$, the gluing functor between ${{\cal D}{\cal M}}_c(G,R)$ and ${{\cal D}{\cal M}}_{c'}(G,R)$ is expressed in terms of maximal Tate cohomology of the group $W_H$ and its various subgroups. This description turns out to be very effective because maximal Tate cohomology often vanishes. For example, if the order of the group $G$ is invertible in $R$, ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{max}(G,V)=0$ for any $R[G]$-module $V$, and the category ${{\cal D}{\cal M}}(G,R)$ becomes simply the direct sum of the categories ${{\cal D}{\cal M}}_c(G,R) \cong {{\cal D}}(W_H,R)$ (for the abelian category ${\operatorname{\mathcal M}}(G,R)$, a similar decomposition theorem was proved some time ago by J. Thevenaz [@Th1]). On the other hand, if $R$ is arbitrary but the group $G = {{\mathbb Z}}/n{{\mathbb Z}}$ is cyclic, then ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{max}(G,V)=0$ for any $V$ unless $n = p$ is prime, in which case ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{max}(G,V)$ reduces to the usual Tate cohomology ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(G,V)$. Cyclotomic traces. ================== Returning to the setting of Theorem \[car\], we can now explain the appearance of the Steenrod algebra in : up to a quasiisomorphism, the DG algebra $Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ of is in fact given by $$Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) = H(k)({\operatorname{\sf EM}}(A))^{k^*},$$ where the $k^*$-invariants are taken with respect to the natural action of the multiplicative group $k^*$ of the finite field $k$ induced by its action on $k$. In particular, this shows that it is not necessary to use dimension arguments to construct a splitting $A \to Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ of the augmentation map $Q_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A) \to A$.
For example, if we are given a ring spectrum ${{\cal A}}$ with homology DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}= H(k)({{\cal A}})$, then a canonical map $$\label{sp.spl} A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}= H(k)({{\cal A}}) \to H(k)({\operatorname{\sf EM}}(H(k)({{\cal A}})))$$ exists simply by adjunction, and being canonical, it is in particular $k^*$-invariant. Thus for any DG algebra of the form $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}= H(k)({{\cal A}})$, the same procedure as in the proof of Theorem \[car\] allows one to construct a Cartier map. However, in this case one can do much more – namely, one can compare the homological story with the theory of [*cyclotomic traces*]{} and [*topological cyclic homology*]{} known in algebraic topology. Let us briefly recall the setup (we mostly follow the very clear and concise exposition in [@HM]). Topological cyclic homology. ---------------------------- For any unital associative algebra $A$ over a ring $k$, the Hochschild homology complex $CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$ of Section \[hc.sec\] is in fact the standard complex of a simplicial $k$-module $A_\# \in \Delta^{opp}k{{\text{\rm -mod}}}$. [*Topological Hochschild homology*]{} is a version of this construction for ring spectra. It was originally introduced by Bökstedt [@bok] long before the invention of symmetric spectra, and used the technology of “functors with a smash product”. In the language of symmetric spectra, one starts with a unital associative ring spectrum ${{\cal A}}$, and one defines a simplicial spectrum ${{\cal A}}_\#$ by exactly the same formula as in the algebra case. The terms of ${{\cal A}}_\#$ are the iterated smash products ${{\cal A}}\wedge \dots \wedge {{\cal A}}$, and the face and degeneracy maps are obtained from the multiplication and the unit map in ${{\cal A}}$. 
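In the algebra case, these structure maps are the familiar ones of the cyclic bar construction: the term of $A_\#$ in degree $n$ is $A^{\otimes n+1}$, the face maps are given by $$d_i(a_0 \otimes \dots \otimes a_n) = a_0 \otimes \dots \otimes a_ia_{i+1} \otimes \dots \otimes a_n, \quad 0 \leq i < n, \qquad d_n(a_0 \otimes \dots \otimes a_n) = a_na_0 \otimes a_1 \otimes \dots \otimes a_{n-1},$$ and the degeneracy maps insert the unit $1 \in A$; the last face map $d_n$ is the only one that uses the cyclic symmetry of the tensor product, and it is ultimately responsible for the $S^1$-action. The spectrum-level maps are defined by the same formulas with $\otimes$ replaced by $\wedge$.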
Then one sets $${\operatorname{THH}}({{\cal A}}) = \hocolim_{\Delta^{opp}}{{\cal A}}_\#.$$ As in the algebra case, this spectrum is equipped with a canonical $S^1$-action, but in the topological setting this means much more: one shows that ${\operatorname{THH}}({{\cal A}})$ actually underlies a canonical $S^1$-equivariant spectrum ${\operatorname{THH}}({{\cal A}}) \in {\operatorname{\sf StHom}}(S^1)$. However, this is not the end of the story. Note that the finite subgroups in $S^1$ are the cyclic groups $C_n = {{\mathbb Z}}/n{{\mathbb Z}}\subset S^1$ numbered by integers $n \geq 1$, and for every $n$, we have $S^1/C_n \cong S^1$. Fix a system of such isomorphisms which are compatible with the embeddings $C_n \subset C_{nm} \subset S^1$, $n,m \geq 1$, and fix a compatible system of isomorphisms $U^{C_n} \cong U$, where $U$ is the complete $S^1$-universe used to define ${\operatorname{\sf StHom}}(S^1)$. Then the following notion has been introduced in [@BM]. \[cyclo\] A [*cyclotomic structure*]{} on an $S^1$-equivariant spectrum $T$ is given by a collection of $S^1$-equivariant homotopy equivalences $$r_n:{\widehat}{\Phi}^{C_n}T \cong T,$$ one for each finite subgroup $C_n \subset S^1$, such that $r_1= {\operatorname{\sf id}}$ and $r_n \circ r_m = r_{nm}$ for any two integers $n,m > 1$. \[cyclo.rem\] Here it is tacitly assumed that one works with a specific model of equivariant spectra, so that a spectrum means more than just an object of the triangulated category ${\operatorname{\sf StHom}}(S^1)$; moreover, the functors ${\widehat}{\Phi}^{C_n}$ are composed with the change of universe functors so that we can treat them as endofunctors of ${\operatorname{\sf StHom}}(S^1)$. Please refer to [@BM] or [@HM] for exact definitions. \[loop.exa\] Assume given a CW complex $X$, and let $LX = {\operatorname{Maps}}(S^1,X)$ be its free loop space.
Then for any finite subgroup $C \subset S^1$, the isomorphism $S^1 \cong S^1/C$ induces a homeomorphism $${\operatorname{Maps}}(S^1,X)^C = {\operatorname{Maps}}(S^1/C,X) \cong {\operatorname{Maps}}(S^1,X),$$ and these homeomorphisms provide a canonical cyclotomic structure on the suspension spectrum $\Sigma^\infty LX$. For any $S^1$-equivariant spectrum $T$ and a pair of integers $r,s > 1$, one has a natural non-equivariant map $$F_{r,s}:T^{C_{rs}} \to T^{C_r}.$$ On the other hand, assume that $T$ is equipped with a cyclotomic structure. Then we have a natural map $$\begin{CD} R_{r,s}:T^{C_{rs}} \cong (T^{C_s})^{C_r} @>{{\operatorname{\sf can}}}>> ({\widehat}{\Phi}^{C_s}T)^{C_r} @>{r_s}>> T^{C_r}, \end{CD}$$ where ${\operatorname{\sf can}}$ is the canonical map , and $r_s$ comes from the cyclotomic structure on $T$. To pack together the maps $F_{r,s}$, $R_{r,s}$, it is convenient to introduce a small category ${{\mathbb I}}$ whose objects are all integers $n \geq 1$, and whose maps are generated by two maps $F_r,R_r:n \to m$ for each pair $m$, $n=rm$, $r > 1$, subject to the relations $F_r \circ F_s = F_{rs}$, $R_r \circ R_s = R_{rs}$, $F_r \circ R_s = R_s \circ F_r$. Then the maps $R_{r,s}$, $F_{r,s}$ turn the collection $T^{C_n}$, $n \geq 1$ into a functor ${\widetilde}{T}$ from ${{\mathbb I}}$ to the category of spectra. The [*topological cyclic homology*]{} ${\operatorname{TC}}(T)$ of a cyclotomic spectrum $T$ is given by $${\operatorname{TC}}(T) = \holim_{{{\mathbb I}}} {\widetilde}{T}.$$ Given a ring spectrum ${{\cal A}}$, Bökstedt and Madsen equip the $S^1$-equivariant spectrum ${\operatorname{THH}}({{\cal A}})$ with a canonical cyclotomic structure.
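In practice, the limit over ${{\mathbb I}}$ is usually studied one prime at a time: for a fixed prime $p$, let ${{\mathbb I}}_p \subset {{\mathbb I}}$ be the full subcategory spanned by the powers $p^k$, $k \geq 0$, and set $${\operatorname{TC}}(T;p) = \holim_{{{\mathbb I}}_p} {\widetilde}{T}.$$ This $p$-typical version is the one actually computed in [@HM], and after pro-$p$ completion it carries essentially the same information as ${\operatorname{TC}}(T)$.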
[*Topological cyclic homology*]{} ${\operatorname{TC}}({{\cal A}})$ is then given by $${\operatorname{TC}}({{\cal A}}) = {\operatorname{TC}}({\operatorname{THH}}({{\cal A}})).$$ Further, they construct a canonical [*cyclotomic trace map*]{} $$\label{cyc.tr} K({{\cal A}}) \to {\operatorname{TC}}({{\cal A}})$$ from the $K$-theory spectrum $K({{\cal A}})$ to the topological cyclic homology spectrum. The topological cyclic homology functor ${\operatorname{TC}}({{\cal A}})$ and the cyclotomic trace were actually introduced by Bökstedt, Hsiang and Madsen in [@BHM]; the more convenient formulation using cyclotomic spectra appeared slightly later in [@BM]. Starting with [@BHM], it has been proved in many cases that the cyclotomic trace map becomes a homotopy equivalence after taking profinite completions of both sides of . Moreover, in [@mac] McCarthy generalized Goodwillie’s Theorem and proved that after pro-$p$ completion at any prime $p$, the cyclotomic trace gives an equivalence of the relative groups ${\widehat}{K({{\cal A}},I)}_p \cong {\widehat}{{\operatorname{TC}}({{\cal A}},I)}_p$, where $I \subset {{\cal A}}$ is a nilpotent ideal. Cyclotomic complexes. --------------------- To define a homological analog of cyclotomic spectra, one needs to replace $S^1$-equivariant spectra with derived Mackey functors. The machinery of [@Ka-ma] does not apply directly to non-discrete groups, since this would require treating the groupoids ${{\cal Q}}(-,-)$ of Subsection \[equiv.subs\] as topological groupoids. However, for finite subgroups $C_1,C_2 \subset S^1$, the category ${{\cal Q}}([S^1/C_1],[S^1/C_2])$ is still discrete. Thus one can define a restricted version of derived $S^1$-Mackey functors by discarding the only infinite closed subgroup in $S^1$ (which is $S^1$ itself). This is done in [@cyclo]. The category ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$ of [*$R$-valued cyclic Mackey functors*]{} introduced in that paper has the following features. 1.
For every proper finite subgroup $C = C_n \subset S^1$, $n > 1$, there is a fixed-point functor ${\widehat}{\Phi}_n:{{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R) \to {{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$ whose right-adjoint functor ${\widehat}{\iota}_n:{{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R) \to {{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$ is a full embedding. Moreover, there are canonical isomorphisms ${\widehat}{\Phi}_n \circ {\widehat}{\Phi}_m \cong {\widehat}{\Phi}_{mn}$. 2. Let ${{\cal D}}_{S^1}(R)$ be the equivariant derived category of Section \[hdr.sec\]. Then there is a full embedding $\iota_1:{{\cal D}}_{S^1}(R) \to {{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$ with a left-adjoint $\Phi_1:{{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R) \to {{\cal D}}_{S^1}(R)$. 3. The images ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}_n(R)$ of the full embeddings $\iota_n = {\widehat}{\iota}_n \circ \iota_1:{{\cal D}}_{S^1}(R) \to {{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$, $n \geq 1$, generate the triangulated category ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$, and ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}_n(R) \subset {{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$ is left-orthogonal to ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}_m(R) \subset {{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$ unless $n=mr$ for some integer $r \geq 1$. Thus as in the finite group case of [@Ka-ma], the subcategories ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}_n(R) \subset {{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$ form a semiorthogonal decomposition of the category ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$.
The gluing data between ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}_{mr}(R)$ and ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}_r(R)$ can be expressed in terms of the maximal Tate cohomology ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}_{max}(C_m,-)$ of the cyclic group $C_m = {{\mathbb Z}}/m{{\mathbb Z}}$. For any $n \geq 1$, let $\overline{\Phi}_n:{{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R) \to {{\cal D}}(R)$ be the composition of the left-adjoint $\Phi_n = \Phi_1 \circ {\widehat}{\Phi}_n$ to $\iota_n$ and the forgetful functor ${{\cal D}}_{S^1}(R) \to {{\cal D}}(R)$; then the functors $\overline{\Phi}_n$ play the role of fixed points functors $\Phi^H$. There are also functors $\Psi_n:{{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R) \to {{\cal D}}_{S^1}(R)$ analogous to the functors $\Psi^H$. The homology functor $H(R)$ extends to a functor $$H_{S^1}(R):{\operatorname{\sf StHom}}(S^1) \to {{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R),$$ and we have functorial isomorphisms $$\overline{\Phi}_n(H_{S^1}(R)(T)) \cong H(R)(\Phi^{C_n}(T)), \qquad \Psi_n(H_{S^1}(R)(T)) \cong H(R)(T^{C_n})$$ for every $n \geq 1$ and every $T \in {\operatorname{\sf StHom}}(S^1)$. Another category defined in [@cyclo] is a triangulated category ${{{\cal D}}\!\Lambda\text{{\upshape R}}}(R)$ of [*$R$-valued cyclotomic complexes*]{}. Essentially, a cyclotomic complex $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {{{\cal D}}\!\Lambda\text{{\upshape R}}}(R)$ is a cyclic Mackey functor $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ equipped with a system of compatible quasiisomorphisms $${\widehat}{\Phi}_nM_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\cong M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}},$$ as in Definition \[cyclo\] (although as in Remark \[cyclo.rem\], the precise definition is different for technical reasons). 
The homology functor $H_{S^1}(R):{\operatorname{\sf StHom}}(S^1) \to {{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$ extends to a functor from the category of cyclotomic spectra to the category ${{{\cal D}}\!\Lambda\text{{\upshape R}}}(R)$. Moreover, all the constructions used in the definition of topological cyclic homology make sense for cyclotomic complexes, so that one has a natural functor $${\operatorname{TC}}:{{{\cal D}}\!\Lambda\text{{\upshape R}}}(R) \to {{\cal D}}(R)$$ and a functorial isomorphism $$\label{tc.tc} {\operatorname{TC}}(H_{S^1}(R)(T)) \cong H(R)({\operatorname{TC}}(T))$$ for every cyclotomic spectrum $T$. Comparison theorem. ------------------- We can now formulate the comparison theorem relating Dieudonné modules and cyclotomic complexes. We introduce the following definition. \[gfdm\] A [*generalized filtered Dieudonné module*]{} $M$ over a commutative ring $R$ is an $R$-module $M$ equipped with a decreasing filtration $F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}M$ and a collection of maps $${\varphi}^p_{i,j}:F^iM \to M/p^j,$$ one for every pair of integers $i$, $j$ with $j \geq 1$ and every prime $p$, such that $${\varphi}^p_{i,j+1} = {\varphi}^p_{i,j} \mod p^j, \quad {\varphi}^p_{i,j}|_{F^{i+1}M} = p{\varphi}^p_{i+1,j}.$$ For any integer $i$, we define the generalized filtered Dieudonné module $R(i)$ as $R$ with the filtration $F^iR(i)=R$, $F^{i+1}R(i)=0$, and ${\varphi}^p_{i,j} = p^i{\operatorname{\sf id}}$ for any $p$ and $j$. Generalized filtered Dieudonné modules in the sense of Definition \[gfdm\] do not form an abelian category; however, by inverting the filtered quasiisomorphisms, we can still construct the derived category ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}_{g}(R)$ and its twisted $2$-periodic version ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}_{g}^{per}(R)$. Definition \[gfdm\] generalizes in that it collects together the data for all primes $p$.
Note, however, that one can rephrase Definition \[gfdm\] by putting together all the maps ${\varphi}^p_{i,j}$, $j \geq 1$, into a single map $${\widehat}{{\varphi}}^p_i:F^iM \to {\widehat}{(M)}_p$$ into the pro-$p$ completion ${\widehat}{(M)}_p$ of the module $M$. Then if $R = {{\mathbb Z}}_p$ and $M$ is finitely generated over ${{\mathbb Z}}_p$, we have $${\widehat}{(M)}_p \cong M, \quad {\widehat}{(M)}_l = 0 \text{ for } l \neq p,$$ so that for such an $M$, the extra data imposed onto $M$ in Definition \[gfdm\] and in Definition \[fdm.defn\] are the same. In general, for any prime $p$, we have a fully faithful embedding $${\widetilde}{{{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}}({{\mathbb Z}}_p) \subset {{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}_{g}({{\mathbb Z}}),$$ where ${\widetilde}{{{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}}({{\mathbb Z}}_p)$ is as in Section \[fdm.1.sec\], and similarly for the periodic categories. The essential images of these embeddings are spanned by complexes which are pro-$p$ complete as complexes of abelian groups. Note, however, that what appears here are [*weak*]{} filtered Dieudonné modules. The requirement that the map is a quasiisomorphism can be additionally imposed at each individual prime $p$; I do not know whether it is useful to impose it in the universal category ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}_{g}({{\mathbb Z}})$. Here is then the main comparison theorem of [@cyclo]. \[cyclo.thm\] For any commutative ring $R$, there is a canonical equivalence of categories $${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}_{g}^{per}(R) \cong {{{\cal D}}\!\Lambda\text{{\upshape R}}}(R).$$ Thus the category ${{{\cal D}}\!\Lambda\text{{\upshape R}}}(R)$ of cyclotomic complexes over $R$ admits an extremely simple linear-algebraic description. 
Roughly speaking, the reason for this is the vanishing of maximal Tate cohomology ${\check{H}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}/n{{\mathbb Z}},-)$ for non-prime $n$ mentioned at the end of Subsection \[mackey.subs\]. Due to this vanishing, the only non-trivial gluing between the pieces ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}_m(R)$, ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}_n(R)$ of the semiorthogonal decomposition of the category ${{{\cal D}}\!{\operatorname{\mathcal M}}\!\Lambda}(R)$ of cyclic Mackey functors occurs when $n = mp$ for a prime $p$ (and this gluing is described by the Tate cohomology of the group ${{\mathbb Z}}/p{{\mathbb Z}}$). The gluing data provide the maps ${\widehat}{{\varphi}}^p_i$ in the equivalence of Theorem \[cyclo.thm\]; the periodic filtered complex comes from the equivalence ${{\cal D}}_{S^1}(R) \cong {{{\cal D}}{\mathcal F}}^{per}(R)$ of Lemma \[trivial.lemma\]. These are the main ideas of the proof. Moreover, there is a second comparison theorem which expresses topological cyclic homology in terms of generalized Dieudonné modules. \[tc.thm\] Under the equivalence of Theorem \[cyclo.thm\], there is a functorial isomorphism $$\label{tc.eq} {\widehat}{{\operatorname{TC}}(M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})}_f \cong {\widehat}{{\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(R,M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})}_f$$ for any $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {{{\cal D}}\!\Lambda\text{{\upshape R}}}(R)$, where $R = R(0) \in {{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}^{per}_{g}(R)$ is the trivial generalized filtered Dieudonné module, and ${\widehat}{(-)}_f$ stands for profinite completion.
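For orientation, the profinite completion ${\widehat}{(-)}_f$ in Theorem \[tc.thm\] can be thought of prime by prime: for bounded-below complexes of finite type, it splits as a product of the pro-$p$ completions discussed above (a standard decomposition, stated here without the precise boundedness and finiteness caveats):

```latex
% Profinite completion as the product of p-primary pieces:
\widehat{(M)}_f \;\cong\; \prod_{p\ \mathrm{prime}} \widehat{(M)}_p,
\qquad
\widehat{(M)}_p \;=\; \varprojlim_j\, M/p^j .
```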
Both ${\operatorname{TC}}(-)$ and ${\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(R,-)$ commute with profinite completions, so that if $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ itself is profinitely complete, the completions in can be dropped. In general, it is better to keep the completion; to obtain an isomorphism in the general case, one should, roughly speaking, replace $T$ with the homotopy fixed points $T^{hS^1}$ in the definition of topological cyclic homology ${\operatorname{TC}}(T)$. It is not unreasonable to hope that Theorem \[tc.thm\] has a topological analog: one can define a triangulated category of cyclotomic spectra which is enriched over ${\operatorname{\sf StHom}}$, and then for any profinitely complete cyclotomic spectrum $T$, we have a natural homotopy equivalence $${\operatorname{TC}}(T) \cong {\operatorname{Maps}}({{\mathbb S}},T),$$ where ${\operatorname{Maps}}(-,-)$ is the mapping spectrum in the cyclotomic category, and ${{\mathbb S}}= \Sigma^\infty{{\sf pt}}$ is the sphere spectrum with the trivial cyclotomic structure (obtained as in Example \[loop.exa\]). This would give a conceptual replacement of the somewhat [*ad hoc*]{} definition of the functor ${\operatorname{TC}}$. Back to ring spectra. --------------------- Return now to our original situation: we have a ring spectrum ${{\cal A}}\in {\operatorname{\sf StAlg}}$, and the DG algebra $A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}= H(W)({{\cal A}})$ is obtained as its homology with coefficients in the Witt vector ring $W= W(k)$ of a finite field $k$. Assume for simplicity that $k = {{\mathbb Z}}/p{{\mathbb Z}}$ is a prime field, so that $W={{\mathbb Z}}_p$. 
Then on one hand, we have the cyclotomic spectrum ${\operatorname{THH}}({{\cal A}})$ of [@BM], and since the homology functor $H({{\mathbb Z}}_p)$ commutes with tensor products, we have a quasiisomorphism $$H({{\mathbb Z}}_p)({\operatorname{THH}}({{\cal A}})) \cong CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}).$$ But the left-hand side underlies a cyclotomic complex, and by Theorem \[cyclo.thm\], this is equivalent to saying that it has a structure of a generalized filtered Dieudonné module. And on the other hand, $CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ has a Dieudonné module structure induced by the splitting map . We expect that the two structures coincide (although at present, this has not been checked). Moreover, the functor ${\operatorname{TC}}$ commutes with $H({{\mathbb Z}}_p)$ by , and Theorem \[tc.thm\] shows that we have $$\begin{aligned} H({{\mathbb Z}}_p)({\operatorname{TC}}({{\cal A}})) &\cong H({{\mathbb Z}}_p)({\operatorname{TC}}({\operatorname{THH}}({{\cal A}}))) \cong {\operatorname{TC}}(CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}))\\ &\cong {\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}_p,CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)), \end{aligned}$$ where ${\operatorname{RHom}}^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(-,-)$ is taken in the category ${{{\cal D}}{\mathcal F}{{\cal D}}\!{\operatorname{\mathcal M}}}^{per}({{\mathbb Z}}_p)$ of filtered Dieudonné modules. In other words: - The homology functor $H({{\mathbb Z}}_p)$ sends topological cyclic homology into syntomic periodic cyclic homology. This principle can be used to study further the regulator map for syntomic homology. 
Namely, applying $H({{\mathbb Z}}_p)$ to the cyclotomic trace map , we obtain a functorial map $$H({{\mathbb Z}}_p)(K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}({{\cal A}})) \to H({{\mathbb Z}}_p)({\operatorname{TC}}({{\cal A}})),$$ and the right-hand side is the target of the desired regulator map for the DG algebra $A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$. The desired source of this map is $K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}) \cong K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(H({{\mathbb Z}}_p)({{\cal A}}))$. Thus the question of existence of the syntomic regulator maps reduces to a problem in algebraic $K$-theory: describe the relation between the homology of the $K$-theory of a ring spectrum, and the $K$-theory of its homology. To finish the Section, let us explain how things work in a very simple particular case. Assume given a CW complex $X$, and let ${{\cal A}}= \Sigma^\infty \Omega X$, the suspension spectrum of the based loop space $\Omega X$. Then since $\Omega X$ is a topological monoid, ${{\cal A}}$ is a ring spectrum. The DG algebra $A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}= H({{\mathbb Z}}_p)({{\cal A}})$ is given by $A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}= C_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(\Omega X,{{\mathbb Z}}_p)$, the singular chain complex of the topological space $\Omega X$. It is known that in this case, we have $$CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}) \cong C_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(LX,{{\mathbb Z}}_p),$$ the singular chain complex of the free loop space $LX$. Analogously, we have ${\operatorname{THH}}({{\cal A}}) \cong \Sigma^\infty LX$. The $S^1$-action on ${\operatorname{THH}}({{\cal A}})$ and $CH_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ is induced by the loop rotation action on $LX$. 
The cyclotomic structure on ${\operatorname{THH}}({{\cal A}})$ is that of Example \[loop.exa\]. The corresponding Dieudonné module structure map ${\varphi}$ on $CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ is induced by the cyclotomic structure map $LX^{{{\mathbb Z}}/p{{\mathbb Z}}} \cong LX$ of the free loop space $LX$. To compare this with the constructions of Section \[fdm.2.sec\], specialize even further and assume that $\Omega X$ is discrete, so that $A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ is quasiisomorphic to an algebra $A$ concentrated in degree $0$. In this case $X \cong BG$ for a discrete group $G$, and $A = {{\mathbb Z}}_p[G]$ is its group algebra. Then the diagonal map $G \to G^p$ induces a map $A \to A^{\otimes p}$ which is a quasi-Frobenius map in the sense of Section \[fdm.2.sec\], and thus induces another Dieudonné module structure on the filtered complex $CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A)$. One checks easily that the two structures coincide. For a general $X$, the Dieudonné module structure on $CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ can also be described explicitly in the same way as in Section \[fdm.2.sec\], by using the map $$A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\to A_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}^{\otimes p}$$ induced by the diagonal map $\Omega X \to (\Omega X)^p$ in place of the quasi-Frobenius map. Hodge structures. ================= In the archimedean setting of Section \[mm.sec\], much less is known about periodic cyclic homology than in the non-archimedean setting of . One starts with a smooth proper DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ over ${{\mathbb C}}$ and considers its periodic cyclic homology complex $CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ with its Hodge filtration.
In order to equip $HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ with an ${{\mathbb R}}$-Hodge structure, one needs to define a weight filtration $W_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ and a complex conjugation isomorphism $\overline{\phantom{m}}:CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \to \overline{CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})}$. The gradings in the isomorphism suggest that $W_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ should be simply the canonical filtration of the complex $CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$. However, the complex conjugation is a complete mystery. There is only one approach known at present, albeit a very indirect and highly conjectural one; the goal of this section is to describe it. I have learned all this material from B. Toën and/or M. Kontsevich – it is only the mistakes here that are mine. The so-called [*${{\cal D}}^-$-stacks*]{} introduced by B. Toën and G. Vezzosi in [@ToVe2] generalize both Artin stacks and DG schemes and form the subject of what is now known as “derived algebraic geometry”; a very nice overview is available in [@toen-overview]. Very approximately, a ${{\cal D}}^-$-stack over a ring $k$ is a functor $${\operatorname{\mathcal M}}:\Delta^{opp}{\operatorname{\sf Comm}}(k) \to \Delta^{opp}{\operatorname{Sets}}$$ from the category of simplicial commutative algebras over $k$ to the category of simplicial sets. This functor should satisfy some descent-type conditions, and all such functors are considered up to an appropriately defined homotopy equivalence (made sense of by the technology of closed model structures).
This generalizes the Grothendieck approach to schemes which treats a scheme over $k$ as its functor of points – a sheaf of sets on the opposite ${\operatorname{\sf Comm}}(k)^{opp}$ to the category of commutative algebras over $k$. The category ${\operatorname{\sf Comm}}(k)$ is naturally embedded in $\Delta^{opp}{\operatorname{\sf Comm}}(k)$ as the subcategory of constant simplicial objects, and restricting a ${{\cal D}}^-$-stack ${\operatorname{\mathcal M}}$ to ${\operatorname{\sf Comm}}(k) \subset \Delta^{opp}{\operatorname{\sf Comm}}(k)$ gives an $\infty$-stack in the sense of Simpson [@Si] (this is called the [*truncation*]{} of ${\operatorname{\mathcal M}}$). If $k$ contains ${{\mathbb Q}}$, one may replace simplicial commutative algebras with commutative DG algebras $R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ over $k$ placed in non-negative homological degrees, $R_i = 0$ for $i < 0$. If we denote the category of such DG algebras by ${\operatorname{\sf DG-Comm}}^-(k)$, then a ${{\cal D}}^-$-stack is a functor $${\operatorname{\mathcal M}}:{\operatorname{\sf DG-Comm}}^-(k) \to \Delta^{opp}{\operatorname{Sets}},$$ again satisfying some conditions, and considered up to a homotopy equivalence. The category of ${{\cal D}}^-$-stacks over $k$ is denoted ${\operatorname{{\mathcal D}^-{\sf st}}}(k)$. For every DG algebra $R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf DG-Comm}}^-(k)$, its [*derived spectrum*]{} ${\operatorname{R{\operatorname{Spec}}}}(R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}) \in {\operatorname{{\mathcal D}^-{\sf st}}}(k)$ sends a DG algebra $R'_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf DG-Comm}}^-(k)$ to the simplicial set of maps from $R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ to $R'_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$, with the simplicial structure induced by the model structure on the category ${\operatorname{\sf DG-Comm}}^-(k)$.
We thus obtain a Yoneda-type embedding $${\operatorname{R{\operatorname{Spec}}}}:{\operatorname{\sf DG-Comm}}^-(k)^{opp} \to {\operatorname{{\mathcal D}^-{\sf st}}}(k).$$ For any DG algebra $R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf DG-Comm}}^-(k)$, its de Rham cohomology complex $\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ is defined in the obvious way; $\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(-)$ gives a functor $$\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}:{\operatorname{\sf DG-Comm}}^-(k) \to {\operatorname{\rm Spaces}}_{{\mathbb Q}}$$ from ${\operatorname{\sf DG-Comm}}^-(k)$ to the category ${\operatorname{\rm Spaces}}_{{\mathbb Q}}$ of rational homotopy types in the sense of Quillen [@Q]. By the standard Kan extension machinery, $\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ extends to a de Rham realization functor $$\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}:{\operatorname{{\mathcal D}^-{\sf st}}}(k) \to {\operatorname{\rm Spaces}}_{{\mathbb Q}}.$$ Alternatively, one can take the $0$-th homology algebra $H_0(R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ and consider its crystalline cohomology; this gives a DG algebra quasiisomorphic to $\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}(R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ (the higher homology groups behave as nilpotent extensions and do not contribute to cohomology). This shows that the de Rham realization $\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({\operatorname{\mathcal M}})$ of a ${{\cal D}}^-$-stack ${\operatorname{\mathcal M}}\in {\operatorname{{\mathcal D}^-{\sf st}}}(k)$ only depends on its truncation.
Moreover, for ${{\cal D}}^-$-stacks satisfying a certain finiteness condition (“locally geometric” and “locally finitely presented” in the sense of [@ToVa]), instead of considering de Rham cohomology, one can take the underlying topological spaces ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(R))$ of the simplicial complex algebraic varieties ${\operatorname{\mathcal M}}(R)$, $R \in {\operatorname{\sf Comm}}(k)$; by Kan extension, this gives a topological realization functor $${\operatorname{\rm Top}}:{\operatorname{{\mathcal D}^-{\sf st}}}(k) \to {\operatorname{\rm Spaces}}$$ into the category of topological spaces. By the standard comparison theorems, ${\operatorname{\rm Top}}({\operatorname{\mathcal M}})$ and $\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({\operatorname{\mathcal M}})$ represent the same rational homotopy type. Now, it has been proved in [@ToVa] that for any associative unital DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ over $k$, there exists a ${{\cal D}}^-$-stack ${\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ classifying “finite-dimensional DG modules over $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$”. By definition, for any commutative DG algebra $R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf DG-Comm}}^-(k)$, the simplicial set ${\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})(R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ is given by - ${\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})(R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ is the nerve of the category ${\operatorname{\sf Perf}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}},R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ of DG modules over $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}\otimes R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ which are perfect over $R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$, and quasiisomorphisms between such DG modules.
Toën and Vaquié prove that this indeed defines a ${{\cal D}}^-$-stack. Moreover, they prove that if $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ satisfies certain finiteness conditions, the ${{\cal D}}^-$-stack ${\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ is locally geometric and locally finitely presented. In particular, a smooth and proper DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf DG-Alg}}(k)$ satisfies the finiteness conditions needed for [@ToVa], so that there exists a locally geometric and locally finitely presented ${{\cal D}}^-$-stack ${\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$. Consider its de Rham realization $\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$. For any $R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf DG-Comm}}^-(k)$, the category ${\operatorname{\sf Perf}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}},R_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}})$ is a symmetric monoidal category with respect to the direct sum, so that the realization ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$ is automatically an $E_\infty$-space. \[to.1\] The $E_\infty$-space ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$ is group-like. [[*Sketch of a possible proof.*]{}]{} One has to show that $\pi_0({\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})))$ is not only a commutative monoid but also an abelian group. A point in ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$ is represented by a DG module $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ over $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ which is perfect over $k$. 
One observes that $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\oplus M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}[1]$ can be deformed to an acyclic DG module; thus the sum of points represented by $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ and $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}[1]$ lies in the connected component of $0$ in ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$. Thus for any smooth and proper DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf DG-Alg}}(k)$, the realization ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$ is an infinite loop space, that is, the $0$-th component of a spectrum. \[K.st\] The [*semi-topological $K$-theory*]{} $K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}^{st}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ of a smooth and proper DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is given by $$K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}^{st}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) = \pi_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}({\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))),$$ the homotopy groups of the infinite loop space ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$. If we are only interested in $K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \otimes k$, we may compute it using the de Rham model $\Omega^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$.
Then $K_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}^{st}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \otimes k$ is exactly the complex of primitive elements with respect to the natural cocommutative coalgebra structure on ${\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ induced by the direct sum map $${\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \times {\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \to {\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}).$$ Since ${{\mathbb Q}}\subset k$, and rationally, spectra are the same as complexes of ${{\mathbb Q}}$-vector spaces, the groups $K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \otimes k$ are the only rational invariants one can extract from the space ${\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$. Assume for the moment that $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf DG-Alg}}(k)$ is derived-Morita equivalent to a smooth and proper algebraic variety $X/k$. Then one can also consider the $\infty$-stack $\overline{{\operatorname{\mathcal M}}}(X)$ of all coherent sheaves on $X$; for any noetherian $R \in {\operatorname{\sf Comm}}(k)$, $\overline{{\operatorname{\mathcal M}}}(X)(R)$ is by definition the nerve of the category of coherent sheaves on $X \otimes R$ and isomorphisms between them. The realization ${\operatorname{\rm Top}}(\overline{{\operatorname{\mathcal M}}}(X))$ is again an $E_\infty$-space, no longer group-like. By definition, we have a natural map $$\overline{{\operatorname{\mathcal M}}}(X) \to {\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}),$$ and the induced $E_\infty$-map of realizations.
\[to.2\] The natural $E_\infty$-map $$\label{bar.m} {\operatorname{\rm Top}}(\overline{{\operatorname{\mathcal M}}}(X)) \to {\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$$ induces a homotopy equivalence between ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$ and the group completion of the $E_\infty$-space ${\operatorname{\rm Top}}(\overline{{\operatorname{\mathcal M}}}(X))$. [[*Sketch of a possible proof.*]{}]{} Since ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$ is group-like by Lemma \[to.1\], it suffices to prove that the delooping $$B{\operatorname{\rm Top}}(\overline{{\operatorname{\mathcal M}}}(X)) \to B{\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$$ of the $E_\infty$-map is a homotopy equivalence. Delooping obviously commutes with geometric realization, so that $B{\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$ is the realization of the ${{\cal D}}^-$-stack $B{\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$, and similarly for $B{\operatorname{\rm Top}}(\overline{{\operatorname{\mathcal M}}}(X))$. Instead of taking deloopings, we can apply Waldhausen’s $S$-construction. The resulting map $$S\overline{{\operatorname{\mathcal M}}}(X) \to S{\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$$ is then an equivalence by Waldhausen’s devissage theorem, so that it suffices to prove that the natural map $${\operatorname{\rm Top}}(B{\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})) \to {\operatorname{\rm Top}}(S{\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$$ is a homotopy equivalence, and similarly for $\overline{{\operatorname{\mathcal M}}}(X)$. 
For this, one argues as in Lemma \[to.1\]: since every filtered complex can be canonically deformed to its associated graded quotient, the terms ${\operatorname{\rm Top}}(S_n{\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$ of the $S$-construction can be retracted to $n$-fold products ${\operatorname{\rm Top}}({\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \times \dots \times {\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$, that is, the terms of the delooping ${\operatorname{\rm Top}}(B{\operatorname{\mathcal M}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}))$, and similarly for $\overline{{\operatorname{\mathcal M}}}(X)$. \[to.c\] The semi-topological $K$-theory $K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(k)$ is given by $$K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(k) \cong {{\mathbb Z}}[\beta],$$ the algebra of polynomials in one generator $\beta$ of degree $2$. [[*Proof.*]{}]{} By Lemma \[to.2\], computing $K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(k)$ reduces to studying the group completion of the realization $${\operatorname{\rm Top}}(\overline{{\operatorname{\mathcal M}}}({{\sf pt}})) \cong \coprod_n {\operatorname{\rm Top}}([{{\sf pt}}/GL_n]) \cong \coprod_n BU_n,$$ where $[{{\sf pt}}/GL_n]$ is the Artin stack obtained as the quotient of the point by the trivial action of the algebraic group $GL_n$. This group completion is well-known to be homotopy equivalent to the classifying space ${{\mathbb Z}}\times BU$. At present, Lemma \[to.1\] and Lemma \[to.2\] are unpublished, as well as Corollary \[to.c\]. The above sketches of proofs have been kindly explained to me by B. Toën. Lemma \[to.1\] is slightly older, and it also appears for example in [@To.lec].
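For reference, the computation in Corollary \[to.c\] is Bott periodicity in a thin disguise: the homotopy groups of the group completion ${{\mathbb Z}}\times BU$ are

```latex
% Bott periodicity for the space Z x BU:
\pi_n(\mathbb{Z}\times BU) \;\cong\;
\begin{cases}
  \mathbb{Z}, & n \text{ even},\\
  0,          & n \text{ odd},
\end{cases}
\qquad n \geq 0,
```

and multiplication by the Bott class $\beta \in \pi_2({{\mathbb Z}}\times BU)$ identifies the even homotopy groups with one another, so that $\pi_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}({{\mathbb Z}}\times BU) \cong {{\mathbb Z}}[\beta]$ with $\deg \beta = 2$, which is exactly the polynomial algebra appearing in Corollary \[to.c\].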
Now, since $k \supset {{\mathbb Q}}$ by our assumption, we have a well-defined tensor product $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}\otimes V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ for any DG module $M_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ over $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ and every complex $V_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}$ of ${{\mathbb Q}}$-vector spaces. On the level of the stacks ${\operatorname{\mathcal M}}(-)$, this tensor product turns $K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ into a module over $K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}({{\mathbb Q}}) = {{\mathbb Z}}[\beta]$. We can now state the main conjecture. \[conj.1\] Assume that $k$ is a ring containing ${{\mathbb Q}}$, and assume that a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}\in {\operatorname{\sf DG-Alg}}(k)$ is smooth and proper. Then there exists a map $$c:K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \to HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$$ such that $c(\beta(\alpha)) = u(c(\alpha))$ for any $\alpha \in K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$, where $u$ is the periodicity map. The map $c$ is functorial in $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$. Moreover, the induced map $$\label{ov.hp} K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \otimes_{{{\mathbb Z}}[\beta]} k[\beta,\beta^{-1}] \to HP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$$ is an isomorphism.
The reason this conjecture is relevant to the present paper is that the tensor product $K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \otimes k$ by its very definition has all the structures possessed by the de Rham cohomology of an algebraic variety. In particular, if $k = {{\mathbb C}}$, $K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ has a canonical real structure. \[conj.2\] Assume that $K = {{\mathbb C}}$, and assume given a smooth and proper DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}/K$ for which Conjecture \[conj.1\] holds. Equip $CP_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ with the real structure induced from the canonical real structure on $K^{st}_{{\:\raisebox{1pt}{\text{\circle*{1.5}}}}}(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}) \otimes K$ by the isomorphism \[ov.hp\]. Then for any integer $i$, the periodic cyclic homology group $HP_i(A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}})$ with this real structure and the standard Hodge filtration $F^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ is a pure ${{\mathbb R}}$-Hodge structure of weight $i$. The two conjectures above are a slight refinement and/or reformulation of a conjecture made by B. Toën [@To.lec] with a reference to A. Bondal and A. Neeman, and described by L. Katzarkov, M. Kontsevich and T. Pantev in [@KS 2.2.6]. Apart from the basic case $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}= k$ of Corollary \[to.c\], the only real evidence for Conjecture \[conj.1\] comes from recent work of Friedlander and Walker [@FW], where it has been essentially proved for a DG algebra $A^{{\:\raisebox{3pt}{\text{\circle*{1.5}}}}}$ equivalent to a smooth projective algebraic variety $X/k$.
The definition of semi-topological $K$-theory used in [@FW] is different from Definition \[K.st\], but it is very close to the homotopy groups of the group completion of the $E_\infty$-space ${\operatorname{\rm Top}}(\overline{{\operatorname{\mathcal M}}}(X))$; Lemma \[to.2\] should then show that the two things are the same. Friedlander and Walker also show that their constructions are compatible with the complex conjugation, so that Conjecture \[conj.2\] then follows by the usual Hodge theory applied to $X$. In the general case, as far as I know, both Conjecture \[conj.1\] and Conjecture \[conj.2\] are completely open. They are now a subject of investigation by B. Toën and A. Blanc. Acknowledgements. {#acknowledgements. .unnumbered} ----------------- I have benefited a lot from discussing this material with A. Beilinson, R. Bezrukavnikov, A. Bondal, V. Drinfeld, L. Hesselholt, V. Ginzburg, L. Katzarkov, D. Kazhdan, B. Keller, M. Kontsevich, A. Kuznetsov, N. Markarian, J.P. May, G. Merzon, D. Orlov, T. Pantev, S.-R. Park, B. Toën, M. Verbitsky, G. Vezzosi, and V. Vologodsky; discussions with M. Kontsevich, on one hand, and B. Toën and G. Vezzosi, on the other hand, were particularly invaluable. [666]{} J.F. Adams, [*Stable Homotopy and Generalized Homology*]{}, Univ. of Chicago Press, 1974. A. Beilinson, J. Bernstein, and P. Deligne, [*Faisceaux pervers*]{}, Astérisque [**100**]{}, Soc. Math. de France, 1983. M. Bökstedt, [*Topological Hochschild homology*]{}, preprint, Bielefeld, 1985. M. Bökstedt, W.C. Hsiang, and I. Madsen, [ *The cyclotomic trace and algebraic $K$-theory of spaces*]{}, Invent. Math. [**111**]{} (1993), 465–539. M. Bökstedt and I. Madsen, [*Topological cyclic homology of the integers*]{}, in [*$K$-theory (Strasbourg, 1992)*]{}, Astérisque [**226**]{} (1994), 7–8, 57–143. A. Bondal and M. Kapranov, [*Representable functors, Serre functors, and reconstructions*]{}, (Russian) Izv. Akad. Nauk SSSR Ser. Mat.
[**53**]{} (1989), 1183–1205, 1337; translation in Math. USSR-Izv. [**35**]{} (1990), 519–541. A. Bondal and M. Van den Bergh, [*Generators and representability of functors in commutative and noncommutative geometry*]{}, Mosc. Math. J. [**3**]{} (2003), 1–36, S. Bloch and A. Ogus, [*Gersten’s conjecture and the homology of schemes*]{}, Ann. Sci. École Norm. Sup. (4) 7 (1974), 181–201. P. Deligne and L. Illusie, [*Relévements modulo $p^2$ et décomposition du complexe de de Rham*]{}, Inv. Math. [**89**]{} (1987), 247–270. T. tom Dieck, [*Transformation groups*]{}, De Gruyter, Berlin-New York, 1987. A.W.M. Dress, [*Contributions to the theory of induced representations*]{}, in [*Algebraic K-Theory II*]{}, (H. Bass, ed.), Lecture Notes in Math. [**342**]{}, Springer-Verlag, 1973; pp. 183–240. A.D. Elmendorf, I. Kriz, M.A. Mandell, and J.P. May, [*Rings, modules, and algebras in stable homotopy theory*]{}, Mathematical Surveys and Monographs, [**47**]{}, AMS, Providence, RI, 1997. G. Faltings, [*Crystalline cohomology and $p$-adic Galois-representations*]{}, in [*Algebraic analysis, geometry, and number theory (Baltimore, MD, 1988)*]{}, Johns Hopkins Univ. Press, Baltimore, MD, 1989; 25–80. B. Feigin and B. Tsygan, [*Additive $K$-Theory*]{}, in Lecture Notes in Math. [**1289**]{} (1987), 97–209. J.-M. Fontaine and G. Lafaille, [*Construction de représentations $p$-adiques*]{}, Ann. Sci. École Norm. Sup. (4) [**15**]{} (1982), 547–608 (1983). J.-M. Fontaine and W. Messing, [*$p$-adic periods and $p$-adic étale cohomology*]{}, in [*Current trends in arithmetical algebraic geometry (Arcata, Calif., 1985)*]{}, Contemp. Math. [**67**]{}, AMS, Providence, RI, 1987; 179–207. E. Friedlander and M. Walker, [*Semi-topological $K$-theory*]{}, in [*Handbook of $K$-theory*]{}, Springer, Berlin, 2005; 877–924. T.G. Goodwillie, [*Relative algebraic $K$-theory and cyclic homology*]{}, Ann. of Math. [**124**]{} (1986), 347–402. M. 
Gros, [*Régulateurs syntomiques et valeurs de fonctions $L$ $p$-adiques, I*]{}, Invent. Math. [**99**]{} (1990), 293–320. M. Gros, [*Régulateurs syntomiques et valeurs de fonctions $L\;p$-adiques, II*]{}, Invent. Math. [**115**]{} (1994), 61–79. L. Hesselholt and I. Madsen, [*On the $K$-theory of finite algebras over Witt vectors of perfect fields*]{}, Topology [**36**]{} (1997), 29–101. G. Hochschild, B. Kostant, and A. Rosenberg, [ *Differential forms on regular affine algebras*]{}, Trans. AMS [**102**]{} (1962), 383–408. M. Hovey, B. Shipley, and J. Smith, [*Symmetric spectra*]{}, J. AMS [**13**]{} (2000), 149–208. D. Kaledin, [*Non-commutative Hodge-to-de Rham degeneration via the method of Deligne-Illusie*]{}, Pure Appl. Math. Q. [**4**]{} (2008), 785–875. D. Kaledin, [*Cartier isomorphism and Hodge theory in the non-commutative case*]{}, in [*Arithmetic geometry*]{}, Clay Math. Proc. [**8**]{}, AMS, Providence, RI, 2009; 537–562. D. Kaledin, [*Derived Mackey functors*]{}, [ arXiv:0812.2519]{}, to appear in Moscow Math. J. D. Kaledin, [*Cyclotomic complexes*]{}, [ arXiv:math.AT]{}, submitted 15.03.10. L. Katzarkov, M. Kontsevich, and T. Pantev, [ *Hodge-theoretic aspects of mirror symmetry*]{}, [ arXiv:0806.0107]{}. M. Kontsevich and Y. Soibelman, [*Notes on A-infinity algebras, A-infinity categories and non-commutative geometry, I*]{}, preprint math.RA/0606241. B. Keller, [*Deriving DG categories*]{}, Ann. Sci. École Norm. Sup. (4) [**27**]{} (1994), 63–102. B. Keller, [*On differential graded categories*]{}, in [*International Congress of Mathematicians*]{}, Vol. II, Eur. Math. Soc., Zürich, 2006; 151–190. L.G. Lewis, J.P. May, and M. Steinberger, [*Equivariant stable homotopy theory*]{}, with contributions by J. E. McClure, Lecture Notes in Mathematics, [**1213**]{}, Springer-Verlag, Berlin, 1986. H. Lindner, [*A remark on Mackey functors*]{}, Manuscripta Math. [**18**]{} (1976), 273–278. J.-L. Loday, [*Cyclic Homology*]{}, second ed., Springer, 1998.
R. McCarthy, [*Relative algebraic $K$-theory and topological cyclic homology*]{}, Acta Math. [**179**]{} (1997), 197–222. M.A. Mandell and J.P. May, [*Equivariant orthogonal spectra and $S$-modules*]{}, Memoirs of the AMS [ **755**]{} (2002). J.P. May, [*Equivariant homotopy and cohomology theory*]{}, with contributions by M. Cole, G. Comezana, S. Costenoble, A.D. Elmendorf, J.P.C. Greenlees, L.G. Lewis, Jr., R.J. Piacenza, G. Triantafillou, and S. Waner, CBMS Regional Conference Series in Mathematics, [**91**]{}. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1996. J.P. May, private communication. D. Orlov, [*Remarks on Generators and Dimensions of Triangulated Categories*]{}, Moscow Math. J. [**9**]{} (2009), 513–519. D. Quillen, [*Rational homotopy theory*]{}, Ann. of Math. (2) [**90**]{} 1969, 205–295. C. Simpson, [*Algebraic (geometric) $n$-stacks*]{}, [arXiv:alg-geom/9609014]{}. J. Thevenaz, [*Some remarks on $G$-functors and the Brauer morphism*]{}, J. Reine Angew. Math. [**384**]{} (1988), 24–56. B. Toën, [*Higher and derived stacks: a global overview*]{}, in [*Algebraic geometry—Seattle 2005*]{}, Part 1, 435–487, Proc. Sympos. Pure Math., [**80**]{}, Part 1, AMS, Providence, RI, 2009. B. Toën, [*Anneaux de définition des dg-algèbres propres et lisses*]{}, Bull. Lond. Math. Soc. [ **40**]{} (2008), 642–650. B. Toën, [*Saturated dg-categories III*]{}, a talk at Workshop on Homological Mirror Symmetry and Related Topics, January 18-23, 2010, University of Miami; handwritten notes by D. Auroux available at\ [http://www-math.mit.edu/$\sim$auroux/frg/miami10-notes/]{}. B. Toën and M. Vaquié, [*Moduli of objects in dg-categories*]{}, Ann. Sci. École Norm. Sup. (4) [**40**]{} (2007), 387–444. B. Toën and G. Vezzosi, [*Homotopical algebraic geometry, II. Geometric stacks and applications*]{}, Mem. Amer. Math. Soc. [**193**]{} (2008). 
[Independent University of Moscow & Steklov Math Institute\ Moscow, USSR]{} [*E-mail address*]{}: [kaledin@mi.ras.ru]{}
--- abstract: 'We present STAR’s measurement of the $e^{+}e^{-}$ continuum as a function of centrality, invariant mass, and transverse momentum for U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV. Also reported are the acceptance-corrected $e^{+}e^{-}$ invariant mass spectra for minimum-bias Au+Au collisions at $\sqrt{s_{NN}}$ = 27, 39, and 62.4 GeV and U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV. The connection between the integrated $e^{+}e^{-}$ excess yields normalized by the charged-particle multiplicity ($dN_{ch}/dy$) at mid-rapidity and the lifetime of the fireball is discussed.' author: - 'Joey Butterworth (for the STAR Collaboration)' title: 'Production of $e^{+}e^{-}$ from U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV and Au+Au collisions at $\sqrt{s_{NN}}$ = 19.6, 27, 39, 62.4, and 200 GeV as measured by STAR' --- Introduction {#LIntro} ============ Relativistic heavy-ion collisions produced by the Relativistic Heavy Ion Collider (RHIC) are capable of generating a hot, dense, strongly interacting medium. In order to study this medium, electromagnetic probes, such as $e^{+}e^{-}$ pairs, lend themselves as a natural choice. The $e^{+}e^{-}$ pairs are generated at all stages of the collision, interact electromagnetically, and thus are able to traverse the strongly interacting medium with minimal effects on their final state while preserving information imprinted on them by their parent(s). Pairs with an invariant mass (M$_{ee}$) less than $\sim 1.2$ GeV/c$^{2}$ are of particular interest because this region contains production from the $\rho$ meson. The $\rho$ spectral function has been suggested to be modified by the medium [@RappWamSpecFunc], a picture in agreement with measurements reported in [@CERES; @NA60; @STAR200PRL; @PHENIX; @STAR19PLB; @SHUAI]. The in-medium modification of the $\rho$ spectral function is considered a possible link to chiral symmetry restoration [@CSR].
Moreover, it has been suggested that the integrated yield of $e^{+}e^{-}$ production within the low mass region (LMR, M$_{ee}$ $\lesssim$ 1.2 GeV/c$^{2}$) can be related to the lifetime of the fireball [@RappLife]. The versatility of RHIC enables a systematic study of the $e^{+}e^{-}$ continuum. The Beam Energy Scan Program has enabled the Solenoidal Tracker At RHIC (STAR) to collect enough statistics to study $e^{+}e^{-}$ production from Au+Au collisions at $\sqrt{s_{NN}}$ = 19.6, 27, 39, and 62.4 GeV, a range over which the total baryon density, on which the $\rho$ spectral function depends, remains approximately constant. Here, the $e^{+}e^{-}$ continuum is studied as a function of $\sqrt{s_{NN}}$ and presented in the context of the fireball lifetime. Additionally, RHIC’s versatility is evident from the different colliding species that it can accelerate. By switching from a Au+Au collision system at $\sqrt{s_{NN}}$ = 200 GeV to a U+U collision system at $\sqrt{s_{NN}}$ = 193 GeV, the energy density in the most central collisions is expected to be up to 20% higher than in Au+Au collisions [@KIKOLA], while the collision energy changes by only a couple percent. If the energy density is increased, one may expect a longer fireball lifetime, and in turn, a larger $e^{+}e^{-}$ yield in the LMR. Here, the $e^{+}e^{-}$ continuum from U+U collisions is studied and compared to the continuum from Au+Au collisions. This paper presents a systematic study of the acceptance-corrected $e^{+}e^{-}$ production measured by STAR at $\sqrt{s_{NN}}$ = 27, 39, and 62.4 GeV. Furthermore, STAR measurements of the $e^{+}e^{-}$ continuum produced by U+U collisions are presented.
![image](Butterworth_J/PVsInvBeta_62_4.pdf){width=".99\textwidth"} ![image](Butterworth_J/PVsNsig_62.pdf){width=".99\textwidth"} ![image](Butterworth_J/eff_purity_62.pdf){width=".99\textwidth"} Data Sample and Analysis {#LData} ======================== STAR has collected 70M, 130M, and 67M events from the top-80% most central Au+Au collisions at $\sqrt{s_{NN}}$ = 27, 39, and 62.4 GeV, respectively, and 270M events from the top-80% most central U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV, where the top-80% represents STAR’s minimum-bias selection. The Time Projection Chamber (TPC) [@TPC] and Time of Flight (TOF) [@TOF] detectors are used to identify electrons and positrons within the STAR acceptance ($p_{T}^{e}$ $>$ 0.2 GeV/c, $|\eta^{e}|$ $<$ 1, and $|y_{ee}|$ $<$ 1). The TPC provides the particle identification and tracking via ionization energy loss (dE/dx), while the TOF is used to reject slower hadrons, thereby enhancing the TPC’s particle identification and purity. This is depicted in Fig. \[fig:62GeVeID\]: the left panel demonstrates the TOF velocity selection criteria (between the red lines), and the middle panel shows the electron sample selection based on the TPC electron identification (n$\sigma_{e}$) after the TOF velocity rejection of slower hadrons. The right panel illustrates the purity of the selected electrons as a function of momentum; the average purity for each data sample is at the level of 95% or greater. Selected electrons (and positrons) are combined to form the $e^{+}e^{-}$ foreground. This foreground contains combinatorial background as well as correlated backgrounds (e.g. from jets and double Dalitz decays). To estimate and remove the background, a geometric mean of the like-sign distributions with a charge-acceptance correction is subtracted from the uncorrected $e^{+}e^{-}$ distribution as a function of the pair invariant mass and pair transverse momentum ($p_{T}^{ee}$). After subtraction, the continuum is then corrected for efficiency and acceptance losses.
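The bin-by-bin subtraction just described can be sketched in a few lines. The snippet below is an illustration only, not STAR analysis code: the bin contents and the per-bin charge-acceptance factors `acc_corr` are hypothetical inputs, and the formula used is the standard geometric-mean estimate of the combinatorial background, $B = 2A\sqrt{N_{++}N_{--}}$.

```python
import math

def subtract_background(n_pm, n_pp, n_mm, acc_corr=None):
    """Estimate the signal per (M_ee, pT_ee) bin by subtracting the
    geometric mean of the like-sign counts from the unlike-sign foreground.

    n_pm     -- unlike-sign (e+e-) foreground counts per bin
    n_pp     -- like-sign e+e+ counts per bin
    n_mm     -- like-sign e-e- counts per bin
    acc_corr -- charge-acceptance correction factor per bin (defaults to 1)
    """
    if acc_corr is None:
        acc_corr = [1.0] * len(n_pm)
    signal = []
    for fg, pp, mm, a in zip(n_pm, n_pp, n_mm, acc_corr):
        bg = 2.0 * a * math.sqrt(pp * mm)  # geometric-mean combinatorial background
        signal.append(fg - bg)
    return signal
```

In a real analysis the resulting signal would then be corrected for efficiency and acceptance, as described above.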
Details on the methods used may be found in [@STAR200PRC]. Shown in the top panel of Fig. \[fig:UUmb\_ratio\] is the $e^{+}e^{-}$ continuum as a function of invariant mass for minimum bias U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV. Known hadronic contributions to the invariant mass spectrum are then modeled and compared to the data. Contributions are modeled from $\pi^{0}$, $\eta$, $\eta$’, $\omega$, $\phi$, J/$\psi$, $c\bar{c}$, Drell-Yan, and $b\bar{b}$, where Drell-Yan and $b\bar{b}$ are only modeled for U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV and Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV. The top panel in Fig. \[fig:UUmb\_ratio\] also shows the known hadronic contributions as dashed lines and the cocktail sum of these contributions as the solid line. ![(Color online) (Top) The corrected $e^{+}e^{-}$ invariant mass spectrum from U+U collisions (black markers). The dashed lines represent known hadronic contributions and the cocktail (solid line) is a summation of these contributions. (Bottom) The data over cocktail ratio as a function of invariant mass. The red line is the ratio of Rapp *et al.* [@RappWamSpecFunc; @RappPrivate] calculations plus the cocktail to the cocktail. The systematic uncertainties are shown as the shaded areas and the statistical uncertainties as the error bars. The yellow region along the dotted line at data/cocktail = 1 represents the cocktail uncertainty. []{data-label="fig:UUmb_ratio"}](Butterworth_J/UUmb_ratio.png){width="1.\textwidth"} Figures \[fig:UUpt\] and \[fig:UUCent\] show the $e^{+}e^{-}$ invariant mass spectra for different $p_{T}^{ee}$ and centrality ranges, respectively. For reference, the all-inclusive distribution from Fig. \[fig:UUmb\_ratio\] is shown as the bottom distribution in each figure. ![(Color online) The $e^{+}e^{-}$ invariant mass spectrum (markers) for different $p_{T}^{ee}$ ranges.
The systematic uncertainties are represented by the shaded regions and the statistical uncertainties are represented by the error bars. The hadronic cocktail is shown for each $p_{T}^{ee}$ range (solid line).[]{data-label="fig:UUpt"}](Butterworth_J/UUpTonly.png){width="1.\textwidth"} ![(Color online) The $e^{+}e^{-}$ invariant mass spectrum (markers) for different centrality ranges. The systematic uncertainties are represented by the shaded regions and the statistical uncertainties are represented by the error bars. The hadronic cocktail is shown for each centrality range (solid line).[]{data-label="fig:UUCent"}](Butterworth_J/UUCentOnly.png){width="1.\textwidth"} Results and Discussion {#LResults} ====================== Figures \[fig:UUmb\_ratio\], \[fig:UUpt\], and \[fig:UUCent\] exhibit an excess, i.e. a difference between the invariant mass spectrum and the hadronic cocktail (which contains no $\rho$ contribution). A model by Rapp *et al.* that incorporates broadening of the $\rho$ spectral function (HG\_med) and QGP thermal radiation is in agreement with this observation. This is demonstrated by overlaying the ratio of the model calculations plus the cocktail to the cocktail and comparing to the data over cocktail ratio in the bottom panel of Fig. \[fig:UUmb\_ratio\]. This has previously been shown for Au+Au collisions at $\sqrt{s_{NN}}$ = 19.6, 27, 39, 62.4, and 200 GeV [@STAR200PRL; @STAR19PLB; @Patrick]. Taking this a step further, the excess is corrected for STAR’s kinematic acceptance. The acceptance-corrected excess at mid-rapidity has been normalized by the charged-particle multiplicity at mid-rapidity ($dN_{ch}/dy$) to cancel volume effects and is presented in Fig. \[fig:AccCorrBESUU\] for U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV and Au+Au collisions at $\sqrt{s_{NN}}$ = 27, 39, 62.4, and 200 GeV. In the same figure, we compare our results with model calculations from Rapp *et al.* that incorporate thermal radiation from the QGP and a broadened $\rho$ spectral function.
![(Color online) The acceptance-corrected $e^{+}e^{-}$ invariant mass excess yield normalized by $dN_{ch}/dy$. Data from minimum bias Au+Au collisions (black markers) are shown in all four panels at $\sqrt{s_{NN}}$ = 27, 39, 62.4, and 200 GeV, starting from the top and reading from left to right, respectively. Data from minimum bias U+U collisions (red markers) at $\sqrt{s_{NN}}$ = 193 GeV are shown in the lower right panel. Systematic uncertainties are the shaded areas and statistical uncertainties are the error bars. Calculations from Rapp *et al.* [@RappWamSpecFunc; @RappPrivate] are shown in each panel for contributions from the hadronic medium (dashed blue curve), QGP (dashed pink curve), and their sum (solid red curve). In the lower right panel, the Rapp *et al.* calculations are for U+U collisions.[]{data-label="fig:AccCorrBESUU"}](Butterworth_J/BES_AccCorrExcessSpec_4QM.pdf){width="1.\textwidth"} To study the possible connection between the lifetime and excess yield, the $e^{+}e^{-}$ excess yield has been integrated from 0.4 to 0.75 GeV/c$^{2}$, normalized by $dN_{ch}/dy$, and plotted in Fig. \[fig:IntExcess\] as a function of $dN_{ch}/dy$. On the same figure, lifetime calculations from Rapp *et al.* [@RappLife; @RappPrivate] are plotted for each corresponding yield measurement as a bar, while the trend for Au+Au at $\sqrt{s_{NN}}$ = 200 GeV centrality calculations is plotted as a dashed line. There is an increase in normalized yields at higher collision energies with respect to the lower energies, and at $\sqrt{s_{NN}}$ = 200 GeV there is an increase in normalized yields in more central collisions with respect to peripheral collisions. The expected lifetime in model calculations also has an increasing trend from peripheral to central collisions [@STAR19PLB]. ![(Color online) (Left y-axis) The integrated acceptance-corrected $e^{+}e^{-}$ excess yield normalized by $dN_{ch}/dy$ as a function of $dN_{ch}/dy$, where the markers represent the data points.
Statistical uncertainties are represented by the error bars and the systematic uncertainties are represented by the shaded regions. (Right y-axis) The fireball lifetime calculations by Rapp *et al.* [@RappLife; @RappPrivate] as a function of $dN_{ch}/dy$, where the calculations are represented by the solid bars that have been offset in the +x-direction and the red dashed curve shows lifetime calculations for Au+Au at $\sqrt{s_{NN}}$ = 200 GeV. []{data-label="fig:IntExcess"}](Butterworth_J/IntExcess_4QM.pdf){width="1.\textwidth"} Summary and Outlook {#LSummary} =================== We have presented STAR measurements of the $e^{+}e^{-}$ invariant mass for U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV along with the acceptance-corrected excess $e^{+}e^{-}$ yields for minimum bias U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV and minimum bias Au+Au collisions at $\sqrt{s_{NN}}$ = 27, 39, 62.4, and 200 GeV. These measurements have been shown to agree with model calculations that include broadening of the $\rho$ spectral function and QGP thermal radiation. Also, we reported the normalized integrated excess yields as a function of $dN_{ch}/dy$ for various collision systems. These measurements are consistent with theoretical calculations that indicate a longer fireball lifetime for collisions that are more central and have a higher $\sqrt{s_{NN}}$. The measurements made during the Beam Energy Scan Program were taken at approximately the same total baryon density. At $\sqrt{s_{NN}}$ $<$ 20 GeV, the total baryon density rises as $\sqrt{s_{NN}}$ decreases. Models such as [@RappWamSpecFunc] suggest that the $\rho$ spectral function depends upon the total baryon density; hence, if the total baryon density increases, the $\rho$ yield is expected to increase too.
To test this relation and distinguish between models, STAR will take advantage of RHIC’s second Beam Energy Scan Program [@BESIIWP], in which Au+Au will be collided at $\sqrt{s_{NN}}$ = 7.7, 9.1, 11.5, 14.5, and 19.6 GeV. It is expected that the number of events recorded will provide statistical uncertainties similar to those quoted in STAR’s Au+Au measurement at $\sqrt{s_{NN}}$ = 200 GeV [@STAR200PRL]. In addition to the increased statistics, the inner sector of the TPC will be replaced to provide additional tracking points [@iTPC] and a complementary forward Time of Flight (eTOF) detector will be installed [@eTOF]. The TPC upgrade will lead to additional reduction in the statistical and systematic uncertainties, while the eTOF will lead to an extension in the rapidity reach and the ability to measure the $e^{+}e^{-}$ dependence on rapidity. [99]{} R. Rapp and J. Wambach, Eur. Phys. J. A [**[6]{}**]{}, 415 (1999). D. Adamová *et al.* (CERES Collaboration), Phys. Lett. B [**[666]{}**]{}, 425 (2008). R. Arnaldi *et al.* (NA60 Collaboration), Eur. Phys. J. C [**[61]{}**]{}, 711 (2009). A. Adare *et al.* (PHENIX Collaboration), Phys. Rev. C [**[93]{}**]{}, 014904 (2016). L. Adamczyk *et al.* (STAR Collaboration), Phys. Rev. Lett. [**[113]{}**]{}, 022301 (2014). L. Adamczyk *et al.* (STAR Collaboration), Phys. Lett. B [**[750]{}**]{}, 64 (2015). S. Yang (STAR Collaboration), Nucl. Phys. A [**[956]{}**]{}, 429 (2016). R. Rapp and J. Wambach, “Chiral symmetry restoration and dileptons in relativistic heavy-ion collisions,” in *Advances in Nuclear Physics*, edited by J. W. Negele and E. Vogt (Springer US, Boston, MA, 2002) pp. 1-205. R. Rapp and H. van Hees, Phys. Lett. B [**[753]{}**]{}, 586 (2016). D. Kikoła, G. Odyniec, and R. Vogt, Phys. Rev. C [**[84]{}**]{}, 054907 (2011). M. Anderson [*et al.*]{}, Nucl. Instrum. Methods A [**499**]{}, 659 (2003). STAR Collaboration, *Proposal for a Large Area Time of Flight System for STAR*, STAR Note:sn0621. L.
Adamczyk *et al.* (STAR Collaboration), Phys. Rev. C [**[92]{}**]{}, 024912 (2015). P. Huck (STAR Collaboration), Nucl. Phys. A [**[931]{}**]{}, 659 (2014). R. Rapp, private communications. STAR Collaboration, *Studying the Phase Diagram of QCD Matter at RHIC*, STAR Note:sn0598. STAR Collaboration, *Technical Design Report for the iTPC Upgrade*, STAR Note:sn0644. STAR Collaboration and CBM Collaboration, *Physics Program for the STAR/CBM eTOF Upgrade*, arXiv:1609.05102.
--- abstract: 'Graphical functions are positive functions on the punctured complex plane ${{\mathbb C}}\setminus{\{0,1\}}$ which arise in quantum field theory. We generalize a parametric integral representation for graphical functions due to Lam, Lebrun and Nakanishi, which implies the real analyticity of graphical functions. Moreover we prove a formula that relates graphical functions of planar dual graphs.' author: - 'Marcel Golz, Erik Panzer and Oliver Schnetz' bibliography: - 'refs.bib' title: Graphical functions in parametric space --- Introduction ============ One main problem in perturbative quantum field theory is the calculation of Feynman integrals (see e.g. [@ItzyksonZuber]). As a new tool for this task, graphical functions were introduced by the third author in [@GraphicalFunctions]. Basically, these are special classes of massless Feynman integrals (3-point functions) that can be understood as single-valued functions on the punctured complex plane ${{\mathbb C}}\setminus{\{0,1\}}$. They are powerful tools in multi-loop calculations, see e.g. [@ZigZag; @Coaction]. A traditional method to study Feynman integrals is to represent them in a parametric version, where one integrates over variables associated to the edges of a Feynman graph [@ItzyksonZuber]. In many cases of interest, these integrals can be computed in terms of multiple polylogarithms, using a method developed by F. Brown [@Brown:SomeFI; @Brown:TwoPoint] and the second author [@Panzer:HyperInt; @Panzer:PhD]. The combination of graphical functions and this parametric integration (using the formulas derived in this article) has recently provided a breakthrough in the calculation of primitive log-divergent amplitudes of graphs with up to eleven independent cycles (‘loops’) [@Coaction]. 
In a complete quantum field theoretical calculation one encounters naive singularities which are most frequently treated by the ‘dimensional regularization scheme’ which demands the generalization to arbitrary space-time dimensions (away from the classical four dimensions). The parametric representation is the cleanest way to define Feynman integrals in non-integer ‘dimensions’. In this article, we derive fundamental formulas and results for graphical functions in parametric representations for arbitrary dimensions. Apart from [@Coaction], first applications of the results of this article include the calculation of the beta function and field anomalous dimension of minimally subtracted $(4-\varepsilon)$-dimensional $\phi^4$ theory to six and seven loops by the third author [@Schnetz6loops]. Feynman integrals in position space ----------------------------------- A Feynman graph is a graph $G$ with a distinguished subset ${{\mathcal V}}_G^{\mathrm{ext}}\subseteq {{\mathcal V}}_G$ of *external* vertices (the remaining vertices ${{\mathcal V}}_G^{\mathrm{int}}= {{\mathcal V}}_G\setminus {{\mathcal V}}_G^{\mathrm{ext}}$ are called *internal*). We often suppress the subscript $G$ and we use roman capital letters for cardinalities, so e.g. ${{\mathcal V}}^{\mathrm{ext}}={{\mathcal V}}_G^{\mathrm{ext}}$ and $V^{\mathrm{ext}}={|{{\mathcal V}}^{\mathrm{ext}}|}$. We fix the dimension[^1] $$d = 2\lambda+2 > 2$$ and associate to every vertex $v$ of $G$ a $d$-dimensional vector $x_v\in{{\mathbb R}}^d$. An edge $e$ between vertices $u$ and $v$ corresponds to the quadratic form $Q_e$ which is the square of the Euclidean distance between $x_u$ and $x_v$, $$\label{eq:euclidean-norm} Q_e ={\|x_u-x_v\|}^2 =\sum_{i=1}^d (x_u^i - x_v^i)^2.$$ Moreover, every edge $e$ has an edge weight $\nu_e\in{{\mathbb R}}$. 
Then the Feynman integral associated to $G$ in position space is defined as $$\label{eq:gf-xspace} f_G^{(\lambda)}(x) =\left(\prod_{v\in{{\mathcal V}}^{{\mathrm{int}}}}\int_{{{\mathbb R}}^d}\frac{{\mathrm{d}}^dx_v}{\pi^{d/2}}\right) \frac{1}{\prod_e Q_e^{\lambda\nu_e}},$$ where the first product is over all internal vertices of $G$ and the second product is over all edges of $G$. Note that $f_G^{(\lambda)}(x)$ is a function of the external vectors $x=(x_v)_{v\in{{\mathcal V}}^{\mathrm{ext}}}$ which we always assume to be pairwise distinct ($x_v\neq x_w$ for $v\neq w$). The convergence of \[eq:gf-xspace\] is equivalent to two conditions named ‘infrared’ and ‘ultraviolet’ (this weighted analog of [@GraphicalFunctions Lemma 3.4] rests on *power counting* [@LowensteinZimmermann]): - The graph $G$ is called *ultraviolet convergent* if $$\label{ultraviolet} \lambda \nu_g < \tfrac{d}{2} (V_g-1)$$ holds for all induced[^2] subgraphs $g$ with ${|{{\mathcal V}}_g \cap {{\mathcal V}}^{\mathrm{ext}}|} \leq 1$. Here we write $$\nu_g =\sum_{e\in{{\mathcal E}}_g} \nu_e$$ and denote the sets of vertices and edges of $g$ with ${{\mathcal V}}_g$ and ${{\mathcal E}}_g$. - A vertex $v\in{{\mathcal V}}_g$ of a subgraph $g$ of $G$ is called $g$-internal if it is internal ($v \in {{\mathcal V}}^{\mathrm{int}}$) and all edges of $G$ which are incident to $v$ also belong to $g$. We write $V_g^{\mathrm{int}}$ for the number of such vertices. The graph $G$ is called *infrared convergent* if $$\label{infrared} \lambda\nu_g > \tfrac{d}{2} V_g^{\mathrm{int}}$$ holds for all subgraphs $g$ of $G$ which satisfy $V_g^{\mathrm{int}}>0$ and contain only edges which are incident to at least one $g$-internal vertex. \[ex:g4-convergence\] In case of the graph $G_4$ from figure \[fig:g4g7\], there are three ultraviolet conditions of the form $\lambda \nu_e< \frac{d}{2}$ (one for each edge $e$) and one infrared condition $\lambda \nu_{G_4} > \frac{d}{2}$ (from the full subgraph $g=G_4$).
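To spell out Example \[ex:g4-convergence\] at the values used below in Example \[ex:g4\], take $d=4$ (so $\lambda=1$) and unit edge weights $\nu_e=1$. The three ultraviolet conditions and the single infrared condition then read $$\lambda \nu_e = 1 < 2 = \tfrac{d}{2} \qquad\text{and}\qquad \lambda \nu_{G_4} = 3 > 2 = \tfrac{d}{2}\, V^{\mathrm{int}}_{G_4},$$ so the graphical function $f_{G_4}^{(1)}$ converges.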
Graphical functions ------------------- In the special case of three external vertices, we label them with $0$, $1$ and $z$. Note that $f_G^{(\lambda)}$ is invariant under the Euclidean group, so we may translate $x_0$ to the origin and rotate $x_1$ and $x_z$ into the plane ${{\mathbb R}}^2\times{\{0\}}^{d-2}$ which we identify with the complex numbers ${{\mathbb C}}$. The *graphical function* $$f_G^{(\lambda)}(z)\colon {{\mathbb C}}\setminus{\{0,1\}} \longrightarrow {{\mathbb R}}_+$$ is a parametrization of $f_G^{(\lambda)}(x)$ defined in terms of a complex variable $z\neq 0,1$ via $$\label{externalvectors} x_0 = (0,\ldots,0)^t, \quad x_1 = (1,0,\ldots,0)^t \quad\text{and}\quad x_z = (\operatorname{Re}z,\operatorname{Im}z,0,\ldots,0)^t.$$ Graphical functions were introduced in [@GraphicalFunctions] basically as a tool for calculating Feynman periods in $\phi^4$ quantum field theory (see also [@Coaction; @Drummond:Ladders; @Todorov]). However, they can also appear in amplitudes and correlation functions, see for example [@OffShellConformal]. In [@GraphicalFunctions] ‘completions’ of graphical functions were defined. In this article, however, we use uncompleted graphs. ![Examples of connected graphs with four and seven vertices in total and three external vertices labeled $0$, $1$ and $z$.[]{data-label="fig:g4g7"}](g4 "fig:") ![Examples of connected graphs with four and seven vertices in total and three external vertices labeled $0$, $1$ and $z$.[]{data-label="fig:g4g7"}](g7 "fig:") \[ex:g4\] In $d=4$ dimensions and with edge weights $\nu_e=1$, the graph $G_4$ of Figure \[fig:g4g7\] has a convergent graphical function (see example \[ex:g4-convergence\]). 
It is (see [@GraphicalFunctions; @Todorov]) $$f_{G_4}^{(1)}(z) = \int_{{{\mathbb R}}^4} \frac{{\mathrm{d}}^4 x}{\pi^2} \frac{1}{{\|x\|}^2 {\|x-x_1\|}^2 {\|x-x_z\|}^2} = \frac{4{\mathrm i}D(z)}{z-{{\overline{z}}}}$$ in terms of the Bloch-Wigner dilogarithm $ D(z)=\operatorname{Im}({\mathrm{Li}}_2(z)+\log(1-z)\log{|z|}) $. The Bloch-Wigner dilogarithm $D(z)$ is a single-valued version of the dilogarithm ${\mathrm{Li}}_2(z)=\sum_{k=1}^\infty z^k/k^2$. It is real analytic on ${{\mathbb C}}\setminus{\{0,1\}}$ and antisymmetric under complex conjugation $D(z)=-D({{\overline{z}}})$. These properties of the Bloch-Wigner dilogarithm lift to general properties of graphical functions: \[generalthm\] Let $G$ be a graph which fulfills the ultraviolet and infrared conditions and . Then the graphical function $f_G^{(\lambda)}\colon {{\mathbb C}}\setminus{\{0,1\}} \longrightarrow {{\mathbb R}}_+$ has the following general properties: 1. $f_G^{(\lambda)}(z)=f_G^{(\lambda)}({{\overline{z}}})$, 2. $f_G^{(\lambda)}$ is single-valued and 3. $f_G^{(\lambda)}$ is real analytic on ${{\mathbb C}}\setminus\{0,1\}$. It was not possible to prove real analyticity (G3) in full generality with the methods in [@GraphicalFunctions]. In this article we obtain (G3) as a consequence of an alternative integral representation of graphical functions. In this representation, the integration variables $\alpha_e$ (known as *Schwinger* or *Feynman* parameters) are associated to edges of the graph [@ItzyksonZuber; @BEK]. Although we are mainly interested in the case of three external vertices 0, 1, $z$, our results effortlessly generalize to an arbitrary number $V^{\mathrm{ext}}$ of external vertices. Graph polynomials ----------------- We will use certain polynomials in the edge variables $\alpha_e$ that were defined and studied by F. Brown and K. Yeats [@BrownYeats:WD]. 
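Such dual spanning forest polynomials (Definition \[spanningforest\] below) can be enumerated by brute force over edge subsets. The following sketch does this for the star graph $G_4$ of Figure \[fig:g4g7\]; the vertex name `c` for the internal vertex is our own choice, and the labelling of the edges $1$, $2$, $3$ by the external vertices $0$, $1$, $z$ they touch is the one assumed in the example below. Monomials are represented as sets of edge labels (all coefficients here are $1$).

```python
from itertools import combinations

# Star graph G_4: one internal vertex "c" joined to the external
# vertices "0", "1", "z" by the edges 1, 2, 3 (labelling assumed here).
VERTICES = ["c", "0", "1", "z"]
EDGES = {1: ("c", "0"), 2: ("c", "1"), 3: ("c", "z")}

def components(edge_subset):
    """Connected components of (VERTICES, edge_subset), via union-find."""
    parent = {v: v for v in VERTICES}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in edge_subset:
        u, w = EDGES[e]
        parent[find(u)] = find(w)
    comps = {}
    for v in VERTICES:
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

def forest_polynomial(partition):
    """Monomials (as frozensets of edge labels) of the dual spanning
    forest polynomial for the given partition of external vertices."""
    n = len(partition)
    monomials = set()
    for k in range(len(EDGES) + 1):
        if k != len(VERTICES) - n:  # a forest with n trees has V - n edges
            continue
        for subset in combinations(EDGES, k):
            comps = components(subset)
            if len(comps) != n:     # need exactly n trees (then acyclic)
                continue
            # each part of the partition must lie inside its own tree
            homes, ok = set(), True
            for part in partition:
                hit = [i for i, c in enumerate(comps) if part <= c]
                if not hit:
                    ok = False
                    break
                homes.add(hit[0])
            if ok and len(homes) == n:
                monomials.add(frozenset(subset))
    return monomials
```

For instance, `forest_polynomial([{"0"}, {"1"}, {"z"}])` returns the three single-edge monomials, i.e. $\alpha_1+\alpha_2+\alpha_3$, matching the example computed by hand below.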
\[spanningforest\] Let $p={\{p_1,\ldots ,p_n\}}$ denote a partition of a subset of the vertices of a graph $G$ (so $p_i\subseteq {{\mathcal V}}$ and $p_i\cap p_j = \emptyset$ when $i\neq j$). We write ${{\mathcal F}}_G^p$ for the set of all spanning forests $T_1\cup\ldots\cup T_n$ consisting of exactly $n$ (pairwise disjoint) trees $T_i$ such that $p_i \subseteq T_i$. The *dual spanning forest polynomial* associated to $p$ is $$\label{eq:dual-forpol} \tilde{\Psi}_G^p(\alpha) {\mathrel{\mathop:}=}\sum_{F\in{{\mathcal F}}_G^p}\;\prod_{e\in F}\alpha_e.$$ We suppress curly brackets in the notation, so for example $\tilde{\Psi}_G^{01z} = \tilde{\Psi}_G^{{\{{\{0,1,z\}}\}}}$ denotes the sum of spanning forests ($n=1$), while the partition in $\tilde{\Psi}_G^{01,z}$ is ${\{{\{0,1\}},{\{z\}}\}}$ ($n=2$). Say we call the external vertices $1,\ldots,V^{\mathrm{ext}}$, then we write $\tilde{\Psi} {\mathrel{\mathop:}=}\tilde{\Psi}^{1,\ldots,V^{\mathrm{ext}}}$ for the partition into singletons ($n = V^{\mathrm{ext}}$). The partitions with $n=V^{\mathrm{ext}}-1$ have exactly one part containing two external vertices. 
We collect them in the polynomial $$\label{eq:dual-phi} \tilde{\Phi}_G(\alpha,x) {\mathrel{\mathop:}=}\sum_{1\leq i<j\leq V^{{\mathrm{ext}}}} {\|x_i-x_j\|}^2 \tilde{\Psi}_G^{ij,(k)_{k\neq i,j}}(\alpha) .$$ If we label the three edges adjacent to $0$, $1$ and $z$ in $G_4$ (see Figure \[fig:g4g7\]) by $1$, $2$ and $3$, then we find the polynomials $$\begin{aligned} \tilde{\Psi}_{G_4}^{1z,0} &= \alpha_2 \alpha_3, & \tilde{\Psi}_{G_4}^{01z} &= \alpha_1 \alpha_2 \alpha_3, \\ \tilde{\Psi}_{G_4}^{0z,1} &= \alpha_1 \alpha_3, & \tilde{\Psi}_{G_4}^{0,1,z} &= \alpha_1+\alpha_2+\alpha_3, \\ \tilde{\Psi}_{G_4}^{01,z} &= \alpha_1 \alpha_2, & \tilde{\Phi}_{G_4} &= (z-1)({{\overline{z}}}-1)\alpha_2 \alpha_3+z{{\overline{z}}}\alpha_1 \alpha_3 + \alpha_1 \alpha_2.\end{aligned}$$ Here ${{\overline{z}}}$ denotes the complex conjugate of $z\in{{\mathbb C}}\setminus{\{0,1\}}$ and we used \[externalvectors\]. A parametric (i.e. depending on the edge parameters $\alpha_e$) formula for (massive) position space Feynman integrals in four-dimensional Minkowski space was discovered long ago [@Nakanishi:Hankel; @LamLebrun] and is also discussed in the book [@Nakanishi:Book Equation (8-33)]. In the massless, Euclidean case it becomes a parametric formula for graphical functions. We give an extension to arbitrary dimensions which also allows for negative edge weights.[^3] \[dualparam\] Let $G$ be a non-empty graph with $V^{{\mathrm{int}}}_G$ internal vertices and edges labeled $1,2,\ldots,E_G$.
We assume that its graphical function converges, meaning that $G$ is subject to and , and define the *superficial degree of divergence* $$\label{defM} M_G {\mathrel{\mathop:}=}\lambda \nu_G-\tfrac{d}{2} V^{\mathrm{int}}_G.$$ Then for any set of non-negative integers $n_e$ such that $n_e+\lambda\nu_e>0$, we have the following *dual parametric representation* of $f_G^{(\lambda)}$ as a convergent projective integral: $$\label{fdualparam} f_G^{(\lambda)}(x) = \frac{(-1)^{\sum_e\!n_e}\,\Gamma(M_G)}{\prod_e\Gamma(n_e+\lambda\nu_e)} \int_\Delta \Omega \Big[\prod_e\alpha_e^{n_e+\lambda\nu_e-1}\partial_{\alpha_e}^{n_e}\Big] \frac{1}{\tilde{\Phi}_G^{M_G}\tilde{\Psi}_G^{d/2-M_G}},$$ where the integration domain is given by the positive coordinate simplex $$\Delta =\{(\alpha_1:\alpha_2:\ldots:\alpha_{E_G})\colon \alpha_e>0\text{ for all }e\in\{1,2,\ldots,E_G\}\}\subset{{\mathbb P}}^{E_G-1}{{\mathbb R}}$$ and we set $$\Omega =\sum_{e=1}^{E_G}(-1)^{e-1}\alpha_e {\mathrm{d}}\alpha_1\wedge\ldots\wedge\widehat{{\mathrm{d}}\alpha_e}\wedge\ldots\wedge {\mathrm{d}}\alpha_{E_G} .$$ For integer $\lambda\nu_e \leq 0$ one may set $n_e=1-\lambda\nu_e$ such that the integration over $\alpha_e$ trivializes to the evaluation at $\alpha_e = 0$ of the $(-\lambda\nu_e)$-th derivative. Readers who are not familiar with projective integrals can specialize to an affine integral by setting $\alpha_1=1$ and integrating the remaining $\alpha_e$ ($e>1$) from $0$ to $\infty$. Note that $M_G$ is restricted by convergence: From with $g=G$ and from with $g=G\setminus({{\mathcal V}}^{\mathrm{ext}}\setminus{\{v\}})$ (for some $v\in{{\mathcal V}}^{\mathrm{ext}}$) we obtain for a graph $G$ with no edges between external vertices that $$0<M_G<\lambda \min_{v\in V^{\mathrm{ext}}} \sum_{w\in{{\mathcal V}}^{\mathrm{ext}}\setminus{\{v\}}} \nu_w,$$ where $\nu_w$ is the sum of weights $\nu_e$ of all edges $e$ adjacent to the external vertex $w$. 
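As a quick plausibility check of the dual parametric representation, one can evaluate it numerically for the star $G_4$ with unit weights in $d=4$ ($\lambda=1$), where $M_{G_4}=3-2=1$ and all $n_e=0$ are admissible. In the affine chart $\alpha_1=1$ the formula reduces to $\int_0^\infty\!\int_0^\infty {\mathrm{d}}\alpha_2\,{\mathrm{d}}\alpha_3/(\tilde{\Phi}_{G_4}\tilde{\Psi}_{G_4})$. At $z=\mathrm{i}$ this should reproduce the classical one-loop triangle value $4\mathrm{i}D(\mathrm{i})/(\mathrm{i}-\overline{\mathrm{i}})=2C\approx 1.832$, where $D$ is the Bloch–Wigner dilogarithm and $C$ is Catalan's constant. Both the target value and the crude midpoint quadrature below are our own check, not part of the paper.

```python
# Dual parametric representation for the star G_4 (unit weights, d = 4,
# lambda = 1): M_G = 3 - 2 = 1, all n_e = 0, affine chart alpha_1 = 1.
# At z = i we have z*zbar = 1 and (z-1)*(zbar-1) = 2, hence
#   Psi~ = 1 + a + b,   Phi~ = 2*a*b + a + b,
# and the prefactor Gamma(M_G)/prod_e Gamma(lambda*nu_e) equals 1.
def integrand(a, b):
    return 1.0 / ((2.0 * a * b + a + b) * (1.0 + a + b))

# midpoint rule on (0,1)^2, mapped to (0,inf)^2 via a = t/(1-t)
N = 600
h = 1.0 / N
total = 0.0
for i in range(N):
    t = (i + 0.5) * h
    a, da = t / (1.0 - t), 1.0 / (1.0 - t) ** 2
    for j in range(N):
        u = (j + 0.5) * h
        b, db = u / (1.0 - u), 1.0 / (1.0 - u) ** 2
        total += integrand(a, b) * da * db
total *= h * h

catalan = 0.915965594177219       # Catalan's constant
print(total, 2 * catalan)         # both approximately 1.832
```

The substitution $\alpha = t/(1-t)$ compactifies the domain; the remaining singularities of the transformed integrand are of the integrable $1/(x+y)$ type at two corners, so the midpoint rule is accurate to a few digits here.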
One immediate advantage of the parametric representation is that for many graphs with not more than nine vertices, the integral can be calculated (in terms of polylogarithms) with methods developed by F. Brown [@Brown:SomeFI] and the second author [@Panzer:PhD; @Panzer:HyperInt]. Note that we obtain another integral representation via the Cremona transformation $\alpha_e \rightarrow 1/\alpha_e$: \[paramthm\] Let $G$ be a non-empty graph with $E_G$ edges. We assume the convergence of $f_G^{(\lambda)}$ and also that every edge $e$ has a positive weight $\nu_e>0$. Then $$\label{fparam} f_G^{(\lambda)}(x) = \frac{\Gamma(M_G)}{\prod_{e} \Gamma(\lambda\nu_e)} \int_\Delta \frac{ \prod_{e} \alpha_e^{d/2-\lambda \nu_e - 1} }{ \Phi_G^{M_G}\Psi_G^{d/2-M_G} }\Omega,$$ where $\Psi_G = \Psi_G^{1,\ldots,V^{\mathrm{ext}}}$ and $ \Phi_G(\alpha,x) = \sum_{i<j} {\|x_{i} - x_{j}\|}^2 \Psi_G^{i j, (k)_{k\neq i,j}}(\alpha)$ are defined in terms of the spanning forest polynomials, which are dual to : $$\label{eq:forpol} \Psi_G^p(\alpha) =\sum_{F\in{{\mathcal F}}_G^p} \prod_{e\notin F}\alpha_e =\Big(\prod_e\alpha_e\Big)\tilde{\Psi}_G^p(\alpha^{-1}).$$ We set $n_e=0$ in for all edges $e$ of $G$. We use the affine chart $\alpha_1=1$ in and invert all $\alpha_e$, $e>1$. By this gives the integrand in for $\alpha_1=1$. The projective version of this integral is . Planar duals ------------ A planar dual $G^\star$ of a Feynman graph $G$ with external vertices $0,1,z$ is a usual planar dual graph to which we add external vertices at ‘opposite’ sides, see Figure \[fig:duals\] (a precise description will be given in Definition \[dualdef\]). 
![The graphs $H_7^{}$ and $H_7^\star$ are planar duals.[]{data-label="fig:duals"}](h7 "fig:") ![The graphs $H_7^{}$ and $H_7^\star$ are planar duals.[]{data-label="fig:duals"}](h7dual "fig:") In the case when $M_G=d/2$, graphical functions of dual graphs are related: \[planarthm\] Let $G$ be a connected graph with external vertices $0,1,z$ and edge weights $\nu_e>0$ such that the graphical function $f_G^{(\lambda)}$ converges and $M_G=d/2$. Let $G^\star$ be a dual of $G$ and denote by $e^\star$ the edge of $G^\star$ which corresponds to the edge $e$ of $G$. Let the edge weights $\nu_{e^\star}$ of $G^\star$ be related to the edge weights $\nu_e$ of $G$ through $$\label{weightsstar} \lambda\nu_{e^\star} =d/2-\lambda\nu_e.$$ Then the graphical functions associated to $G$ and $G^\star$ are multiples of each other: $$\label{eq:planar-dual} f_{G^\star}^{(\lambda)}(z) = f_G^{(\lambda)}(z) \prod_e \frac{\Gamma(\lambda\nu_e)}{\Gamma(\lambda\nu_{e^\star})} .$$ Note that ultraviolet convergence (\[ultraviolet\]) for a single edge $e$ implies $\lambda\nu_e<d/2$, thus $\nu_{e^\star}>0$. Similarly, positive edge weights in $G$ ensure that the dual graphical function $f_{G^\star}^{(\lambda)}$ is ultraviolet convergent for each single edge $e^\star$ of $G^\star$. The convergence of $f_{G^\star}^{(\lambda)}$ is ensured by the proof of Theorem \[planarthm\]. If in four dimensions a graph $G$ has edge weights 1, then a dual graph $G^\star$ also has edge weights 1, and the graphical functions are equal if $M_G=2$. One can also use duality for a planar graph $G$ with $M_G\neq d/2$ if one adds an edge from 0 to 1 of weight $(d/2-M_G)/\lambda$, see the subsequent example \[ex:dual\]. It is well known (see [@MinamiNakanishi] for example) that the graphical function of every planar graph $G$ (without restrictions on $\nu_e$ and $d$) is related (by a constant factor) to the *momentum space* Feynman integral associated to $G^{\star}$. 
What makes Theorem \[planarthm\] interesting is that in the particular case when $V^{\mathrm{ext}}=3$ and $M_G=d/2$, the momentum- and position space Feynman integrals coincide. \[ex:dual\] We want to calculate the $4$-dimensional graphical function of the graph $G_7$ in Figure \[fig:g4g7\] with unit edge weights, so $M_{G_7}=1$. To apply Theorem \[planarthm\] we add an edge between 0 and 1 (see Figure \[fig:duals\]). This does not change the graphical function $f_{G_7}^{(1)}=f_{H_7}^{(1)}$, which is clear from . Theorem \[planarthm\] gives $f_{H_7}^{(1)}=f_{H^\star_7}^{(1)}$. The graphical function of $H^\star_7$ can be calculated by the techniques of completion and appending of an edge [@GraphicalFunctions Sections 3.4 and 3.5]. We obtain $$f_{G_7}^{(1)} (z) = 20\zeta(5) \frac{4{\mathrm i}D(z)}{z-{{\overline{z}}}},$$ where $\zeta(s)=\sum_{k=1}^\infty k^{-s}$ is the Riemann zeta function. One obtains a self-dual graph $H_4=H_4^{\star}$ with $M_{H_4}=2$ if one adds an edge from 0 to 1 to $G_4$. In this case planar duality leads to a trivial statement. Acknowledgements. {#acknowledgements. .unnumbered} ----------------- Parts of this article were written while Erik Panzer and Oliver Schnetz were visiting scientists at Humboldt University, Berlin. proof of Theorem \[dualparam\] ============================== Our proof follows the Schwinger trick (see e.g. [@ItzyksonZuber]). From the definition of the gamma function we obtain for $n_e+\lambda\nu_e>0$ the convergent integral (note $Q_e>0$) $$\label{trick} \frac{1}{Q_e^{\lambda\nu_e}} =\frac{1}{\Gamma(n_e+\lambda\nu_e)}\int_0^\infty\alpha_e^{n_e+\lambda\nu_e-1}(-\partial_{\alpha_e})^{n_e}\exp(-\alpha_e Q_e) \ {\mathrm{d}}\alpha_e.$$ We use this formula to replace the product of propagators in by an integral over the edge parameters $\alpha_e$. 
Since the integrand $\prod_e \big[ \alpha_e^{n_e + \lambda \nu_e - 1} Q_e^{n_e} \exp(-\alpha_e Q_e) \big]$ is positive, the integral is absolutely convergent and we may interchange the order of integration by Fubini’s theorem. In fact, we can also interchange the integration over the vertex variables with the partial derivatives $\partial_{\alpha_e}$ to obtain $$\label{f1} f_G^{(\lambda)}(x) =\frac{1}{\prod_e\Gamma(n_e+\lambda\nu_e)} \int_0^\infty\!\!\!\cdots\int_0^\infty \Big[\prod_e\alpha_e^{n_e+\lambda\nu_e-1}(-\partial_{\alpha_e})^{n_e}\Big] {{\mathcal I}}(\alpha)\prod_e{\mathrm{d}}\alpha_e,$$ where ${{\mathcal I}}(\alpha)$ is the Gaussian integral $${{\mathcal I}}(\alpha) =\left(\prod_{v\,{\rm internal}}\int_{{{\mathbb R}}^d}\frac{{\mathrm{d}}^dx_v}{\pi^{d/2}}\right)\exp\left(-\sum_e \alpha_eQ_e\right).$$ It factorizes into $d$ parts ${{\mathcal I}}_k$, one for each coordinate $k$, since the quadratic form is diagonal. We arrange the $k$th coordinates of the $V_G$ vertex variables into the vector $(x_{{\mathrm{int}}},x_{{\mathrm{ext}}})^t$ where $x_{{\mathrm{int}}}=(x_v^k)_{v\in{{\mathcal V}}^{\mathrm{int}}}$ and $x_{{\mathrm{ext}}}=(x_v^k)_{v\in{{\mathcal V}}^{{\mathrm{ext}}}}$. 
Then, the quadratic form in the exponential of ${{\mathcal I}}_k$ takes the form $$\sum_e\alpha_eQ_e^k =x^t_{{\mathrm{int}}} L^{\mathrm{ii}}(\alpha) x_{{\mathrm{int}}} +x^t_{{\mathrm{int}}} L^{\mathrm{ie}}(\alpha) x_{{\mathrm{ext}}} +x_{{\mathrm{ext}}}^t L^{\mathrm{ei}}(\alpha) x_{{\mathrm{int}}} +x_{{\mathrm{ext}}}^t L^{\mathrm{ee}}(\alpha) x_{{\mathrm{ext}}}$$ in terms of the (symmetric) Laplace matrix [@BognerWeinzierl] $$\label{Ldef} L = \begin{pmatrix} L^{\mathrm{ii}} & L^{\mathrm{ie}} \\ L^{\mathrm{ei}} & L^{\mathrm{ee}} \\ \end{pmatrix} \quad\text{with entries}\quad L(\alpha)_{uv} =\begin{cases} \sum\limits_{e{\rm\,incident\,to\,}v}\alpha_e & \text{if $u=v$ and}\\ -\sum\limits_{e=\{u,v\}}\alpha_e & \text{otherwise.} \end{cases}$$ By convergence, $L^{\mathrm{ii}}$ is positive definite. We complete the quadratic form to a perfect square, shift the integration variable to $x_{{\mathrm{int}}}+L^{\mathrm{ii}-1}L^{\mathrm{ie}}x_{{\mathrm{ext}}}$ and obtain by a standard calculation $${{\mathcal I}}_k = \det(L^{\mathrm{ii}})^{-1/2} \exp\Big( x_{{\mathrm{ext}}}^t [L^{\mathrm{ei}}L^{\mathrm{ii}-1}L^{\mathrm{ie}}-L^{\mathrm{ee}}] x_{{\mathrm{ext}}} \Big).$$ The summation over $k$ in the exponent therefore leads us to $$\label{temp} {{\mathcal I}}(\alpha) = \prod_{k=1}^d {{\mathcal I}}_k(\alpha) = \det(L^{\mathrm{ii}})^{-d/2} \exp\Big( \sum_{k,\ell=1}^{V^{\mathrm{ext}}} (x_k^t x_\ell^{}) [L^{\mathrm{ei}}L^{\mathrm{ii}-1}L^{\mathrm{ie}}-L^{\mathrm{ee}}]_{k,\ell} \Big).$$ An application of the matrix tree theorems [@Chaiken; @BognerWeinzierl] shows that[^4] $$\det(L^{\mathrm{ii}})=\tilde{\Psi} \quad\text{and}\quad \left(L^{\mathrm{ii}-1}\right)_{v,w} =\frac{1}{\tilde{\Psi}}\tilde{\Psi}_G^{vw,1,\ldots,V^{{\mathrm{ext}}}}$$ for all internal $v$ and $w$. 
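The first of these matrix-tree identities is easy to test by direct enumeration. The sketch below (our own encoding, not from the paper) does this for a small graph with external vertices $1,2$ and internal vertices $u,v$, comparing $\det(L^{\mathrm{ii}})$ with the brute-force polynomial $\tilde{\Psi}=\tilde{\Psi}^{1,2}$ at exact rational edge parameters.

```python
from fractions import Fraction
from itertools import combinations
import random

# test graph: external vertices 1, 2; internal vertices u, v;
# edges are triples (endpoint, endpoint, label)
vertices = [1, 2, 'u', 'v']
internal = ['u', 'v']
edges = [(1, 'u', 1), ('u', 'v', 2), ('v', 2, 3), (1, 'v', 4)]

def psi_tilde(alpha):
    """Psi~ = Psi~^{1,2}: sum over spanning 2-forests separating 1 from 2."""
    total = Fraction(0)
    for F in combinations(edges, len(vertices) - 2):   # 2 trees -> 2 edges
        parent = {w: w for w in vertices}
        def find(w):
            while parent[w] != w:
                w = parent[w]
            return w
        acyclic = True
        for a, b, _ in F:
            ra, rb = find(a), find(b)
            if ra == rb:
                acyclic = False
                break
            parent[ra] = rb
        if acyclic and find(1) != find(2):   # 1 and 2 in different trees
            term = Fraction(1)
            for _, _, lbl in F:
                term *= alpha[lbl]
            total += term
    return total

def det_Lii(alpha):
    """Determinant of the Laplace matrix restricted to internal vertices."""
    L = {(a, b): Fraction(0) for a in internal for b in internal}
    for a, b, lbl in edges:
        for w in (a, b):
            if w in internal:
                L[(w, w)] += alpha[lbl]
        if a in internal and b in internal:
            L[(a, b)] -= alpha[lbl]
            L[(b, a)] -= alpha[lbl]
    return L[('u', 'u')] * L[('v', 'v')] - L[('u', 'v')] * L[('v', 'u')]

random.seed(0)
alpha = {lbl: Fraction(random.randint(1, 99), random.randint(1, 99))
         for lbl in (1, 2, 3, 4)}
assert det_Lii(alpha) == psi_tilde(alpha)
```

For this graph both sides equal $(\alpha_1+\alpha_2)(\alpha_2+\alpha_3+\alpha_4)-\alpha_2^2 = \alpha_1\alpha_2+\alpha_1\alpha_3+\alpha_1\alpha_4+\alpha_2\alpha_3+\alpha_2\alpha_4$, in agreement with the enumeration of its five spanning 2-forests.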
We can therefore interpret the matrix elements $$\label{term1} \tilde{\Psi}(L^{\mathrm{ei}}L^{\mathrm{ii}-1}L^{\mathrm{ie}})_{k,\ell} = \sum_{\substack{e={\{k,v\}}\\f={\{\ell,w\}}}} \alpha_e\alpha_f \tilde{\Psi}_G^{vw,1,\ldots,V^{{\mathrm{ext}}}} \quad\text{($v, w$ internal)}$$ in the exponential of in terms of subgraphs of $G$. We distinguish two cases:

- $k\neq \ell$: Adding the two edges $e$, $f$ to a spanning forest $F \in {{\mathcal F}}_G^{vw,1,\ldots,V^{\mathrm{ext}}}$ yields a forest $F' = F \cup {\{e,f\}} \in {{\mathcal F}}_G^{k\ell,(m)_{m\neq k,\ell}}$ (see Figure \[fig:forests\]). Conversely, each $F'$ arises exactly once this way, because it determines $e$ and $f$ as the initial and final edges on the unique path in $F'$ connecting $k$ and $\ell$. The only exceptions are forests $F'$ where this path is just a single edge $e={\{k,\ell\}}$ connecting them directly. But in this case $F'\setminus e \in {{\mathcal F}}_G^{1,\ldots,V^{{\mathrm{ext}}}}$, so we conclude $$\sum_{\mathclap{\substack{e={\{k,v\}}\\f={\{\ell,w\}}}}} \alpha_e\alpha_f\tilde{\Psi}_G^{vw,1,\ldots,V^{{\mathrm{ext}}}}(\alpha) =\tilde{\Psi}_G^{k\ell,(m)_{m\neq k,\ell}}(\alpha) -\tilde{\Psi} \sum_{\mathclap{e={\{k,\ell\}}}} \alpha_e .$$

- $k=\ell$: Adding $f$ to $F \in {{\mathcal F}}_G^{vw,1,\ldots,V^{\mathrm{ext}}}$ gives a forest $F'=F \cup {\{f\}} \in {{\mathcal F}}_G^{1,\ldots,kw,\ldots,V^{{\mathrm{ext}}}}$. Each such $F'$ occurs exactly once, because $f$ is necessarily the (unique) first edge on the path in $F'$ connecting $k$ with $w$, hence $$(\tilde{\Psi}L^{\mathrm{ei}}L^{\mathrm{ii}-1}L^{\mathrm{ie}})_{k,k} = \sum_{f={\{k,w\}}} \alpha_f \tilde{\Psi}_G^{1,\ldots,kw,\ldots,V^{\mathrm{ext}}} .$$

For a fixed $F'\in{{\mathcal F}}_G^{1,\ldots,kw,\ldots,V^{\mathrm{ext}}}$, $f$ runs over all edges that connect $k$ to a vertex $w$ that lies in the same connected component of $F'$. 
If we sum instead over all edges incident to $k$, we get additional contributions when $w$ lies in another component, say the one containing $\ell'$ (see Figure \[fig:forests\]). Therefore, $$(\tilde{\Psi}L^{\mathrm{ei}}L^{\mathrm{ii}-1}L^{\mathrm{ie}})_{k,k} = \tilde{\Psi}\sum_{k \in f} \alpha_f - \sum_{\ell'\neq k} \tilde{\Psi}_G^{k\ell',(m)_{m\neq k,\ell'}}.$$ According to , the contributions proportional to $\tilde{\Psi}$ cancel in both cases when we subtract $(\tilde{\Psi}L^{\mathrm{ee}})_{k,\ell}$ from , such that becomes $${{\mathcal I}}= \tilde{\Psi}^{-d/2}\exp\Big( -\tilde{\Psi}^{-1} \sum_{\mathclap{1\leq k<\ell\leq V^{\mathrm{ext}}}} (x_k^2-2x_k^t x_\ell+x_\ell^2) \tilde{\Psi}_G^{k\ell,(m)_{m\neq k,\ell}} \Big) = \tilde{\Psi}^{-d/2}\exp(-\tilde{\Phi}_G/\tilde{\Psi}).$$ Let us now insert a factor $1=\int_0^{\infty} \delta(t-H^{1/r}(\alpha)){\mathrm{d}}t$ into , where $H(\alpha)$ can be any homogeneous polynomial of degree $r>0$ which is positive inside $\Delta$. After we substitute all $\alpha_e$ by $t\alpha_e$ and collect the powers of $t$, the integrand of becomes $$\delta(1-H^{1/r}(\alpha)) \Big(\prod_{e}\alpha_e^{n_e+\lambda\nu_e-1}\partial_{\alpha_e}^{n_e}\Big) \tilde{\Psi}^{-d/2} \Big[ \int_0^{\infty} t^{M_G-1} \mathrm{e}^{-t\tilde{\Phi}_G/\tilde{\Psi}} {\mathrm{d}}t \Big] \prod_e{\mathrm{d}}\alpha_e,$$ because $\tilde{\Psi}$ and $\tilde{\Phi}_G$ are homogeneous in $\alpha$ of degree $V^{{\mathrm{int}}}$ and $V^{{\mathrm{int}}}+1$, respectively. We integrate over $t$ using . The choice $H(\alpha)=\alpha_e$ for some edge $e$ gives a particularly simple representation as an affine integral over ${{\mathbb R}}_+^{E_G-1}$ which is equivalent to . proof of Theorem \[generalthm\] =============================== In this section we prove the real analyticity of graphical functions. 
Because the polynomial $\tilde{\Phi}_G$ from depends on the squared distances $$s_{i,j} = {\|x_i-x_j\|}^2$$ between the external vertices, we may use the dual parametric representation to define $f_G^{(\lambda)}(s)$ as a function of the vector $s=(s_{i,j})_{1\leq i<j\leq V^{\mathrm{ext}}}$. In the (simply connected) domain where all components of $s$ have positive real parts, the integral remains absolutely convergent and hence $f_G^{(\lambda)}(s)$ is an analytic function of $s$: \[analytic\] Let $G$ be a graph with a convergent graphical function . Then $f_G^{(\lambda)}(x)$ extends to a single-valued, analytic function $$f_G^{(\lambda)}(s) \colon \big\{ s\in {{\mathbb C}}^{V^{\mathrm{ext}}(V^{\mathrm{ext}}-1)/2}\colon \text{$\operatorname{Re}s_{i,j}>0$ for all $1\leq i<j\leq V^{{\mathrm{ext}}}$} \big\} \longrightarrow {{\mathbb C}}.$$ In the special case of three external vertices, this implies the real analyticity of $f_G^{(\lambda)}(z)$ on ${{\mathbb C}}\setminus{\{0,1\}}$: Let $z\in{{\mathbb C}}\setminus{\{0,1\}}$. For the three external labels $0$, $1$, $z$ we have $s_{0,1}=1>0$, $s_{0,z}=z{{\overline{z}}}>0$ and $s_{1,z}=(z-1)({{\overline{z}}}-1)>0$ according to . With theorem \[analytic\] we see that $f_G^{(\lambda)}(z,\bar{z})$ is a composition of analytic functions, which proves (G3). The identity (G1) is immediate from as it expresses $f_G^{(\lambda)}(z)$ as a function of ${|z|}$ and ${|1-z|}$. Finally recall that $f_G^{(\lambda)}(z)$ is defined as the value of the (convergent) integral and thus manifestly single-valued. For the proof of Theorem \[analytic\] we need the following notation: Let $g$ be a subgraph of $G$ with edge set ${{\mathcal E}}_g\subseteq{{\mathcal E}}_G$ and let $Q\in{{\mathbb C}}[\alpha_e,e\in{{\mathcal E}}_G]$ be a polynomial in the edge variables of $G$. 
Then, the (low) degree (${\underline{\operatorname{deg}}_{g}}(Q)$) $\deg_g(Q)$ of $Q$ is the (low) degree of $Q$ in the edge variables $\alpha_e$, $e\in{{\mathcal E}}_g$ of the subgraph $g$. In other words, $c={\underline{\operatorname{deg}}_{g}}(Q)$ is the largest integer such that each monomial in $Q$ has at least $c$ factors $\alpha_e$ with $e \in {{\mathcal E}}_g$ (with multiplicity). Similarly, $C=\deg_g(Q)$ is the smallest integer such that each monomial in $Q$ has at most $C$ factors $\alpha_e$ with $e\in {{\mathcal E}}_g$. Note that ${\underline{\operatorname{deg}}_{g}}(Q)$ and $\deg_g(Q)$ are defined for polynomials $Q$ in $E_G$ variables. So for $Q=\alpha_1-\alpha_3+\alpha_2$ we have ${\underline{\operatorname{deg}}_{{\{2\}}}}(Q)=0$, even though on the subspace $\alpha_1=\alpha_3$ the low degree of $Q$ in $\alpha_2$ is 1. \[prop1\] Let $g$ be a subgraph of a graph $G$ with external vertices. Let $\tilde{\Psi}^p_G(\alpha)$ be a dual spanning forest polynomial for some partition $p$ of external vertices. Then $$\label{subgraphineq} {\underline{\operatorname{deg}}_{g}}(\tilde{\Psi}^p_G)\geq V^{{\mathrm{int}}}_g,\quad\deg_g(\tilde{\Psi}^p_G) \leq V_g-1,$$ where ${{\mathcal V}}_g$ and $V^{{\mathrm{int}}}_g$ are as in and , respectively. Let $F\in{{\mathcal F}}^p_G$ be a spanning forest of $G$. In every tree $T$ of $F$ we choose an external vertex $v_T\in T$ and we orient all edges of $T$ such that they point towards $v_T$. Because $F$ is spanning, every $g$-internal vertex $u$ has one outgoing edge in $F$. Conversely, every edge in $F$ has a unique vertex $u$ as source, therefore $${\underline{\operatorname{deg}}_{g}}(\tilde{\Psi}^p_G) =\min_{F\in{{\mathcal F}}^p_G} E_{g\cap F} \geq V^{\mathrm{int}}_g.$$ Finally we use that $g\cap F$ is a forest in $g$ and thus has at most $V_g-1$ edges, hence $$\deg_g(\tilde{\Psi}^p_G) =\max_{F\in{{\mathcal F}}^p_G} E_{g\cap F} =V_g-1 .\qedhere$$ We first derive Theorem \[analytic\] from in the case that all $n_e=0$. 
We consider the integrand as a function of the vector $s=(s_{i,j})_{i,j \in {{\mathcal V}}^{\mathrm{ext}},i< j}$ which we restrict to the complex domain ($\varepsilon>0$ may be chosen arbitrarily small) $$\Omega^{\varepsilon} = \left\{ s \colon \operatorname{Re}s_{i,j} > \varepsilon \quad\text{for all $1\leq i<j\leq V^{{\mathrm{ext}}}$} \right\} \subset {{\mathbb C}}^{V^{{\mathrm{ext}}}(V^{{\mathrm{ext}}}-1)/2}.$$ Let $\hat{s}_{i,j} = {\|\hat{x}_i-\hat{x}_j\|}^2$ denote the distances of an arbitrary set $\hat{x}\in {{\mathbb R}}^{d V^{\mathrm{ext}}}$ of pairwise distinct points. We can rescale $\hat{x}$ to ensure $\max_{i<j} (\hat{s}_{i,j}) = \varepsilon$, such that $${|\tilde{\Phi}_G(\alpha,s)|} \geq\operatorname{Re}\tilde{\Phi}_G(\alpha,s) > \tilde{\Phi}_G(\alpha,\hat{x})$$ for every $s \in \Omega^{\varepsilon}$ and all $\alpha \in {{\mathbb R}}_+^E$. As $f^{(\lambda)}_G(\hat{x})$ is convergent, its parametric integrand provides an integrable majorant $F(\alpha,\hat{s}) \geq {|F(\alpha,s)|}$ for the integrand $F(\alpha,s)$ of $f_G^{(\lambda)}(s)$, uniformly for all $s \in \Omega^{\varepsilon}$. This implies the analyticity of $f_G^{(\lambda)}(s)$ in $\Omega^{\varepsilon}$, for every $\varepsilon>0$ (we cite this result below as theorem \[holomorphic\]). Now let us remove the restriction that $n_e=0$. We compute the derivatives in and write the resulting integrand as $$\label{eq:integrand-differentiated} F(\alpha,s) = \left[ \prod_e \alpha_e^{n_e+\lambda \nu_e-1} \right] \frac{\sum_{m} \alpha^m q_m(s)}{\tilde{\Phi}_G(\alpha,s)^{M_G+\sum_e n_e}\tilde{\Psi}(\alpha)^{d/2-M_G+\sum_e n_e}},$$ where we expanded the numerator polynomial into its monomials $\alpha^m = \prod_e \alpha_e^{m_e}$ in Schwinger parameters and their coefficients $q_m \in {{\mathbb Q}}[s_{i,j}]$. Note that the operators $\alpha_e \partial_{\alpha_e}$ do not change the $\alpha$-degree, so $F$ stays homogeneous of degree $-E_G$ in the $\alpha$ variables, no matter which values are chosen for the $n_e$. 
This gives $$\sum_e m_e = \left( \deg_G(\tilde{\Phi}_G)+\deg_G(\tilde{\Psi}_G) - 1 \right)\sum_e n_e =2V^{{\mathrm{int}}}\sum_e n_e ,$$ because the polynomials $\tilde{\Phi}_G$ and $\tilde{\Psi}_G$ have the $\alpha$-degrees $V^{{\mathrm{int}}}+1$ and $V^{{\mathrm{int}}}$. If we write as $F(\alpha,s) = \sum_m q_m(s) F_m(\alpha,s)$ we can thus identify each $F_m$ with the (dual parametric) integrand for $f^{(\lambda')}_G(s)$ in $d'=2\lambda'+2=d+4\sum_e n_e$ dimensions with weights $\lambda' \nu_e' = \lambda \nu_e + n_e + m_e > 0$. With the first part of the proof it suffices to show that each of these $f^{(\lambda')}_G$ is a convergent graphical function. We therefore have to consider the infrared and ultraviolet conditions. Because differentiation $\partial_{\alpha_e}$ for $e\in{{\mathcal E}}_g$ can lower the low degree by at most one, we obtain $$\sum_{e\in g}m_e-({\underline{\operatorname{deg}}_{g}}(\tilde{\Phi}_G)+{\underline{\operatorname{deg}}_{g}}(\tilde{\Psi}_G))\sum_{e\in G}n_e\geq-\sum_{e\in g}n_e.$$ From the convergence of $f^{(\lambda)}_G$ and from Proposition \[prop1\] we obtain $$\sum_{e\in g}\lambda'\nu_e' =\sum_{e\in g}(\lambda\nu_e+n_e+m_e) >\tfrac{d}{2}V_g^{{\mathrm{int}}}+2V_g^{{\mathrm{int}}}\sum_{e\in G}n_e =\tfrac{d'}{2} V_g^{{\mathrm{int}}},$$ proving infrared convergence. 
Likewise, differentiation $\partial_{\alpha_e}$ for $e\in{{\mathcal E}}_g$ lowers the degree by at least one, yielding $$\sum_{e\in g}m_e-(\deg_g(\tilde{\Phi}_G)+\deg_g(\tilde{\Psi}_G))\sum_{e\in G}n_e\leq-\sum_{e\in g}n_e.$$ Together with Proposition \[prop1\] this proves ultraviolet convergence (and thus completes our proof of Theorem \[analytic\]): $$\sum_{e\in g}\lambda'\nu_e' =\sum_{e\in g}(\lambda\nu_e+n_e+m_e) <\left(\tfrac{d}{2}+2\sum_{e\in G}n_e \right)(V_g-1) =\tfrac{d'}{2}(V_g-1) .\qedhere$$ For convenience of the reader we cite here the result from calculus in the form [@Sauvigny Theorem 2.12], which is perfectly adapted to our application: \[holomorphic\] Let $\Theta\subset{{\mathbb R}}^m$ and $\Omega\subset{{\mathbb C}}^n$ denote domains in the respective spaces of dimensions $m,n\in{{\mathbb N}}$. Furthermore, let $$f(t,z) =f(t_1,\ldots,t_m,z_1,\ldots,z_n)\colon \Theta\times\Omega\longrightarrow{{\mathbb C}}$$ represent a continuous function with the following properties: - For each fixed $t\in\Theta$, the function $ z \mapsto f(t,z) $ is holomorphic in $z\in\Omega$. - We have a continuous function $ F(t)\colon\Theta\longrightarrow[0,\infty) $ which is integrable, $$\int_\Theta F(t)\ {\mathrm{d}}t< \infty,$$ and uniformly majorizes $f$: $ {|f(t,z)|} \leq F(t)$ for all $(t,z)\in\Theta\times\Omega$. Then the function $ z \mapsto \int_\Theta f(t,z)\ {\mathrm{d}}t $ is holomorphic in $\Omega$. We may consider a graphical function $f_G^{(\lambda)}(z)$ as a function of two complex variables $z$ and ${{\overline{z}}}$ and analytically continue away from the locus where ${{\overline{z}}}$ is the complex conjugate of $z$. In this case Theorem \[analytic\] states that $f_G^{(\lambda)}$ is analytic in $z$ and ${{\overline{z}}}$ if $\operatorname{Re}z{{\overline{z}}}>0$ and $\operatorname{Re}(1-z)(1-{{\overline{z}}})>0$. If one continues analytically beyond this domain, additional singularities will in general appear. 
Already in example \[ex:g4\] we encounter $z=\bar{z}$, which corresponds to the vanishing of the Källén function $$(z-{{\overline{z}}})^2 = s_{0,z}^2+s_{1,z}^2+s_{0,1}^2-2 s_{0,z} s_{1,z} - 2 s_{0,z} s_{0,1} - 2 s_{1,z} s_{0,1}.$$ For bigger graphs the singularity structure outside $\operatorname{Re}z{{\overline{z}}}>0$, $\operatorname{Re}(1-z)(1-{{\overline{z}}}) >0 $ becomes more and more complicated (see [@Panzer:ManyScales table 1] for a few examples). proof of Theorem \[planarthm\] ============================== Planar duality for graphical functions is specific to three external labels for which we use $0$, $1$, $z$. Let us first recall the notion of planarity and planar duality for Feynman graphs in this case.[^5] \[dualdef\] Let $G$ be a graph with three external labels $0$, $1$, $z$. Let $G_v$ be the graph that we obtain from $G$ by adding an extra vertex $v$ which is connected to the external vertices of $G$ by edges $\{0,v\}$, $\{1,v\}$, $\{z,v\}$, respectively. We say that $G$ is *externally planar* if and only if $G_v$ is planar. Let $G_v$ be planar and $G_v^\star$ a planar dual of $G_v$. The edges $e^\star$ of $G_v^\star$ are in one to one correspondence with the edges $e$ of $G_v$. A planar dual of $G$ is given by $G_v^\star$ minus the triangle $\{0,v\}^\star$, $\{1,v\}^\star$, $\{z,v\}^\star$ with external labels $0$, $1$, $z$ corresponding to the faces $1zv$, $0zv$, $01v$, respectively. The edge weights of $G^\star$ are related to the edge weights of $G$ by : $\lambda\nu_e+\lambda\nu_{e^\star}=d/2$. We can draw an externally planar graph $G$ with the external labels 0, 1, $z$ in the outer face. A dual $G^\star$ then has also the labels in the outer face, ‘opposite’ to the labels of $G$, see Figure \[fig:duals\]. Another way to construct this dual is by adding three edges $e_{01}=\{0,1\}$, $e_{0z}={\{0,z\}}$, $e_{1z}={\{1,z\}}$ to $G$ to obtain a graph $G_e$. 
Its dual $G_e^\star$ differs from $G_v^\star$ upon replacing the triangle ${\{0,v\}}^\star$, ${\{1,v\}}^\star$, ${\{z,v\}}^\star$ by a star $e_{01}^\star$, $e_{0z}^\star$, $e_{1z}^\star$. From $G_e^\star$ we obtain $G^\star$ by removing this star and labeling its tips with $z$, $1$, $0$, respectively. Clearly both constructions (starting from the same planar embedding of $G$) lead to the same dual $G^\star$ and prove the following: *Let $G$ be externally planar with dual $G^\star$. Then $G^\star$ is externally planar and $G$ is a dual of $G^\star$.* Because the edge weights are positive we can use $n_e=0$ in . From $M_G=d/2$ we obtain (see and ) $$M_{G^\star} =\sum_e(\tfrac{d}{2}-\lambda\nu_e)-\tfrac{d}{2}V^{{\mathrm{int}}}_{G^\star} = \tfrac{d}{2}(E_G-V^{{\mathrm{int}}}_{G^\star}-V^{{\mathrm{int}}}_G-1) = \tfrac{d}{2}(E_{G_v}-V_{G_v^\star}-V_{G_v}+3)$$ where $E_G$ is the number of edges of $G$. As the vertices of $G_v^\star$ are the faces of the planar embedding of $G_v$, Euler’s formula for planar graphs shows $M_{G^\star}=d/2$. Comparing for the graph $G$ with for the graph $G^\star$ leads to if we identify $\alpha_e=\alpha_{e^\star}$ for all edges $e$, provided that $ \tilde{\Phi}_G=\Phi_{G^\star} $. This amounts to the identity $\tilde{\Psi}^{ij,k}_G=\Psi^{ij,k}_{G^\star}$ of spanning forest polynomials for all triples $\{i,j,k\}=\{0,1,z\}$ and hence follows from the bijection of 2-forests given by $${{\mathcal F}}^{ij,k}_G \ni F \longleftrightarrow F^\star := \{e^\star\colon e\not\in F\}\in{{\mathcal F}}^{ij,k}_{G^\star}.$$ Namely, for any given $F\in{{\mathcal F}}^{ij,k}_G$ consider the spanning tree $T_i = F \cup {\{{\{i,v\}},{\{k,v\}}\}}$ of $G_v$. As Tutte points out [@Tutte Theorem 2.64], its dual $T_i^\star = {\{e^{\star}\colon e\notin T_i\}}\subseteq {{\mathcal E}}_{G_v^\star}$ is a spanning tree of $G_v^\star$ and therefore, $F^{\star} = T_i^\star\setminus{\{j,v\}}^\star$ is indeed a $2$-forest. 
Furthermore, the edge ${\{j,v\}}^\star$ connects the external vertices $i$ and $k$ of $G^\star$ and thus $F^\star$ cannot connect $i$ with $k$ (otherwise, $T_i^\star = F^\star \cup {\{j,v\}}^\star$ would contain a cycle). Likewise (interchanging $i$ and $j$) $F^\star$ does not connect $j$ with $k$, hence $F^{\star} \in {{\mathcal F}}^{i,k}_{G^\star} \cap {{\mathcal F}}^{j,k}_{G^\star} = {{\mathcal F}}^{ij,k}_{G^\star}$. Finally, the symmetry $F=(F^\star)^\star$ implies that the map $F\mapsto F^\star$ is injective and onto. [^1]: In two dimensions ($\lambda=0$), non-trivial graphical functions always diverge. However one can redefine $\lambda\nu_e=:\nu_e'\in{{\mathbb R}}$ as the edge weights and all of the following results extend to this case. [^2]: A subgraph $g$ is induced when every edge of $G$ which has both endpoints in ${{\mathcal V}}_g$ belongs to $g$. [^3]: The validity for arbitrary dimensions is straightforward and was noticed already in [@Nakanishi:Book Remark 7-10]. [^4]: In the notation of the (All minors) matrix tree theorem [@Chaiken equation (2)], the first equality is precisely the case $W=U=V^{\mathrm{ext}}$, $S=V$. The second identity follows from Cramer’s rule by setting $W=V^{\mathrm{ext}}\cup {\{v\}}$ and $U=V^{\mathrm{ext}}\cup {\{w\}}$ and noting that $\varepsilon(W,S)\varepsilon(U,S) = (-1)^{v+w}$ by the remarks after [@Chaiken equation (3)]. [^5]: In the physics literature, this definition is standard [@MinamiNakanishi]. We did not find an established name for this in the literature on graph theory, except for the term “circular planar graph” used in [@CurtisIngermanMorrow].
--- abstract: 'Fluctuations in the measured mRNA levels of unperturbed cells under fixed conditions have often been viewed as an impediment to the extraction of information from expression profiles. Here, we argue that such expression fluctuations should themselves be studied as a source of valuable information about the underlying dynamics of genetic networks. By analyzing microarray data taken from Saccharomyces cerevisiae, we demonstrate that correlations in expression fluctuations have a highly statistically significant dependence on gene function, and furthermore exhibit a remarkable scale-free network structure. We therefore present what we view to be the simplest phenomenological model of a genetic network which can account for the presence of biological information in transcript level fluctuations. We proceed to exactly solve this model using a path integral technique and derive several quantitative predictions. Finally, we propose several experiments by which these predictions might be rigorously tested.' author: - 'William W. Chen' - 'Jeremy L. England' - 'Eugene I. Shakhnovich' title: An Exact Model of Fluctuations in Gene Expression --- [^1] [^2] The classical approach to molecular biology is to characterize living systems principally by analyzing their individual components and determining how these components interact with each other on a case-by-case basis. In contrast, the recent development of high-throughput gene expression assays has made it possible to study the cell’s web of interacting gene products as a single, complex system with many degrees of freedom. [@schena_brown; @wen_somogyi]. This approach has proved to be useful not only as a new tool for medical diagnosis [@veer_friend; @alizadeh_staudt], but also as a fount of novel insights into how biological systems function as a whole [@arava_herschlag; @laub_shapiro; @spellman_futcher]. 
It has previously been observed in the case of gene microarray assays that data which are taken from repetitions of the same experiment can nonetheless exhibit a substantial amount of variation. These fluctuations in measured gene expression have typically been treated as a drawback to using microarrays, so much so that even where it has been observed that expression fluctuations do contain information about gene function, efforts still have been focused on finding ways to deconvolute their impact from the data [@hughes]. This attitude is mirrored in the characteristics of many previously proposed theoretical models of gene networks; despite ample evidence that gene expression is a fundamentally “noisy,” stochastic process [@thattai_oudenaarden; @hasty_collins; @elowitz_swain], many such models have attempted to describe whole-network dynamics using deterministic differential equations or Boolean models [@collins_gardner; @kauffman_troein]. In this work, we present analyses of yeast microarray data which illustrate the value of expression fluctuations as a potential source of information about genetic networks. We therefore propose what we believe to be the simplest analytical model of a whole genetic network which can explain the biological information we observe in these fluctuations. In what follows, we first solve the model using a path integral technique and derive from it several quantitative predictions. Subsequently, we propose experiments by which these predictions could be straightforwardly tested. Biological Information in Expression Fluctuations ================================================= In their large-scale study of yeast expression profiles conducted in 2000, Hughes et al. [@hughes] carried out 63 identically prepared gene expression assays of yeast cells under normal conditions. 
Though the investigators noted in passing that correlations in the fluctuations of their control data could be used to cluster genes by function, their aim was principally to gauge the magnitude of this “intrinsic biological noise” in order to be able to subtract its effects from their subsequent experiments. Here, we present similar analyses of the same set of control data, arguing that the amount of biological information contained in such noise merits further study in its own right. Using data from the same 63 assays, we constructed a series of gene networks. Networks were made by representing each gene as a node in a graph, and by drawing links between gene pairs when the Pearson correlations of their levels of expression exceeded some cut-off threshold in magnitude. The Pearson correlation between the expression levels of two genes is defined as follows: $$\frac{\langle x y \rangle - \langle x \rangle \langle y \rangle} {\sqrt{\langle x^2 \rangle - \langle x \rangle^2} \, \sqrt{\langle y^2 \rangle - \langle y \rangle^2}}$$ where $x$ and $y$ are the expression levels of the genes. The averaging is taken over the 63 experiments. The Pearson correlation measures how the natural fluctuations of any two genes track each other in identically prepared experiments. Two types of analyses were applied to these correlation networks. In a fashion very similar to the original analysis done by Hughes et al., we gauged the ability of these networks to assign genes with identical functional classifications [@mips] to the same cluster. For each constructed network, clusters were defined as sets of genes belonging to mutually disconnected subgraphs. By tuning the Pearson correlation cut-off threshold, we constructed networks with varying “connectedness” and thus varying sizes of the giant component. The giant component is defined as the largest connected subgraph of the network. 
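A minimal sketch of this construction might look as follows; the expression matrix below is a synthetic stand-in for the 63 control assays (five genes are given a shared fluctuation mode, the rest fluctuate independently), and the cut-off value is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 63 control assays: rows are experiments,
# columns are genes. Genes 0-4 share a common fluctuation mode;
# genes 5-9 fluctuate independently. (All values here are hypothetical.)
n_exp, n_genes = 63, 10
data = rng.standard_normal((n_exp, n_genes))
data[:, :5] += 2.0 * rng.standard_normal((n_exp, 1))

# Pearson correlations across the repeated experiments (genes as variables)
C = np.corrcoef(data.T)

# link gene pairs whose |correlation| exceeds an illustrative cut-off
threshold = 0.6
adj = (np.abs(C) > threshold) & ~np.eye(n_genes, dtype=bool)

def components(adj):
    # connected components of the network by simple graph traversal
    n = adj.shape[0]
    seen, comps = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack.extend(j for j in range(n) if adj[i, j] and j not in comp)
        seen |= comp
        comps.append(comp)
    return comps

clusters = components(adj)
giant = max(clusters, key=len)  # the giant component
print(sorted(giant))
```

With this synthetic input, the five co-fluctuating genes form the giant component while the independent genes remain (essentially always) isolated.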
At one extreme, when the cut-off threshold is very low, the network is completely connected and the giant component is the entire set of genes. At the other extreme, when the cut-off threshold is very high, the network is completely disconnected and all subgraphs are orphan nodes. Thus the size of the giant component varies between 1 and $N$ (where $N$ is the number of genes) and serves as a natural scale for the “connectedness” of the network. To assess how well identically annotated genes were clustered, we calculated the fraction of gene pairs that were in the same cluster and annotated with the same functional classification, for different sizes of the giant component (Fig. \[fig:probpair\]). Specifically, any cluster of size $n$ has $n(n-1)/2$ total possible pairs. Thus a network of $m$ clusters with sizes $\{n_1, n_2, \ldots, n_m\}$ has a total number of possible pairings given by $\sum_{i=1}^{m}n_i(n_i-1)/2$. Of these possible clustered pairings, a certain number will have matching functional classification. The calculated fraction is then the number of gene pairs with matching classification divided by the total number of possible pairs. This fraction increases with decreasing giant component size until a peak value of 0.375, and then steeply drops when the size of the giant component reaches 1. For permissive cut-offs and giant components exceeding more than half the total number of genes, this fraction is near that of a random network, reflecting the loss of information as the network becomes, trivially, completely connected. Two types of random networks served as controls. The first random control set was created by fixing the topology of the network, and randomizing the classification of the genes to calculate a complementary control probability. The second random control set was created by randomly linking genes, in effect creating random topologies. 
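The matching-pair fraction just defined can be made concrete with a small sketch; the clusters and functional annotations below are hypothetical.

```python
from itertools import combinations

# hypothetical clusters (as produced by a thresholded network) and
# hypothetical functional annotations, indexed by gene
clusters = [{0, 1, 2}, {3, 4}, {5}]
annotation = {0: "ribosome", 1: "ribosome", 2: "mating",
              3: "stress", 4: "stress", 5: "stress"}

# total number of possible within-cluster pairs: sum of n_i (n_i - 1) / 2
total_pairs = sum(len(c) * (len(c) - 1) // 2 for c in clusters)

# within-cluster pairs whose functional classifications match
matching = sum(1 for c in clusters
               for i, j in combinations(sorted(c), 2)
               if annotation[i] == annotation[j])

fraction = matching / total_pairs
print(fraction)  # prints 0.5
```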
Comparing the pair matching fraction of the Pearson data set to either null model shows substantial significance. At the peak of the Pearson fraction curve in Fig. \[fig:probpair\], the p-value is $2.12 \times 10^{-212}$ over the fixed topology control, and $8.08 \times 10^{-2226}$ over the random topology control. At each cut-off, we also looked at a global property of the networks generated from noise data. We calculated the degree distribution of the networks, which is the distribution of the number of edges per node. We show the degree distribution at the giant component transition in Fig. \[fig:powerlaw\]. The giant component transition occurs at the cut-off (0.69) at which the size of the largest connected cluster is half the total number of nodes. What is observed is a striking power-law. In a variety of other types of networks, power-laws are thought to reflect highly non-random assignment of links [@albert_barabasi]. Such degree distributions have been identified previously in microarray-based gene clusterings, but have been restricted to the context of examining large-scale time-series changes in expression due to some drastic change in the cell’s overall state (e.g. diauxic shift or cell-division cycling) [@bhan_dewey; @bergmann_barkai]. In contrast, we observe here that power-law behavior is visible in the mere fluctuations in expression across identically prepared cell samples. The above analyses illustrate that the seeming randomness of mRNA expression fluctuations belies the presence of highly structured, biologically significant patterns in the correlations of these fluctuations. In what follows, we propose a model which incorporates both deterministic and stochastic aspects into an explanation of why these patterns might emerge. Our goal is not to derive this model from any set of established, underlying equations known to govern biological systems. 
Rather, our purpose is to keep the model as simple and analytically tractable as possible, and to let the question of its validity in describing gene networks be decided based on its empirical success or failure in the laboratory. The Model ========= We seek to model the stochastic dynamics of gene expression. A natural starting point for this undertaking would seem to be the description of expression in a single cell. This would certainly be the most sensible approach if it were our intention to test our model using data measured on a single-cell basis. However, many high-throughput gene expression assays (such as mRNA microarray experiments) are carried out on samples containing populations of cells. Within these populations, it is certainly reasonable to expect *a priori* that there might be correlations in the noise experienced from one cell to another. Thus, in order to be as general as possible, we begin by assuming that we must describe some population of cells as a single, irreducible system, although this does not rule out the possibility that the same simple model of gene expression might also be applied gainfully to the study of single cells. We begin by postulating, for the time being, that the total levels of all types of mRNA throughout the cell population fully characterize the state space of the system. This is obviously a gross over-simplification of real gene expression dynamics; a gene’s activity can be regulated post-transcriptionally through processes as diverse as alternative splicing, RNA interference, phosphorylation, and proteolysis. Furthermore, an mRNA’s region of localization can in many cases be at least as important to its impact as the amount of it that is transcribed, particularly if it is found in some cells in the population, but not in others [@ghaemmaghami_weissman; @huh_oshea]! 
Nevertheless, since our aim is to develop the simplest possible model of gene expression that will grant us some predictive power, we proceed with our assumption despite these valid criticisms. Our next assumption is that, in the absence of noise, the expression levels (total mRNA levels) in the population of cells would reach a stable, steady-state profile. From a biological perspective, such a premise may at first seem totally indefensible: a cell often passes through successive, cycling stages of growth during which aspects of its expression profile must change dramatically. However, we may wish to restrict our discussion for the moment to the modeling of growth-arrested cells which, for the most part, lack cycling degrees of freedom. Furthermore, since we are modeling the expression dynamics for a population of cells, it may be the case that individual members of this population are at different points in the cell cycle, but that a steady state has been achieved, on average, for the population as a whole. We also require, still in the absence of noise, that the rate at which the expression level of one gene changes at a given time is some analytic function of the current expression levels of all genes in the system. Though it is reasonable to think that such a view of gene expression is at least approximately valid for describing a network of genes affecting each other’s rate of production, it is also entirely possible that it is factually incorrect. Current changes in expression may, for example, depend in a discontinuous fashion on whether certain levels are above or below some threshold [@vilar_leibler; @hasty_collins], or else they may depend more strongly on the expression profile at some finite time in the past than on the current profile. The sensibility of this analyticity requirement can only be judged by how accurate the quantitative predictions of the model prove to be in the laboratory. 
Finally, we assume that the rate of expression of each gene is subject to random noise which is uncorrelated from gene to gene. Rather than attempting to tie this noise directly to any particular source, we leave its origin deliberately vague. Our aim is solely to introduce a stochastic aspect to the dynamics of our simple phenomenological model, without attempting to explain things in terms of underlying biological mechanisms. The next step is to formalize the simple model that is implicit in these assumptions. In a system of $N$ genes, let the element $x_{i}$ in the vector $\mathbf{x}= [x_{1},\ldots,x_{N}]$ describe the deviation of the expression level of gene $i$ from its steady-state level. In this case, we have $$\frac{dx_{i}}{dt}=F_{i}(\mathbf{x})$$ where the steady-state condition is expressed as $\mathbf{F}(0)=0$. If we define $$K_{ij}=-\frac{\partial F_{i}}{\partial x_{j}}\bigg|_{\mathbf{x}=0}$$ then, for small deviations from the steady-state in the absence of noise, we obtain $$\frac{d\mathbf{x}}{dt} = -\mathbf{Kx}\label{linear}$$ Demanding that the steady-state (SS) be stable is clearly equivalent to requiring the real parts of all eigenvalues of $\mathbf{K}$ to be positive so that all trajectories eventually decay to the SS. Thus far, our model has followed the same course as previous theoretical work on linear response experiments in gene networks [@collins_gardner]. At this point, however, we move in a new direction by introducing noise to the system, which can be achieved by simply adding a randomly distributed quantity $f_{i}(t)$ to the right hand side of Eq. \[linear\]: $$\frac{d\mathbf{x}}{dt} = -\mathbf{Kx} +\mathbf{f}(t)\label{basic}$$ The above $N$-dimensional first order inhomogeneous equation was first investigated by Ornstein and Zernike [@ornstein_zernike] as a class of Langevin models. Here we reinterpret it to be the equation of motion for gene expression dynamics. 
In another work, a Langevin model has been used in the single-gene context to explore dynamics of transcription and translation [@ozbudak_vanoudenaarden]. However, our work differs in that we are specifically interested in a many-gene network and the effects of noise on this network. The simplest, non-trivial type of noise our system can experience is so-called uncorrelated white noise, which is defined through the following: $$\begin{aligned} \langle \mathbf{f}(t) \rangle & = & 0 \\ \langle \mathbf{f}(t)\mathbf{f}^T(t') \rangle & = & 2T\,\mathbf{I}\delta(t - t')\end{aligned}$$ The matrix $\mathbf{I}$ is the identity matrix. $T$ is some fictitious parameter that governs the global noise strength. Here the $\langle...\rangle$ average is an ensemble average over all possible noise trajectories of the system. Implicit in the noise distribution described above is the assumption that the magnitude of noise experienced by each gene is the same. More generally, it is possible that the noise correlation is fixed by some general diagonal matrix, $\langle \mathbf{f}(t)\mathbf{f}(t')^T \rangle = 2T\mathbf{\Lambda}\delta(t - t')$. Thus, when comparing the predictions of the model to experiment, it might be necessary to treat the different elements of $\mathbf{\Lambda}$ as free, unknown parameters. The differences in the magnitude of noise may have a simple biological underpinning. For example, large relative fluctuations can arise from low expression levels. For this study, however, we consider the analytically simpler case of uniform noise magnitude. The formal solution to the equation of motion is given by $$[\mathbf{x}(t)] = \int_0^t dt'\,[\exp(-\mathbf{K}t')]\mathbf{f}(t') + [\exp(-\mathbf{K}t)]\mathbf{x}_0 \label{solution}$$ The solution is a functional that depends on the noise trajectory ${\mathbf{f}(t)}$. 
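Individual realizations of Eq. \[basic\] are easy to generate numerically. The Euler-Maruyama sketch below uses an arbitrary symmetric toy $\mathbf{K}$, for which the long-time covariance has the standard Ornstein-Uhlenbeck form $T\mathbf{K}^{-1}$, and checks the sampled covariance against it; it is a consistency check, not part of the analysis above.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy symmetric interaction matrix (assumed); for symmetric K the
# long-time covariance is the Ornstein-Uhlenbeck result T * K^{-1}
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
T = 1.0

# Euler-Maruyama integration of dx/dt = -K x + f(t)
dt, n_steps, burn_in = 0.01, 400_000, 5_000
x = np.zeros(2)
samples = []
for step in range(n_steps):
    noise = np.sqrt(2.0 * T * dt) * rng.standard_normal(2)
    x = x + (-K @ x) * dt + noise
    if step >= burn_in:
        samples.append(x)

C_emp = np.cov(np.array(samples).T)   # sampled covariance
C_theory = T * np.linalg.inv(K)       # analytic long-time covariance
print(C_emp)
print(C_theory)
```

With a few hundred thousand steps the two matrices agree to within a few percent, up to statistical and discretization error.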
Since any one specific realization of the noise is uninteresting, we would like to know the statistical behavior of the system over all possible realizations. In other words, we would like to obtain the *propagator* of the system, which gives the probability that the cell population attains a state $\mathbf{x}(t)$, given that it began in some initial state $\mathbf{x}_0$ at time zero. The propagator of Eq. \[solution\] is given by averaging a $\delta$-function of $[\mathbf{x}(t)]$ over all possible time trajectories of noise ${\mathbf{f}(t)}$. $$\begin{aligned} G(\mathbf{x},\mathbf{x}_0;t) & = & \langle \delta(\mathbf{x} - [\mathbf{x}]) \rangle \label{defnprop} \\ \langle ... \rangle & = & \int \mathcal{D}\mathbf{f}(t)\, ...\, \exp\biggl[-\int_0^t dt'\, \mathbf{f}^2(t')\biggr] \label{pathavg}\end{aligned}$$ Substituting Eq. \[solution\] into Eq. \[defnprop\] and performing the Gaussian path integral in Eq. \[pathavg\] yields the propagator. $$G(\mathbf{x},\mathbf{x}_0;t) = \biggl({ \frac{\pi}{\mbox{det}\,\mathbf{U} } }\biggr) ^\frac{N}{2} \exp\bigl[-(\mathbf{x}^T - \mathbf{x}_0^T\exp(-\mathbf{K}^Tt))\mathbf{U}(t)(\mathbf{x} - \exp(-\mathbf{K}t)\mathbf{x}_0)\bigr] \label{finalg}$$ The propagator is complicated by the argument $\mathbf{U}(t)$ of the exponential. The $\mathbf{U}(t)$ matrix of the argument is only expressible in terms of an integral and is not easily simplifiable except for special cases of $\mathbf{K}$. $$\mathbf{U}(t) = \biggl[\biggl(\frac{1}{2T}\biggr)\,\int_0^t\,dt'\, \exp(-\mathbf{K}^Tt')\exp(-\mathbf{K}t')\biggr]^{-1} \label{ureln}$$ For finite time, the inverse of the $\mathbf{U}(t)$ matrix gives the equal time correlators of the system. $$\mathbf{U}^{-1}(t) = \langle \mathbf{x}(t) \mathbf{x}^T(t) \rangle - \langle \mathbf{x}(t) \rangle \langle \mathbf{x}^T(t) \rangle$$ While time-dependent phenomena are very interesting, the experimental measurement of noise-averaged, time-dependent quantities is likely to be a technically challenging task. 
We therefore focus our attention for the present on the simpler case of taking repeated, uncorrelated expression measurements from identically prepared cell samples. This is equivalent to probing the system in its *equilibrium* limit, when traces of the initial conditions are lost. This condition can be realized by taking the infinite-time limit of Eqs. \[finalg\] and \[ureln\]. $$\begin{aligned} G_{eq}(\mathbf{x}) & = & \biggl({ \frac{\pi}{\mbox{det}\,\mathbf{U}_{eq} } }\biggr) ^\frac{N}{2} \exp(-\mathbf{x}^T\,\mathbf{U}_{eq}\,\mathbf{x}) \\ \label{eqprop} \mathbf{U}_{eq} & = &\biggl[ \biggl(\frac{1}{2T}\biggr)\,\int_0^{\infty}\,dt'\,\exp(-\mathbf{K}^Tt')\exp(-\mathbf{K}t')\biggr]^{-1} \label{Ueq}\end{aligned}$$ Thus, we obtain an expression relating the covariance of fluctuations in expression measurements to the underlying matrix of interactions between the different genes in the network: $$\mathbf{C}\equiv\langle\mathbf{x}~\mathbf{x}^{T}\rangle -\langle\mathbf{x}\rangle\langle\mathbf{x}^{T}\rangle \propto \int _{0}^{\infty}dt \exp(-\mathbf{K}^Tt) \exp(-\mathbf{K}t)\label{covar}$$ Results and Discussion ====================== Eq. \[covar\] is striking because it describes an exact relationship that must exist between the rate constants of a deterministic gene network near a steady-state and the expression fluctuations that will result if such a system is driven by random noise ($\mathbf{C}$). Interestingly, the relationship implied by our model provides us with a means to understand how biological information contained in the underlying interactions between genes ($\mathbf{K}$) can be preserved to some extent in the covariance matrix ($\mathbf{C}$) that we examined in Fig. \[fig:probpair\]. 
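As a numerical sanity check of Eq. \[Ueq\], the sketch below compares a direct quadrature of the integral with the closed form available when $\mathbf{K}$ is symmetric (the two exponential factors then commute and $\mathbf{U}_{eq}^{-1}=T\mathbf{K}^{-1}$); the matrix is an arbitrary toy choice.

```python
import numpy as np

def expm_sym(A):
    # matrix exponential of a symmetric matrix via its eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

K = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # arbitrary symmetric, positive definite toy matrix
T = 0.7

# midpoint-rule quadrature of U_eq^{-1} = 2T int_0^inf e^{-K^T t} e^{-K t} dt
dt, t_max = 1e-3, 20.0
C_int = np.zeros_like(K)
for t in np.arange(0.0, t_max, dt) + dt / 2.0:
    E = expm_sym(-K * t)
    C_int += 2.0 * T * (E @ E.T) * dt

C_closed = T * np.linalg.inv(K)  # symmetric case: U_eq^{-1} = T K^{-1}
print(np.max(np.abs(C_int - C_closed)))
```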
For the sake of illustration, let us assume for the moment that $\mathbf{K}$ is a symmetric matrix that can be expressed as the difference of the identity matrix $\mathbf{I}$ and a symmetric off-diagonal matrix $\mathbf{k}$, all of whose eigenvalues are less than unity in magnitude. In this special case, Eq. \[covar\] can be simplified, yielding the simple result $$C_{ij}\propto (\mathbf{K}^{-1})_{ij}=(\mathbf{I}-\mathbf{k})^{-1}_{ij}=\delta_{ij}+k_{ij}+\sum_{l}k_{il}k_{lj}+\sum_{lm}k_{il}k_{lm}k_{mj}+\ldots \label{pwseries}$$ In other words, the covariance of fluctuations in genes $i$ and $j$ is obtained by summing over all of the ways in which the two genes are indirectly coupled to each other across the network of interactions. Eq. \[pwseries\] indicates that whatever biologically meaningful structure might originally be represented in $\mathbf{k}$, it is likely to be preserved, and possibly even reinforced, in the correlations of fluctuations described by $\mathbf{C}$, since it is likely that genes of similar function will be more highly connected to each other through indirect routes across the network. Even more significant than the qualitative result discussed above, however, is the rigorously testable quantitative prediction that is also implicit in Eq. \[covar\]. Obviously, it is possible to measure $\mathbf{C}$. If we were also able to determine all of the linear rate constants contained in the matrix $\mathbf{K}$ by some independent means, then we would be able to calculate $\mathbf{C}$ using the model and then compare the predicted covariances to the experimentally measured ones. A straightforward method of determining $\mathbf{K}$ has already been proposed and implemented on a small scale [@collins_gardner]. The matrix elements of $\mathbf{K}$ can be calculated by measuring the linear response of the system’s steady state to a small, constant, exogenous increase in the rate of production of some gene in the network. 
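In silico, this protocol amounts to reading out the columns of $\mathbf{K}^{-1}$ from steady-state shifts. The sketch below plays both roles, generating "measurements" from an assumed toy $\mathbf{K}$ and then reconstructing it; in an actual experiment, the solve step would be replaced by the measured shifts.

```python
import numpy as np

# assumed "true" (non-symmetric) rate-constant matrix for a 3-gene toy network
K = np.array([[1.5, -0.3,  0.0],
              [0.4,  2.0, -0.5],
              [0.0,  0.2,  1.0]])

# a constant source h shifts the steady state to x* = K^{-1} h, so unit
# sources along each gene read out the columns of K^{-1} one at a time
n = K.shape[0]
Kinv_measured = np.zeros((n, n))
for j in range(n):
    h = np.zeros(n)
    h[j] = 1.0
    x_star = np.linalg.solve(K, h)  # stands in for the measured SS shift
    Kinv_measured[:, j] = x_star

K_reconstructed = np.linalg.inv(Kinv_measured)
print(np.max(np.abs(K_reconstructed - K)))
```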
More formally, this amounts to introducing a “source term” into (\[basic\]): $$\frac{d \mathbf{x} }{dt} = -\mathbf{K} \mathbf{x} +\mathbf{h}+\mathbf{f}$$ which gives $$\frac{d( \mathbf{x}-\mathbf{K}^{-1}\mathbf{h}) }{dt} = -\mathbf{K} ( \mathbf{x}-\mathbf{K}^{-1}\mathbf{h})+\mathbf{f}$$ By measuring the SS shift of every gene $x_i$ in response to a source $h_j$ pointing in the “direction” of the $j^{th}$ gene, we obtain one column of the inverse of $\mathbf{K}$: $$\frac{\partial \langle x_i \rangle_{eq}}{\partial h_j} = (\mathbf{K}^{-1})_{ij}$$ It should be possible to introduce such a source experimentally in a number of ways, such as by insertion of an inducible plasmid in *S. cerevisiae*, or by effecting low, quantitative levels of RNAi in *Drosophila* cell culture. The validation of the model’s equilibrium predictions using such experimental approaches would have remarkable implications for our understanding of the dynamics of these systems over shorter time scales. Another intriguing possibility is that one might use experimental measurements of $\mathbf{C}$ in order to extract information about the linear response coefficients $\mathbf{K}$. In the special case examined above, we were able to derive a closed-form expression for $\mathbf{K}$ in terms of $\mathbf{C}$. In the more general and realistic case of a non-symmetric $\mathbf{K}$, however, the noise erases some of the information we need in order to completely reconstruct $\mathbf{K}$ from $\mathbf{C}$. To see this most clearly, it is helpful to consider the cases where either the symmetric or anti-symmetric part of $\mathbf{K}$ is small. 
In either instance, we can then Fourier transform our expression for $\mathbf{C}$ and obtain the approximate result $$\mathbf{C}^{-2}\propto \mathbf{K}\mathbf{K}^{T}\label{phase}$$ If we think of $\mathbf{K}$ as an $N$-dimensional analog of a complex number, then we can see that while it is possible to recover the “magnitude” of $\mathbf{K}$ using $\mathbf{C}$, the noise has caused us to lose information about the “phase”. Put another way, if we were able to find one matrix which satisfied (\[phase\]), we could generate another by applying any $N$-dimensional rotation to the rows of $\mathbf{K}$. It is conceivable that a well-designed set of experiments might provide us with the complementary information necessary to recover $\mathbf{K}$ from $\mathbf{C}$. This raises the exciting possibility that the dynamical coefficients of large gene networks might be obtainable without the need for conducting large numbers of painstaking linear response measurements. Alternatively, it may be fruitful to follow the example of Gardner et al. [@collins_gardner] and make simplifying assumptions about the underlying structure of $\mathbf{K}$ which would make its construction from $\mathbf{C}$ into an overdetermined problem. One point that remains to be addressed is the issue of sampling. In any statistical system with multiple variates, the covariance matrix obtained from a finite sampling of the distribution is singular, and therefore not invertible, at least until the sample size exceeds the number of variates. This immediately implies that good sampling can only be achieved with microarrays for relatively small networks because of the costliness of the technology. This stumbling block might be avoided, however, if we were to use *protein* as our proxy for gene expression instead of mRNA (this substitution is perfectly acceptable as far as the model is concerned, since it was constructed to be a rough, general model of gene expression). 
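The phase ambiguity above is easy to exhibit numerically: rotating the row vectors of $\mathbf{K}$ (right-multiplication by an orthogonal matrix) leaves $\mathbf{K}\mathbf{K}^{T}$, and hence Eq. \[phase\], unchanged. A minimal check with arbitrary toy matrices:

```python
import numpy as np

rng = np.random.default_rng(4)

K = rng.standard_normal((4, 4)) + 3.0 * np.eye(4)  # arbitrary toy matrix

# an orthogonal matrix from the QR decomposition of a random matrix
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
K_alt = K @ Q  # every row vector of K is rotated by Q

# K and K_alt are indistinguishable as far as K K^T is concerned
print(np.max(np.abs(K @ K.T - K_alt @ K_alt.T)))
```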
In this case, it would be possible to prepare cell lines with pairs of proteins labeled with different fluorescent reporters [@huh_oshea]. Through the use of FACS methods, one could therefore measure the expression covariance of genes using thousands of independent samples. Finally, it should be noted that, though we have focused on a linear network model here, the theoretical methods we have employed can easily be generalized to incorporate non-linear effects. Using a related path integral technique, it should be possible to perturbatively account for higher-order couplings between different genes in the network, as well as more sophisticated noise distributions. Such an undertaking, however, would be much more involved, and should almost certainly be postponed until the limitations of the linear model are probed in experiment. In this study, we presented evidence for the presence of ample amounts of biological information in the expression fluctuations of unperturbed genetic networks, and we introduced a dynamical model of gene expression that seeks to explain the informational content of these fluctuations. We believe our model to be the first to incorporate the intrinsic noisiness of gene expression into an exactly solvable dynamical description of the web of genetic interactions that exists in a cell. We used the model to derive a non-trivial prediction about the relationship between the dynamical linear response coefficients of the network near a steady-state and the correlations which must exist among steady-state fluctuations in the expression of different genes over time. Given enough experimental data, it is not inconceivable that at some future time, researchers will be able to model genetic networks exactly with detailed simulations on powerful computers. 
The spirit of this work lies at the other end of the spectrum; we are hopeful that the simple, coarse-grained model presented here incorporates enough of the salient features of genetic networks that it may provide some analytical insight into their fundamental nature in addition to being a useful tool for predicting their behavior. [99]{} Wen, X., Fuhrman, S., Michaels, G. S., Carr, D. B., Smith, S., Barker, J. L. & Somogyi, R., (1998) [*Proc. Natl. Acad. Sci.*]{} **95**, 334-339. Schena, M., Shalon, D., Davis, R. & Brown, P. O., (1995) [*Science*]{} **270**, 467-470. van’t Veer, L. J., Dai, H., van de Vijver, M. J., He, Y. D., Hart, A. A. M., Mao, M., Peterse, H. L., van der Kooy, K., Marton, M. J., Witteveen, A. T., Schreiber, G. J., Kerkhoven, R. M., Roberts, C., Linsley, P. S., Bernards, R. & Friend, S. H., (2002) [*Nature*]{} **415**, 530-536. Alizadeh, A. A., Eisen, M. B., Davis, R. E., Ma, C., Lossos, I. S., Rosenwald, A., Boldrick, J. C., Sabet, H, Tran, T., Yu, X., Powell, J. I., Yang, L., Marti, G. E., Moore, T., Hudson Jr, J., Lu, L., Lewis, D. B., Tibshirani, R., Sherlock, G., Chan, W. C., Greiner, T. C., Weisenburger, D. D., Armitage, J. O., Warnke, R., Levy, R., Wilson, W., Grever, M. R., Byrd, J. C., Botstein, D., Brown, P. O. & Staudt, L. M., (2000) [*Nature*]{} **403**, 503-511. Arava, Y., Wang, Y., Storey, J. D., Liu, C. L., Brown, P. O. & Herschlag, D., (2003) [*Proc. Natl. Acad. Sci.*]{} **100**, 3889-3894. Laub, M. T., McAdams, H. H., Feldblyum, T., Fraser, C. M. & Shapiro, L., (2000) [*Science*]{} **290**, 2144-2148. Spellman, P. T., Sherlock, G., Zhang, M. Q., Iyer, V. R., Anders, K., Eisen, M. B., Brown, P. O., Botstein, D. & Futcher, B., (1998) [*Mol. Biol. Cell.*]{} **9**, 3273-3297. Hughes, T. R., Marton, M. J., Jones, A. R., Roberts, C. J., Stoughton, R., Armour, C. D., Bennet, H. A., Coffey, E., Dai, H., He, Y. D., Kidd, M. J., King, A. M., Meyer, M. R., Slade, D., Lum, P. Y., Stepaniants, S. B., Shoemaker, D. 
D., Gachotte, D., Chakraburtty, K., Simon, J., Bard, M. & Friend, S. H., (2000) [*Cell*]{} **102**, 109-126. Thattai, M. & van Oudenaarden, A., (2001) [*Proc. Natl. Acad. Sci.*]{} **98**, 8614-8619. Hasty, J., Pradines, J., Dolnik, M. & Collins, J. J., (2000) [*Proc. Natl. Acad. Sci.*]{} **97**, 2075-2080. Elowitz, M. B., Levine, A. J., Siggia, E. D. & Swain, P. S., (2002) [*Science*]{} **297**, 1183-1186. Gardner, T. S., di Bernardo, D., Lorenz, D. & Collins, J. J., (2003) [*Science*]{} **301**, 102-105. Kauffman, S., Peterson, C., Samuelsson, B. & Troein, C., (2003) [*Proc. Natl. Acad. Sci.*]{} **100**, 14796-14799. http://mips.gsf.de/genre/proj/yeast/ Albert, R. & Barabasi, A-L., (2002) [*Rev. Mod. Phys.*]{} **74**, 47-97. Bhan, A., Galas, D. J. & Dewey, T. G., (2002) [*Bioinformatics*]{} **18**, 1486-1493. Bergmann, S., Ihmels, J. & Barkai, N., (2004) [*PLoS Biology*]{} **2**, 0085-0093. Ghaemmaghami, S., Huh, W., Bower, K., Howson, R. W., Belle, A., Dephoure, N., O’Shea, E. K. & Weissman, J. S., (2003) [*Nature*]{} **425**, 737-741. Huh, W., Falvo, J. V., Gerke, L. C., Carroll, A. S., Howson, R. W., Weissman, J. S. & O’Shea, E. K., (2003) [*Nature*]{} **425**, 686-691. Vilar, J. M. G., Guet, C. C. & Leibler, S., (2003) [*J. Cell Biol.*]{} **161**, 471-476. Risken, H., (1989) [*The Fokker-Planck Equation: Methods of Solution and Applications*]{}, Springer Verlag, Berlin. Ozbudak, E. M., Thattai, M., Kurtser, I. & van Oudenaarden, A., (2002) [*Nature Genetics*]{} **31**, 69-73. [^1]: These authors contributed equally to this work. [^2]: These authors contributed equally to this work.
--- abstract: 'The two-scale convergence of the solution to a Robin-type problem for a stationary diffusion equation in a periodically perforated domain is investigated. It is shown that the Robin problem converges to a problem associated with a new operator which is the sum of a standard homogenized operator plus an extra first order “strange” term; its appearance is due to the non-symmetry of the diffusion matrix and to the non-rescaled resistivity.' author: - | Abdelhamid Ainouz\ Department of Mathematics, University of Sciences\ and Technology Houari Boumediene,\ Po Box 32 El Alia Bab Ezzouar 16111 Algiers, Algeria.\ aainouz@usthb.dz title: 'Two-Scale limit of the solution to a Robin Problem in Perforated Media [^1]' --- Introduction ============ Periodic homogenization in perforated media with Robin boundary conditions prescribed on the boundary of the holes has been extensively studied by many authors; we refer for instance to [@bpc], [@ciodon1], [@ciodon2], [@pas1], .... In this paper we study the stationary diffusion equation in a periodic perforated body where the heat flow is proportional to the temperature field on the boundary of the holes with a resistivity having zero average value on the boundary of the reference hole. In [@bpc], the authors studied a model problem for a second-order symmetric elliptic operator in a periodically perforated domain with the Robin boundary condition prescribed on the boundary of the holes. They use the asymptotic expansion technique [@blp], [@san], [@tar] to obtain the homogenized problem and they construct correctors to justify the expansion and then estimate the error. In this paper, we consider a similar problem but with another configuration, namely the holes may or may not be connected and the boundary of the holes may intersect the exterior boundary of the body. Moreover we assume that the diffusion matrix of the second-order operator may or may not be symmetric. 
We use the two-scale convergence technique [@all], [@lnw], [@ngue], [@ngue1] to obtain the two-scale limit system. After the decoupling procedure, we show that the homogenized problem contains a convective term. Its appearance is due essentially to the general character of the diffusion matrix and to the fact that the resistivity function is not rescaled as usually assumed when dealing with two-scale convergence on periodic surfaces, see for instance [@ain], [@ain1], [@adh], [@nr],... The paper is organized as follows: in section \[1\], we define the geometry of the perforated body and we give the Robin boundary-value problem setting. Section \[2\] is aimed at showing the existence and uniqueness of the solution of the Robin problem and obtaining a priori estimates of the solution. The asymptotic limit via the two-scale convergence procedure is analyzed in section \[3\]. We obtain the homogenized boundary-value problem which is a second order elliptic operator containing first and zero order terms. The latter term is classical; see, e.g., [@adh]. The first order term is a convection term and it vanishes when the diffusion matrix is symmetric or has constant coefficients. Setting of the Problem\[1\] =========================== Let $\Omega $ be a bounded domain in $\mathbb{R}^{n}$ of variable $x=\left( x_{i}\right) _{1\leq i\leq n}\ $($n\geq 2$) with a smooth boundary $\Gamma $ and $\varepsilon $ a real parameter taking values in a sequence of positive numbers tending to zero. As usual in periodic homogenization let $Y=\left[ 0,1\right] ^{n}$ be the generic unit cell of periodicity in the auxiliary space $\mathbb{R}^{n}$ of variable $y=\left( y_{i}\right) _{1\leq i\leq n}$. The cell $Y$ is identified with the unit torus $\mathbb{R}^{n}/\mathbb{Z}^{n}$. A function defined on $\mathbb{R}^{n}$ is said to be $Y$-periodic if it is periodic of period $1$ in each $y_{i}$ variable with $1\leq i\leq n$. 
In the sequel we will suppose that any function defined on $Y$ is extended periodically to the whole space $\mathbb{R}^{n}$. If $E\left( Z\right) $ is a function space (where $Z$ is a subset of $Y$) we denote $E_{\#}\left( Z\right) :=\left\{ w\in E\left( Z\right) \text{; }w\text{ is extended periodically to }\mathbb{R}^{n}\right\} $. Let $H$, the reference hole, be an open subset of $Y$ with a smooth boundary $\Sigma $ and set $Y_{s}=Y\backslash cl(H)$ where $cl\left( \cdot \right) $ denotes the closure. Thus $Y$ is partitioned as $Y=Y_{s}\cup \Sigma \cup H$. Note that we do not require that $H$ is strictly included in $Y$. As a consequence the periodic extension of $H$ may or may not be connected. Let us denote $\chi \left( y\right) $ the characteristic function of $Y_{s}$ in $Y$. We define the perforated material $$\Omega _{\varepsilon }=\left\{ x\in \Omega ;\chi \left( \frac{x}{\varepsilon }\right) =1\right\}$$and the one codimensional periodic surface $$\Sigma _{\varepsilon }=\left\{ x\in \Omega ;\frac{x}{\varepsilon }\in \Sigma \right\} .$$ Here $\Omega _{\varepsilon }$ represents the matrix or the solid part of $\Omega ,$ by opposition to the holes or the void part that is represented by the open subset $H_{\varepsilon }:=\Omega \backslash cl\left( \Omega _{\varepsilon }\right) $. By construction, all of these holes are identical and they are periodically distributed in $\Omega $ with period $\varepsilon $ in each $x_{i}$-direction. Since we use the two-scale convergence method we do not require that the boundary $\Sigma _{\varepsilon }=\partial H_{\varepsilon }$ does not intersect $\Gamma $. As in [@all], we shall use the natural extension by zero of any function defined on $\Omega _{\varepsilon }$. 
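Recall the standard behavior of rescaled surface integrals over $\Sigma _{\varepsilon }$, which underlies two-scale convergence on periodic surfaces (see, e.g., [@adh], [@nr]): for every $\varphi \in C\left( cl\left( \Omega \right) ;C_{\#}\left( Y\right) \right) $ one has $$\varepsilon \int_{\Sigma _{\varepsilon }}\varphi \left( x,\frac{x}{\varepsilon }\right) d\sigma \left( x\right) \rightarrow \int_{\Omega }\int_{\Sigma }\varphi \left( x,y\right) d\sigma \left( y\right) dx\text{ as }\varepsilon \rightarrow 0.$$This explains why surface quantities on $\Sigma _{\varepsilon }$ are naturally weighted by powers of $\varepsilon $ in the estimates and boundary conditions below. 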
Let $f_{\varepsilon }$ be a given function in $L^{2}\left( \Omega _{\varepsilon }\right) $ and let $g_{\varepsilon }$ be given in $L^{2}\left( \Sigma _{\varepsilon }\right) $ such that $$\Vert f_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }+\sqrt{\varepsilon }\Vert g_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }\leq C\text{.} \label{dc}$$Here and throughout this paper $C$ denotes a positive constant independent of $\varepsilon $. Let $A\left( x,y\right) =\left( a_{ij}\right) _{1\leq i,j\leq n}$ be a real-valued matrix function defined on $\Omega \times Y$, $Y$-periodic in the second variable $y$, such that there exist two positive constants $m$, $M$ independent of $\varepsilon $ satisfying the following inequality:$$m\mid \zeta \mid ^{2}\leq (A\zeta ,\zeta )\leq M\mid \zeta \mid ^{2} \label{ac}$$for all $\zeta \in \mathbb{R}^{n}$. We suppose that the matrix $A$ lies in $C\left( \Omega ;L_{\#}^{\infty }\left( Y\right) \right) ^{n^{2}}$. We note that no symmetry condition on $A$ is assumed. Let $\mu \left( y\right) \in L_{\#}^{\infty }\left( Y_{s}\right) $ be such that $\int_{Y_{s}}\mu \left( y\right) dy\geq \mu _{0}>0$ where $\mu _{0}$ is independent of $\varepsilon $. Let $\alpha $ be a $Y$-periodic measurable bounded function defined on $\Sigma $ such that $$\int_{\Sigma }\alpha \left( y\right) d\sigma \left( y\right) =0\text{.} \label{as1}$$Let us decompose the function $\alpha $ into its positive and negative parts as follows:$$\alpha =\alpha ^{+}-\alpha ^{-},\ \alpha ^{+}=\max \left( \alpha ,0\right) \text{, }\alpha ^{-}=\max \left( -\alpha ,0\right) \text{.}$$Assume that the positive part of $\alpha $ satisfies the condition: $$\alpha ^{+}\left( y\right) \geq \alpha _{0}>0\text{ a.e.
in }\Sigma \text{.} \label{ca}$$Let us consider the following Robin boundary value problem: $$\begin{aligned} -div\left( A_{\varepsilon }\nabla u_{\varepsilon }\right) +\mu _{\varepsilon }u_{\varepsilon } &=&f_{\varepsilon }\text{ in }\Omega _{\varepsilon }, \label{eq1} \\ \left( A_{\varepsilon }\nabla u_{\varepsilon }\right) \cdot \nu _{\varepsilon }+\alpha _{\varepsilon }u_{\varepsilon } &=&\varepsilon g_{\varepsilon }\text{ on }\Sigma _{\varepsilon }, \label{eq2} \\ u_{\varepsilon } &=&0\text{ on }\Gamma \label{eq3}\end{aligned}$$ where $$A_{\varepsilon }\left( x\right) =A\left( x,\frac{x}{\varepsilon }\right) ,\ \mu _{\varepsilon }\left( x\right) =\mu \left( \frac{x}{\varepsilon }\right) ,\ \alpha _{\varepsilon }\left( x\right) =\alpha \left( \frac{x}{\varepsilon }\right)$$and $\nu _{\varepsilon }$ is the unit outward normal to $\Omega _{\varepsilon }$. This problem can be regarded as a simplified model of the condensation of steam in a periodic cooling structure (see [@adh]). It can also be considered as a model for hyperthermia treatment planning in microvascular tissue; see, e.g., [@dh]. The boundary condition (\[eq2\]) means that the heat flow $\left( A_{\varepsilon }\nabla u_{\varepsilon }\right) \cdot \nu _{\varepsilon }$ is proportional to the temperature $u_{\varepsilon }$ with a periodic resistivity given by the function $\alpha _{\varepsilon }$. In many situations, the resistivity function is taken to be $\varepsilon ^{m}\alpha _{\varepsilon }$. Since the operator is of order $2$, the interesting cases are then $m=-2,-1,0,1$ and $2$. The case $m=2$ is trivial since we obtain the classical homogenized equation. This can be seen easily by using the asymptotic expansion method. The case $m=1$ with $\alpha _{\varepsilon }\geq 0$ has been studied in [@adh], [@nr] using the two-scale convergence technique. In this situation $\alpha _{\varepsilon }$ is rescaled since the surface $\Sigma _{\varepsilon }$ is of codimension $1$. Here we study the case $m=0$, i.e.
a non-rescaled resistivity. We use the same technique but with $\alpha _{\varepsilon }$ changing sign. We show that the assumptions (\[as1\]), (\[ca\]) and a non-symmetric matrix $A_{\varepsilon }\left( x\right) $ contribute to the description of the effective thermal conductivity with convection. The case $m=-1$ will be studied in a forthcoming paper. Note that the case $m=-2$ is also trivial since it yields that the effective thermal conductivity is $0$. Study of the Problem and A priori Estimates\[2\] ================================================ Let $$V_{\varepsilon }=\left\{ v\in H^{1}\left( \Omega _{\varepsilon }\right) \text{; }v=0\text{ on }\Gamma \right\}$$equipped with the scalar product $$(u,v)_{V_{\varepsilon }}=\int_{\Omega _{\varepsilon }}\nabla u\left( x\right) \nabla v\left( x\right) dx$$and the associated norm $\Vert u\Vert _{V_{\varepsilon }}=(u,u)_{V_{\varepsilon }}^{1/2}$, which is equivalent to the $H^{1}$-norm thanks to the Poincaré inequality. The variational formulation of the boundary-value problem (\[eq1\])-(\[eq3\]) reads as follows:$$\left\{ \begin{array}{c} \text{For each }\varepsilon >0\text{, find }u_{\varepsilon }\in V_{\varepsilon }\text{ such that } \\ a_{\varepsilon }\left( u_{\varepsilon },v\right) =L_{\varepsilon }\left( v\right) \text{ for any }v\in V_{\varepsilon },\end{array}\right.
\label{wf}$$where $a_{\varepsilon }\left( \cdot ,\cdot \right) $ is the bilinear form defined on $V_{\varepsilon }\times V_{\varepsilon }$ by: $$\begin{aligned} a_{\varepsilon }\left( u,v\right) &=&\int_{\Omega _{\varepsilon }}A_{\varepsilon }(x)\nabla u\left( x\right) \nabla v\left( x\right) dx+\int_{\Omega _{\varepsilon }}\mu _{\varepsilon }\left( x\right) u\left( x\right) v\left( x\right) dx \\ &&+\int_{\Sigma _{\varepsilon }}\alpha _{\varepsilon }\left( x\right) u\left( x\right) v\left( x\right) d\sigma _{\varepsilon }\left( x\right)\end{aligned}$$and $L_{\varepsilon }\left( \cdot \right) $ is the linear form defined on $V_{\varepsilon }$ by: $$L_{\varepsilon }\left( v\right) =\int_{\Omega _{\varepsilon }}f_{\varepsilon }(x)v\left( x\right) dx+\varepsilon \int_{\Sigma _{\varepsilon }}g_{\varepsilon }\left( x\right) v\left( x\right) d\sigma _{\varepsilon }\left( x\right) .$$ \[lcs\]There exists a positive constant $C_{s}$ independent of $\varepsilon $ such that for every $v\in V_{\varepsilon }$ and for every $\delta >0$ we have $$\Vert v\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}\leq C_{s} \left[ \left( \delta \varepsilon \right) ^{-1}\Vert v\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}+\left( \delta \varepsilon \right) \Vert \nabla v\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}\right] . \label{cs}$$ Let us introduce the notation $$v_{\varepsilon }^{k}\left( y\right) =v\left( \varepsilon \left( k+y\right) \right)$$where $k\in K_{\varepsilon }=\left\{ k\in \mathbb{Z}^{n};\varepsilon \left( Y+k\right) \cap \Omega \neq \emptyset \right\} $.
By the change of variable $x=\varepsilon \left( k+y\right) $ we have $$\int_{\Sigma _{\varepsilon }}v^{2}\left( x\right) d\sigma _{\varepsilon }\left( x\right) =\underset{k\in K_{\varepsilon }}{\sum }\int_{\varepsilon \left( \Sigma +k\right) }v^{2}\left( x\right) d\sigma _{\varepsilon }\left( x\right) =\varepsilon ^{n-1}\underset{k\in K_{\varepsilon }}{\sum }\int_{\Sigma }\left[ v_{\varepsilon }^{k}\left( y\right) \right] ^{2}d\sigma \left( y\right) .$$From the trace theorem we see that for every $\delta >0$ $$\begin{aligned} \int_{\Sigma }\left[ v_{\varepsilon }^{k}\left( y\right) \right] ^{2}d\sigma \left( y\right) &\leq &C_{s}\left[ \delta ^{-1}\int_{Y_{s}}\left[ v_{\varepsilon }^{k}\left( y\right) \right] ^{2}dy+\delta \int_{Y_{s}}|\nabla _{y}v_{\varepsilon }^{k}\left( y\right) |^{2}dy\right] \\ &\leq &\frac{C_{s}}{\varepsilon ^{n}}\left[ \delta ^{-1}\int_{\varepsilon \left( Y_{s}+k\right) }v\left( x\right) ^{2}dx+\delta \varepsilon ^{2}\int_{\varepsilon \left( Y_{s}+k\right) }|\nabla _{x}v\left( x\right) |^{2}dx\right] .\end{aligned}$$Hence $$\begin{aligned} \int_{\Sigma _{\varepsilon }}v^{2}\left( x\right) d\sigma _{\varepsilon }\left( x\right) &\leq &\varepsilon ^{n-1}\frac{C_{s}}{\varepsilon ^{n}}[\delta ^{-1}\underset{k\in K_{\varepsilon }}{\sum }\int_{\varepsilon \left( Y_{s}+k\right) }v\left( x\right) ^{2}dx+ \\ &&\delta \varepsilon ^{2}\underset{k\in K_{\varepsilon }}{\sum }\int_{\varepsilon \left( Y_{s}+k\right) }|\nabla _{x}v\left( x\right) |^{2}dx] \\ &\leq &C_{s}\left[ \left( \delta \varepsilon \right) ^{-1}\int_{\Omega _{\varepsilon }}v^{2}\left( x\right) dx+\left( \delta \varepsilon \right) \int_{\Omega _{\varepsilon }}|\nabla v\left( x\right) |^{2}dx\right] \text{.}\end{aligned}$$ The Lemma is proved. Let $\sqrt{\mu _{0}m}>C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }$ where $C_{s}$ is the constant given in Lemma \[lcs\]. Then $a_{\varepsilon }\left( \cdot ,\cdot \right) $ is coercive on $V_{\varepsilon }$.
Let $v\in V_{\varepsilon }$. Then using (\[ac\]), we have $$a_{\varepsilon }\left( v,v\right) \geq m\int_{\Omega _{\varepsilon }}|\nabla v\left( x\right) |^{2}dx+\mu _{0}\int_{\Omega _{\varepsilon }}v\left( x\right) ^{2}dx-\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\int_{\Sigma _{\varepsilon }}v\left( x\right) ^{2}d\sigma _{\varepsilon }\left( x\right) .$$By (\[cs\]) we see that for every $\delta >0$ $$\begin{aligned} a_{\varepsilon }\left( v,v\right) &\geq &\left( m-\delta \varepsilon C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\right) \int_{\Omega _{\varepsilon }}|\nabla v\left( x\right) |^{2}dx+ \\ &&\left( \mu _{0}-\left( \delta \varepsilon \right) ^{-1}C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\right) \int_{\Omega _{\varepsilon }}\left[ v\left( x\right) \right] ^{2}dx.\end{aligned}$$ Choosing $\delta =\dfrac{1}{\varepsilon }\sqrt{\dfrac{m}{\mu _{0}}}$, we obtain $$a_{\varepsilon }\left( v,v\right) \geq c_{0}\left[ \Vert \nabla v\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}+\Vert v\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}\right]$$where $c_{0}$ is the positive constant given by $$c_{0}=\left( 1-\frac{C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }}{\sqrt{m\mu _{0}}}\right) \min \left( m,\mu _{0}\right) >0.$$This completes the proof. In the sequel we shall assume that the condition $\sqrt{\mu _{0}m}>C_{s}\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }$ is fulfilled. The variational formulation (\[wf\]) admits a unique solution $u_{\varepsilon }\in V_{\varepsilon }$. Moreover we have the a priori estimate$$\Vert \nabla u_{\varepsilon }\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }\leq C.
\label{ape}$$ The existence and uniqueness follow from a straightforward application of Lemma \[lcs\] and the Lax-Milgram Lemma. It remains to prove the a priori estimate (\[ape\]). Take $v=u_{\varepsilon }$ in (\[wf\]). We have$$\begin{aligned} &&\int_{\Omega _{\varepsilon }}\left( A_{\varepsilon }\nabla u_{\varepsilon }\nabla u_{\varepsilon }+\mu _{\varepsilon }u_{\varepsilon }^{2}\right) dx+\int_{\Sigma _{\varepsilon }}\alpha _{\varepsilon }^{+}u_{\varepsilon }^{2}d\sigma _{\varepsilon }\left( x\right) \\ &=&\int_{\Omega _{\varepsilon }}f_{\varepsilon }u_{\varepsilon }dx+\int_{\Sigma _{\varepsilon }}\left( \varepsilon g_{\varepsilon }+\alpha _{\varepsilon }^{-}u_{\varepsilon }\right) u_{\varepsilon }d\sigma _{\varepsilon }\left( x\right) .\end{aligned}$$Let us denote $$A_{\varepsilon }\left( u_{\varepsilon }\right) :=\Vert \nabla u_{\varepsilon }\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}.$$Then using (\[ac\]) and (\[ca\]) we obtain $$A_{\varepsilon }\left( u_{\varepsilon }\right) \leq \frac{1}{c_{1}}(\int_{\Omega _{\varepsilon }}f_{\varepsilon }u_{\varepsilon }dx+\int_{\Sigma _{\varepsilon }}\left( \varepsilon g_{\varepsilon }+\alpha _{\varepsilon }^{-}u_{\varepsilon }\right) u_{\varepsilon }d\sigma _{\varepsilon }\left( x\right) ) \label{ione}$$where $c_{1}=\min \left( m,\mu _{0},\alpha _{0}\right) >0$.
Applying Young’s inequality on the right hand side of (\[ione\]), we get $$\begin{aligned} A_{\varepsilon }\left( u_{\varepsilon }\right) &\leq &\frac{1}{c_{1}}[\frac{\beta ^{2}}{2}\Vert f_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}+\frac{1}{2\beta ^{2}}\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2} \notag \\ &&+\frac{\gamma ^{2}\varepsilon ^{2}}{2}\Vert g_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}+\left( \frac{1}{2\gamma ^{2}}+\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\right) \Vert u_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}]. \label{itwo}\end{aligned}$$But in view of (\[cs\]) inequality (\[itwo\]) becomes$$\begin{aligned} A_{\varepsilon }\left( u_{\varepsilon }\right) &\leq &\frac{1}{c_{1}}[\left( \frac{1}{2\beta ^{2}}+\left( \frac{1}{2\gamma ^{2}}+\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\right) \frac{C_{s}}{\varepsilon \delta }\right) \Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2} \\ &&+\left( \frac{1}{2\gamma ^{2}}+\Vert \alpha \Vert _{L^{\infty }\left( \Sigma \right) }\right) C_{s}\varepsilon \delta \Vert \nabla u_{\varepsilon }\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}]+C.\end{aligned}$$Now, an appropriate choice of $\beta ,\gamma ,\delta $ yields $$A_{\varepsilon }\left( u_{\varepsilon }\right) =\Vert \nabla u_{\varepsilon }\Vert _{\left( L^{2}\left( \Omega _{\varepsilon }\right) \right) ^{n}}^{2}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Omega _{\varepsilon }\right) }^{2}+\Vert u_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}\leq C.$$The Proposition is now proved. One is led to determine the homogenized problem of (\[eq1\])-(\[eq3\]). Namely, we study the limiting behavior of the solutions $u_{\varepsilon }$ as $\varepsilon $ tends to zero. This is the subject of the next section.
Homogenization Procedure\[3\] ============================== We shall use the well-known two-scale convergence method, whose definition and main results we briefly recall here. Two-scale Convergence --------------------- 1. A sequence $v_{\varepsilon }$ in $L^{2}(\Omega )$ *two-scale* converges to $v_{0}(x,y)\in L^{2}(\Omega \times Y)$ and we denote this $v_{\varepsilon }\rightrightarrows v_{0}$ if for any $\varphi (x,y)\in L^{2}\left( \Omega ;C_{\#}\left( Y\right) \right) $, $$\lim_{\varepsilon \rightarrow 0}\int_{\Omega }v_{\varepsilon }(x)\varphi \left( x,\frac{x}{\varepsilon }\right) dx=\int_{\Omega }\int_{Y}v_{0}(x,y)\varphi (x,y)dydx.$$ 2. A sequence $v_{\varepsilon }$ in $L^{2}(\Sigma _{\varepsilon })$ *two-scale* converges to $v_{0}(x,y)\in L^{2}(\Omega \times \Sigma )$ and we denote this $v_{\varepsilon }\overset{S}{\rightrightarrows }v_{0}$ if for any $\varphi (x,y)\in C\left( \overline{\Omega };C_{\#}\left( Y\right) \right) $, $$\lim_{\varepsilon \rightarrow 0}\int_{\Sigma _{\varepsilon }}\varepsilon v_{\varepsilon }(x)\varphi \left( x,\frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) =\int_{\Omega }\int_{\Sigma }v_{0}(x,y)\varphi (x,y)d\sigma \left( y\right) dx.$$ \[p1\] 1. For any uniformly bounded sequence $v_{\varepsilon }$ in $L^{2}\left( \Omega \right) $ one can extract a subsequence still denoted by $\varepsilon $ and a two-scale limit $v_{0}\in L^{2}(\Omega \times Y)$ such that $v_{\varepsilon }\rightrightarrows v_{0}$. 2. If $v_{\varepsilon }$ is in $L^{2}\left( \Sigma _{\varepsilon }\right) $ such that $$\varepsilon \Vert v_{\varepsilon }\Vert _{L^{2}\left( \Sigma _{\varepsilon }\right) }^{2}\leq C\text{,}$$then one can extract a subsequence still denoted by $\varepsilon $ and a two-scale limit $v_{0}\in L^{2}(\Omega \times \Sigma )$ such that $v_{\varepsilon }\overset{S}{\rightrightarrows }v_{0}$.
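To fix ideas, let us recall a standard example (it is classical and not specific to the present problem): if $v\left( x,y\right) \in L^{2}\left( \Omega ;C_{\#}\left( Y\right) \right) $ then $$v_{\varepsilon }\left( x\right) :=v\left( x,\frac{x}{\varepsilon }\right) \rightrightarrows v\left( x,y\right) ,$$whereas the weak limit of $v_{\varepsilon }$ in $L^{2}\left( \Omega \right) $ is only the cell average $\int_{Y}v\left( x,y\right) dy$. The two-scale limit thus keeps track of the oscillations at scale $\varepsilon $ which the weak limit averages out.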
Two-scale limit system ---------------------- By virtue of the estimate (\[dc\]) and Proposition \[p1\], there exist $f\in L^{2}\left( \Omega \times Y\right) $ and $g\in L^{2}\left( \Omega \times \Sigma \right) $ such that, up to a subsequence, one has $$\chi \left( \frac{x}{\varepsilon }\right) f_{\varepsilon }\left( x\right) \rightrightarrows \chi \left( y\right) f\left( x,y\right) ,\ g_{\varepsilon }\left( x\right) \overset{S}{\rightrightarrows }g\left( x,y\right) . \label{fg}$$ Furthermore we have the following lemma. \[l2\] ([@all], [@ngue1], [@adh]) Let $u_{\varepsilon }$ be the solution of (\[wf\]). Then there exist a subsequence still denoted by $\varepsilon $ and two functions $u\left( x\right) \in H_{0}^{1}\left( \Omega \right) $, $u_{1}\left( x,y\right) \in L^{2}(\Omega ;H_{\#}^{1}(Y_{s})/\mathbb{R})$ such that $$\chi \left( \frac{x}{\varepsilon }\right) u_{\varepsilon }\left( x\right) \rightrightarrows \chi \left( y\right) u\left( x\right) \text{,} \label{l2_1}$$$$\chi \left( \frac{x}{\varepsilon }\right) \nabla u_{\varepsilon }\left( x\right) \rightrightarrows \chi \left( y\right) \left( \nabla u\left( x\right) +\nabla _{y}u_{1}\left( x,y\right) \right) \text{.} \label{l2_2}$$Moreover we have $$\lim_{\varepsilon \rightarrow 0}\int_{\Sigma _{\varepsilon }}\varepsilon u_{\varepsilon }(x)\varphi \left( x,\frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) =\int_{\Omega }\int_{\Sigma }u(x)\varphi (x,y)d\sigma \left( y\right) dx \label{l2_3}$$for every $\varphi \in C(\overline{\Omega };C_{\#}(Y_{s}))$. \[l3\]We have $$\lim_{\varepsilon \rightarrow 0}\int_{\Sigma _{\varepsilon }}u_{\varepsilon }(x)\varphi \left( x\right) \alpha \left( \frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) =\int_{\Omega }\int_{\Sigma }u_{1}(x,y)\varphi (x)\alpha \left( y\right) d\sigma \left( y\right) dx$$for all $\varphi \left( x\right) \in C\left( \overline{\Omega }\right) $.
Let $\theta \left( y\right) \in H_{\#}^{1}(Y_{s})/\mathbb{R}$ be a solution of the problem $$\left\{ \begin{array}{c} -\Delta \theta \left( y\right) =0\text{ in }Y_{s}\text{,} \\ \left( \nabla \theta \left( y\right) \right) \cdot \nu \left( y\right) =\alpha \left( y\right) \text{ on }\Sigma \text{,} \\ y\longmapsto \theta \left( y\right) \text{ Y-periodic.}\end{array}\right. \label{pr1}$$Such a function exists since $\alpha (y)$ satisfies (\[as1\]), which is the compatibility condition for the solvability of the problem (\[pr1\]). Set $\psi \left( y\right) =\nabla \theta \left( y\right) $ and consider the function $\psi _{\varepsilon }\left( x\right) =\psi \left( \frac{x}{\varepsilon }\right) $. Then we have $$\begin{aligned} \int_{\Omega _{\varepsilon }}\nabla u_{\varepsilon }(x)\psi _{\varepsilon }\left( x\right) dx &=&\int_{\Sigma _{\varepsilon }}u_{\varepsilon }(x)\psi _{\varepsilon }\left( x\right) \cdot \nu \left( \frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) \label{le12} \\ &=&\int_{\Sigma _{\varepsilon }}u_{\varepsilon }(x)\alpha \left( \frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) . \notag\end{aligned}$$Passing to the limit in the left hand side of (\[le12\]) and taking into account (\[l2\_2\]) we find $$\underset{\varepsilon \rightarrow 0}{\lim }\int_{\Sigma _{\varepsilon }}u_{\varepsilon }(x)\alpha \left( \frac{x}{\varepsilon }\right) d\sigma _{\varepsilon }\left( x\right) =\int_{\Omega }\int_{Y}\chi \left( y\right) \left( \nabla u\left( x\right) +\nabla _{y}u_{1}\left( x,y\right) \right) \psi \left( y\right) dydx.
\label{le13}$$Since $u\in H_{0}^{1}\left( \Omega \right) $ we have $$\int_{\Omega }\int_{Y}\chi \left( y\right) \nabla u\left( x\right) \psi \left( y\right) dydx=\left( \int_{\Omega }\nabla u\left( x\right) dx\right) \int_{Y}\chi \left( y\right) \psi \left( y\right) dy=0.$$Hence the right hand side of (\[le13\]) becomes $$\int_{\Omega }\int_{Y}\chi \left( y\right) \nabla _{y}u_{1}\left( x,y\right) \psi \left( y\right) dydx.$$On the other hand, we have $$\begin{aligned} \int_{\Omega }\int_{Y}\chi \left( y\right) \nabla _{y}u_{1}\left( x,y\right) \psi \left( y\right) dydx &=&-\int_{\Omega }\int_{Y_{s}}u_{1}\left( x,y\right) div_{y}\psi \left( y\right) dydx \\ &&+\int_{\Omega }\int_{\Sigma }u_{1}\left( x,y\right) \psi \left( y\right) \cdot \nu \left( y\right) d\sigma \left( y\right) dx \\ &=&\int_{\Omega }\int_{\Sigma }u_{1}\left( x,y\right) \alpha \left( y\right) d\sigma \left( y\right) dx\end{aligned}$$which proves the Lemma. Now we are able to give the two-scale limit system. The couple $\left( u,u_{1}\right) \in H_{0}^{1}\left( \Omega \right) \times L^{2}\left( \Omega ;H_{\#}^{1}\left( Y_{s}\right) /\mathbb{R}\right) $ is the solution of the following two-scale homogenized system:$$\begin{gathered} -div_{y}\left( A\left( \nabla u+\nabla _{y}u_{1}\right) \right) =0\text{\ \ \ in }\Omega \times Y_{s}, \label{tsl1} \\ \left( A\left( \nabla u+\nabla _{y}u_{1}\right) \cdot \nu \right) +\alpha u=0\ \ \text{ on }\Omega \times \Sigma \text{,} \label{tsl2} \\ y\longmapsto u_{1}\text{\ \ \ }Y-\text{periodic,} \label{tsl3} \\ -div_{x}\left( \int_{Y_{s}}A\left( \nabla u+\nabla _{y}u_{1}\right) dy\right) +\tilde{\mu}u+\int_{\Sigma }\alpha u_{1}d\sigma \left( y\right) =F\ \ \ \text{in}\ \Omega \text{,} \label{tsl4} \\ u=0\text{\ \ \ on }\Gamma \label{tsl5}\end{gathered}$$where $\tilde{\mu}=\int_{Y_{s}}\mu \left( y\right) dy$ and $F(x)=\int_{Y}\chi \left( y\right) f(x,y)dy+\int_{\Sigma }g\left( x,y\right) d\sigma \left( y\right) .$ Let $\varphi \left( x\right) \in
\mathcal{D}\left( \Omega \right) $ and $\varphi _{1}\left( x,y\right) \in \mathcal{D}\left( \Omega ;C_{\#}^{\infty }\left( Y_{s}\right) \right) $. Choosing $v\left( x\right) =\varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) $ as a test function in problem (\[wf\]), we have$$\begin{gathered} \int_{\Omega }\nabla u_{\varepsilon }\left( x\right) \chi \left( \frac{x}{\varepsilon }\right) ^{t}A(\frac{x}{\varepsilon })\left( \nabla \varphi \left( x\right) +\varepsilon \nabla _{x}\varphi _{1}\left( x,\frac{x}{\varepsilon }\right) +\nabla _{y}\varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right) dx \notag \\ +\int_{\Omega _{\varepsilon }}\mu _{\varepsilon }\left( x\right) u_{\varepsilon }\left( x\right) \left[ \varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right] dx \notag \\ +\int_{\Sigma _{\varepsilon }}\alpha _{\varepsilon }\left( x\right) u_{\varepsilon }\left( x\right) \left[ \varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right] d\sigma _{\varepsilon }\left( x\right) = \label{e4} \\ \int_{\Omega }\chi \left( \frac{x}{\varepsilon }\right) f_{\varepsilon }(x)\left[ \varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right] dx \notag \\ +\varepsilon \int_{\Sigma _{\varepsilon }}g_{\varepsilon }\left( x\right) \left[ \varphi \left( x\right) +\varepsilon \varphi _{1}\left( x,\frac{x}{\varepsilon }\right) \right] d\sigma _{\varepsilon }\left( x\right) . \notag\end{gathered}$$By virtue of (\[l2\_2\]) the first two terms of the left hand side of (\[e4\]) converge to $$\begin{aligned} &&\int_{\Omega }\int_{Y_{s}}[A\left( y\right) \left[ \nabla u\left( x\right) +\nabla _{y}u_{1}\left( x,y\right) \right] \left[ \nabla \varphi \left( x\right) +\nabla _{y}\varphi _{1}\left( x,y\right) \right] \notag \\ &&+\mu \left( y\right) u\left( x\right) \varphi \left( x\right) ]dydx.
\label{r1}\end{aligned}$$Taking into account (\[l2\_3\]) and Lemma \[l3\], the third term of the left hand side of (\[e4\]) tends to $$\int_{\Omega }\int_{\Sigma }\alpha \left( y\right) u_{1}\left( x,y\right) \varphi \left( x\right) d\sigma \left( y\right) dx+\int_{\Omega }\int_{\Sigma }\alpha \left( y\right) u\left( x\right) \varphi _{1}\left( x,y\right) d\sigma \left( y\right) dx. \label{r2}$$Thanks to (\[fg\]) the right hand side of (\[e4\]) converges to $$\int_{\Omega }\left[ \int_{Y_{s}}f(x,y)dy+\int_{\Sigma }g\left( x,y\right) d\sigma \left( y\right) \right] \varphi \left( x\right) dx=\int_{\Omega }F(x)\varphi \left( x\right) dx. \label{r3}$$ By the density of $\mathcal{D}\left( \Omega \right) \times \mathcal{D}\left( \Omega ;C_{\#}^{\infty }\left( Y_{s}\right) \right) $ in $H_{0}^{1}\left( \Omega \right) \times L^{2}\left( \Omega ;H_{\#}^{1}\left( Y_{s}\right) /\mathbb{R}\right) $ we get from the limits (\[r1\])-(\[r3\]) the following two-scale weak formulation system:$$\left\{ \begin{array}{c} \left( u,u_{1}\right) \in H_{0}^{1}\left( \Omega \right) \times L^{2}\left( \Omega ;H_{\#}^{1}\left( Y_{s}\right) /\mathbb{R}\right) \text{ is such that } \\ \int_{\Omega }\int_{Y_{s}}A\left[ \nabla u+\nabla _{y}u_{1}\right] \left[ \nabla v+\nabla _{y}v_{1}\right] dydx+ \\ \tilde{\mu}\int_{\Omega }uvdx+\int_{\Omega }\int_{\Sigma }\alpha u_{1}vd\sigma \left( y\right) dx+\int_{\Omega }\int_{\Sigma }\alpha uv_{1}d\sigma \left( y\right) dx=\int_{\Omega }Fvdx\end{array}\right. \label{e5}$$for all $\left( v,v_{1}\right) \in H_{0}^{1}\left( \Omega \right) \times L^{2}\left( \Omega ;H_{\#}^{1}\left( Y_{s}\right) /\mathbb{R}\right) $. Integration by parts in (\[e5\]) with respect to $v_{1}$ ($v=0$) gives (\[tsl1\])-(\[tsl3\]) and with respect to $v$ ($v_{1}=0$) yields (\[tsl4\])-(\[tsl5\]). The Proposition is now proved.
Thanks to the linearity of equation (\[tsl1\]) we can compute $u_{1}(x,y)$ in terms of $u\left( x\right) $ as follows:$$u_{1}(x,y)=\underset{k=1}{\overset{n}{\sum }}\zeta _{k}\left( y\right) \frac{\partial u}{\partial x_{k}}\left( x\right) +\gamma (y)u\left( x\right) +\tilde{u}\left( x\right) \label{rel1}$$where for each $k$ the function $\zeta _{k}\left( y\right) $ satisfies the auxiliary problem:$$\begin{aligned} -div\left( A\left( y\right) \nabla \zeta _{k}\left( y\right) \right) &=&div\left( A\left( y\right) e_{k}\right) \text{ in }\Omega \times Y_{s}\text{, } \\ A\left( y\right) \nabla \zeta _{k}\left( y\right) \cdot \nu &=&-A\left( y\right) e_{k}\cdot \nu \text{ on }\Omega \times \Sigma \text{,} \\ y &\rightarrow &\zeta _{k}\left( y\right) \text{ }Y\text{-periodic, }x\in \Omega \text{.}\end{aligned}$$Here $e_{k}=\left( \delta _{ik}\right) _{1\leq i\leq n}$ and $\delta _{ik}$ is the Kronecker symbol. The function $\gamma (y)$ satisfies $$\begin{gathered} -div\left( A\left( y\right) \nabla \gamma \left( y\right) \right) =0\text{ in }\Omega \times Y_{s}\text{, } \\ A\left( y\right) \nabla \gamma \left( y\right) \cdot \nu =-\alpha \left( y\right) \text{ on }\Omega \times \Sigma \text{,} \\ y\rightarrow \gamma \left( y\right) \text{ }Y\text{-periodic, }x\in \Omega \text{.}\end{gathered}$$ Finally, inserting the relation (\[rel1\]) into the equation (\[tsl4\]) yields the homogenized equation $$-div\left( A^{\hom }\nabla u(x)\right) +B\cdot \nabla u(x)+\lambda u\left( x\right) =F\left( x\right) \label{h1}$$where $A^{\hom }$ is the matrix with coefficients$$a_{ij}^{\hom }=\underset{k=1}{\overset{n}{\sum }}\int_{Y_{s}}\left[ a_{ik}\left( y\right) \left( \delta _{kj}+\frac{\partial \zeta _{j}}{\partial y_{k}}\left( y\right) \right) \right] dy,$$$B$ is the vector with components:$$\begin{aligned} b_{i} &=&\int_{\Sigma }\alpha \left( y\right) \zeta _{i}\left( y\right) d\sigma \left( y\right) -\underset{k=1}{\overset{n}{\sum }}\int_{Y_{s}}a_{ik}\left(
y\right) \frac{\partial \gamma }{\partial y_{k}}\left( y\right) dy \\ &=&\int_{\Sigma }\alpha \left( y\right) \zeta _{i}\left( y\right) d\sigma \left( y\right) -\int_{Y_{s}}A\left( y\right) e_{i}\nabla \gamma \left( y\right) dy\end{aligned}$$and $\lambda $ is the real number:$$\begin{aligned} \lambda &=&\int_{\Sigma }\alpha \left( y\right) \gamma \left( y\right) d\sigma \left( y\right) +\tilde{\mu} \\ &=&-\int_{Y_{s}}A\left( y\right) \nabla \gamma \left( y\right) \nabla \gamma \left( y\right) dy+\tilde{\mu}.\end{aligned}$$ Thus we have proved the following result. Let $u_{\varepsilon }$ be the solution in $V_{\varepsilon }$ of the Robin boundary problem (\[eq1\])-(\[eq3\]). Then $\chi _{\varepsilon }\left( x\right) u_{\varepsilon }\left( x\right) $ two-scale converges to $\chi \left( y\right) u\left( x\right) $ where $u\left( x\right) $ is a solution in $H_{0}^{1}\left( \Omega \right) $ of the homogenized problem:$$\left\{ \begin{array}{l} -div\left( A^{\hom }\nabla u(x)\right) +B\cdot \nabla u(x)+\lambda u\left( x\right) =F\left( x\right) \text{ in }\Omega \text{,} \\ u=0\text{ on }\Gamma \text{.}\end{array}\right. \label{hom}$$ We observe that the limit equation (\[hom\]) contains an extra strange term of order $1$, namely the convection term $B\cdot \nabla u$. The vector $B$ depends closely on the matrix $A$ and the resistivity function $\alpha $. For example, if $A$ is symmetric then $B=0$. Indeed, taking $\zeta _{i}$ as a test function in the weak formulation of the problem defining $\gamma $ gives $$\int_{\Sigma }\alpha \left( y\right) \zeta _{i}\left( y\right) d\sigma =-\int_{Y_{s}}A\left( y\right) \nabla \gamma \left( y\right) \nabla \zeta _{i}\left( y\right) dy,$$and, by the symmetry of $A$ together with the weak formulation of the problem defining $\zeta _{i}$ tested against $\gamma $, $$-\int_{Y_{s}}A\left( y\right) \nabla \gamma \left( y\right) \nabla \zeta _{i}\left( y\right) dy=-\int_{Y_{s}}A\left( y\right) \nabla \zeta _{i}\left( y\right) \nabla \gamma \left( y\right) dy=\int_{Y_{s}}A\left( y\right) e_{i}\nabla \gamma \left( y\right) dy,$$so that $b_{i}=0$. [99]{} A. Ainouz, Homogenization of Wentzell-type problems in elasticity (in French), Thesis, Algiers, 1997. A.
Ainouz, Derivation of a double-diffusion model in poro-elastic media, Proceedings of the First Indo-German Conference on PDE, Scientific Computing and Optimization in Applications, September 8-10, 2004, Univ. of Trier, Germany. G. Allaire, Homogenization and two-scale convergence, SIAM J. Math. Anal. 23 (1992), no. 6, 1482-1518. G. Allaire, A. Damlamian, U. Hornung, Two-scale convergence on periodic surfaces and applications, In Proceedings of the International Conference on Mathematical Modelling of Flow through Porous Media (May 1995), A. Bourgeat et al. eds., pp. 15-25, World Scientific Pub., Singapore (1996). A. G. Belyaev, A. L. Pyatnitskii and G. A. Chechkin, Averaging in a perforated domain with an oscillating third boundary condition. (Russian) Mat. Sb. 192 (2001), no. 7, 3–20; translation in Sb. Math. 192 (2001), no. 7-8, 933–949. A. Bensoussan, J. L. Lions and G. Papanicolaou, Asymptotic analysis for periodic structures, North Holland, Amsterdam, 1978. D. Cioranescu, P. Donato, Homogénéisation du problème de Neumann non homogène dans des ouverts perforés. Asymptotic Anal. 1 (1988), no. 2, 115–138. D. Cioranescu, P. Donato, On a Robin problem in perforated domains. Homogenization and applications to material sciences (Nice, 1995), 123–135, GAKUTO Internat. Ser. Math. Sci. Appl., 9, Gakkōtosho, Tokyo, 1995. D. Cioranescu, P. Donato, An introduction to homogenization, Oxford Lecture Series in Mathematics and its Applications 17, Oxford University Press, 1999. P. Deuflhard, R. Hochmuth, Multiscale analysis of thermoregulation in the human microvascular system, Math. Meth. Appl. Sci. 27 (2004), pp. 971-989. D. Lukkassen, G. Nguetseng and P. Wall, Two-scale convergence, Int. J. of Pure and Appl. Math. 2 (2002), no. 1, 35-86. M. Neuss-Radu, Some extensions of two-scale convergence. C. R. Acad. Sci. Paris Sér. I Math. 322 (1996), no. 9, 899–904. G. Nguetseng, A general convergence result for a functional related to the theory of homogenization, SIAM J. Math. Anal.
20 (1989) 608-623. G. Nguetseng, Asymptotic analysis for a stiff variational problem arising in mechanics, SIAM J. Math. Anal. 21 (1990), no. 6, pp. 1394-1414. S. E. Pastukhova, On the character of the distribution of the temperature field in a perforated body with a given value on the outer boundary under heat exchange conditions on the boundary of the cavities that are in accord with Newton’s law. (Russian) Mat. Sb. 187 (1996), no. 6, 85–96; translation in Sb. Math. 187 (1996), no. 6, 869–880. E. Sanchez-Palencia, Non-Homogeneous Media and Vibration Theory, Lecture Notes in Physics 127, Springer, 1980. L. Tartar, Cours Peccot, Collège de France, 1977. [^1]: *2000 Mathematics Subject Classification:* 35B27. *Keywords:* Homogenization, Two-scale convergence, Robin boundary condition.
--- abstract: 'We present a summary of results for searches for new particles and interactions at the Fermilab Tevatron collider by the CDF and the D0 experiments. These include results from Run I as well as Run II for the time period up to July 2014. We focus on searches for supersymmetry, as well as other models of new physics such as new fermions and bosons, various models of excited fermions, leptoquarks, technicolor, hidden–valley model particles, long–lived particles, extra dimensions, dark matter particles, and signature–based searches.' address: - | Mitchell Institute for Fundamental Physics and Astronomy\ Department of Physics and Astronomy, Texas A$\&$M University\ College Station, TX 77843–4242, USA\ toback@tamu.edu - | Laboratory for High Energy Physics, Institute of Physics Belgrade\ Pregrevica 118, 11080 Zemun, Serbia\ lidiaz@fnal.gov author: - DAVID TOBACK - LIDIJA ŽIVKOVIĆ bibliography: - 'NP.bib' title: 'REVIEW OF PHYSICS RESULTS FROM THE TEVATRON: SEARCHES FOR NEW PARTICLES AND INTERACTIONS' --- **Introduction** ================ The standard model (SM) of particle physics has had great success describing the known particles, their properties and the interactions between them, up to energies of about 1 TeV[@Baak:2012kk]. The recent discovery of the Higgs boson[@Aad:2012tfa; @Chatrchyan:2012ufa] completed the model, but many unanswered questions and unexplained phenomena remain. In this review we present a summary of results for searches for new particles and interactions at the Fermilab Tevatron collider. These include results from Run I as well as Run II, which produced about 10 [fb$^{-1}$]{} of $p{\bar p}$ collisions at $\sqrt{s}=1.96$ TeV recorded by each experiment. We focus on searches for supersymmetry (SUSY)[^1], new fermions and bosons, excited fermions, leptoquarks, technicolor particles, hidden–valley model particles, long–lived particles, extra dimensions, dark matter, and signature–based searches.
While we will not discuss the full set of searches, the references contain a fairly complete set of results. Many other searches for new particles and interactions that are not presented here (e.g. non–SM Higgs boson searches, $B_s\to\mu\mu$) are presented in the different chapters of this review. We begin with a quick overview of some of the theoretical motivations that influenced the set of searches that were ultimately done by the experiments. In section \[sec\_run1\], we provide a historical review of some of the Run I results that had a large impact on the world–wide searches, including the [$ee\gamma\gamma{\mbox{$\not\!\!E_T$}}$]{} candidate event, follow–ups on the leptoquark hints from the DESY $ep$ collider (HERA), signature–based searches like [[sleuth]{}]{} and other searches that kept the Fermilab Tevatron collider experiments at the frontier. In section \[sec\_susy\] we discuss the Run II SUSY results and in section \[sec\_nonsusy\] we discuss the various other beyond the standard model (BSM) search results from Run II. In section \[sec\_sum\] we conclude.

**Theoretical Motivation** {#sec_theory}
==========================

There are many reasons to search for new particles and interactions beyond the SM, and different theoretical viewpoints can guide the ways in which we search. On one end of the search strategy spectrum is the fact that we have many compelling and well–specified models of BSM physics which predict new particles and how to look for them. On the opposite end of the spectrum, it is possible that we have not guessed the new physics, but the Tevatron collider has the ability to produce these new particles. Searching must also be done thoughtfully and carefully in more model–independent ways to be ready for surprises. In this section we provide an overview of both types of motivations, with others in between the two extremes, with an eye towards searches.
We will point the reader to more details on theoretical issues as they are discussed extensively in the literature; our references here are not intended to be complete but rather a guide for the reader to get started. Phenomenological issues, like production mechanisms, decay products, final states and relevant model parameters, are discussed in sections \[sec\_susy\] and \[sec\_nonsusy\].

***Supersymmetry***
-------------------

The motivations for SUSY are well known and documented[@Wess:1974tw; @Fayet:1976cr; @Nilles:1983ge; @Haber:1984rc] and include its ability to potentially solve the hierarchy problem for the Higgs boson mass, provide a dark matter candidate, and satisfy consistency requirements of modern models of string theory. Inherent in the theory is that for every fermion observed in the SM there is a supersymmetric boson partner that has not yet been observed; the same is true for the known bosons, including the observed Higgs boson, and the hypothetical graviton (the particle mediator of gravity). The non–observation of low–mass sparticles with masses equal to those of their SM counterparts has focused efforts on SUSY models with broken symmetry. Since there are many new particles to be searched for, and 128 free parameters in the most general models, other “clues” and possible tie–ins have been used by model builders to focus on weak–scale SUSY[@Martin:1997ns]. The hallmark of these SUSY models is their ability to provide a dark matter candidate, and not contradict other observations[@Beringer:1900zz]. Since experimental results from the proton and electron lifetime measurements imply conservation of baryon number and lepton number, it is not unreasonable that SUSY has an additional symmetry, known as $R$–parity[^2]. If $R$–parity is conserved, this has the consequence that the lightest SUSY particle (LSP) must be stable, potentially making it a dark matter candidate.
We note for now that in many SUSY models the LSP couples to normal matter with only a tiny strength; when produced in a collision, it would leave the detector without a trace, yielding significant missing transverse energy, ${\mbox{$\not\!\!E_T$}}$, giving a signature for SUSY that is searched for in many models. With this in mind we quickly mention the models focused on at the Fermilab Tevatron collider, which are typically selected for simplicity and general features. These include: (i) gravity–mediated SUSY breaking (minimal SuperGravity or mSUGRA) type models, where the lightest neutralino is the LSP, has a mass at the electroweak scale and becomes a natural cold–dark matter candidate (discussed in section \[sec\_susy1\]), (ii) gauge–mediated SUSY breaking models (GMSB), which have a $\sim$keV mass gravitino as the LSP and often have a photon and ${\mbox{$\not\!\!E_T$}}$ in the final state (discussed in section \[sec\_susy2\]), and (iii) $R$–parity violating (RPV) searches, which give up the possibility of solving the dark matter problem with SUSY but must be considered in the most general SUSY frameworks (discussed in section \[sec\_susy3\]). Other models that contain SUSY, like hidden–valley models, models that include charged massive stable particles (CHAMPS), etc., are discussed more in section \[sec\_theo\_ll\] for their theory, and section \[sec\_LL\] for results.

***Resonances: new fermions and bosons, excited fermions, leptoquarks, technicolor and other new particles***
-------------------------------------------------------------------------------------------------------------

There are many other models of new physics which also predict new particles. For example, models which extend the gauge structure of the SM generically predict new gauge bosons and possibly new scalars and fermions. These new particles may be produced on– (or nearly on–) shell and decay into SM particles, yielding a tell–tale bump in an invariant mass spectrum.
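The quantity behind such a bump hunt can be made concrete with a short sketch (illustrative only, not the experiments' analysis code; the function name and tuple convention are our own): the invariant mass of a candidate decay pair is computed from the reconstructed four-momenta, and a resonance appears as a peak in its distribution over many events.

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-body system from two (E, px, py, pz)
    four-vectors (all components in GeV). A narrow resonance decaying
    to this pair shows up as a bump at its mass in the histogram of
    this quantity over many events."""
    E = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    m2 = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))  # guard against tiny negative rounding
```

For example, two back-to-back massless decay products, each carrying 50 GeV, reconstruct to a 100 GeV parent, as one would expect for a narrow resonance candidate at that mass.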
We search for resonances in a general way, but optimize and report our sensitivity to a small number of specific model types. These include models that contain new fermions and/or bosons, excited fermions, leptoquarks, as well as particles from technicolor and other models. We next describe some of the models that garnered the most attention during Runs I and II. Note that typically the mass of any new particle is the most relevant parameter of the theory, from a phenomenological standpoint, but when there are other parameters of importance we note them in section \[sec\_nonsusy\]. New fermions are predicted in many BSM models. While there are significant experimental constraints from LEP[@Collaborations:2000aa], and many models of extra dimensions would give an unobserved large enhancement of the Higgs boson cross section if there are extra fermions [@PhysRevD.76.075016], there is currently no compelling theoretical reason for there to be three and only three fermion generations in the SM. Thus, it becomes natural to look for fourth generation chiral quarks and leptons[@Frampton:1999xi], and vector–like quarks[@Atre:2008iu] (which have right–handed and left–handed components that transform in the same way under $SU(3)\times SU(2)\times U(1)$). Similarly, additional bosons are predicted in many new models. For example, new gauge bosons are predicted in the minimal extensions of the SM that restore left–right symmetry[@Mohapatra:1974hk; @Mohapatra:1974gc; @Senjanovic:1975rk] with the gauge group $SU(2)_L\times SU(2)_R$. In these theories additional $W$ and $Z$ bosons, usually denoted as $W'$ and $Z'$, will couple to the right–handed fermions with weak coupling strength. In addition, grand unified theories and other theories also predict the existence of new heavy bosons, where often the gauge group can be broken to the SM gauge group or have additional $U(1)$’s which could yield multiple $Z'$ bosons. For a review of neutral heavy bosons, see Ref. . 
The simple organization of the SM particles into a table that resembles the periodic table of elements suggests that the known “fundamental” particles may actually be composite or otherwise have substructure[@Baur:1989kv]. This idea is inherent in string theory[@Schwarz:2000ew] or models of technicolor (more below). If the known particles were composite, excited versions of each SM particle could be produced (like the excited states of atoms or hadrons); signatures of excited leptons could involve the production and decay $\ell^*\to\ell\gamma$, or, for excited quarks, $q^*\to q\gamma$. Many grand unified theory models have unification of the quarks and leptons at the highest energies, suggesting the possibility of leptoquarks ($LQ$) in nature[@Buchmuller:1986zs]. These new particles are color–triplet bosons, carry both quark and lepton quantum numbers, and have fractional electric charge, and their spin can be 0 (scalar $LQ$) or 1 (vector $LQ$). They could produce resonant signatures in the $\ell q$ or $\nu q$ final state. Theories of strong dynamics, such as technicolor[@Weinberg:1979bn; @Susskind:1978ms; @Hill:2002ap; @Lane:2002sm], predict a host of new particles known as technifermions. In many ways this model posits that there are no fundamental bosons, and that the vector bosons and the Higgs boson are composite objects made of technifermions. An advantage of this model is that it removed the need for the only fundamental scalar in the SM, the Higgs boson, and/or explained why it had not been observed in Run I or at LEP. With the discovery of the Higgs boson with SM properties, these models have fallen out of favor. One of the upsides of these models is that they provided natural search strategies for a number of resonances, which were followed. Another resonance search is for the production of light axigluons which can produce an anomalous top–quark forward–backward asymmetry[@Aaltonen:2008hc; @Abazov:2007ab; @Aaltonen:2011kc] $A_{fb}$[^3].
Alternative axigluon decay modes include low–mass, strongly interacting particles which will further decay to pairs of jets, yielding resonances in the four–jet final states. These final states are also predicted by various theories where no intermediate resonance is necessary. The searches for resonances from new fermions and bosons are presented in section \[sec\_reson1\]. Similarly, searches for excited fermions, leptoquarks and technicolor are presented in sections \[sec\_reson2\]–\[sec\_reson4\]. Searches for other resonances, such as $Z\to\gamma\gamma$ and the $W+\rm{dijet}$ search from CDF in Run II[@PhysRevLett.106.171801], are presented in section \[sec\_reson5\].

***Hidden–valley models, CHAMPS and other long-lived particles*** {#sec_theo_ll}
-----------------------------------------------------------------

During Run II hidden–valley (HV) models were constructed that predict a new, confining gauge group that is weakly coupled to the standard model, leading to the production of new particles. These low–mass particles could help explain potential hints in astrophysical and dark matter searches. These models are often incorporated into SUSY models with sparticles known as “dark particles”, and a hallmark of their production is the decay of long–lived particles with unusual signatures in the detectors[@Han:2007ae; @ArkaniHamed:2008qn; @Baumgart:2009tn]. Other models of new long–lived particles include charged massive particles (CHAMPS), which are predicted in many models of new physics, especially in SUSY models[@PhysRevD.66.075007]. Dirac monopoles have also been predicted for many years in GUT models [@Beringer:1900zz], and models that symmetrize electromagnetism. Finally, a new class of models predicts new particles known as quirks which arise when there is a new, unbroken $SU(N)$ gauge group added beyond the SM[@Kang:2008ea].
***Extra dimensions and dark matter***
--------------------------------------

Many versions of string theory posit (and in most cases require) the existence of other dimensions in addition to our three spatial + one time dimensions. There are a number of different types of models which have received the most attention. The first are large extra dimension (LED) models [@ArkaniHamed:1998rs] which postulate the existence of two or more extra dimensions in which only gravity can propagate. The weakness of gravity can thus be explained by its propagation through the higher–dimensional space. In universal extra dimensions (UED) models[@PhysRevD.64.035002] extra spatial dimensions are accessible to all SM fields. Consequently, the difference between UED and LED is that the spatial dimensions in UED are compactified, resulting in Kaluza–Klein (KK) excitations, which are “towers” of the SM fields. A third model is warped extra dimensions [@PhysRevLett.83.3370], in which a fifth dimension with a warped spacetime metric is bounded by two three–dimensional branes. The SM fields and gravity are on different branes with a small overlap, causing gravity to appear weak at the TeV scale. Dark matter has been inferred from the dynamics of galaxies and clusters of galaxies for decades, and the evidence that it is due to a new kind of elementary particle has steadily mounted[@Bertone:2004pz; @Feng:DM]. There have been many models of dark matter put forward by the community, but from the perspective of searches at the Fermilab Tevatron collider there have been just a few types that can be searched for: (i) production and decay of SUSY particles into dark matter particles (typically the LSP), (ii) axions[@Duffy:2009ig], and (iii) theory–independent models of production. While SUSY has already been mentioned, we quickly note that axion production is out of reach of colliders.
Recently, more model–independent searches have been done that make few assumptions about the new physics and focus on the direct production of dark matter.

***Signature–based searches and model–independent searches***
-------------------------------------------------------------

While model–based searches have always been favored by the theory community, signature–based searches became an important part of the search program as a more model–independent way to search for physics beyond the SM. Perhaps by observing the unexpected we could find explanations of various unexplained phenomena (dark matter, electroweak symmetry breaking, etc.). Indeed, looking at history, many of the major discoveries in particle physics have been made in unexpected ways (the muon is a prime example). For these reasons, powerful and systematic ways of searching for new physics were developed to help ensure that the unexpected was not missed. New methodologies focused on the idea of just looking at the final state particles to see if there was any indication of an unexpected resonance, or an anomalous number of events or kinematic distribution when considering combinations of high transverse momentum ($p_T$) particles. This method of searching with a set of final state particles is known as a “signature–based” search; because it is not looking for any particular new model, it is known as model–independent. Many different variations on these themes were created and executed, starting strongly in Run I (especially after the unexpected observation of an event with two electrons, two photons and ${\mbox{$\not\!\!E_T$}}$) and continuing throughout the search program.

**Run I Results** {#sec_run1}
=================

The Run I dataset consists of $\sim$100 [pb$^{-1}$]{} of collision data at $\sqrt{s}=1.8$ TeV. Most searches focused on the simplest resonance models, mSUGRA searches and RPV models.
With the world’s then highest energy collisions, these were the cutting edge, bracketed on each side by complementary searches from LEP. We learned that the SM worked up to a higher energy scale since there was no evidence for new physics other than the discovery of the top quark. However, there were some results that were exciting enough to have significant impact on the field. We mention three before proceeding to the Run II results.

***The [$ee\gamma\gamma{\mbox{$\not\!\!E_T$}}$]{} candidate event and its influence*** {#sec_eeggmet}
--------------------------------------------------------------------------------------

During Run I, the CDF experiment observed a very unusual event which created significant interest[@PhysRevD.64.092002; @PhysRevLett.81.1791]. This event had two high energy electron candidates, two high energy photons and large ${\mbox{$\not\!\!E_T$}}$ (see Fig. \[f1\](a)). Of particular note was that the ${\mbox{$\not\!\!E_T$}}$ was 55 GeV and that the event could not be readily explained as a $W\to e\nu$, $Z\to ee$ or radiative versions of any combination of the above. There were no searches for this type of event at the time, and while the large ${\mbox{$\not\!\!E_T$}}$ was suggestive of SUSY, there were no models that were in favor that had photons in the final state.

![(color online) (a) An event display of the CDF [$ee\gamma\gamma{\mbox{$\not\!\!E_T$}}$]{} candidate event observed in Run I. (b) The significance ${\cal P}$ of the excess, in units of standard deviations, obtained using [[sleuth]{}]{} at the D0 experiment from Run I. \[f1\]](cdf_eeggmet.eps "fig:"){height="5.5cm"}![](d0_sleuth_fig12.eps "fig:"){height="5.5cm"}

While a detailed description of the set of models which were proposed to explain it is beyond the scope of this review, a long–lasting impact was the rise of interest in GMSB SUSY. Example production and decay chains include ${\tilde e}{\tilde e} \rightarrow (e{\tilde{\chi}_1^0})(e{\tilde{\chi}_1^0}) \rightarrow e(\gamma{\tilde{G}})e(\gamma{\tilde{G}}) \rightarrow ee\gamma\gamma{\mbox{$\not\!\!E_T$}}$, or similar ones from chargino pair production and decay with virtual $W$ bosons[@Martin:1997ns]. GMSB has been a popular hunting ground ever since, although no other hint for GMSB or other versions of SUSY was found in Run I [@PhysRevD.64.092002; @PhysRevLett.81.1791; @PhysRevLett.78.2070; @PhysRevLett.80.442]. Run II searches for GMSB are described in section \[sec\_susy2\]. The second major thing that came out of this observation was the clear need to be on the lookout for hints of new particles using more model–independent methods; if this event was an example of a new particle decay, then it becomes natural to speculate about what kind of particles produced it and search for other events “like it” in the hopes of providing evidence one way or the other. Unbiased follow–up was difficult: since there was no *a priori* search for this event, *a posteriori* methods had to be determined. The simplest quasi–model–independent search method used the idea that this event could have been produced by anomalous $WW\gamma\gamma$ production and decay (SM $WW\gamma\gamma\to e\nu e\nu\gamma\gamma\to{ee\gamma\gamma{\mbox{$\not\!\!E_T$}}}$ production and decay was the dominant background to this event type with $10^{-6}$ events expected). The signature–based way to look for this type of production is to consider all $\gamma\gamma$ events and search each for evidence of associated $WW$ production and decay, for example in the $WW\gamma\gamma\to(jj)(jj)\gamma\gamma$ final state.
Searches in this and other $\gamma\gamma$ or $\ell+\gamma+{\mbox{$\not\!\!E_T$}}$ final states [@PhysRevD.64.092002; @PhysRevLett.81.1791; @PhysRevLett.78.2070] turned up no further indications of new particles. Other, more model–dependent, but still signature–based searches[@PhysRevD.66.012004; @PhysRevLett.89.041802; @PhysRevD.65.052006] also found no evidence of new physics in Run I or Run II. Ultimately, it was recognized that new, [*a priori*]{} methods of finding and following up on interesting events needed to be found, and developed in ways that avoid potential biases. Model–independent and signature–based searches, in particular [[sleuth]{}]{}, which is discussed below, arose at the D0 experiment in Run I for these reasons (see sections \[sec\_sleuth\] and \[sec\_sbmi\]). Ultimately, it is not clear what the source of the event was (perhaps it was a very unusual example of whatever it was), but the legacy of this event is still with us.

***Follow up on the leptoquark hints from DESY***
-------------------------------------------------

In 1997 the H1[@Adloff:1997fg] and the ZEUS[@Breitweg:1997ff] experiments at the DESY $ep$ collider (HERA) reported an excess of events at high momentum transfer $Q^2$ with a potential explanation being the production of a single first generation scalar $LQ$ with a mass of around 200 GeV. By that time, the D0 and the CDF experiments had already excluded scalar $LQ$ masses of up to 130 GeV[@PhysRevD.48.R3939; @PhysRevLett.72.965; @PhysRevLett.75.1012; @PhysRevLett.75.3618]. Both experiments quickly followed up on these hints in the same final state, but with $LQ$ pair production and decay, and they were able to exclude first generation scalar leptoquarks in the simplest models at masses up to and above 200 GeV. The searches were then expanded to the second and third generations, and from $LQ\to\ell q$ to include $LQ\to\nu q'$ modes. Limits were set on both scalar and vector leptoquarks.
Ultimately, all the results were found to be consistent with the SM[@PhysRevLett.79.4327; @PhysRevLett.79.4321; @PhysRevLett.80.2051; @PhysRevD.64.092004; @PhysRevLett.81.4806; @PhysRevLett.85.2056; @PhysRevLett.83.2896; @PhysRevLett.84.2088; @PhysRevLett.78.2906; @PhysRevLett.82.3206; @PhysRevLett.81.38; @PhysRevLett.81.5742; @PhysRevLett.88.191801]. These searches were extended in Run II, again with null results (see section \[sec\_reson3\]).

***Signature–based searches and SLEUTH*** {#sec_sleuth}
-----------------------------------------

Signature–based searches emerged at the end of Run I. In each, the analysis selection criteria are established before doing the search, using systematic ways to separate any data event into a unique group based on its final state signature, specifically the set of final state particle objects. For example, those objects passing standardized lepton, photon, ${\mbox{$\not\!\!E_T$}}$, jet, and $b$–tagging ID requirements, and above various $p_T$ thresholds, are selected. With a clear definition of all event requirements, this allows for definite predictions of the rates and kinematic properties of events from SM background processes. Note that there is no prediction of what new physics might arise, just a comparison to the SM–only hypothesis, and, consequently, there is nothing that can be optimized for sensitivity. As previously mentioned, many searches were done in Run I and Run II which followed this methodology. The major leap forward in this area was the development of the quasi–model–independent [[sleuth]{}]{} formalism at the D0 experiment[@PhysRevD.62.092004]. [[sleuth]{}]{} traded the ability to optimize for a particular model of new physics for breadth in covering previously unsearched territory.
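The bookkeeping step described above, sorting every event into exactly one exclusive final-state class from its reconstructed objects, can be sketched as follows. This is a minimal illustration rather than the experiments' actual code; the object labels and $p_T$ thresholds here are invented for the example.

```python
from collections import Counter

# Illustrative pT/ET thresholds in GeV; the real CDF and D0 analyses
# each used their own standardized object ID requirements.
THRESHOLDS = {"e": 15.0, "mu": 15.0, "gamma": 15.0, "jet": 20.0, "met": 15.0}

def final_state(objects):
    """Map a list of (object_label, pT) pairs to a canonical signature
    string such as '2e_2gamma_met', so that every event falls into
    exactly one exclusive final-state class."""
    counts = Counter(
        label for label, pt in objects
        if pt >= THRESHOLDS.get(label, float("inf"))
    )
    # Fixed label order guarantees a unique, comparable signature string.
    parts = [f"{counts[k]}{k}" for k in ("e", "mu", "gamma", "jet") if counts[k]]
    if counts["met"]:
        parts.append("met")
    return "_".join(parts) if parts else "empty"
```

Under these toy thresholds, the famous CDF candidate event would land in the `2e_2gamma_met` class, and any two events with the same signature string are compared against the same SM background prediction.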
By looking for excesses on the tails of distributions (with a bias towards large $Q^2$ interactions, since the lower scales and masses are already well probed and new physics is more likely to appear at a large scale or mass), [[sleuth]{}]{} looked for regions in the data that were not well described by the SM–only background predictions. It made a novel use of pseudoexperiments (and was a powerful early user of these methods) to quantify how unusual the largest observed deviation was. As a test, [[sleuth]{}]{} was able to show that it could discover $WW$ and top–quark production in the dilepton final state at many standard deviations (s.d.) in the case that neither was included in the background modeling. The same was true for leptoquarks, at a given mass and production rate, in the lepton+jets final state. Ultimately, [[sleuth]{}]{} was run on $\sim\!\!50$ final states at the D0 experiment and compared the fluctuations to expectations[@PhysRevD.64.012004; @Abbott:2001ke] (see Fig. \[f1\](b)). The distribution of the fluctuations was consistent with statistical expectations. This methodology was eventually adopted by other experiments, for example at the HERA experiments [@Aktas:2004pz], and the CDF experiment in Run II where it was extended to other types of systematic, model–independent search strategies (described in section \[sec\_sbmi\]). This methodology is currently less used for a number of reasons, primarily that it is not always clear how to quantify the search sensitivity to new physics.

**Supersymmetry** {#sec_susy}
=================

In this section we focus on the searches for SUSY during Run II. We note that there are good reasons to expect to find sparticles at Tevatron energies, in particular since the Higgs mass would potentially diverge if there were no sparticles (most importantly the top squark, or stop for short) with a mass at or below the TeV scale[@Martin:1997ns].
Until the observation of the [$ee\gamma\gamma{\mbox{$\not\!\!E_T$}}$]{} candidate event in Run I, most analyses focused on mSUGRA–type models with a hierarchy of heavy colored states and a light LSP as a dark matter candidate and its smoking–gun signature of large ${\mbox{$\not\!\!E_T$}}$. We will focus on mSUGRA in section \[sec\_susy1\]. In section \[sec\_susy2\] we will discuss GMSB SUSY searches with their smoking–gun final states of photons and ${\mbox{$\not\!\!E_T$}}$ from light gravitinos. Finally, in section \[sec\_susy3\] we will discuss $R$–parity violating (RPV) scenarios. Ultimately, most of these results are from the first half of Run II data as the turn–on of the LHC, with its larger energy and production cross sections and comparable luminosities, eclipsed the Fermilab Tevatron collider sensitivity and obviated the need to get results using the full dataset. Other searches with SUSY interpretations, such as hidden–valley models, and other long–lived particles like CHAMPS, are described in section \[sec\_nonsusy\]. Other searches, like $B_s \to \mu\mu$, which have important SUSY interpretations, are found in the heavy flavor chapter of this review[@BPhysReview].

***mSUGRA/Heavy LSP models*** {#sec_susy1}
-----------------------------

While mSUGRA models have many theoretical advantages, from an experimental standpoint they are valued because they simplify the 128–parameter model down to four parameters[^4] and a sign[^5], which specify the sparticle masses and decay products. Equally valuable is that large regions of the parameter space turn out to be qualitatively similar, and much of the non–excluded parameter space has the lightest neutralino, ${\tilde{\chi}_1^0}$, as the LSP, which provides a cold–dark matter candidate.
An important distinction is between low and high [$\tan\beta$]{}; at low [$\tan\beta$]{} the sparticles from all three generations are degenerate (or nearly degenerate) in mass, while in models with high [$\tan\beta$]{}, the third generation sparticles (stops, sbottoms and staus) can be much lighter than all other generations, leading to final states enriched with $\tau$–leptons and/or $b$–quarks. While the couplings of the sparticles to their SM counterparts are important, perhaps the most important issue in the production of sparticles at the Fermilab Tevatron collider is their masses. If the colored objects (squarks and gluinos) are light enough, their production cross sections will dominate; if they are too heavy, they cannot be produced in significant enough quantities to be seen. Typically the gauginos are much lighter in mass, so an important region of parameter space is the case where the colored objects are out of reach and gaugino pair production dominates the overall sparticle production. These two cases are again separated by high and low values of [$\tan\beta$]{}.

### *Light flavor squarks and gluinos*

If the masses of the gluinos or the first and second generation squarks are favorable, then the large production rate of these sparticles provides a golden channel for the search for SUSY. Squarks and gluinos are expected to be produced in pairs, $\tilde{g}\tilde{g}$, $\tilde{g}\tilde{q}$ and $\tilde{q}\tilde{q}$, and then to decay via ${\tilde{q}\to q{\tilde{\chi}_1^0}}$ and ${\tilde{g}\to q\tilde{q}{\tilde{\chi}_1^0}}$. Each will lead to final states with jets and large ${\mbox{$\not\!\!E_T$}}$, with the number of jets depending on whether the squark or gluino is heavier. Alternatively, leptonic decay modes of squarks from gluino pair production (e.g. $\tilde{g}\to q\tilde{q}\to qq\tilde{\chi}^{\pm}_1\to qq\ell\nu{\tilde{\chi}_1^0}$) can lead to final states with same–sign leptons, jets and large ${\mbox{$\not\!\!E_T$}}$[@PhysRevLett.87.251803].
Both the CDF[@PhysRevLett.102.121801] and the D0[@Abazov:2006bj; @Abazov:2007aa] experiments searched for evidence of these particles, with limits in different scenarios shown in Fig. \[f2\](a,b) interpreted as limits on squarks and gluinos on the one hand and in the $m_0$ vs. $m_{1/2}$ mSUGRA parameter plane on the other. For many years these results were the most sensitive, with no substantive competition from LEP. To be complementary, a new set of high [$\tan\beta$]{} searches emerged with squarks decaying to jets, $\tau$–leptons and large ${\mbox{$\not\!\!E_T$}}$ in the final state[@Abazov:2009rj]. No evidence was observed, as shown in Fig. \[f2\](c). These were, at the time, the world’s most sensitive searches.

![(color online) Limits from the search for squark and gluino production in the ${\mbox{$\not\!\!E_T$}}$+jets final state, including jets for hadronically decaying $\tau$–leptons. Shown are the excluded region (a) in the $m_{\tilde{q}}$ vs. $m_{\tilde{g}}$ plane from the CDF experiment, and (b) in the $m_{1/2}$ vs. $m_0$ plane from the D0 experiment. (c) Limits in the $m_{1/2}$ vs. $m_0$ plane from the large [$\tan\beta$]{} search with hadronic $\tau$–lepton combined with the jets+${\mbox{$\not\!\!E_T$}}$ search from the D0 experiment. \[f2\]](CDF_susylsq_fig2.eps "fig:"){height="5.cm"} ![](D0_sqglu.eps "fig:"){height="5.cm"} ![](D0_sqg_taucombo.eps "fig:"){height="5.2cm"}

### *Bottom and top squarks*

The large mass difference between the third generation particles in the SM and their lighter counterparts in the first and second generations suggests that perhaps the third generation is special. This specialness can manifest itself in SUSY models with high [$\tan\beta$]{} which predict that third generation squarks (and/or sleptons) are lighter than their first and second generation counterparts and decay differently, often to third generation particles like $\tau$–leptons and $b$–quarks. One strategy to search for bottom squarks is to use analysis techniques similar to the light flavor squark and gluino searches in the jets+${\mbox{$\not\!\!E_T$}}$ final state, but with the additional requirement of $b$–tagging of one or more of the jets. The simplest is direct sbottom pair production with $\tilde{b}\tilde{b}\to b{\tilde{\chi}_1^0}b{\tilde{\chi}_1^0}\to bb+{\mbox{$\not\!\!E_T$}}$[@PhysRevLett.105.081802; @PhysRevLett.97.171806; @Abazov:2010wq]. Complementary searches can be done where the sbottoms are produced as the decay products of a gluino. Specifically, $\tilde{g}\tilde{g}\to b\tilde{b} b\tilde{b}\to bb{\tilde{\chi}_1^0}bb{\tilde{\chi}_1^0}\to 4b+{\mbox{$\not\!\!E_T$}}$[@PhysRevLett.96.171802; @PhysRevLett.102.221801]. The results are shown in Fig. \[f3\]. ![(color online) (a) The excluded region in the $m_{{\tilde{\chi}_1^0}}$ vs.
$m_{\tilde{b}}$ plane from the D0 experiment from the search for direct sbottom production and decay. (b) The exclusions in the $m_{\tilde{b}}$ vs. $m_{\tilde{g}}$ plane from the CDF experiment in the search for sbottoms from gluino pair production and decay. \[f3\]](D0_sbottom.eps "fig:"){height="4.9cm"}![](CDF_susysbottoms_fig3.eps "fig:"){height="5.5cm"}

In recent years, the case for light stops has been motivated by the need to stabilize the Higgs boson mass[@Martin:1997ns]. Many versions of the stop search are required because of the many possible decay modes of the stop. For example, the stop can decay via charged or neutral modes. In the charged modes, the stop decays via ${\tilde{t}}\to b\tilde{\chi}^{\pm}_1$ and the chargino decays via $\nu\tilde{\ell}$, $\ell\tilde{\nu}$ or $bW{\tilde{\chi}_1^0}$, where the $W$ boson is either real or virtual depending on the mass differences. In all cases the result is ${\tilde{t}}\to b\ell+{\mbox{$\not\!\!E_T$}}$, but with different kinematics depending on the masses. There are multiple searches in this final state [@PhysRevD.82.092001; @Abazov2008500; @Abazov:2008kz; @Abazov:2010xm; @Abazov:2012cz; @PhysRevLett.104.251801; @Abazov:2009ps] with results shown in Fig. \[fig\_stop\](a,b). In the neutral modes, ${\tilde{t}}\to t{\tilde{\chi}_1^0}$ or ${\tilde{t}}\to c+{\tilde{\chi}_1^0}$[@Aaltonen:2012tq; @PhysRevD.76.072010; @Abazov2007119; @Abazov:2008rc], there are also a number of searches, again with null results and limits shown in Fig. \[fig\_stop\](c). ![(color online) Exclusion regions in the searches for stops. (a) The results in the $m_{{\tilde{\chi}_1^0}}$ vs.
$m_{{\tilde{t}}}$ plane for several values of ${\cal BR}(\tilde{\chi}^{\pm}_1\to{\tilde{\chi}_1^0}\ell\nu)$ and two different chargino masses from the CDF experiment. (b) The results in the $m_{\tilde{\nu}}$ vs. $m_{{\tilde{t}}}$ plane from the search for the ${\tilde{t}}\to \tilde{\chi}^{\pm}_1 b$ through sleptons and sneutrinos from the D0 experiment, and (c) the results in the $m_{{\tilde{\chi}_1^0}}$ vs. $m_{{\tilde{t}}}$ plane from the search for the ${\tilde{t}}\to {\tilde{\chi}_1^0}c$ from the CDF experiment.\[fig\_stop\]](stop_pre_exclsn_jan11.eps "fig:"){height="6.5cm"}\ ![](D0_stop.eps "fig:"){height="5cm"} ![](CDF_stop_jhep.eps "fig:"){height="5.5cm"}

### *Gauginos*

While the squarks and gluinos have the largest production cross sections (at a given mass), it is quite possible (and favored in some scenarios) that their masses are out of reach of the Fermilab Tevatron collider. Thus, a full set of searches for the lighter sparticles, in particular the lightest chargino and the next–to–lightest neutralino, is crucial. The golden final states for gaugino pair production and decay are $\ell\ell\ell+{\mbox{$\not\!\!E_T$}}$ (the trilepton final state)[@PhysRevLett.101.251801; @Aaltonen:2013vca; @Abazov:2009zi; @PhysRevLett.95.151805] and two same–sign leptons + ${\mbox{$\not\!\!E_T$}}$[@PhysRevLett.98.221803; @PhysRevLett.110.201802], as there are very few SM backgrounds for each. Late in Run II many of the searches included $\tau$–lepton final states to extend the searches to higher [$\tan\beta$]{} [@Aaltonen:2013vca]. No evidence of these sparticles has been observed at the CDF or the D0 experiments, with limits shown in Fig. \[fig\_gaugino\]. ![(color online) Excluded region in the $m_{1/2}$ vs. $m_0$ plane in the search for charginos and neutralinos in the final state with three leptons from (a) the CDF experiment and (b) the D0 experiment. \[fig\_gaugino\]](CDF_trilep.eps "fig:"){height="3.97cm"} ![](D0_trilepton.eps "fig:"){height="4cm"}

***Gauge–mediated supersymmetry breaking (GMSB) models*** {#sec_susy2}
---------------------------------------------------------

The hallmark of gauge–mediated supersymmetry breaking models is the light gravitino, ${\tilde{G}}$, and searches for GMSB are usually (but not always) done in the context of minimal models, typically using the SPS–8 relations[@Allanach:2002nj]. This essentially leaves two free theory parameters: the masses (which typically assume fixed mass relations) and the lifetime ($\tau_{{\tilde{\chi}_1^0}}$) of the lightest neutralino, which is the next–to–lightest sparticle (NLSP). Since the parameter space in which squarks and gluinos would be accessible at Tevatron collider energies is easily ruled out (although this was not done explicitly), searches focus on the pair production of the lightest charginos and neutralinos. Through cascade decays, each chargino and/or neutralino typically decays to a ${\tilde{\chi}_1^0}$ or $\tilde{\tau}$ accompanied by other high $p_T$ light SM particles. The ${\tilde{\chi}_1^0}$ typically decays to $\gamma\tilde{G}$ (if its mass is low) or to $Z\tilde{G}$ (if its mass is large); the $\tilde{\tau}$ decays via $\tilde{\tau}\to\tau\tilde{G}$. In all cases, the $\tilde{G}$, like the ${\tilde{\chi}_1^0}$ in mSUGRA models, leaves the detector and gives significant ${\mbox{$\not\!\!E_T$}}$. The lifetimes and masses of the sparticles dictate the different final states. Specifically, in the ${\tilde{\chi}_1^0}\to\gamma\tilde{G}$ scenario with $\tau_{{\tilde{\chi}_1^0}}\leqslant 1$ ns, both ${\tilde{\chi}_1^0}$ decay in the detector, giving a final state of $\gamma\gamma+{\mbox{$\not\!\!E_T$}}+X$. For intermediate lifetimes, $1\leqslant \tau_{{\tilde{\chi}_1^0}}\leqslant 50$ ns, frequently one ${\tilde{\chi}_1^0}$ travels a significant distance in the detector before decaying and the other leaves the detector without decaying or interacting.
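The lifetime regimes quoted above can be made quantitative: a particle with proper lifetime $\tau$ and boost $\gamma\beta$ has a mean lab–frame decay length $\gamma\beta c\tau$, so the probability of decaying within a detector of size $L$ is $1-\exp(-L/\gamma\beta c\tau)$. A minimal sketch, not the experiments' simulation; the 1 m scale and unit boost are illustrative assumptions:

```python
import math

C = 0.3  # speed of light in m/ns

def decay_prob(tau_ns, gamma_beta, radius_m):
    """Probability that a particle with proper lifetime tau_ns (ns)
    and boost gamma*beta decays before traveling radius_m (m)."""
    decay_length = gamma_beta * C * tau_ns  # mean lab-frame decay length in m
    return 1.0 - math.exp(-radius_m / decay_length)

# Illustrative: a moderately boosted neutralino (gamma*beta ~ 1)
# inside an assumed 1 m detector radius.
for tau in (0.1, 1.0, 10.0, 50.0):
    p = decay_prob(tau, 1.0, 1.0)
    print(f"tau = {tau:5.1f} ns -> P(decay in detector) = {p:.3f}")
```

With these toy numbers a sub-ns lifetime gives essentially certain decay inside the detector, while at tens of ns most neutralinos escape, matching the qualitative division into short and intermediate lifetime scenarios.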
If this occurs, the event will be reconstructed as a $\gamma+{\mbox{$\not\!\!E_T$}}$ event, where the time–of–arrival of the photon at the calorimeter will be slightly later than “expected”; these photons are known as “delayed photons”, $\gamma_{\rm{delayed}}$[@PhysRevD.70.114032]. The different lifetime scenarios are considered separately[^6]. For the scenario where the ${\tilde{\chi}_1^0}$ can decay via ${\tilde{\chi}_1^0}\to Z\tilde{G}$, we can have both $ZZ+{\mbox{$\not\!\!E_T$}}$ and $Z\gamma+{\mbox{$\not\!\!E_T$}}$ final states. For decays with $\tilde{\tau}$ sleptons as the intermediate sparticle, we can have multiple $\tau$–leptons and ${\mbox{$\not\!\!E_T$}}$ in the final state. In Run II, the CDF and the D0 experiments did a full suite of searches for GMSB. The short–lifetime searches were done in the $\gamma\gamma+{\mbox{$\not\!\!E_T$}}$ final state [@PhysRevD.71.031104; @PhysRevLett.104.011801; @Abazov:2007ag; @PhysRevLett.94.041801; @PhysRevLett.105.221802] and were a natural follow–up to the searches done in Run I for the [$ee\gamma\gamma{\mbox{$\not\!\!E_T$}}$]{} candidate event. In the scenarios with intermediate $\tau_{{\tilde{\chi}_1^0}}$, the CDF experiment used the electromagnetic calorimeter timing readout system installed in Run II [@2006NIMPA.565..543G], and searches were done in the $\gamma_{\rm{delayed}}+\rm{jet}+{\mbox{$\not\!\!E_T$}}$ final state [@PhysRevD.78.032015; @PhysRevLett.99.121801]. The D0 experiment did the first search in the $Z\gamma+{\mbox{$\not\!\!E_T$}}$ final state[@PhysRevD.86.071701], and the CDF experiment did a search with same–sign $\tau$–leptons + ${\mbox{$\not\!\!E_T$}}$[@PhysRevLett.110.201802]. No evidence was observed and limits are shown in Fig. \[f5\].
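The origin of the delay can be sketched with simple kinematics: the massive neutralino travels to its decay point at speed $\beta c < c$, so the photon it emits arrives later than a prompt photon from the interaction point would. A one–dimensional toy, with the geometry greatly simplified and all numbers hypothetical:

```python
C = 0.3  # speed of light in m/ns

def photon_delay(decay_dist_m, beta, radius_m):
    """Extra arrival time (ns) at a calorimeter at radius_m for a photon
    whose massive parent travels decay_dist_m at speed beta*c before
    decaying, relative to a prompt photon. 1D geometry for illustration."""
    t_parent = decay_dist_m / (beta * C)          # slow leg, parent
    t_photon = (radius_m - decay_dist_m) / C      # fast leg, photon
    t_prompt = radius_m / C                       # prompt photon reference
    return t_parent + t_photon - t_prompt

# Hypothetical: a neutralino with beta = 0.6 decaying 0.5 m from the
# interaction point, calorimeter face assumed at 1.5 m.
print(f"delay = {photon_delay(0.5, 0.6, 1.5):.2f} ns")
```

Delays of order a nanosecond for plausible boosts and decay lengths are what a calorimeter timing readout can resolve, which is the basis of the delayed-photon technique.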
Recently, scenarios with a light neutralino and gravitino (with all other sparticles out of the reach of colliders) have been proposed[@Mason2011377], and a search for this final state was done at the CDF experiment in the exclusive $\gamma_{\rm{delayed}}+{\mbox{$\not\!\!E_T$}}$ final state, with no evidence for new physics (no limits were set)[@PhysRevD.88.031103]. ![(color online) Limits on GMSB scenarios. (a) The 95% C.L. cross section upper limits from the $\gamma\gamma+{\mbox{$\not\!\!E_T$}}$ final state as a function of scale $\Lambda$, $m_{{\tilde{\chi}_1^0}}$ and $m_{\tilde{\chi}^{\pm}_1}$ from the D0 experiment. (b) The excluded regions in the $\tau_{{\tilde{\chi}_1^0}}$ vs. $m_{{\tilde{\chi}_1^0}}$ plane from the CDF experiment from the $\gamma\gamma+{\mbox{$\not\!\!E_T$}}$ and $\gamma_{\rm{delayed}}+{\mbox{$\not\!\!E_T$}}+\rm{jet}$ searches. (c) The excluded region in the $m_{\tilde{\chi}^{\pm}_1}$ vs. $m_{\tilde{l}}$ plane from the CDF experiment in the search with $\tau$–leptons + ${\mbox{$\not\!\!E_T$}}$, and (d) the 95% C.L. cross section upper limit from $Z\gamma+{\mbox{$\not\!\!E_T$}}$ production as a function of $\Lambda$ and $m_{{\tilde{\chi}_1^0}}$ from the D0 experiment. \[f5\]](D0_GMSB.eps "fig:"){height="4.5cm"}![](CDF_GMSBshort_fig3.eps "fig:"){height="4.2cm"} ![](CDF_susyML_tau_fig2.eps "fig:"){height="4.5cm"}![](D0_GMSB_Zmet.eps "fig:"){height="4.5cm"}

***$R$–parity violation*** {#sec_susy3}
--------------------------

While one of the most attractive features of SUSY is its potential to solve the dark matter problem, there is no inherent requirement for $R$–parity to be conserved. If the requirement of $R$–parity conservation is relaxed, then there is a great deal of variety in the new final states allowed from sparticle production. For pragmatic reasons, efforts focused on two different modes. The first was pair production and decay of stops via ${\tilde{t}}\to b\tau$ [@PhysRevLett.101.071802] by the CDF experiment, with limits shown in Fig. \[fig\_rpv\](a). However, with the restriction of $R$–parity removed, sparticles are no longer required to be produced in pairs. An example signal of this type is single sneutrino production with decays via the $\tilde{\nu}\to e\mu, e\tau$ or $\mu\tau$ final states. A variety of searches were done at the CDF[@PhysRevLett.96.211802; @PhysRevLett.105.191801] and the D0[@PhysRevLett.100.241803; @PhysRevLett.105.191802] experiments. No new physics was observed and limits were set, with the results shown in Fig. \[fig\_rpv\](b,c). Other searches for $R$–parity violation (RPV)[@Abazov2006441; @PhysRevLett.97.111801] also did not show any excess. We also note that these same results can be interpreted in terms of other models, for example lepton flavor violating $Z'$ boson production and decay, as described in section \[sec\_reson1\]. ![(color online) Results from $R$–parity violation SUSY searches. (a) The 95% C.L. cross section upper limit on pair production and decay of ${\tilde{t}}\to b\tau$ from the CDF experiment. Cross section limit results on (b) $\tilde{\nu}$ in the $\tilde{\nu}\to e\mu,e\tau,\mu\tau$ final states from the CDF experiment and (c) $\tilde{\nu}\to e\mu$ from the D0 experiment. \[fig\_rpv\]](stop_cs_limits_runII_322ipb_prl_simple.eps "fig:"){height="4.5cm"} ![](CDF_R-par_vio_1.eps "fig:"){width="6cm"} ![](D0_Rparity.eps "fig:"){height="4.5cm"}

**Other BSM Searches** {#sec_nonsusy}
======================

We next present results on both the classic Tevatron searches and a number of new types of searches that originated after the beginning of Run II. These include resonances, hidden–valley model particles, long–lived particles, extra dimensions, dark matter, as well as signature–based and model–independent searches. Many of these models, like those containing an extended Higgs sector, are discussed in more detail in the Higgs boson chapter of this review[@HiggsReview], although some are referenced here for completeness.

***Resonances*** {#sec_reson}
----------------

One of the primary analysis techniques to search for new particles, developed long before the advent of colliders, is to look for resonances in the invariant mass distribution of two final state particles. This method can be used for a large number of different final states, and a signature of this type can arise from new fermions and gauge bosons, excited fermions, leptoquarks, technicolor particles, and other models. These results are presented next.
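The bump hunt works because the invariant mass $M^2=(\sum E)^2-|\sum \vec{p}\,|^2$ of the decay products reconstructs the parent mass regardless of the parent's momentum. A minimal sketch with toy four–vectors (not experimental data; massless daughters assumed):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of two four-vectors given as (E, px, py, pz)."""
    e = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Toy back-to-back electron pair from a heavy resonance at rest:
# each lepton carries E = 120 GeV, so the pair reconstructs to 240 GeV.
p1 = (120.0, 0.0, 0.0, 120.0)
p2 = (120.0, 0.0, 0.0, -120.0)
print(invariant_mass(p1, p2))  # 240.0
```

A new particle then appears as a narrow peak in the distribution of this quantity over many events, on top of a smoothly falling SM background.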
### *New fermions and bosons* {#sec_reson1}

While there are many different models that predict new fermions by extending the number of generations in the SM, the experiments focused on searches for different types of heavy quarks that decay to a vector boson, $V=W,~Z$, and a SM quark. The CDF experiment searched for pair production and decay of fourth generation $b'$ quarks that decay exclusively via $b'\to bZ$[@PhysRevD.76.072006]. The analysis was done in the $\ell\ell+3~\rm{jets}$ final state. No significant excess is observed and $b'$ quarks are excluded for a mass $m_{b'}<268$ GeV at 95% C.L. (see Fig. \[fig-vq\](a)). Another analysis, by the D0 experiment, searches for vector–like quarks, $Q$, in single quark electroweak production in association with SM quarks[@PhysRevLett.106.081801]. At hadron colliders, electroweak production of vector–like quarks can be significant, but depends on $m_Q$ and the coupling strength between the $Q$ and SM quarks, $\tilde{\kappa}_{qQ}$. Single production and decay of $p\bar{p}\to qQ\to q(Vq)$ can produce an excess of events in the $V+2$ jet final state. Limits are set as a function of the various model parameters; for $\tilde{\kappa}_{qQ}=1$ the process $Qq\to Wqq$ is excluded for a mass $m_Q<693$ GeV (see Fig. \[fig-vq\](b)), and the process $Qq\to Zqq$ is excluded for a mass $m_Q<449$ GeV at 95% C.L. Other searches for fourth generation quark pair production include decays to top quarks [@PhysRevLett.104.091801; @PhysRevLett.106.141803; @PhysRevLett.107.082001; @PhysRevLett.108.211805; @PhysRevLett.107.261801; @PhysRevLett.106.191801; @PhysRevLett.100.161803]. These encompass $b'\to tW$, $t'\to Wb$ and $Wq$. Similar searches for a new heavy particle $T$ decaying via $T\to t+X$, where $X$ is an invisible particle, found no evidence of new physics (see Fig. \[fig-vq\](c,d)). ![(color online) (a) The 95% C.L.
cross section upper limits on pair production and decay of the $b'\to Zb$ as a function of $m_{b'}$ from the CDF experiment, (b) the limits on a vector–like quark, $Q\to W+\rm{jet}$, as a function of $m_Q$ and for different coupling strengths with SM quarks, $\tilde{\kappa}_{qQ}$, from the D0 experiment, (c) the limits on the $t'\to Wb$ as a function of $m_{t'}$, and (d) the limits in $m_T$ vs. $m_X$ in the search for a new heavy particle $T$ from the CDF experiment. \[fig-vq\]](CDF_bprime.eps "fig:"){height="5.2cm"} ![](D0_VQ_wj.eps "fig:"){height="4.4cm"} ![](CDF_limit-Wb-PRL-v2.eps "fig:"){height="4.4cm"} ![](CDF_topX.eps "fig:"){height="4.4cm"}

The new gauge bosons predicted in left–right symmetric models ($SU(2)_L\times SU(2)_R$), grand unified theories (e.g. $E_6$), or by the introduction of gauge groups beyond the SM are typically referred to as $W'$ or $Z'$ bosons. Both the CDF and the D0 experiments searched for $W'$ bosons in many different final states, including $W'\to \ell\nu, tb$ and $WZ$. The most common searches are in the $W'\to e\nu$[@PhysRevD.75.091101; @PhysRevLett.100.031804] and $W'\to\mu\nu$[@PhysRevD.83.031102] channels, and no excess of events is observed. With the assumption that the [$W'\rightarrow WZ$]{} mode is suppressed and that any additional generation of fermions can be ignored, the $W'$ boson is excluded for a mass $m_{W'}<1.12$ TeV; the results are shown in Fig. \[fig\_wpenu\](a,b). Additional searches for $W'\to tb$[@PhysRevLett.103.041801; @Abazov:2006aj; @PhysRevLett.100.211803; @Abazov:2011xs] show no hints of new physics (see Fig. \[fig\_wpenu\](c,d)). Searches in the diboson final state are described with other diboson results below. ![(color online) The 95% C.L. cross section upper limit on the $W'\to e\nu$ process as a function of $M_{W'}$ from (a) the D0 and (b) the CDF experiments. The 95% C.L. cross section upper limit on the $W'\to tb$ process (c) as a function of $M_{W'}$ from the D0 experiment, and (d) in $g_{W'}/g_W$ vs. $M_{W'}$ from the CDF experiment. \[fig\_wpenu\]](D0_Wp_lim.eps "fig:"){height="5cm"} ![](CDF_wpenu_fig3.eps "fig:"){height="5.3cm"} ![](D0_wptb.eps "fig:"){height="5cm"} ![](CDF_wptb.eps "fig:"){height="5cm"}

A new $Z'$ boson occurs in theories that extend the SM with an additional $U(1)$ gauge group. The most common analysis is to search for a narrow resonance in the mass distribution for $Z'\to\ell\ell, jj, t\bar{t}$ or $WW$. Both the D0[@Abazov:2010ti] and the CDF[@PhysRevLett.106.121801; @PhysRevLett.102.091805; @PhysRevLett.102.031801] experiments looked for these signatures in dilepton final states. Fig. \[fig\_zp\](a) shows the $M_{ee}$ distribution from the CDF experiment, exhibiting a modest excess of events in data around $M_{Z'}\sim 240$ GeV; if only SM physics is assumed in the search region, this excess has a significance of 2.5 s.d. The D0 experiment did not observe any significant excess, as shown in Fig. \[fig\_zp\](b), and 95% C.L.
upper limits on $\sigma\times BR(p\bar{p}\to Z'\to ee)$ are set for various models, with excluded masses ranging from $M_{Z'}<772$ GeV to $M_{Z'}<1023$ GeV, as shown in Fig. \[fig\_zp\](c). In both $Z'\to \mu\mu$ searches, no significant excess was observed, with limits on production of the $Z'$ boson in various models ranging from $M_{Z'}<817$ GeV to $M_{Z'}<1071$ GeV, as shown in Fig. \[fig\_zp\](d). Other searches, in $Z'\to jj$ and $t\bar{t}$, found no excesses [@PhysRevD.79.112002; @PhysRevLett.110.121802; @Abazov:2008ny; @PhysRevD.85.051101] (see Fig. \[fig\_zp\](e,f)). The decay $Z'\to WW$ is described with the other diboson searches below. Lepton flavor violating searches, for example $Z'\to e\mu, e\tau, \mu\tau$, are typically done in the context of $R$–parity violating SUSY, but have $Z'$ interpretations[@PhysRevLett.96.211802]. ![(color online) The dielectron invariant mass in the search for $Z'\to ee$ from (a) the CDF experiment and (b) the D0 experiment. (c) The 95% C.L. upper limits on the $Z'$ coupling ratio ($g_{Z'}/g_{Z'_{\chi}}$) as a function of $M_{Z'}$ from the D0 experiment. (d) The 95% C.L. upper limits on $\sigma\times{\cal BR}(Z'\to\mu\mu)$ as a function of $M_{Z'}$ from the CDF experiment. The 95% C.L. upper limits on $\sigma\times{\cal BR}(Z'\to tt)$ as a function of $M_{Z'}$ from (e) the D0 experiment and (f) the CDF experiment. \[fig\_zp\]](CDF_zpee_fig1.eps "fig:"){height="5cm"} ![](D0_zpee_fig1.eps "fig:"){height="5.2cm"} ![](D0_zp_lim_2d.eps "fig:"){height="5cm"}![](cdf_zmumu_lim.eps "fig:"){height="5.2cm"} ![](D0_zptt.eps "fig:"){height="5cm"}![](CDF_Mtt_limit_log.eps "fig:"){height="5.2cm"}

The CDF and the D0 experiments also searched for resonances in the $WW$, $WZ$ and $ZZ$ decay modes. While these searches can be analyzed as $Z'\to WW$, $Z'\to ZZ$ and $W'\to WZ$, other interpretations are possible. These are done at the CDF experiment in $V'\to VV\to\ell+{\mbox{$\not\!\!E_T$}}+\rm{jets}$[@PhysRevLett.104.241801], and $X\to ZZ$[@PhysRevD.85.012008] with various $Z$ boson decays. The $X\to ZZ$ search showed a small excess in data in the low–sensitivity four–lepton channel, which was not observed in the other two more sensitive $\ell\ell jj$ and $\ell\ell+{\mbox{$\not\!\!E_T$}}$ final states[@PhysRevD.85.012008]; this result will be interpreted in section \[sec-EDres\]. The D0 experiment[@PhysRevLett.104.061801; @PhysRevLett.107.011801] searched in a combined way for $W'$ decays to one, two or three leptons (assuming $W'\to WZ\to \ell\ell\ell{\mbox{$\not\!\!E_T$}}, \ell\ell jj, \ell\ell{\mbox{$\not\!\!E_T$}}$+jets). In addition to the standard methods, a novel method was used (since adopted at the LHC) to investigate the possibility that the heavy $W'$ or $Z'$ boson is so massive that its decay bosons are highly boosted. For hadronically decaying vector bosons, the two quark jets can then merge and produce a single broad jet with an invariant mass close to the $W$ or $Z$ boson mass. No significant excess was found in either experiment in any of these modes. Limits on the $W'$ boson mass are between 180 GeV and 690 GeV, and on the $Z'$ boson mass between 242 GeV and 544 GeV, as shown in Fig. \[f9\].
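The merging of the two jets can be estimated from the standard rule of thumb that the opening angle of a two–body hadronic decay scales as $\Delta R \approx 2m_V/p_T$. A rough sketch, where the 0.5 merging threshold stands in for a typical jet cone size and is an assumption:

```python
def opening_angle(m_v, pt):
    """Approximate angular separation (Delta R) of the two quarks from a
    hadronic decay of a boson of mass m_v at transverse momentum pt."""
    return 2.0 * m_v / pt

M_W = 80.4   # W boson mass in GeV
CONE = 0.5   # assumed jet cone size below which the two jets merge

for pt in (100.0, 200.0, 400.0, 800.0):
    dr = opening_angle(M_W, pt)
    tag = "merged into one jet" if dr < CONE else "resolved as two jets"
    print(f"pT = {pt:5.0f} GeV -> DeltaR ~ {dr:.2f} ({tag})")
```

With these toy numbers the decay products stay resolved at low boost but collapse into a single broad jet above a few hundred GeV, which is the regime the boosted-boson reconstruction targets.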
![(color online) (a) The 95% C.L. upper limit on $\sigma\times{\cal BR}(W'\to WZ)$ from the D0 experiment and (b) the exclusion region in the $\xi$ vs. $M_{W'}$ plane in the [$W'\rightarrow WZ$]{} search from the CDF experiment, where the parameter $\xi$ sets the coupling strength between the SM $W$ boson and any new $W'$ boson. \[f9\]](D0_wpwz_lim.eps "fig:"){height="4.2cm"}![](CDF_wpwz_limit.eps "fig:"){height="4.cm"}

Finally, the CDF experiment[@PhysRevLett.111.031802] searched for both resonant and non–resonant production of pairs of strongly interacting particles, each of which decays to a pair of jets, $p\bar{p}\to X\to (YY) \to (jj)(jj)$ and $p\bar{p}\to YY \to (jj)(jj)$. This search is particularly sensitive at lower masses, where the LHC experiments expect high background rates. No evidence of new particles is observed and the results are interpreted as an exclusion of the $Y$ particle in both production scenarios, as shown in Fig. \[fig-4jRes\]. These results are directly applicable to axigluon models. ![(color online) The 95% C.L. upper limits on (a) $\sigma(p\bar{p}\to YY\to jjjj)$ as a function of $M_Y$ and (b) $\sigma(p\bar{p}\to X\to YY\to jjjj)$ in the $M_Y$ vs. $M_X$ plane from the CDF experiment. \[fig-4jRes\]](CDF_limit_fourjet_sys4.eps "fig:"){height="4.5cm"} ![](CDF_limits_obs1_2d.eps "fig:"){height="4.5cm"}

### *Excited fermions* {#sec_reson2}

The search for compositeness focuses on excited states of the SM fermions. Excited electrons, excited muons and excited quarks are searched for in the $e^*\to e\gamma$[@PhysRevD.77.091102; @PhysRevLett.94.101802], $\mu^*\to\mu\gamma$[@PhysRevLett.97.191802; @PhysRevD.73.111102], $q^*\to q\gamma$ or $qW$ [@PhysRevLett.72.3004], $q^*\to qZ$[@PhysRevD.74.011104], and $q^*\to qg$ modes[@PhysRevD.79.112002]. Results are interpreted as exclusion limits in a contact interaction model for a mass $m<876(853)$ GeV for the $e^*(\mu^*)$, and in a gauge–mediated model for a mass $m<430(410)$ GeV for the $e^*(\mu^*)$.

### *Leptoquarks* {#sec_reson3}

Leptoquarks can exist as either vectors or scalars and can be produced in either pair production or single production modes. In all cases, the $LQ$ can decay to $\ell q$ or $\nu q'$, where $\ell=e,\mu,\tau$, and the parameter $\beta$ defines the branching fraction for $LQ\to\ell q$. Due to experimental constraints on flavor changing neutral currents[@Beringer:1900zz], it is assumed that $LQ$s only couple to fermions of the same generation. Both the CDF[@Aaltonen:2007rb; @PhysRevD.73.051102; @PhysRevD.72.051107; @PhysRevD.71.112001] and the D0[@PhysRevD.71.071104; @Abazov:2009ab; @PhysRevD.84.071104; @Abazov:2006vc; @Abazov:2006ej; @Abazov:2008np; @PhysRevLett.99.061801; @PhysRevLett.101.241802; @Abazov:2010wq; @Abazov:2006wp; @Abazov:2008at] experiments focused on pair production of leptoquarks of all generations in the $\ell q\ell q$, $\ell q\nu q'$ and $\nu q\nu q$ final states. In all cases, no excesses were observed. Since vector and scalar production are similar, limits can be set on both using the same results, but we begin by reporting the results in the scalar $LQ$ case.
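Since each $LQ$ decays to $\ell q$ with probability $\beta$, the pair–production final states occur in the fractions $\beta^2$ ($\ell q\ell q$), $2\beta(1-\beta)$ ($\ell q\nu q'$) and $(1-\beta)^2$ ($\nu q\nu q'$), which is why the quoted limits depend on $\beta$. A minimal sketch:

```python
def lq_pair_fractions(beta):
    """Final-state fractions for leptoquark pair production, where
    beta is BR(LQ -> lq) and 1 - beta is BR(LQ -> nu q')."""
    return {
        "lq lq": beta ** 2,
        "lq nuq": 2.0 * beta * (1.0 - beta),
        "nuq nuq": (1.0 - beta) ** 2,
    }

# The three benchmark values used in the limits quoted below.
for beta in (0.0, 0.5, 1.0):
    print(beta, lq_pair_fractions(beta))
```

At $\beta=0$ only the $\nu q\nu q$ (jets+${\mbox{$\not\!\!E_T$}}$) channel contributes, at $\beta=1$ only the dilepton channel does, and at $\beta=0.5$ the mixed channel dominates, matching the three benchmark searches.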
For $\beta=0$, when both $LQ$ decay to $\nu q$, it is not possible to distinguish between the search for the first and second generation, and a common limit for a mass $m_{LQ}< 214$ GeV is set; $b$–tagging allows for a specific search for the third generation in $bb+{\mbox{$\not\!\!E_T$}}$, and the limit is $m_{LQ}<247$ GeV. Searches with charged leptons are done in the $\ell+\rm{jets}+{\mbox{$\not\!\!E_T$}}$ and $\ell\ell+\rm{jets}$ final states; the third generation search often requires hadronic decays of the $\tau$–lepton accompanied by $b$–jets. For $\beta=0.5$ the first generation $LQ$ is excluded for a mass $m_{LQ}< 326$ GeV, the second generation for a mass $m_{LQ}<270$ GeV, and the third generation for a mass $m_{LQ}<207$ GeV; for $\beta=1$ the first generation $LQ$ is excluded for a mass $m_{LQ}<199$ GeV, the second generation for a mass $m_{LQ}< 316$ GeV, and the third generation for a mass $m_{LQ}<210$ GeV. Figure \[lq\](a) shows the exclusion region in the $\beta$ vs. $M_{LQ}$ plane in the search for the first generation scalar $LQ$ pairs. Reinterpreting the data in terms of a first generation vector $LQ$ model, Fig. \[lq\](b) shows the limits for three different assumptions about the couplings. For the third generation, with $\beta=1$, vector $LQ$s are excluded for a mass $m_{LQ}<317$ GeV and $m_{LQ}<251$ GeV at 95% C.L. with two different assumptions about couplings[@Aaltonen:2007rb]. ![(color online) (a) The exclusion region in the $\beta$ vs. $M_{LQ}$ plane from the search for first generation scalar $LQ$ pairs. (b) The exclusion regions in the same plane but for first generation vector $LQ$ pairs. Both results are from the D0 experiment.\[lq\]](sLQ_d0.eps "fig:"){height="4.8cm"}![(color online) (a) The exclusion region in the $\beta$ vs. $M_{LQ}$ plane from the search for first generation scalar $LQ$ pairs. (b) The exclusion regions in the same plane but for first generation vector $LQ$ pairs.
Both results are from the D0 experiment.\[lq\]](VLQ_d0.eps "fig:"){height="4.8cm"} ### *Technicolor* {#sec_reson4} Much of technicolor phenomenology is driven by the technicolor strawman model[@Lane:2002sm]. In this model, the most promising signature is the production and decay of a technicolor $\rho_T$, which can decay via $\rho_T\to\pi_T+W\to bbW$ (where $\pi_T$ is a technipion) or $\rho_T\to WZ$ depending on the masses of the particles involved. Other new particles such as a techniomega, $\omega_T$, can be produced and decay via $\omega_T\to\gamma+\pi_T\to\gamma bb$. No evidence for new physics is observed[@PhysRevLett.98.221801; @PhysRevLett.104.111802; @PhysRevLett.104.061801]. Some of the expected and observed 95% C.L. excluded regions are shown in Fig. \[fig-TC\]. ![(color online) Results for technicolor models. (a) The exclusion region in the $M_{\pi_T}$ vs. $M_{\rho_T}$ plane from the $bb+W$ final state from the CDF experiment and (b) the results for the $\rho_T\to WZ$ final state from the D0 experiment. \[fig-TC\]](CDF_technicolor.eps "fig:"){height="4.7cm"}![(color online) Results for technicolor models. (a) The exclusion region in the $M_{\pi_T}$ vs. $M_{\rho_T}$ plane from the $bb+W$ final state from the CDF experiment and (b) the results for the $\rho_T\to WZ$ final state from the D0 experiment. \[fig-TC\]](D0_TC.eps "fig:"){height="4.8cm"} ### *Other resonance searches* {#sec_reson5} We next mention a few resonance searches that do not fall into any of the above categories. The SM predicts the branching ratio $Z\to\pi^0\gamma$ to be between $10^{-12}$ and $10^{-9}$. The CDF experiment[@PhysRevLett.112.111803] searched for this rare process by looking for a narrow resonance with $m\sim 90$ GeV in the $\gamma\gamma$ invariant mass spectrum; this search is extended to include the quantum mechanically forbidden processes $Z\to\gamma\gamma$ and $Z\to\pi^0\pi^0$. No significant excess in the data was found, and 95% C.L.
upper bounds on the branching ratios are determined: ${\cal BR}(Z\to\pi^0\gamma)<2.01\times10^{-5}$, ${\cal BR}(Z\to\gamma\gamma)<1.46\times10^{-5}$, and ${\cal BR}(Z\to\pi^0\pi^0)<1.52\times10^{-5}$, which remain the most stringent in the world. In 2011 the CDF experiment [@PhysRevLett.106.171801] created a world–wide stir when it reported an excess of events in the invariant mass distribution of jet pairs produced in association with a $W$ boson in the leptonic final state. The observed excess, with a dijet invariant mass between 120 and 160 GeV, was at the 3.2 s.d. level, appeared to have a Gaussian shape (as expected from a new particle due to mass resolution effects) and gave a production cross section at the 4 pb level as shown in Fig. \[fig-dijetbump\](a,b). Following this lead, the D0 experiment [@PhysRevLett.107.011804] investigated this final state, but no excess was found as shown in Fig. \[fig-dijetbump\](c); limits were set that excluded new particle production above 1.9 pb. The final resolution of this potential excess came when the CDF experiment published the result in the jets$+{\mbox{$\not\!\!E_T$}}$ final state[@PhysRevD.88.092004] and an updated version [@PhysRevD.89.092001] of the leptonic analysis, where a number of systematic effects were investigated and taken into account, including improved understanding of the detector response to quarks and gluons separately, and modeling of instrumental backgrounds. In these searches there is no indication of an excess and the final results are shown in Fig. \[fig-dijetbump\](d). The 95% C.L. upper limit on the production cross section of the new particle was set at 0.9 pb. This story underscores the need for two experiments.
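The bump-hunt strategy behind these dijet analyses can be illustrated with a toy fit of a Gaussian signal on a falling background. Everything below is invented for illustration: `model` is a hypothetical parametrization, and the numbers are not the CDF/D0 data or fit function.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy dijet-mass spectrum: falling exponential background plus a Gaussian
# "bump", mimicking the shape expected from a new particle after detector
# mass resolution.  All parameter values are illustrative.
def model(m, n_bkg, slope, n_sig, mean, width):
    bkg = n_bkg * np.exp(-m / slope)
    sig = n_sig * np.exp(-0.5 * ((m - mean) / width) ** 2)
    return bkg + sig

m = np.linspace(50, 250, 101)                # bin centres in GeV
truth = (5000.0, 60.0, 150.0, 140.0, 15.0)   # hypothetical true parameters
counts = model(m, *truth)                    # noise-free pseudo-data

# Fit the same shape back to the pseudo-data from a nearby starting point.
popt, _ = curve_fit(model, m, counts, p0=(4000.0, 50.0, 100.0, 130.0, 10.0))
print("fitted bump mean = %.1f GeV, width = %.1f GeV" % (popt[3], abs(popt[4])))
```

In a real analysis the background shape is taken from simulation or sidebands and the signal yield is converted into a cross section; the toy only shows why a resolution-smeared resonance appears as a Gaussian component on the smooth spectrum.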
![(color online) The dijet invariant mass in $W$+dijet events in the $\ell+{\mbox{$\not\!\!E_T$}}$+2jet final state: (a) the data plotted on top of the known SM processes showing an excess around 140 GeV, and (b) the same data with the SM backgrounds (except $WW$ and $WZ$) subtracted, and using a Gaussian component fit for the excess region from the CDF experiment. (c) The results in the same final state from the D0 experiment. (d) The updated CDF analysis showing that the original excess was due to detector and analysis effects. \[fig-dijetbump\]](CDF_fit_nogauss.eps "fig:"){height="5.2cm"} ![(color online) The dijet invariant mass in $W$+dijet events in the $\ell+{\mbox{$\not\!\!E_T$}}$+2jet final state: (a) the data plotted on top of the known SM processes showing an excess around 140 GeV, and (b) the same data with the SM backgrounds (except $WW$ and $WZ$) subtracted, and using a Gaussian component fit for the excess region from the CDF experiment. (c) The results in the same final state from the D0 experiment. (d) The updated CDF analysis showing that the original excess was due to detector and analysis effects. \[fig-dijetbump\]](CDF_bkg_gauss.eps "fig:"){height="5.2cm"} ![(color online) The dijet invariant mass in $W$+dijet events in the $\ell+{\mbox{$\not\!\!E_T$}}$+2jet final state: (a) the data plotted on top of the known SM processes showing an excess around 140 GeV, and (b) the same data with the SM backgrounds (except $WW$ and $WZ$) subtracted, and using a Gaussian component fit for the excess region from the CDF experiment. (c) The results in the same final state from the D0 experiment. (d) The updated CDF analysis showing that the original excess was due to detector and analysis effects. 
\[fig-dijetbump\]](D0_dijetbump.eps "fig:"){height="4.2cm"} ![(color online) The dijet invariant mass in $W$+dijet events in the $\ell+{\mbox{$\not\!\!E_T$}}$+2jet final state: (a) the data plotted on top of the known SM processes showing an excess around 140 GeV, and (b) the same data with the SM backgrounds (except $WW$ and $WZ$) subtracted, and using a Gaussian component fit for the excess region from the CDF experiment. (c) The results in the same final state from the D0 experiment. (d) The updated CDF analysis showing that the original excess was due to detector and analysis effects. \[fig-dijetbump\]](CDF_dijetbump_fin.eps "fig:"){height="4.3cm"} ***Hidden–valley models, CHAMPS and other long–lived particles*** {#sec_LL} ----------------------------------------------------------------- There are many different types of long–lived particles predicted in new models. A few have already been described in the GMSB section, but there are others such as hidden–valley model particles, CHAMPS (typically in SUSY models), monopoles, stopped gluinos and quirks which are described next. ### *Hidden–valley/dark photons* {#sec_LL1} Hidden valley (HV) models provide a framework for studying the phenomenology of secluded sectors, but make no specific predictions. The D0 experiment performed a variety of searches. The first analysis[@PhysRevLett.103.071801] searched for Higgs boson production and decay into a pair of neutral long–lived HV particles that each decay to a $b\bar{b}$ pair. The search is for pairs of very–displaced vertices in the tracking detector, with radii in the range 1.6–20 cm from the beam axis. No excess is found and limits are set as shown in Fig. \[fig\_hv\](a). HV models can also include SUSY.
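The acceptance of a displaced-vertex selection like the one above depends strongly on the decay length: for an exponential decay law with mean lab-frame decay length $L=\beta\gamma c\tau$, the fraction of particles decaying inside a fiducial region $[r_1,r_2]$ is $e^{-r_1/L}-e^{-r_2/L}$. A hedged sketch (the function name is illustrative; the 1.6–20 cm defaults are the radii quoted above):

```python
import math

def frac_in_fiducial(L, r1=1.6, r2=20.0):
    """Fraction of particles with mean lab decay length L (cm) that decay
    between radii r1 and r2 (cm), assuming an exponential decay law."""
    return math.exp(-r1 / L) - math.exp(-r2 / L)

for L in (1.0, 5.0, 20.0, 100.0):
    print("L = %6.1f cm -> fiducial fraction = %.3f" % (L, frac_in_fiducial(L)))
```

The fraction peaks for decay lengths comparable to the fiducial radii and falls off for both very prompt and very long-lived particles, which is why such limits are quoted as a function of the decay length.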
In one search, gaugino pairs can be produced and decay into HV particles, in particular a new light gauge boson (known as a dark photon) which in turn decays to fermion pairs, and HV (or dark) neutralinos which escape the detector and produce large ${\mbox{$\not\!\!E_T$}}$. This final state includes a photon, two spatially close leptons and large ${\mbox{$\not\!\!E_T$}}$. Since there is no evidence of dark photons, limits are set [@PhysRevLett.103.081802] and the results are shown in Fig. \[fig\_hv\](b). A complementary search for gaugino pair production is done by searching for a pair of dark photons, a pair of dark neutralinos and other SM particles in the final state. These events have the unique final state of a pair of isolated “jets” of charged leptons, so–called leptonic jets, produced in association with a large amount of ${\mbox{$\not\!\!E_T$}}$. Again, no evidence was found [@PhysRevLett.105.211802] and limits are shown in Fig. \[fig\_hv\](c). Finally, searches can be done for long–lived particles in HV with pairs of electrons or photons in the final state with results from $b'\to Zq\to eeq$[@PhysRevLett.101.111802] shown in Fig. \[fig\_hv\](d). ![(color online) Results from searches for the new particles from hidden–valley models from the D0 experiment. (a) The 95% C.L. upper limits on the $\sigma\times{\cal BR}$ of the $H+X\to HV HV+X\to b\bar{b} b\bar{b}+X$ as a function of the decay length. (b) The excluded region in the chargino mass vs. the ${\cal BR}$ of the ${\tilde{\chi}_1^0}$ into a photon for different dark photon masses from the search with a photon in final state. (c) The 95% C.L. upper limit on $\sigma$ as a function of the dark photon mass in the search with leptonic–jets in final state. (d) The excluded region in the $c\tau_{b'}$ vs. $m_{b'}$ plane in the search for long–lived $b'\to Zq\to eeq$. \[fig\_hv\]](D0_ll_bb_3.eps "fig:"){height="4.3cm"} ![(color online) Results from searches for the new particles from hidden–valley models from the D0 experiment. (a) The 95% C.L.
upper limits on the $\sigma\times{\cal BR}$ of the $H+X\to HV HV+X\to b\bar{b} b\bar{b}+X$ as a function of the decay length. (b) The excluded region in the chargino mass vs. the ${\cal BR}$ of the ${\tilde{\chi}_1^0}$ into a photon for different dark photon masses from the search with a photon in final state. (c) The 95% C.L. upper limit on $\sigma$ as a function of the dark photon mass in the search with leptonic–jets in final state. (d) The excluded region in the $c\tau_{b'}$ vs. $m_{b'}$ plane in the search for long–lived $b'\to Zq\to eeq$. \[fig\_hv\]](D0_darkphoton.eps "fig:"){height="4.3cm"} ![(color online) Results from searches for the new particles from hidden–valley models from the D0 experiment. (a) The 95% C.L. upper limits on the $\sigma\times{\cal BR}$ of the $H+X\to HV HV+X\to b\bar{b} b\bar{b}+X$ as a function of the decay length. (b) The excluded region in the chargino mass vs. the ${\cal BR}$ of the ${\tilde{\chi}_1^0}$ into a photon for different dark photon masses from the search with a photon in final state. (c) The 95% C.L. upper limit on $\sigma$ as a function of the dark photon mass in the search with leptonic–jets in final state. (d) The excluded region in the $c\tau_{b'}$ vs. $m_{b'}$ plane in the search for long–lived $b'\to Zq\to eeq$. \[fig\_hv\]](D0_lepjets.eps "fig:"){height="4.25cm"} ![(color online) Results from searches for the new particles from hidden–valley models from the D0 experiment. (a) The 95% C.L. upper limits on the $\sigma\times{\cal BR}$ of the $H+X\to HV HV+X\to b\bar{b} b\bar{b}+X$ as a function of the decay length. (b) The excluded region in the chargino mass vs. the ${\cal BR}$ of the ${\tilde{\chi}_1^0}$ into a photon for different dark photon masses from the search with a photon in final state. (c) The 95% C.L. upper limit on $\sigma$ as a function of the dark photon mass in the search with leptonic–jets in final state. (d) The excluded region in the $c\tau_{b'}$ vs.
$m_{b'}$ plane in the search for long–lived $b'\to Zq\to eeq$. \[fig\_hv\]](D0_HV_ll.eps "fig:"){height="4.25cm"} ### *Charged massive stable particles* {#sec_LL2} Searches for CHAMPS are typically done by examining events for the presence of a single charged particle that behaves like a “heavy muon” in that it only interacts as a minimum ionizing particle as it traverses the detector. These particles can be produced directly (often as pairs) or as decay products of other particles. Both the CDF[@PhysRevLett.103.021802] and the D0 experiments found no evidence for CHAMPS. Results are typically interpreted in SUSY models with limits as shown in Fig. \[fig\_ch1\](a,b) where the production mechanisms are gaugino–like, stop–like, or $\tilde{\tau}$–like CHAMPS. ![(color online) Results for long–lived particle searches. The 95% C.L. cross section upper limits for (a) gaugino–like chargino CHAMPS from the D0 experiment and for (b) top–squark CHAMPS from the CDF experiment. (c) The 95% C.L. cross section upper limit as a function of magnetic monopole mass from the CDF experiment. \[fig\_ch1\]](D0_comb_champ_1.eps "fig:"){height="5.2cm"} ![(color online) Results for long–lived particle searches. The 95% C.L. cross section upper limits for (a) gaugino–like chargino CHAMPS from the D0 experiment and for (b) top–squark CHAMPS from the CDF experiment. (c) The 95% C.L. cross section upper limit as a function of magnetic monopole mass from the CDF experiment. \[fig\_ch1\]](cdf_champ.eps "fig:"){height="4.8cm"} ![(color online) Results for long–lived particle searches. The 95% C.L. cross section upper limits for (a) gaugino–like chargino CHAMPS from the D0 experiment and for (b) top–squark CHAMPS from the CDF experiment. (c) The 95% C.L. cross section upper limit as a function of magnetic monopole mass from the CDF experiment.
\[fig\_ch1\]](CDF_monopole.eps "fig:"){height="5.2cm"} ### *Other searches for long–lived particles – monopoles, stopped gluinos and quirks* Pairs of Dirac magnetic monopoles, if they exist in nature, are predicted to be directly produced in collisions. Because of their large mass and magnetic charge they will move differently through the magnetic field and can be identified by their late time–of–arrival at the outer parts of the detector. Searches for monopoles in Run II have been done at the CDF detector [@PhysRevLett.96.201801] with no evidence of new production. Monopoles, assuming simple models of Drell–Yan style production, are excluded at 95% C.L. for masses smaller than 360 GeV (see Fig. \[fig\_ch1\](c)). Searches for stopped gluinos are done by looking for $R$–hadrons that get trapped in the calorimeter. They can then decay up to 100 hours after their production. The search is done at the D0 experiment by looking for deposits of energy in the calorimeter which are not synchronized with an accelerator bunch crossing. Results[@PhysRevLett.99.131801] are shown in Fig. \[fig-Quirk\](a). The Fermilab neutrino experiment NuTeV observed an excess of dimuon events [@Adams:2001ska] that could be interpreted as SUSY models with R–parity violation [@Martin:1997ns] or HV models [@Strassler:2006im]. A follow–up analysis at the D0 experiment searched for pair production of neutral particles, each travelling for at least 5 cm before decaying into a pair of muons[@PhysRevLett.97.161802]. No evidence is found and limits are set with results shown in Fig. \[fig-Quirk\](b). New particles known as quirks, $Q$, which interact strongly under their own $SU(N)$ force, can be pair produced at hadron colliders if they also carry SM charges. In addition to the quirk mass, the strength of the new $SU(N)$ gauge coupling, infracolor (which becomes strong at the scale $\Lambda$), is important phenomenologically.
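To get a feel for the scales involved when $\Lambda\ll m_Q$: the quirk pair is bound by an infracolor string whose characteristic length scales as $m_Q/\Lambda^2$ in natural units, so converting with $\hbar c\approx 0.197$ GeV fm shows how the string becomes macroscopic for a weak infracolor scale. The parameter values in this sketch are purely illustrative:

```python
HBARC_GEV_FM = 0.19733  # hbar*c in GeV*fm, used to convert 1/GeV to fm

def string_length_m(m_Q_GeV, Lambda_GeV):
    """Characteristic quirk-string length ~ m_Q / Lambda^2, in metres."""
    length_fm = (m_Q_GeV / Lambda_GeV ** 2) * HBARC_GEV_FM
    return length_fm * 1e-15  # 1 fm = 1e-15 m

# Illustrative values only: a 500 GeV quirk with Lambda of 1 MeV, 100 keV, 10 keV.
for lam in (1e-3, 1e-4, 1e-5):
    print("Lambda = %g GeV -> string length ~ %.2e m"
          % (lam, string_length_m(500.0, lam)))
```

Lowering $\Lambda$ by a factor of ten lengthens the string a hundredfold, taking it from sub-micron to millimetre scales in this toy range, which is what makes the detector-scale signature described next possible.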
In the case where $\Lambda \ll m_Q\simeq 0.1 - 1$ TeV, breaking of the infracolor string is exponentially suppressed due to the large value of the ratio $m_Q/\Lambda$, and the quirk–antiquirk pair stays connected by the infracolor string like a rubber band that can stretch to macroscopic length proportional to $m_Q/\Lambda^2$. The D0 experiment [@PhysRevLett.105.211803] searched for cases where the extra gauge group is $SU(2), SU(3)$ or $SU(5)$. In these scenarios, the signature is unusual: the individual quirks ionize atoms in the tracking chamber, but the macroscopic separation between the quirk and anti-quirk produces a neutral object that does not change direction as it traverses the detector. Thus, it can be reconstructed as a slow, highly ionizing, high $p_T$ track that decays after a few cm. Since there could be a high $p_T$ jet from initial state radiation, the signature will consist of this special type of track, one jet, and large ${\mbox{$\not\!\!E_T$}}$ aligned with the track. No evidence for quirks is found and the results are shown in Fig. \[fig-Quirk\](c). ![(color online) (a) The 95% C.L. cross section upper limits on stopped gluinos with the assumption that ${\cal BR}(\tilde{g}\to g{\tilde{\chi}_1^0})=1$ from the D0 experiment. (b) The 95% C.L. upper limits on the $\sigma\times{\cal BR}$ of pair production of neutral long–lived particles decaying to pairs of muons as a function of their lifetime. (c) The 95% C.L. cross section upper limits for a pair of quirks and a jet (from initial state radiation) as a function of the mass of the quirk. \[fig-Quirk\]](d0_stopglu.eps "fig:"){height="4.3cm"} ![(color online) (a) The 95% C.L. cross section upper limits on stopped gluinos with the assumption that ${\cal BR}(\tilde{g}\to g{\tilde{\chi}_1^0})=1$ from the D0 experiment. (b) The 95% C.L. upper limits on the $\sigma\times{\cal BR}$ of pair production of neutral long–lived particles decaying to pairs of muons as a function of their lifetime. (c) The 95% C.L.
cross section upper limits for a pair of quirks and a jet (from initial state radiation) as a function of the mass of the quirk. \[fig-Quirk\]](D0_CH_1.eps "fig:"){height="4.1cm"} ![(color online) (a) The 95% C.L. cross section upper limits on stopped gluinos with the assumption that ${\cal BR}(\tilde{g}\to g{\tilde{\chi}_1^0})=1$ from the D0 experiment. (b) The 95% C.L. upper limits on the $\sigma\times{\cal BR}$ of pair production of neutral long–lived particles decaying to pairs of muons as a function of their lifetime. (c) The 95% C.L. cross section upper limits for a pair of quirks and a jet (from initial state radiation) as a function of the mass of the quirk. \[fig-Quirk\]](d0_quirks.eps "fig:"){height="4.3cm"} ***Extra dimensions and dark matter*** {#sec-EDres} -------------------------------------- While many models of extra dimensions are constrained by experiment[@Beringer:1900zz], there is significant room to allow the possibility of new particles and interactions. On the one hand, many LED and UED models predict new particles, which will “leave” the detector without interacting, or they will interfere with SM processes in various final states; evidence for the models in the former case would show up in ways that are similar to dark matter searches. On the other hand, excited KK modes of the graviton which are localized on the SM brane, spin–2 particles $G^*$, could produce resonances in $ee$, $\gamma\gamma$, $WW$ and $ZZ$ final states that are readily searched for. The D0 and the CDF experiments searched for extra dimensions in a number of ways. We begin with a description of the searches for LED models in which Kaluza–Klein (KK) gravitons are directly produced but immediately disappear. In this case the gravitons are often produced with high transverse momentum and in association with a quark, a gluon, or a photon, giving rise to either monojet or monophoton final states with large ${\mbox{$\not\!\!E_T$}}$ due to the escaping graviton.
No evidence of new physics is observed [@PhysRevLett.101.011601; @PhysRevLett.101.181602; @PhysRevLett.97.171802] and results in the $M_D$ vs. $N_D$ plane, where $M_D$ is the fundamental Planck scale in the $(4 + n)$–dimensional space–time and $N_D$ is the number of extra dimensions, are shown in Fig. \[fig-ED\](a). Other models indicate that evidence can be inferred in fermion and/or boson final states from the interference between KK gravitons and SM diagram terms. The D0 experiment [@PhysRevLett.102.051601; @PhysRevLett.95.161602] investigated these signatures in the $ee$, $\gamma\gamma$ and $\mu\mu$ final states by searching for deviations in the correlation between the invariant mass and the angular distribution of the pairs from SM–only predictions. Results from the $ee$ and $\gamma\gamma$ search are shown in Fig. \[fig-ED\](b). The searches for UED processes typically focus on the production and decay of KK particles, denoted here with a \*. Typically production begins with KK gluons ($g^*$) or quarks ($q^*$), which decay via $q^* \rightarrow qZ^* \rightarrow q (l l^*) \rightarrow ql (l \gamma^*)$. In the case with only one extra dimension, minimal UED (mUED), the $\gamma^*$ is stable and is a dark matter candidate. This can result in a final state which includes two leptons (same–sign or opposite–sign) as well as a SM jet and ${\mbox{$\not\!\!E_T$}}$. The D0 experiment [@PhysRevLett.108.131802] searched for mUED in the same–sign lepton final state and excluded $R_c^{-1}$ up to 260 GeV, where $R_c$ is the radius of the compact dimension. This limit corresponds to a mass of 317 GeV of the lightest KK quark. When additional extra dimensions exist the $\gamma^*$ decays via $\gamma^*\to\gamma+G$, where $G$ is the graviton, yielding the $\gamma\gamma+{\mbox{$\not\!\!E_T$}}$ final state from pair production of $\gamma^*$.
For the D0 search for UED [@PhysRevLett.105.221802], a model with six extra dimensions, a fundamental Planck scale of 5 TeV, and ${\cal BR}(\gamma^*\to\gamma G)\approx 1$ yielded a limit of $R_c^{-1}<477$ GeV along with the other results shown in Fig. \[fig-ED\](c). ![(color online) Limits on extra dimensions from the CDF and D0 experiments. (a) The excluded region in the $M_D$ vs. $N_D$ plane in a search for LED from the combined monophoton and monojet final states from the CDF experiment. (b) The limits from the $ee$ and $\gamma\gamma$ final states from the D0 experiment. (c) The 95% C.L. cross section upper limits for a UED model in the $\gamma\gamma+{\mbox{$\not\!\!E_T$}}$ final state from the D0 experiment. \[fig-ED\]](CDF_LED.eps "fig:"){height="4.4cm"} ![(color online) Limits on extra dimensions from the CDF and D0 experiments. (a) The excluded region in the $M_D$ vs. $N_D$ plane in a search for LED from the combined monophoton and monojet final states from the CDF experiment. (b) The limits from the $ee$ and $\gamma\gamma$ final states from the D0 experiment. (c) The 95% C.L. cross section upper limits for a UED model in the $\gamma\gamma+{\mbox{$\not\!\!E_T$}}$ final state from the D0 experiment. \[fig-ED\]](D0_LED_diem.eps "fig:"){height="4.3cm"} ![(color online) Limits on extra dimensions from the CDF and D0 experiments. (a) The excluded region in the $M_D$ vs. $N_D$ plane in a search for LED from the combined monophoton and monojet final states from the CDF experiment. (b) The limits from the $ee$ and $\gamma\gamma$ final states from the D0 experiment. (c) The 95% C.L. cross section upper limits for a UED model in the $\gamma\gamma+{\mbox{$\not\!\!E_T$}}$ final state from the D0 experiment. \[fig-ED\]](D0_ued_1.eps "fig:"){height="4.3cm"} Excited KK modes of the graviton, $G^*$, which are localized on the SM brane, are predicted in Randall–Sundrum (RS) models with a warped spacetime metric.
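RS graviton couplings are conventionally quoted in units of the reduced Planck scale $\overline{M}_{PL}=M_{PL}/\sqrt{8\pi}\approx 2.4\times10^{18}$ GeV. A one-line numerical check (the Planck-mass value below is the standard one, quoted here only for illustration):

```python
import math

M_PL = 1.220890e19  # Planck mass in GeV
M_PL_REDUCED = M_PL / math.sqrt(8 * math.pi)  # reduced Planck scale
print("reduced Planck scale ~ %.3e GeV" % M_PL_REDUCED)
```

A dimensionless coupling of $k/\overline{M}_{PL}=0.1$ therefore corresponds to a curvature scale $k$ of a few times $10^{17}$ GeV.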
Two parameters determine graviton couplings and widths: the constant $k/\overline{M}_{PL}$, where $k$ is the curvature scale of the extra dimension and $\overline{M}_{PL}= M_{PL}/\sqrt{8\pi}$ is the reduced Planck scale, and the mass of the first graviton excitation, $M_1$. The D0 [@PhysRevLett.95.091801; @PhysRevLett.100.091802; @PhysRevLett.104.241802] and the CDF experiments [@PhysRevLett.107.051801; @PhysRevLett.102.031801; @PhysRevD.83.011102] searched for evidence of single RS graviton production and decay via $G^*\to \ell\ell$ or $VV$. No significant excess of events in the dilepton ($e$ or $\mu$) or $\gamma\gamma$ final states was found and limits are shown in Fig. \[f8\](a,b). The results from the searches for the other diboson resonances described in the previous section were also interpreted as limits on RS graviton production [@PhysRevD.85.012008; @PhysRevD.83.112008; @PhysRevLett.104.241801; @PhysRevLett.107.011801]. Limits from $G^*\to WW$ are set for a mass $m_{G^*}<754$ GeV when $k/\overline{M}_{PL}=0.1$. In $G^*\to ZZ$ an excess of events was observed in the low–yield four–lepton channel at $M_{G^*}=327$ GeV, but it was not confirmed in the more sensitive searches in the $\ell\ell jj$ and $\ell\ell{\mbox{$\not\!\!E_T$}}$ final states[@PhysRevD.85.012008]. Figure \[f8\](c) shows the results. ![(color online) The 95% C.L. excluded region in the $k/\overline{M}_{PL}$ vs. $M_G$ plane from a search for RS gravitons in the $ee$ and $\gamma\gamma$ final states from (a) the D0 and (b) the CDF experiments. (c) The 95% C.L. upper limits on the cross section of the $G^*\to ZZ$ as a function of the $M_{G^*}$ from the CDF experiment. \[f8\]](D0_geegg.eps "fig:"){height="4.2cm"}![(color online) The 95% C.L. excluded region in the $k/\overline{M}_{PL}$ vs. $M_G$ plane from a search for RS gravitons in the $ee$ and $\gamma\gamma$ final states from (a) the D0 and (b) the CDF experiments. (c) The 95% C.L.
upper limits on the cross section of the $G^*\to ZZ$ as a function of the $M_{G^*}$ from the CDF experiment. \[f8\]](cdf_grav_eegg.eps "fig:"){height="4.3cm"} ![(color online) The 95% C.L. excluded region in the $k/\overline{M}_{PL}$ vs. $M_G$ plane from a search for RS gravitons in the $ee$ and $\gamma\gamma$ final states from (a) the D0 and (b) the CDF experiments. (c) The 95% C.L. upper limits on the cross section of the $G^*\to ZZ$ as a function of the $M_{G^*}$ from the CDF experiment. \[f8\]](CDF_grav_zz.eps "fig:"){height="5.5cm"} There are many ways to search for dark matter (DM) in high energy collisions depending on the potential production model. SUSY models, where the DM is the LSP and is produced in the cascade decays of other sparticles, were described in section \[sec\_susy\]. However, direct production is possible and can be observed if the DM particles are produced in association with a high energy photon or jet from initial state radiation. The process $p\bar{p}\to DM+DM+\rm{jet}\to\rm{jet}+{\mbox{$\not\!\!E_T$}}$ was investigated at the CDF experiment[@PhysRevLett.108.211804]. No significant deviations are found and the 90% C.L. cross section upper limits are set and converted into constraints on the DM–nucleon cross section. These results are shown together with several direct detection results in Fig. \[fig-DM\]. ![(color online) The results on the DM–nucleon scattering from the CDF experiment, done at 90% C.L., compared to the other direct dark matter experiments. For a detailed description, see Ref. . \[fig-DM\]](CDF_dm_limitsA.eps "fig:"){height="3.85cm"} ![(color online) The results on the DM–nucleon scattering from the CDF experiment, done at 90% C.L., compared to the other direct dark matter experiments. For a detailed description, see Ref. .
\[fig-DM\]](CDF_dm_limitsB.eps "fig:"){height="3.85cm"} ***Signature–based searches and model–independent searches*** {#sec_sbmi} ------------------------------------------------------------- Following the early development of signature–based searches in Run I, as described in section \[sec\_sleuth\], both the CDF and the D0 experiments did model–independent searches for new physics looking for discrepancies between data and SM predictions in events characterized by high transverse momentum. These were done using the [sleuth]{}, [bump hunter]{} and [vista]{} programs [@PhysRevD.78.012002; @PhysRevD.79.011101] at the CDF experiment and similar methods at the D0 experiment [@Abazov:2011ma]. Despite the huge number of final states considered ([sleuth]{} considered 399 final states, [bump hunter]{} 5036, and [vista]{} 19650), no true anomalies emerged (although the methods did serve to improve the MC simulation when discrepancies were noticed). The most discrepant final state contained $e{\mbox{$\not\!\!E_T$}}+b$, but was found to be consistent when taking into account the trials factor. In addition, the CDF experiment searched for new physics in a number of dedicated signature–based searches, specifically: (i) $\gamma\gamma$, $\ell\gamma+{\mbox{$\not\!\!E_T$}}$ and $\ell\ell\gamma$ events[@PhysRevD.75.112001; @PhysRevD.82.052005], where the famous [$ee\gamma\gamma{\mbox{$\not\!\!E_T$}}$]{} event from Run I would have been confirmed; (ii) the $\gamma+\rm{jet}+b+{\mbox{$\not\!\!E_T$}}$ final state[@PhysRevD.80.052003]; (iii) two jets and large ${\mbox{$\not\!\!E_T$}}$ events[@PhysRevLett.105.131801]; (iv) $ZZ+{\mbox{$\not\!\!E_T$}}\to\ell\ell qq+{\mbox{$\not\!\!E_T$}}$ events[@PhysRevD.85.011104]; and (v) $p\bar{p} \to (3jets)(3jets)$ [@PhysRevLett.107.042001]. In all of these searches data agreed with the SM prediction, and no new physics was found.
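The trials-factor correction mentioned above can be made concrete: under the simplifying (and only approximate) assumption of independent final states, a local $p$-value $p$ examined in $n$ places has global probability $1-(1-p)^n$ of fluctuating somewhere. A sketch using the [sleuth]{} and [bump hunter]{} state counts quoted above (the local $p$-value chosen is illustrative):

```python
def global_p(p_local, n_trials):
    """Naive trials-factor correction assuming n independent final states."""
    return 1.0 - (1.0 - p_local) ** n_trials

# A ~3 sigma local fluctuation (two-sided p ~ 2.7e-3) looked at across
# hundreds or thousands of final states is unremarkable:
p_local = 2.7e-3
for n in (1, 399, 5036):
    print("n = %5d final states -> global p = %.3f" % (n, global_p(p_local, n)))
```

This is why an apparently striking deviation in one of hundreds of channels, like the $e{\mbox{$\not\!\!E_T$}}+b$ final state, can be fully consistent with the SM once the number of places one looked is accounted for.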
**Summary and Conclusions** {#sec_sum} =========================== The legacy of the Fermilab Tevatron collider experiments in searches for new particles and interactions is a powerful and glorious one. The searches for new particles, such as SUSY, new fermions and bosons, excited fermions, leptoquarks, technicolor, hidden–valley model particles, long–lived particles, extra dimensions, dark matter particles, and a host of other interesting signatures were broad and deep, and produced interesting hints that changed the way we look at searches today. Indeed many new theoretical models and experimental techniques came into favor because of the CDF and D0 experiments, and are followed closely by the LHC which has taken over the high energy frontier. In its time the Tevatron, probing both long–established models and important new ones that cropped up, quickly responded to the best ideas in the field and provided inspiration for new theoretical ideas. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Ray Culbertson, Michael Eads, Paul Grannis, Oscar Gonzalez Lopez, and Stephen Mrenna for useful comments and discussions. We thank the Fermilab staff and technical staffs of the participating institutions for their vital contributions. We acknowledge support from the DOE and NSF (USA), ARC (Australia), CNPq, FAPERJ, FAPESP and FUNDUNESP (Brazil), NSERC (Canada), NSC, CAS and CNSF (China), Colciencias (Colombia), MSMT and GACR (Czech Republic), the Academy of Finland, CEA and CNRS/IN2P3 (France), BMBF and DFG (Germany), DAE and DST (India), SFI (Ireland), INFN (Italy), MEXT (Japan), the Korean World Class University Program and NRF (Korea), CONACyT (Mexico), FOM (Netherlands), MON, NRC KI and RFBR (Russia), the Slovak R&D Agency, the Ministerio de Ciencia e Innovación, and Programa Consolider–Ingenio 2010 (Spain), The Swedish Research Council (Sweden), SNSF (Switzerland), STFC and the Royal Society (United Kingdom), the A.P.
Sloan Foundation (USA), and the EU community Marie Curie Fellowship contract 302103. One author (L. Z.) is supported by the Serbian Ministry of Education, Science and Technological Development, project 171004. One author (D. T.) is supported by the Mitchell Institute for Fundamental Physics and Astronomy. [^1]: Note that for simplicity, when we say we are looking for a model, like SUSY, we are looking for evidence of new particles and/or interactions. [^2]: $R=(-1)^{3B+L+2S}$, where $B$ is baryon number, $S$ is spin and $L$ is lepton number. [^3]: More details can be found in the top quark chapter of this review[@TopReview]. [^4]: The four mSUGRA parameters are: (i) $m_0$ – common mass parameter of scalars (squarks, sleptons, Higgs bosons) at the GUT scale; (ii) $m_{1/2}$ – common mass of gauginos at the GUT scale; (iii) $A_0$ – common trilinear coupling; and (iv) [$\tan\beta$]{} – ratio of Higgs boson vacuum expectation values. [^5]: $\rm{sign}(\mu)=\pm1$ – sign of the SUSY–conserving higgsino mass parameter $\mu$. [^6]: For large lifetimes, both neutralinos can leave the detector, making such scenarios indistinguishable from mSUGRA.
--- abstract: 'A non-degenerate toric variety $X$ is called $S$-homogeneous if the subgroup of the automorphism group $\operatorname{Aut}(X)$ generated by root subgroups acts on $X$ transitively. We prove that maximal $S$-homogeneous toric varieties are in bijection with pairs $(P,\AAA)$, where $P$ is an abelian group and $\AAA$ is a finite collection of elements in $P$ such that $\AAA$ generates the group $P$ and for every $a\in\AAA$ the element $a$ is contained in the semigroup generated by $\AAA\setminus\{a\}$. We show that any non-degenerate homogeneous toric variety is a big open toric subset of a maximal $S$-homogeneous toric variety. In particular, every homogeneous toric variety is quasiprojective. We conjecture that any non-degenerate homogeneous toric variety is $S$-homogeneous.' address: 'National Research University Higher School of Economics, Faculty of Computer Science, Kochnovskiy Proezd 3, Moscow, 125319 Russia' author: - Ivan Arzhantsev title: Gale duality and homogeneous toric varieties --- [^1] Introduction ============ Let $X$ be an irreducible algebraic variety over an algebraically closed field $\KK$ of characteristic zero. The variety $X$ is said to be *toric* if $X$ is normal and admits an effective action $T\times X\to X$ of an algebraic torus $T$ with an open orbit. It is well known that toric varieties $X$ are characterized by fans of polyhedral cones $\Sigma_X$ in the vector space $N\otimes_{\ZZ}\QQ$, where $N$ is the lattice of one-parameter subgroups in $T$, see [@De; @Oda; @Fu; @CLS]. The linear Gale duality as defined in [@OP; @BH] (see also [@ADHL Section 2.2]) provides an alternative combinatorial language for toric varieties. It is developed systematically in [@BH] under the name of bunches of cones in the divisor class group. As is mentioned in the Introduction to [@BH], this approach gives a very natural description of geometric phenomena connected with divisors.
The aim of the present paper is to show that this language is natural also for toric varieties homogeneous under their automorphism groups. An algebraic variety $X$ is said to be *homogeneous* if the automorphism group $\operatorname{Aut}(X)$ acts on $X$ transitively. The class of homogeneous varieties is wide. In particular, it includes all homogeneous spaces of algebraic groups. It is an important problem to classify homogeneous varieties among varieties of a given type. In this paper we are interested in homogeneous toric varieties. Let $X$ be a toric variety and $S(X)$ the subgroup of $\operatorname{Aut}(X)$ generated by all root subgroups in $\operatorname{Aut}(X)$. It is well known that root subgroups are in one-to-one correspondence with Demazure roots of the fan $\Sigma_X$, see [@De; @Oda; @Cox]. Nowadays Demazure roots and their generalizations have become a central tool in many research projects, see [@Nill; @Li; @AKZ; @Ba; @AB; @AHHL; @AK]. For instance, Demazure roots and Gale duality were used in [@Ba] and [@AB] to describe orbits of the group $\operatorname{Aut}(X)$ on complete and affine toric varieties $X$, respectively. We say that a toric variety $X$ is *$S$-homogeneous* if the group $S(X)$ acts transitively on $X$. An $S$-homogeneous toric variety $X$ is said to be *maximal* if it does not admit a proper open toric embedding $X\subseteq X'$ into an $S$-homogeneous toric variety $X'$ with ${\operatorname{codim}_{X'}(X'\setminus X)\ge 2}$. Consider an abelian group $P$ and a collection $\AAA=\{a_1,\ldots,a_r\}$ of elements of $P$ (possibly with repetitions) that generates the group $P$. Denote by $A$ the semigroup generated by $\AAA$. We say that a collection $\AAA$ is *admissible* if for every $a_i\in\AAA$ the semigroup generated by $\AAA\setminus\{a_i\}$ coincides with $A$.
A pair $(P,\AAA)$, where $P$ is an abelian group and $\AAA$ is an admissible collection of elements of $P$, is said to be *equivalent* to a pair $(P',\AAA')$ if there is an isomorphism of abelian groups $\gamma\colon P\to P'$ such that $\gamma(\AAA)=\AAA'$. Our main result provides an elementary description of maximal $S$-homogeneous toric varieties. \[tmain\] There is a one-to-one correspondence between maximal $S$-homogeneous toric varieties and equivalence classes of pairs $(P,\AAA)$, where $P$ is an abelian group and $\AAA$ is an admissible collection of elements in $P$. If $X$ is a toric variety corresponding to a pair $(P,\AAA)$, then the group $P$ is isomorphic to the divisor class group $\operatorname{Cl}(X)$ and the collection $\AAA$ coincides with the set of classes of $T$-invariant prime divisors $[D_1],\ldots,[D_r]$. In particular, the dimension of $X$ equals $r-\operatorname{rk}(P)$. Let us give an overview of the content of the paper. In Section \[s1\] we recall basic facts on toric varieties and associated fans. Section \[s2\] contains some background on Demazure roots and corresponding root subgroups. We define strongly regular fans and prove that a toric variety is $S$-homogeneous if and only if the associated fan is strongly regular (Proposition \[psh\]). In Section \[s3\] we collect basic properties of the linear Gale duality. Section \[s4\] provides a modification of this duality that takes into account a lattice containing a vector configuration. We call this modification the lattice Gale duality. In Section \[s5\] we prove Theorem \[tmain\]. Along with the proof we give an explicit description of maximal strongly regular fans, see Corollary \[cormsrf\]. Section \[s5-1\] contains the first properties and examples of $S$-homogeneous varieties and strongly regular fans. Among other things, we describe $S$-homogeneous toric varieties $X$ with $\operatorname{Cl}(X)=\ZZ$ and characterize strongly regular fans consisting of one-dimensional cones.
Section \[s6\] describes non-maximal strongly regular fans in terms of corresponding admissible collections. A natural class of non-degenerate homogeneous toric varieties is formed by toric varieties homogeneous under a semisimple group. Such varieties are classified in [@AG]. In Section \[s7\] we characterize this class in terms of admissible collections. In Section \[s8\] we show that any non-degenerate homogeneous toric variety is a big open toric subset of a maximal $S$-homogeneous toric variety. This implies that every homogeneous toric variety is quasiprojective. We conjecture that any non-degenerate homogeneous toric variety is $S$-homogeneous. Polyhedral fans and toric varieties {#s1} =================================== In this section, we recall basic facts on the correspondence between rational polyhedral fans and toric varieties. For more details, we refer to [@CLS; @Fu; @Oda]. By a lattice $N$ we mean a free finitely generated abelian group. We consider the dual lattice $M:=\operatorname{Hom}(N,\ZZ)$ and the associated rational vector spaces $N_{\QQ}:=N\otimes_{\ZZ}\QQ$ and $M_{\QQ}:=M\otimes_{\ZZ}\QQ$. A cone in a lattice $N$ is a convex polyhedral cone in the space $N_{\QQ}$. If $\tau$ is a face of a cone $\sigma$, we write $\tau\preceq\sigma$. We denote by $\sigma(k)$ the set of all $k$-dimensional faces of $\sigma$. One-dimensional faces of a strictly convex cone are called rays. The primitive vectors of a strictly convex cone $\sigma$ in the lattice $N$ are the primitive lattice vectors on its rays. A strictly convex cone in $N$ is called *regular* if the set of its primitive vectors can be supplemented to a basis of the lattice $N$. A *fan* in a lattice $N$ is a finite collection $\Sigma$ of strictly convex cones in $N$ such that for every $\sigma\in\Sigma$ all faces of $\sigma$ belong to $\Sigma$, and for every $\sigma_i\in\Sigma$, $i=1,2$, we have $\sigma_1\cap\sigma_2\preceq\sigma_i$.
We denote by $\Sigma(k)$ the set of all $k$-dimensional cones in $\Sigma$. Let $|\Sigma|$ denote the *support* of a fan $\Sigma$, that is the union of all cones in $\Sigma$. A fan is called *regular* if all its cones are regular. A *toric variety* is a normal algebraic variety $X$ containing an algebraic torus $T$ as an open subset such that the left multiplication on $T$ can be extended to a regular action $T\times X\to X$. Let $\Sigma$ be a fan in a lattice $N$. For every cone $\sigma\in\Sigma$ we define an affine toric variety $X_{\sigma}:=\operatorname{Spec}(\KK[\sigma^{\vee}\cap M])$, where $\sigma^{\vee}$ is the dual cone in $M_{\QQ}$ to the cone $\sigma$. Gluing together all varieties $X_{\sigma}$ along their isomorphic open subsets one obtains a toric variety $X_{\Sigma}$. Conversely, any toric variety comes from some fan $\Sigma$ in the lattice $N$ of one-parameter subgroups of the acting torus $T$. The dual lattice $M$ may be interpreted as the lattice of characters of the torus $T$. For every $m\in M$, we denote by $\chi^m$ the corresponding character $T\to\KK^{\times}$. It is well known that a toric variety $X_{\Sigma}$ is smooth if and only if the fan $\Sigma$ is regular. Further, $X_{\Sigma}$ is complete if and only if the fan $\Sigma$ is complete, that is $|\Sigma|=N_{\QQ}$. A toric variety $X_{\Sigma}$ is *degenerate* if it is equivariantly isomorphic to the product of a nontrivial torus $T_0$ and a toric variety of smaller dimension $X_0$. By [@CLS Proposition 3.3.9], $X_{\Sigma}$ is degenerate if and only if there is an invertible non-constant regular function on $X_{\Sigma}$ or, equivalently, the rays in $\Sigma(1)$ do not span the space $N_{\QQ}$. A variety $X_{\Sigma}$ is homogeneous if and only if $X_0$ is homogeneous. So we assume further that $X_{\Sigma}$ is non-degenerate. Demazure roots and strongly regular fans {#s2} ======================================== Let $\Sigma$ be a fan in the space $N_{\QQ}$.
We denote by $n_{\rho}$ the primitive lattice vector on a ray $\rho\in\Sigma(1)$. Let $N\times M\to\ZZ$, $(n,e)\to\langle n,e\rangle$ be the pairing of the dual lattices $N$ and $M$. For $\rho\in\Sigma(1)$ we consider the set $\RRR_{\rho}$ of all vectors $e\in M$ such that (R1) $\langle n_{\rho},e\rangle=-1\,\,\mbox{and}\,\, \langle n_{\rho'},e\rangle\geqslant0 \,\,\,\,\forall\,\rho'\in \Sigma(1), \,\rho'\ne\rho$; (R2) if $\sigma$ is a cone in $\Sigma$ and $\langle v,e\rangle=0$ for all $v\in\sigma$, then the cone generated by $\sigma$ and $\rho$ is in $\Sigma$ as well. Note that condition $(R1)$ implies condition $(R2)$ if the support $|\Sigma|$ is convex. The elements of the set $\RRR:=\bigsqcup\limits_{\rho\in\Sigma(1)}\RRR_{\rho}$ are called the *Demazure roots* of the fan $\Sigma$, cf. [@De Definition 4] and [@Oda Section 3.4]. If $e\in\RRR_{\rho}$ then $\rho$ is called the *distinguished ray* of the root $e$. Let $X=X_{\Sigma}$ be a toric variety corresponding to the fan $\Sigma$. Denote by $\GG_a$ the additive group of the ground field $\KK$. It is well known that elements of $\RRR$ are in bijection with $\GG_a$-actions on $X$ normalized by the acting torus $T$, see [@De Théorème 3] and [@Oda Proposition 3.14]. Let us denote the $\GG_a$-subgroup of $\operatorname{Aut}(X)$ corresponding to a root $e$ by $H_e$. Let $\rho_e$ be the distinguished ray corresponding to a root $e$, $n_e$ the primitive lattice vector on $\rho_e$, and $R_e$ the one-parameter subgroup of $T$ corresponding to $n_e$. There is a bijection between cones $\sigma\in\Sigma$ and $T$-orbits $\OOO_{\sigma}$ on $X$ such that $\sigma_1\subseteq\sigma_2$ if and only if $\OOO_{\sigma_2}\subseteq\overline{\OOO_{\sigma_1}}$. Here $\dim\OOO_{\sigma}=\dim X -\dim\langle\sigma\rangle$. The following proposition describes the action of the group $H_e$ on $X$. The proof can be found, for example, in [@AK Proposition 5].
\[connect\] For every point $x\in X\setminus X^{H_e}$ the orbit $H_e\cdot x$ meets exactly two $T$-orbits $\OOO_1$ and $\OOO_2$ on $X$ with $\dim\OOO_1=\dim\OOO_2+1$. The intersection $\OOO_2\cap H_e\cdot x$ consists of a single point, while $$\OOO_1\cap H_e\cdot x=R_e\cdot y \quad \text{for any} \quad y\in\OOO_1\cap H_e\cdot x.$$ A pair of $T$-orbits $(\OOO_1,\OOO_2)$ on $X$ is said to be *$H_e$-connected* if $H_e\cdot x\subseteq \OOO_1\cup\OOO_2$ for some $x\in X\setminus X^{H_e}$. By Proposition \[connect\], we have $\OOO_2\subseteq\overline{\OOO_1}$ and $\dim\OOO_1=\dim\OOO_2+1$. Since the torus $T$ normalizes the subgroup $H_e$, any point of $\OOO_1\cup\OOO_2$ can actually serve as the point $x$. We say that a cone $\sigma_2$ in a fan $\Sigma$ is *connected* with its facet $\sigma_1$ by a root $e\in\RRR$ if $e|_{\sigma_2}\le 0$ and $\sigma_1$ is given by the equation $\langle \cdot,e\rangle=0$ in $\sigma_2$. \[ule\] [@AK Lemma 1] A pair of $T$-orbits $(\OOO_{\sigma_1},\OOO_{\sigma_2})$ is $H_e$-connected if and only if $\sigma_1$ is a facet of $\sigma_2$ and $\sigma_2$ is connected with $\sigma_1$ by the root $e$. A fan $\Sigma$ is called *strongly regular* if every nonzero cone $\sigma\in\Sigma$ is connected with some of its facets by a root. We denote by $S(X)$ the subgroup of $\operatorname{Aut}(X)$ generated by the subgroups $H_e$, $e\in\RRR$. Let $G(X)$ be the subgroup of $\operatorname{Aut}(X)$ generated by $T$ and $S(X)$. A toric variety $X$ is said to be $S$-homogeneous if the group $S(X)$ acts on $X$ transitively. \[psh\] A non-degenerate toric variety $X_{\Sigma}$ is $S$-homogeneous if and only if the fan $\Sigma$ is strongly regular. Let us begin with a simpler observation. \[lll\] A fan $\Sigma$ is strongly regular if and only if the group $G(X)$ acts on $X$ transitively. Without loss of generality we may assume that $X$ is non-degenerate. Suppose that the fan $\Sigma$ is strongly regular.
By Lemma \[ule\], every point from a non-open $T$-orbit in $X$ can be sent by some subgroup $H_e$ to a $T$-orbit of higher dimension. This shows that every point on $X$ can be sent by an element of $S(X)$ to the open $T$-orbit, and thus the group $G(X)$ acts transitively on $X$. Conversely, suppose that the fan $\Sigma$ is not strongly regular. Let $\sigma$ be a nonzero cone in $\Sigma$ which is not connected with any of its facets by a root. By Lemma \[ule\], the image of the orbit $\OOO_{\sigma}$ under the action of any root subgroup $H_e$ is contained in its closure $\overline{\OOO_{\sigma}}$. Hence the closure is invariant under the group $S(X)$. This implies that $\overline{\OOO_{\sigma}}$ is a proper $G(X)$-invariant subset in $X$, a contradiction. It remains to show that the group $S(X)$ acts on $X$ transitively for every non-degenerate toric variety with a strongly regular fan $\Sigma$. Let $\rho_1,\ldots,\rho_r$ be the rays of $\Sigma$. We denote by $e_i$ a root connecting the ray $\rho_i$ with its (unique) facet $\{0\}$. By Proposition \[connect\], the orbits of the root subgroup $H_{e_i}$ intersected with the open $T$-orbit on $X$ coincide with the orbits of the one-parameter subtorus $R_{e_i}$ represented by the vectors $n_i:=n_{e_i}$ in the lattice $N$. Since $X$ is non-degenerate, the collection of vectors $n_1,\ldots,n_r$ has full rank in $N$. Thus the open $T$-orbit on $X$ is contained in one $S(X)$-orbit. Since it contains an open subset of $X$, this $S(X)$-orbit is $T$-invariant. Lemma \[lll\] implies that such an orbit coincides with $X$. Every strongly regular fan is regular. It follows from the fact that every homogeneous variety is smooth. \[ex1\] The only non-degenerate smooth affine toric variety is the affine space $\AA^n$. Clearly, this variety is $S$-homogeneous, so a regular cone together with all its faces is a strongly regular fan.
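The root machinery can be checked by brute force on a small example. The sketch below (an illustration, not part of the paper) enumerates the Demazure roots of the fan of $\PP^2$; since the support of a complete fan is all of $N_{\QQ}$ and hence convex, condition $(R1)$ alone characterizes the roots. One finds six roots, matching $\dim\operatorname{Aut}(\PP^2)-\dim T=\dim\PGL(3)-2=6$.

```python
from itertools import product

# Primitive ray generators of the fan of P^2 in N = Z^2.
rays = [(1, 0), (0, 1), (-1, -1)]

def pairing(n, e):
    """The pairing <n, e> between N and M = Hom(N, Z)."""
    return n[0] * e[0] + n[1] * e[1]

# A root with distinguished ray n_i is an e in M with <n_i, e> = -1
# and <n_j, e> >= 0 for every other ray; the constraints bound each
# coordinate of e, so a small search box suffices here.
roots = [(i, e)
         for i, n in enumerate(rays)
         for e in product(range(-3, 4), repeat=2)
         if pairing(n, e) == -1
         and all(pairing(m, e) >= 0 for j, m in enumerate(rays) if j != i)]

print(len(roots))  # 6
```

Each ray contributes two roots, e.g. $e=(-1,0)$ and $e=(-1,1)$ for the ray $(1,0)$, recovering the six root subgroups of $\operatorname{Aut}(\PP^2)$ beyond the torus.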
\[ex2\] The automorphism group $\operatorname{Aut}(X)$ of a complete toric variety $X$ is a linear algebraic group, see [@De; @Cox; @Nill]. This implies that $X$ is homogeneous if and only if $X$ is $S$-homogeneous. It is well known that the only homogeneous complete toric varieties are products of projective spaces $\PP^{n_1}\times\ldots\times\PP^{n_m}$, cf. [@Ba Theorem 3.9]. This implies that complete strongly regular fans are precisely the products of fans of projective spaces. It turns out that the properties of regular fans and strongly regular fans are rather different. For example, any subfan of a regular fan is regular, and for strongly regular fans this is not the case. At the same time, there exists at most one maximal strongly regular fan on a given set of rays (see Proposition \[propsuit\] below), while some sets of rays can give rise to several maximal (e.g. complete) regular fans. For an $S$-homogeneous toric variety $X$, the groups $S(X)$ and $G(X)$ may or may not coincide. For example, for the toric variety $X=\PP^n$ the full automorphism group $\PGL(n+1)$ is generated by root subgroups, while for $X=\AA^n$ root subgroups preserve the volume form on $\AA^n$ and the acting torus $T$ does not. In general, the subgroup $S(X)$ of the automorphism group $\operatorname{Aut}(X)$ generated by root subgroups may be relatively small. Following [@AKZ], let us denote by $\operatorname{SAut}(X)$ the subgroup of $\operatorname{Aut}(X)$ generated by all $\GG_a$-subgroups in $\operatorname{Aut}(X)$. The following (non-toric) example shows that the groups $\operatorname{SAut}(X)$ and $S(X)$ may not coincide. Let $X$ be an affine variety $\operatorname{Spec}(A)$, where $$A=\KK[x,y,z,u,w]/(x+x^2y+z^2+u^3).$$ Consider a one-dimensional torus action $$t\cdot(x,y,z,u,w)=(x,y,z,u,tw).$$ Denote by $S(X)$ the subgroup of $\operatorname{Aut}(X)$ generated by $\GG_a$-subgroups normalized by the torus.
It is shown in [@Li Example 3.2] that any $S(X)$-orbit on $X$ is contained in a subvariety $x=\text{const}$. At the same time, the result of [@Du] implies that there is no non-constant invariant regular function for the action of $\operatorname{SAut}(X)$ on $X$. Linear Gale duality {#s3} =================== In this section we follow the presentation in [@ADHL Section 2.2.1], see also [@OP]. By a *vector configuration* in a vector space $V$ we mean a finite collection of vectors $v_1,\ldots,v_r\in V$ (possibly with repetitions) that spans the space $V$. A vector configuration $\VVV=\{v_1,\ldots,v_r\}$ in a rational vector space $V$ and a vector configuration $\WWW=\{w_1,\ldots,w_r\}$ in a rational vector space $W$ are *Gale dual* to each other if the following conditions hold: 1. We have $v_1\otimes w_1+\ldots+v_r\otimes w_r=0$ in $V\otimes W$. 2. For any rational vector space $U$ and any vectors $u_1,\ldots,u_r\in U$ with $v_1\otimes u_1+\ldots+v_r\otimes u_r=0$ in $V\otimes U$, there is a unique linear map $\psi\colon W\to U$ with $\psi(w_i)=u_i$ for $i=1,\ldots,r$. 3. For any rational vector space $U$ and any vectors $u_1,\ldots,u_r\in U$ with $u_1\otimes w_1+\ldots+u_r\otimes w_r=0$ in $U\otimes W$, there is a unique linear map $\phi\colon V\to U$ with $\phi(v_i)=u_i$ for $i=1,\ldots,r$. If we fix the first configuration in a Gale dual pair, then the second one is determined up to isomorphism. Therefore one configuration is called the *Gale transform* of the other. Consider vector configurations $\VVV=\{v_1,\ldots,v_r\}$ and $\WWW=\{w_1,\ldots,w_r\}$ in vector spaces $V$ and $W$ respectively, and let $V^*$ be the dual vector space of $V$.
Then Gale duality of $\VVV$ and $\WWW$ is characterized by the following property: For any tuple $(a_1,\ldots,a_r)\in\QQ^r$ one has $$a_1w_1+\ldots+a_rw_r=0 \ \Longleftrightarrow \ l(v_i)=a_i \ \text{for} \ i=1,\ldots,r \ \text{with some} \ l\in V^*.$$ Let us present a construction which produces the Gale dual for a configuration $\VVV=\{v_1,\ldots,v_r\}$ in a space $V$. Take the vector space $\QQ^r$ and consider the surjective linear map $\alpha\colon\QQ^r\to V$ given on the standard basis $e_1,\ldots,e_r$ in $\QQ^r$ by $\alpha(e_i)=v_i$, $i=1,\ldots,r$. Consider two mutually dual short exact sequences $$\xymatrix{ 0 \ar@{->}[rr] && \Ker(\alpha) \ar@{->}[rr] && \QQ^r \ar@{->}[rr]^{\alpha} && V \ar@{->}[rr] && 0 \\ 0 \ar@{<-}[rr] && (\Ker(\alpha))^* \ar@{<-}[rr]^{\beta} && (\QQ^r)^* \ar@{<-}[rr] && V^* \ar@{<-}[rr] && 0 }$$ Let $e_1^*,\ldots,e_r^*$ be the dual basis in $(\QQ^r)^*$. Setting $W=(\Ker(\alpha))^*$ and $w_i=\beta(e_i^*)$ for $i=1,\ldots,r$, we obtain the Gale dual configuration $\WWW=\{w_1,\ldots,w_r\}$. We finish this section with a variant of the separation lemma, cf. [@BH Lemma 4.3] or [@ADHL Lemma 2.2.3.2]. Let $\VVV$ be a vector configuration in a rational vector space $V$. Denote by $\rho_i$ the ray in $V$ spanned by the vector $v_i$ from $\VVV$. Consider two strictly convex polyhedral cones $\sigma$ and $\sigma'$ in $V$ with $$\sigma(1)=\{\rho_i, i\in I\} \quad \text{and} \quad \sigma'(1)=\{\rho_j, j\in J\}.$$ \[sep\] Let $(W,\WWW)$ be the linear Gale transform of $(V,\VVV)$. Then the intersection of the cones $\sigma$ and $\sigma'$ is a face of each of them if and only if the cones $\operatorname{cone}(w_k, k\notin I)$ and $\operatorname{cone}(w_s, s\notin J)$ in the space $W$ have a common interior point. 
The intersection of the cones $\sigma$ and $\sigma'$ is a face of each of them if and only if there is a linear function $l\in V^*$ such that $$l(v_i)\ge 0 \quad \text{for all} \quad i\in I, \quad l(v_j)\le 0 \quad \text{for all} \quad j\in J,$$ and for any $s\in I\cup J$ we have $l(v_s)=0$ if and only if $s\in I\cap J$. This condition means that there is a relation $$\sum_{i\in I\setminus J} \alpha_iw_i - \sum_{j\in J\setminus I} \beta_j w_j +\sum_{t\notin I\cup J} \gamma_tw_t=0$$ with some positive rational coefficients $\alpha_i, \beta_j$ and some rational coefficients $\gamma_t$. This relation is equivalent to $$\sum_{k\notin I} \mu_kw_k=\sum_{s\notin J} \nu_sw_s$$ with some positive rational coefficients $\mu_k$ and $\nu_s$. The latter relation means that the cones $\operatorname{cone}(w_k, k\notin I)$ and $\operatorname{cone}(w_s, s\notin J)$ have a common interior point. Lattice Gale transform {#s4} ====================== A *vector configuration* $\NNN$ in a lattice $N$ is a finite collection of vectors $n_1,\ldots,n_r\in N$ that spans the vector space $N_{\QQ}$. Consider the lattice $\ZZ^r$ with the standard basis $e_1,\ldots,e_r$ and the exact sequence $$\xymatrix{ 0 \ar@{->}[rr] && L \ar@{->}[rr] && \ZZ^r \ar@{->}[rr]^{\alpha} && N }$$ defined by $\alpha(e_i)=n_i$, $i=1,\ldots,r$. Let us identify the dual lattice of $\ZZ^r$ with $\ZZ^r$ using the dual basis $e_1^*,\ldots,e_r^*$. Let $M:=\operatorname{Hom}(N,\ZZ)$. The homomorphism $M\to\ZZ^r$ dual to $\alpha$ gives rise to the short exact sequence of abelian groups $$\xymatrix{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0 \ar@{<-}[rr] && P \ar@{<-}[rr]^{\beta} && \ZZ^r \ar@{<-}[rr] && M \ar@{<-}[rr] && 0 \ \ \ \ \ \ \ \ \ \ \ \ \ (*) }$$ Let $a_i=\beta(e_i^*)$ with $i=1,\ldots,r$. By construction, the vectors $a_1,\ldots,a_r$ generate the group $P$. We call the collection $\AAA=\{a_1,\ldots,a_r\}$ the *lattice Gale transform* of the configuration $\NNN$. 
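To make the construction concrete: for $N=\ZZ^d$ the group $P$ is the cokernel of the map $M\to\ZZ^r$, $e\mapsto(\langle n_1,e\rangle,\ldots,\langle n_r,e\rangle)$, whose matrix has the vectors $n_i$ as rows, so the isomorphism type of $P$ can be read off from the Smith normal form of that matrix. The sketch below (an illustration using sympy, not part of the paper) carries this out for the configuration $n_1=(1,0)$, $n_2=(1,2)$ and recovers $P\cong\ZZ/2\ZZ$.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Rows are the vectors n_i of the configuration; P is the cokernel
# Z^r / image of the map M -> Z^r given by this matrix.
B = Matrix([[1, 0],
            [1, 2]])  # n_1 = (1, 0), n_2 = (1, 2)

S = smith_normal_form(B, domain=ZZ)
invariant_factors = [S[i, i] for i in range(min(S.shape))]
print(invariant_factors)  # [1, 2]: the cokernel is Z/1 x Z/2, i.e. P = Z/2Z
```

The same computation over $\QQ$ (rank only) would return the trivial group, which is why the lattice version of the Gale transform retains strictly more information than the linear one.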
Replacing all groups in these sequences by their tensor products with $\QQ$, we obtain the linear Gale duality considered above. Conversely, given elements $a_1,\ldots,a_r$ that generate a group $P$, we can reconstruct sequence $(*)$, the lattice $N=\operatorname{Hom}(M,\ZZ)$, the dual homomorphism $\ZZ^r\to N$ and thus the vectors $n_1,\ldots,n_r$. \[remus\] Let $\NNN=\{n_1,\ldots,n_r\}$ be a vector configuration in a lattice $N$. The vector $n_1$ is a primitive vector in $N$ if and only if the vectors $a_2,\ldots,a_r$ generate the group $P$. Indeed, $n_1$ is primitive if and only if there is an element $e\in M$ such that $\langle n_1,e\rangle=\pm 1$, or, equivalently, there is a relation $a_1+\alpha_2a_2+\ldots+\alpha_ra_r=0$ with some integer $\alpha_2,\ldots,\alpha_r$. More generally, a subset $n_i, i\in I$ can be supplemented to a basis of $N$ if and only if for any $i\in I$ the element $a_i$ lies in the subgroup generated by $a_j, j\notin I$. The lattice Gale transform of the configuration $\NNN=\{n_1,n_2\}$ in $N=\ZZ^2$ with $n_1=(1,0)$ and $n_2=(1,2)$ is the collection $\AAA=\{a_1,a_2\}$ in the group $P=\ZZ/2\ZZ$ with $a_1=a_2=\overline{1}$. At the same time, the linear Gale transform of the configuration $\VVV=\{v_1,v_2\}$ in $V=\QQ^2$ with $v_1=(1,0)$ and $v_2=(1,2)$ is the collection $\WWW=\{0,0\}$ in the space $W=\{0\}$. Now we are going to establish a relation between the lattice Gale duality and Demazure roots. \[defsuit\] A vector configuration $\NNN=\{n_1,\ldots,n_r\}$ in a lattice $N$ is called *suitable* if for any $i=1,\ldots,r$ there exists a vector $e_i\in\operatorname{Hom}(N,\ZZ)$ such that $\langle n_i,e_i\rangle=-1$ and $\langle n_j,e_i\rangle\ge 0$ for all $j\ne i$. We recall that a collection $\AAA=\{a_1,\ldots,a_r\}$ of elements (possibly with repetitions) of an abelian group $P$ is *admissible* if $\AAA$ generates the group $P$ and for any $a_i\in\AAA$ the element $a_i$ is contained in the semigroup generated by $\AAA\setminus\{a_i\}$. 
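Admissibility is a finite check in concrete cases. For $P=\ZZ$ and a collection of positive integers, it amounts to a gcd condition (the collection generates $\ZZ$) together with numerical-semigroup membership for each element. A brute-force sketch (an illustration, not part of the paper):

```python
from math import gcd
from functools import reduce

def in_semigroup(target, gens):
    """Is target a non-negative integer combination of the positive gens?"""
    reachable = {0}
    for _ in range(target):  # at most `target` additions are ever needed
        reachable |= {s + g for s in reachable for g in gens if s + g <= target}
    return target in reachable

def is_admissible(collection):
    """Admissibility over P = Z for a collection of positive integers."""
    if reduce(gcd, collection) != 1:  # the collection must generate the group Z
        return False
    return all(in_semigroup(a, [b for j, b in enumerate(collection) if j != i])
               for i, a in enumerate(collection))

print(is_admissible([1, 1, 2, 3]))  # True
print(is_admissible([2, 3]))        # False: 2 is not a sum of copies of 3
```

For collections with mixed signs, membership in the generated semigroup is no longer a bounded search, so this sketch covers only the positive case.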
\[suit-adm\] A vector configuration $\NNN=\{n_1,\ldots,n_r\}$ in a lattice $N$ is suitable if and only if its lattice Gale transform $\AAA$ in $P$ is an admissible collection. An element $a_i\in\AAA$ is contained in the semigroup generated by $\AAA\setminus\{a_i\}$ if and only if we have $a_i=\sum_{j\ne i}\alpha_ja_j$ for some non-negative integers $\alpha_j$. The latter condition means that there exists an element $e_i\in\operatorname{Hom}(N,\ZZ)$ with $$\langle n_i,e_i\rangle=-1 \quad \text{and} \quad \langle n_j,e_i\rangle=\alpha_j \quad \text{for all} \quad j\ne i.$$ Proof of Theorem \[tmain\] {#s5} ========================== We begin this section with some preliminary results. A collection of rays $\rho_1,\ldots,\rho_r$ in the space $N_{\QQ}$ is said to be *suitable* if the set of primitive lattice vectors on these rays is a suitable vector configuration. \[lemsuit\] For a strongly regular fan $\Sigma$, the collection of rays $\Sigma(1)$ is suitable. By the definition of a strongly regular fan, every ray $\rho_i$ is connected with its facet $\{0\}$ by a root $e_i$. Then the vector $e_i$ satisfies the conditions of Definition \[defsuit\]. A strongly regular fan $\Sigma$ is *maximal* if it cannot be realized as a proper subfan of a strongly regular fan $\Sigma'$ with $\Sigma'(1)=\Sigma(1)$. \[propsuit\] For every suitable collection of rays $\rho_1,\ldots,\rho_r$ in $N_{\QQ}$ there exists a unique maximal strongly regular fan $\Sigma$ with $\Sigma(1)=\{\rho_1,\ldots,\rho_r\}$. Let $\Omega$ be the set of strictly convex polyhedral cones $\sigma$ in $N_{\QQ}$ with $\sigma(1)\subseteq\{\rho_1,\ldots,\rho_r\}$. With every $\sigma\in\Omega$ one associates the subset $I\subseteq\{1,\ldots,r\}$ with $\sigma(1)=\{\rho_i, i\in I\}$. Let $\AAA=\{a_1,\ldots,a_r\}$ be the lattice Gale transform of the vector configuration $\NNN=\{n_1,\ldots,n_r\}$. Denote by $\Gamma(\sigma)$ the semigroup in $P$ generated by $a_j$, $j\notin I$.
In particular, we have $\Gamma(\{0\})=A$, where $A$ is the semigroup generated by $\AAA$. Let $$\Sigma=\Sigma(P,\AAA):=\{\sigma \in\Omega \ ; \ \Gamma(\sigma)=A\}.$$ We have to check four assertions. (A1) $\Sigma$ is a fan and $\Sigma(1)=\{\rho_1,\ldots,\rho_r\}$. (A2) The fan $\Sigma$ is strongly regular. (A3) The fan $\Sigma$ is maximal. (A4) Every strongly regular fan $\hat{\Sigma}$ with $\hat{\Sigma}(1)=\{\rho_1,\ldots,\rho_r\}$ is a subfan of $\Sigma$. We start with (A1). By definition, if $\tau$ is a face of a cone $\sigma$ from $\Omega$, then $\tau$ is in $\Omega$ and $\Gamma(\sigma)$ is contained in $\Gamma(\tau)$. In particular, if $\Gamma(\sigma)=A$ then $\Gamma(\tau)=A$ as well. This shows that a face of a cone from $\Sigma$ is contained in $\Sigma$. We have to check that the intersection of two cones from $\Sigma$ is a face of each of them. This follows from Lemma \[sep\]. We proceed with (A2). Let $\sigma\in\Sigma$ and $\tau$ be a facet of $\sigma$. We take $\rho_i\in\sigma(1)\setminus\tau(1)$. Assume that $\sigma(1)=\{\rho_k, k\in I\}$ for a subset $I$ in $\{1,\ldots,r\}$. Since $\Gamma(\sigma)=A$, we have $$a_i=\sum_{j\in\{1,\ldots,r\}\setminus I} \alpha_ja_j \quad \text{with some} \quad \alpha_j\in\ZZ_{\ge 0}.$$ This means that there is a vector $e\in\operatorname{Hom}(N,\ZZ)$ with $$\langle n_i,e\rangle=-1, \quad \langle n_j, e\rangle\ge 0 \quad \text{for all} \quad j\ne i, \quad \text{and} \quad \langle n_k, e\rangle=0 \quad \text{for all} \quad k\in I\setminus\{i\}.$$ In particular, all rays of the cone $\sigma$ except for $\rho_i$ lie in the hyperplane $\langle\cdot,e\rangle=0$ and thus we have $\sigma(1)=\tau(1)\cup\{\rho_i\}$. We still have to prove that the element $e$ is a Demazure root of the fan $\Sigma$. Condition $(R1)$ obviously holds. Let us check condition $(R2)$. Let $\sigma'\in\Sigma$ and $e|_{\sigma'}=0$. We have to show that the cone $\sigma''=\operatorname{cone}(\sigma',\rho_i)$ is in $\Sigma$.
The condition $\sigma'\in\Sigma$ means that the elements $a_s$ with $\rho_s\notin\sigma'(1)$ generate the semigroup $A$. The condition $e|_{\sigma'}=0$ implies that the element $a_i$ is a non-negative integer linear combination of the elements $a_k$ with $\rho_k\notin\sigma'(1)$ and $k\ne i$. This shows that the elements $a_k$ generate the semigroup $A$ as well, thus $\Gamma(\sigma'')=A$ and $\sigma''\in\Sigma$. We conclude that any nonzero cone in $\Sigma$ is connected by a root with any of its facets, and the fan $\Sigma$ is strongly regular. We come to (A3). Assume that we can add to the fan $\Sigma$ some cones $\sigma_1,\ldots,\sigma_m$ from $\Omega$ and obtain a strongly regular fan $\Sigma'$. For every $\sigma_i$ there is a chain of facets $\{0\}\preceq\ldots\preceq\sigma_i'\preceq\sigma_i$ connected by roots of the fan $\Sigma'$. Hence we have $$\Gamma(\sigma_i)=\Gamma(\sigma_i')=\ldots=\Gamma(\{0\})=A$$ and $\sigma_i\in\Sigma$, a contradiction. Finally we prove assertion (A4). Let $\Sigma'$ be a strongly regular fan with $\Sigma'(1)=\{\rho_1,\ldots,\rho_r\}$. Then for any $\sigma\in\Sigma'$ we again have a chain of facets $\{0\}\preceq\ldots\preceq\sigma'\preceq\sigma$ connected by roots of the fan $\Sigma'$. This implies $\Gamma(\sigma)=A$ and thus $\Sigma'$ is contained in $\Sigma$. This completes the proof of Proposition \[propsuit\]. \[cormsrf\] Every maximal strongly regular fan has the form $$\Sigma(P,\AAA):=\{\sigma \in\Omega \ ; \ \Gamma(\sigma)=A\}$$ for some abelian group $P$ and some admissible collection $\AAA$ of elements in $P$. Let $\Sigma$ be a maximal strongly regular fan and $\sigma$ a nonzero cone in $\Sigma$. Then $\sigma$ is connected with any of its facets by a root of the fan $\Sigma$. The statement follows from Corollary \[cormsrf\] and the proof of (A2) in the proof of Proposition \[propsuit\]. By Proposition \[psh\], $S$-homogeneous toric varieties correspond to strongly regular fans.
In turn, maximal $S$-homogeneous toric varieties correspond to maximal strongly regular fans. Lemma \[lemsuit\] and Proposition \[propsuit\] show that maximal strongly regular fans are in bijection with suitable collections of rays or, equivalently, with suitable vector configurations $\NNN$ in a lattice $N$. By Lemma \[suit-adm\], the lattice Gale transform establishes a bijection between suitable vector configurations $(N,\NNN)$ and admissible collections $(P,\AAA)$. It remains to notice that all fans, vector configurations and collections above are defined up to isomorphism of the lattice $N$ and of the group $P$, respectively. So maximal $S$-homogeneous toric varieties correspond to equivalence classes of pairs $(P,\AAA)$. Let us recall why for an $S$-homogeneous toric variety $X$ the group $P$ constructed above can be interpreted as the divisor class group $\operatorname{Cl}(X)$ and the collection $\AAA$ is the collection of classes $[D_1],\ldots,[D_r]$ of $T$-invariant prime divisors on $X$. It is well known that $T$-invariant prime divisors on $X$ are in bijection with rays of the fan $\Sigma_X$, their classes generate the group $\operatorname{Cl}(X)$, and the defining relations for this generating system are of the form $$\langle n_1,e\rangle[D_1]+\ldots+\langle n_r,e\rangle[D_r]=0,$$ where $e$ runs through the lattice $M$, see e.g. [@Fu Section 3.4]. This coincides with the definition of the lattice Gale transform of the vector configuration $\{n_1,\ldots,n_r\}$. Moreover, since any effective Weil divisor on a toric variety is linearly equivalent to a $T$-invariant effective Weil divisor, the semigroup $A$ is the semigroup of classes of effective Weil divisors on $X$. 
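When $P$ is finite, every element has finite order, so the semigroup generated by any nonempty subcollection is the subgroup it generates, and the test $\Gamma(\sigma)=A$ from the proof above becomes a subgroup computation. The sketch below (an illustration, not part of the paper) runs this test for $P=\ZZ/3\ZZ$ and $\AAA=\{\overline{1},\overline{1}\}$: the surviving cones are the zero cone and the two rays, so the two-dimensional cone is excluded, in agreement with the fact that the resulting variety is the smooth locus of an affine toric surface.

```python
from itertools import combinations

def semigroup_mod(gens, n):
    """Semigroup (with 0) generated by gens inside Z/nZ."""
    reached = {0}
    frontier = {g % n for g in gens}
    while not frontier <= reached:
        reached |= frontier
        frontier = {(s + g) % n for s in reached for g in gens}
    return reached

# P = Z/3Z, A = {1, 1}; Gamma(sigma) for sigma(1) = {rho_i : i in I} is
# generated by the a_j with j not in I, and sigma survives iff Gamma(sigma) = A.
n, A = 3, [1, 1]
full = semigroup_mod(A, n)
surviving = [I for k in range(len(A) + 1) for I in combinations(range(len(A)), k)
             if semigroup_mod([a for j, a in enumerate(A) if j not in I], n) == full]
print(surviving)  # [(), (0,), (1,)] -- the 2-dimensional cone (0, 1) drops out
```

The same loop applies verbatim to any $\ZZ/n\ZZ$; for infinite $P$ the membership test $\Gamma(\sigma)=A$ needs the semigroup (not subgroup) computation, as in the admissibility sketch earlier.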
First properties and examples of strongly regular fans {#s5-1} ====================================================== Let us list some basic observations on maximal strongly regular fans $\Sigma(P,\AAA)$ and maximal $S$-homogeneous toric varieties $X(P,\AAA)$ corresponding to an admissible collection $\AAA$ in an abelian group $P$. 1. The variety $X(P,\AAA)$ is affine if and only if $P=0$ and $\AAA$ is the element $0$ taken $n$ times for some $n\ge 0$. This follows from Example \[ex1\]. 2. The variety $X(P,\AAA)$ is complete if and only if $P$ is a lattice and there are a basis $e_1,\ldots,e_m$ of $P$ and integers $n_1\ge 2,\ldots,n_m\ge 2$ such that $\AAA=\{a_1 (n_1 \ \text{times}),\ldots,a_m (n_m \ \text{times})\}$. This follows from Example \[ex2\]. 3. The variety $X(P,\AAA)$ is quasiaffine if and only if the cone in the space $P_{\QQ}=P\otimes_{\ZZ}\QQ$ generated by the vectors $a\otimes 1$, $a\in\AAA$, coincides with $P_{\QQ}$. Such a variety is the regular locus $X^{\operatorname{reg}}$ of a non-degenerate affine toric variety $X$, cf. [@AKZ Theorem 2.1]. 4. If $P=P_1\oplus P_2$ and $\AAA=\AAA_1\oplus\AAA_2$, then $X(P,\AAA)\cong X(P_1,\AAA_1)\times X(P_2,\AAA_2)$. Let $P=\ZZ/3\ZZ$ and $\AAA=\{\overline{1},\overline{1}\}$. Then $X(P,\AAA)=X(\sigma)^{\operatorname{reg}}$ with $\sigma=\operatorname{cone}((1,0),(2,3))$. If $\AAA=\{\overline{1},\overline{2}\}$ then $X(P,\AAA)=X(\sigma)^{\operatorname{reg}}$ with $\sigma=\operatorname{cone}((1,0),(1,3))$. The technique developed in this paper allows one to obtain explicit classification results. As an illustration, let us classify maximal $S$-homogeneous toric varieties $X$ with $\dim X=d$ and $\operatorname{Cl}(X)=\ZZ$. To do this, we need to find all admissible collections $\AAA$ in the group $\ZZ$. We divide all such collections into three types. [*Type 1*]{}. The collection $\AAA$ contains both positive and negative elements. 
Here we have $X=X(\sigma)^{\operatorname{reg}}$, where $\sigma$ is a strictly convex polyhedral cone with $d+1$ rays in $\AA^d$. [*Type 2*]{}. All elements in $\AAA$ are positive. Consider the weighted projective space $Z=\PP(a_1,\ldots,a_r)$, see [@CLS Section 2.0] for a precise definition. Clearly, the variety $X$ is a smooth open toric subset in $Z$. Using Remark \[remus\], one can check that $X$ coincides with $Z^{\operatorname{reg}}$ if and only if for every subcollection $\AAA'\subseteq\AAA$ that generates the group $\ZZ$, the semigroup generated by $\AAA'$ equals $A$. [*Type 3*]{}. All elements in $\AAA$ are non-negative and $\AAA$ contains $0$. In this case $X$ is a direct product of an affine space and a variety of Type 2 with smaller dimension. Let us classify varieties of Type 2 for $d=3$. We have two possibilities for the variety $Z$. 1. $Z=\PP(1,1,a_3,a_4)$ with some $a_3,a_4\in\ZZ_{>0}$. 2. $Z=\PP(a_1,a_1,a_2,a_2)$ with $1<a_1<a_2$ and $(a_1,a_2)=1$. In the second case we have $X=Z^{\operatorname{reg}}$, while in the first one this is not always true. For instance, with $Z=\PP(1,1,2,3)$ the subset $Z^{\operatorname{reg}}\setminus X$ is an irreducible curve. The last observation concerns strongly regular fans composed of one-dimensional cones. \[1-sceleton\] Let $\Sigma$ be a fan with $\Sigma=\Sigma(1)\cup\{0\}$. Then $\Sigma$ is strongly regular if and only if either $\Sigma=\sigma(1)\cup\{0\}$ for a strictly convex polyhedral cone $\sigma$ or $\Sigma=\Sigma_{\PP^1}$. Let $\Sigma(1)=\{\rho_1,\ldots,\rho_r\}$ and $\NNN=\{n_1,\ldots,n_r\}$ be the corresponding vector configuration in $N$. The fan $\Sigma$ is strongly regular if and only if every $\rho_i$ is connected with $\{0\}$ by a root of $\Sigma$, i.e., there exists an element $e_i\in\operatorname{Hom}(N,\ZZ)$ such that $\langle n_i, e_i\rangle=-1$ and $\langle n_j, e_i\rangle>0$ for all $j\ne i$. 
Note that the case $\langle n_j, e_i\rangle=0$ for some $j$ is excluded because $\Sigma$ does not contain the cone $\operatorname{cone}(\rho_i,\rho_j)$. Clearly, the fan $\Sigma_{\PP^1}$ is strongly regular. If $\Sigma=\sigma(1)\cup\{0\}$, then for every $\rho_i$ there is a linear function which is zero on $\rho_i$ and positive on all other rays. It shows that the desired functions $e_i$ exist. Conversely, assume that $\Sigma$ is strongly regular. The case $r\le 2$ is obvious. So we suppose that $r\ge 3$. Then the linear function $e_1+\ldots+e_r$ is positive on all rays and thus the rays generate a strictly convex cone $\sigma$. Existence of the functions $e_i$ implies that every $\rho_i$ is a ray of $\sigma$. Non-maximal $S$-homogeneous toric varieties {#s6} =========================================== In Section \[s5\] we gave an explicit description of maximal strongly regular fans. The aim of this section is to develop our combinatorial language further and to describe strongly regular subfans of a given maximal strongly regular fan. Let $P$ be an abelian group, $\AAA=\{a_1,\ldots,a_r\}$ an admissible collection of elements in $P$, and $A$ the semigroup in $P$ generated by $\AAA$. A *link* is a pair $(a,\AAA')$, where $\AAA'$ is a subcollection of $\AAA$, $a\in\AAA\setminus\AAA'$, and there exists an expression $a=\sum_j \alpha_ja_j$, where $a_j$ runs through $\AAA'$ and $\alpha_j\in\ZZ_{>0}$. We say that a subcollection $\BBB\subseteq\AAA$ is *generating*, if the elements of $\BBB$ generate the semigroup $A$. Let $\GG$ be a set of generating collections in $\AAA$. A link $(a,\AAA')$ is called a *$\GG$-link* if for any $\BBB\in\GG$ the condition $\AAA'\cup\{a\}\subseteq\BBB$ implies $\BBB\setminus\{a\}\in\GG$. A set $\GG$ of generating collections in $\AAA$ is called *connected* if the following conditions hold: 1. $\AAA\setminus\{a_i\}\in\GG$ for any $i=1,\ldots,r$; 2. $\BBB\in\GG$ and $\BBB\subseteq\BBB'\subseteq\AAA$ implies $\BBB'\in\GG$; 3. 
if $\BBB\in\GG$ and $\BBB\ne\AAA$ then there is a $\GG$-link $(a,\AAA')$ with $\AAA'\subseteq\BBB$ and $a\notin\BBB$. Let $\{\rho_1,\ldots,\rho_r\}$ be a suitable collection of rays in a space $N_{\QQ}$ and $\NNN=\{n_1,\ldots,n_r\}$ the corresponding suitable vector configuration in $N$. Consider the lattice Gale transform $(P,\AAA)$ of $(N,\NNN)$. \[propnm\] Strongly regular fans $\Sigma$ with $\Sigma(1)=\{\rho_1,\ldots,\rho_r\}$ are in bijection with connected sets $\GG$ of generating collections in $\AAA$. With any subcollection $\BBB\subseteq\AAA$ we associate a cone $\sigma(\BBB)=\operatorname{cone}(\rho_j, a_j\notin\BBB)$. By Corollary \[cormsrf\], the maximal strongly regular fan $\Sigma(P,\AAA)$ is the set of cones associated with all generating subcollections in $\AAA$, and any strongly regular fan $\Sigma$ with $\Sigma(1)=\{\rho_1,\ldots,\rho_r\}$ is a subfan of $\Sigma(P,\AAA)$. Let $\Sigma^{\GG}$ be the set of cones $\sigma(\BBB)$, $\BBB\in\GG$. We are going to check that conditions $(C1)$-$(C3)$ are equivalent to the fact that $\Sigma^{\GG}$ is a strongly regular subfan of $\Sigma(P,\AAA)$. Condition $(C1)$ means that $\Sigma^{\GG}(1)=\{\rho_1,\ldots,\rho_r\}$. Since all cones in $\Sigma(P,\AAA)$ are regular, condition $(C2)$ means that together with any cone $\sigma(\BBB)$ the fan $\Sigma^{\GG}$ contains all its faces $\sigma(\BBB')$. We know that in $\Sigma(P,\AAA)$ every two cones meet at a face. Hence conditions $(C1)$-$(C2)$ mean that $\Sigma^{\GG}$ is a subfan of $\Sigma(P,\AAA)$ with $\Sigma^{\GG}(1)=\{\rho_1,\ldots,\rho_r\}$. Let us show that condition $(C3)$ means that the fan $\Sigma^{\GG}$ is strongly regular. Existence of a $\GG$-link $(a_i,\AAA')$ expresses the fact that there is a root $e$ of the fan $\Sigma^{\GG}$ such that $\langle n_i,e\rangle=-1$ and the condition $\langle n_j,e\rangle>0$ is equivalent to $a_j\in\AAA'$. 
Then $(C3)$ means that every nonzero cone $\sigma(\BBB)$ in $\Sigma^{\GG}$ is connected with its facet $\sigma(\BBB\cup\{a\})$ by a root associated with the corresponding $\GG$-link $(a,\AAA')$. Let $P=\ZZ$ and $\AAA=\{a_1=a_2=a_3=1\}$. Here $\Sigma(P,\AAA)=\Sigma_{\PP^2}$. The set $\GG=\{\AAA,\AAA\setminus\{a_1\},\AAA\setminus\{a_2\},\AAA\setminus\{a_3\}\}$ corresponds to the subfan $\Sigma=\Sigma(P,\AAA)(1)\cup\{0\}$. Links in this case are precisely the pairs $(a_i,\{a_j\})$, $i\ne j$. None of these links is a $\GG$-link. Thus the fan $\Sigma$ is not strongly regular. In general, Proposition \[1-sceleton\] and Property $(P3)$ provide a criterion for the set $\GG=\{\AAA,\AAA\setminus\{a_1\},\ldots,\AAA\setminus\{a_r\}\}$ to be connected. Toric varieties homogeneous under semisimple group {#s7} ================================================== In [@AG], a classification of toric varieties that are homogeneous under an action of a semisimple linear algebraic group is obtained. Let us present this classification in terms of Proposition \[propnm\]. Consider a quasiaffine variety $${\mathcal{X}}={\mathcal{X}}(n_1,\dots,n_m) := (\KK^{n_1}\setminus \{0\} )\times\dots\times(\KK^{n_m}\setminus\{0\})$$ with $n_i\geq 2$. The group $G=G_1\times\ldots\times G_m$, where every component $G_i$ is either $\operatorname{SL}(n_i)$ or $\operatorname{Sp}(n_i)$, and $n_i$ is even in the second case, acts on ${\mathcal{X}}$ transitively and effectively. Let $\SS=(\KK^{\times})^m$ be an algebraic torus acting on ${\mathcal{X}}$ by componentwise scalar multiplication, and $$p \, \colon \, {\mathcal{X}}\, \to \ {\mathcal{Y}}\ := \ \PP^{n_1-1}\times\dots\times\PP^{n_m-1}$$ be the quotient morphism. Fix a closed subgroup $S\subseteq\SS$. The action of the group $S$ on ${\mathcal{X}}$ admits a geometric quotient $p_X \colon{\mathcal{X}}\to X := {\mathcal{X}}/S$. 
The variety $X$ is toric, it carries the induced action of the quotient group $\SS/S$, and there is a quotient morphism $p^X \colon X \to {\mathcal{Y}}$ for this action closing the commutative diagram $$\xymatrix{ {\mathcal{X}}\ar[dr]_p \ar[rr]^{p_X} & & X \ar[dl]^{p^X} \\ & {\mathcal{Y}}& }$$ The induced action of the group $G$ on $X$ is transitive and locally effective. We say that the $G$-variety $X$ is obtained from ${\mathcal{X}}$ by *central factorization*. By [@AG Theorem 1.1], every toric variety with a transitive action of a semisimple group can be obtained this way. The above diagram of quotient morphisms of homogeneous spaces gives rise to the diagram of homomorphisms of divisor class groups $$\xymatrix{ \{0\} & & P \ar[ll] \\ & \ZZ^m \ar[ul] \ar[ur] & }$$ It shows that an admissible collection corresponding to the variety $X$ is obtained as the projection of the collection corresponding to the variety ${\mathcal{Y}}$. This way we obtain \[prophomss\] Toric varieties $X$ homogeneous under an action of a semisimple linear algebraic group are in bijection with pairs $(P,\AAA)$, where 1. $P$ is an abelian group; 2. $\AAA=\{a_1\,(n_1 \, \text{times}),\ldots,a_m\,(n_m \, \text{times})\}$ with some $n_1\ge 2,\ldots,n_m\ge 2$, and the elements $a_1,\ldots,a_m$ generate the group $P$. A variety $X$ represented by a pair $(P,\AAA)$ is an open toric subset in the variety $X(P,\AAA)$ corresponding to the connected set $\GG$ of generating collections in $\AAA$ such that every collection $\BBB$ in $\GG$ contains at least one element from each of the $m$ groups of elements in $\AAA$. It is easy to see that the variety $X$ coincides with $X(P,\AAA)$ if and only if the elements $\{a_1,\ldots,a_m\}\setminus\{a_i\}$ do not generate the semigroup $A$ generated by $\{a_1,\ldots,a_m\}$ for any $i=1,\ldots,m$. Let $P=\ZZ/2\ZZ\oplus\ldots\oplus\ZZ/2\ZZ$ ($m$ times) and $a_i=(\overline{0},\ldots,\overline{1},\ldots,\overline{0})$ with $\overline{1}$ at the $i$th place. 
Then the variety $X$ is quasiaffine and coincides with $X(P,\AAA)$ for any $n_1,\ldots,n_m$. In [@AG Proposition 4.5], one can find a description of the fan $\Sigma^{\GG}$, where $\GG$ is as in Proposition \[prophomss\]. Toric varieties $X_{\Sigma}$, where a fan $\Sigma$ contains some fan $\Sigma^{\GG}$ as a subfan, provide examples of embeddings with small boundary of homogeneous spaces of semisimple groups, see [@AH] for details. Classify toric varieties homogeneous under linear algebraic groups. Homogeneous toric varieties {#s8} =========================== We recall that a toric variety $X$ is called homogeneous if the automorphism group $\operatorname{Aut}(X)$ acts on $X$ transitively. \[thom\] Let $X$ be a non-degenerate homogeneous toric variety. Then there exists an open toric embedding $X\subseteq X'$ into a maximal $S$-homogeneous toric variety $X'$ with ${\operatorname{codim}_{X'}(X'\setminus X)\ge 2}$. Every homogeneous toric variety is quasiprojective. It suffices to show that every maximal $S$-homogeneous toric variety is quasiprojective. This follows from Corollary \[cormsrf\], [@BH Corollary 10.3] and [@ADHL Theorem 2.2.2.6]. We begin the proof of Theorem \[thom\] with a preliminary result. \[tlin\] Let $X$ be a non-degenerate smooth toric variety and $x$ a point on $X$. Consider an effective divisor $D$ on $X$ whose support does not contain $x$. Then there is a $T$-invariant effective divisor $D'$ which is linearly equivalent to $D$ and whose support does not contain $x$. The divisor $D$ defines a line bundle $L\to X$ and $L$ admits a $T$-linearization, see e.g. [@ADHL Section 4.2.2] for details. In particular, the space of global sections $H^0(X,L)$ carries a structure of a rational $T$-module, and any vector in $H^0(X,L)$ is a sum of $T$-eigenvectors. Sections that represent effective divisors with support not passing through $x$ form a subspace $U$ in $H^0(X,L)$. By assumption, the subspace $U$ is proper. 
Hence there is a $T$-eigenvector $v$ in $H^0(X,L)\setminus U$. An effective $T$-invariant divisor on $X$ represented by $v$ is the desired divisor $D'$. Let $X$ be a non-degenerate homogeneous toric variety and $\Sigma$ the associated fan. Consider the set of rays $\Sigma(1)=\{\rho_1,\ldots,\rho_r\}$, the corresponding vector configuration $\{n_1,\ldots,n_r\}$ in the lattice $N$, and the lattice Gale transform $(P,\AAA)$. With any point $x\in X$ one associates the semigroup $C(x)$ in $\operatorname{Cl}(X)$ of classes of effective divisors on $X$ whose support does not contain $x$. For a point $x$ in the open $T$-orbit on $X$ the semigroup $C(x)$ coincides with the semigroup $C$ of all classes of effective divisors on $X$. Since $X$ is homogeneous, we have $C(x)=C$ for all points $x\in X$. By Lemma \[tlin\], the semigroup $C(x)$ equals the semigroup generated by classes of $T$-invariant prime divisors on $X$ which do not pass through $x$. Under identification of the group $P$ with $\operatorname{Cl}(X)$, the semigroup $C(x)$ coincides with the semigroup $\Gamma(\sigma)$, where $\sigma$ is a cone in $\Sigma$ associated with the $T$-orbit of the point $x$. It shows that for a homogeneous toric variety $X$ we have $\Gamma(\sigma)=A$ for every $\sigma\in\Sigma$, where $A$ is the semigroup generated by the collection $\AAA$. In particular, $\Gamma(\rho_i)=A$ for any $i=1,\ldots,r$. It means that the set $\{\rho_1,\ldots,\rho_r\}$ is suitable or, equivalently, the collection $\AAA$ is admissible. Finally, the condition $\Gamma(\sigma)=A$ implies that the fan $\Sigma$ is a subfan of the fan $\Sigma(P,\AAA)$. Equivalently, $X$ is an open toric subset of the maximal $S$-homogeneous toric variety $X'=X(P,\AAA)$. The condition $\Sigma(1)=\Sigma(P,\AAA)(1)$ implies ${\operatorname{codim}_{X'}(X'\setminus X)\ge 2}$. \[con\] Every non-degenerate homogeneous toric variety is $S$-homogeneous. 
In view of Theorem \[thom\], Conjecture \[con\] means that every toric variety $X_{\Sigma}$, where $\Sigma$ is a non-strongly regular subfan of a maximal strongly regular fan, is not homogeneous. Computations with low-dimensional toric varieties confirm the conjecture. [99]{} Ivan Arzhantsev and Ivan Bazhov. On orbits of the automorphism group on an affine toric variety. Central Eur. J. Math. 11 (2013), no. 10, 1713-1724 Ivan Arzhantsev, Ulrich Derenthal, Jürgen Hausen, and Antonio Laface. [*Cox rings*]{}. Cambridge Studies in Adv. Math. 144, Cambridge University Press, New York, 2015 Ivan Arzhantsev and Sergey Gaifullin. Homogeneous toric varieties. J. Lie Theory 20 (2010), no. 2, 283-293 Ivan Arzhantsev and Jürgen Hausen. On embeddings of homogeneous spaces with small boundary. J. Algebra 304 (2006), no. 2, 950-988 Ivan Arzhantsev, Jürgen Hausen, Elaine Herppich, and Alvaro Liendo. The automorphism group of a variety with torus action of complexity one. Moscow Math. J. 14 (2014), no. 3, 429-471 Ivan Arzhantsev and Polina Kotenkova. Equivariant embeddings of commutative linear algebraic groups of corank one. Documenta Math. 20 (2015), 1039-1053 Ivan Arzhantsev, Karine Kuyumzhiyan, and Mikhail Zaidenberg. Flag varieties, toric varieties, and suspensions: three instances of infinite transitivity. Sbornik: Math. 203 (2012), no. 7, 923-949 Ivan Bazhov. On orbits of the automorphism group on a complete toric variety. Beiträge zur Algebra und Geometrie 54 (2013), no. 2, 471-481 Florian Berchtold and Jürgen Hausen. Bunches of cones in the divisor class group - A new combinatorial language for toric varieties. Int. Math. Res. Notices 2004 (2004), no. 6, 261-302 David Cox. The homogeneous coordinate ring of a toric variety. J. Alg. Geom. 4 (1995), no. 1, 17-50 David Cox, John Little, and Henry Schenck. *Toric varieties*. Graduate Studies in Math., Vol. 124, American Mathematical Society, Providence, RI, 2011 Michel Demazure. 
Sous-groupes algebriques de rang maximum du groupe de Cremona. Ann. Sci. Ecole Norm. Sup. 3 (1970), 507–588 Adrien Dubouloz. The cylinder over the Koras-Russell cubic threefold has a trivial Makar-Limanov invariant. Transform. Groups 14 (2009), no. 3, 531-539 William Fulton. *Introduction to toric varieties*. Annals of Mathematics Studies, Vol. 131, Princeton University Press, Princeton, NJ, 1993 Alvaro Liendo. $\GG_a$-actions of fiber type on affine $\TT$-varieties. J. Algebra 324 (2010), no. 12, 3653-3665 Benjamin Nill. Complete toric varieties with reductive automorphism group. Math. Z. 252 (2006), no. 4, 767-786 Tadao Oda. *Convex bodies and algebraic geometry: an introduction to toric varieties*. A Series of Modern Surveys in Math. 15, Springer Verlag, Berlin, 1988 Tadao Oda and Hye Sook Park. Linear Gale transforms and Gel’fand-Kapranov-Zelevinskij decompositions. Tohoku Math. J. (2) 43 (1991), no. 3, 375-399 [^1]: The research was supported by the grant RSF-DFG 16-41-01013
--- abstract: 'We propose an experiment based on a Bose-Einstein condensate interferometer for strongly constraining fifth-force models. Additional scalar fields from modified gravity or higher dimensional theories may account for dark energy and the accelerating expansion of the Universe. These theories have led to proposed screening mechanisms to fit within the tight experimental bounds on fifth-force searches. We show that our proposed experiment would greatly improve the existing constraints on these screening models by many orders of magnitude, entirely eliminating the remaining parameter space of the simplest of these models.' author: - Daniel Hartley - 'Christian K[ä]{}ding' - Richard Howl - Ivette Fuentes title: 'Quantum-enhanced screened dark energy detection' --- General relativity (GR) has remained a tremendously successful theory, producing accurate physical predictions consistent with the barrage of experiments and observations conducted over the last century. Despite this success, there are still many open problems within GR and apparent limitations of the theory itself. Amongst modified theories of gravity aiming to address these problems, scalar-tensor theories (e.g. Brans-Dicke theory [@Brans1961], see also [@Fujii2003]) are some of the most widely studied. Modified theories of gravity like $f(R)$-gravity can additionally be shown to be equivalent to scalar-tensor theories, and higher dimensional theories (e.g. string theory) predict the existence of effective scalar field modes in 4-dimensional spacetime due to compactifications of the extra dimensions [@Wehus2002]. Modifications of gravity gained even greater attention after the accelerated expansion of the Universe was discovered [@Perlmutter1998; @Riess1998] and the puzzle of dark energy (DE) - the energy that supposedly drives this expansion - arose. Consequently, there have been several proposed explanations for the nature of DE based on scalar-tensor theories (see e.g. 
[@Clifton2011; @Joyce2014] for an overview of models). Some of these models are predicted to cause a fifth force, which is in contradiction with observations and experiments [@Dickey1994; @Adelberger2003; @Kapner2007]. Consequently, some of these models have already been ruled out by observations [@Ishak2018]. However, models with a so-called “screening mechanism” [@Burrage2017] have features that suppress the effects of the additional scalar fields in regions of high matter density. A screening mechanism would allow additional scalar fields to contribute to dark energy while the coupling to matter as a fifth force still evades experimental constraints. Cold atom systems have proven to be invaluable tools in precision metrology. From practical applications such as ultra-high precision clocks [@Ludlow2015] to more fundamental experiments searching e.g. for deviations from the equivalence principle [@Schlippert2015; @Overstreet2018; @Becker2018], the high degree of control and low internal noise afforded by cold atom systems makes them an ideal testing ground. Many scalar-tensor theories assume a conformal coupling between the metric tensor and the scalar field, and cold atom systems have been found to be well suited to studying these particular models in experiments (e.g. in atom interferometers [@Copeland2014; @Jaffe2017]) and analogue gravity simulations [@Hartley2019]. Atom interferometry experiments currently provide the tightest constraints on some of these models [@Sakstein2016; @Burrage2017]. In this Letter, we propose using a guided Bose-Einstein condensate (BEC) interferometer scheme to further constrain these conformally coupled screened scalar field models. “Guided” is used in this context to refer to atoms held in a trap for all or most of the interferometer scheme, rather than being in free fall. For this scheme, we consider a guided BEC interferometer as currently demonstrated in experiments. 
The main advantage of this scheme is a longer integration time: a trapped BEC can be held near a source object for much longer than atoms in a ballistic trajectory. We show that the constraints on the above screened scalar field models could be improved by many orders of magnitude. The models we consider here come from scalar-tensor theories of gravity [@Fujii2003]. As stated above, an additional scalar field $\Phi$ may be coupled to the metric tensor conformally in these theories, such that ordinary matter fields evolve according to the conformal metric $$\tilde{g}_{\mu\nu}=A^2\left(\Phi\right)g_{\mu\nu}$$ for some conformal factor $A^2\left(\Phi\right)$, where $g_{\mu\nu}$ is the normal GR metric. The equilibrium state of the $\Phi$ field is determined by minimising an effective potential [@Joyce2014; @Burrage2017; @Khoury2003] $$V_\text{eff}\left(\Phi\right)=V\left(\Phi\right)+A\left(\Phi\right)\rho,$$ where $V\left(\Phi\right)$ is the self-interaction potential of the model and $\rho$ is the ordinary matter density. We specifically consider two prominent examples of fifth force models with screening mechanisms, namely the chameleon field [@Khoury20032; @Khoury2003] and the symmetron field (first described in [@Dehnen1992; @Gessner1992; @Damour1994; @Pietroni2005; @Olive2008; @Brax2010] and introduced with its current name in [@Hinterbichler2010; @Hinterbichler2011]). These models have been investigated in atom interferometry experiments as the thick wall of a vacuum chamber can shield its interior from outside effects [@Copeland2014; @Burrage2016], allowing the ultra-high vacuum to simulate the low-density conditions of empty space, resulting in long-range (and thus measurable) chameleon or symmetron forces. 
The chameleon field model is described by the conformal coupling [@Khoury2003] $$A^2\left(\Phi\right)=\exp\left[\Phi/M_c\right]$$ and the potential $$V\left(\Phi\right)=\Lambda^4\exp\left[\Lambda^{n}/\Phi^n\right].$$ The parameter $M_c$ determines the strength of the chameleon-matter coupling. This parameter is essentially unconstrained but is plausibly below the reduced Planck mass $M_{Pl}\approx 2.4\times10^{18}$ GeV/$c^2$. The self-interaction strength $\Lambda$ determines the contribution of the chameleon field to the energy density of the Universe, as the potential can be expanded as $V\approx\Lambda^4+\Lambda^{4+n}/\Phi^n$. This energy density can drive the accelerated expansion of the Universe observed today if $\Lambda=\Lambda_{DE}\approx 2.4$ meV. Finally, different choices of the parameter $n$ define different models, where $n\in\mathbb{Z}^+\cup\left\{x:-1<x<0\right\}\cup2\mathbb{Z}^-\backslash\left\{-2\right\}$ produces valid models with screening mechanisms. The two most commonly studied chameleon models are those where $n=1$ or $-4$ [@Burrage2017]. The effective mass of the chameleon field in equilibrium is determined by the minimum of its effective potential, i.e. $m_c^2=\left|\partial^2 V_\text{eff}/\partial\Phi^2\right|_{\Phi=\Phi_\text{min}}$. The position of the effective potential minimum (and thus effective mass) depends on the ordinary matter density $\rho$ (see Appendix \[app:veff\] for a detailed demonstration). In regions of low density, e.g. the intergalactic vacuum, the chameleon is light and mediates a long range force. In regions of high density, e.g. in a laboratory, the chameleon becomes massive and the force becomes short-ranged, making it challenging to detect with fifth force tests. The symmetron model has a conformal coupling and a potential given by [@Hinterbichler2010] $$A^2\left(\Phi\right)=\exp\left[\Phi^2/2M_s^2\right]$$ and $$V\left(\Phi\right)=-\frac{\mu^2}{2}\Phi^2+\frac{\lambda_s}{4}\Phi^4$$ respectively. 
As for the chameleon, $M_s$ gives the symmetron-matter coupling and $\lambda_s$ determines the self-interaction strength. Unlike the chameleon, the symmetron effective potential has a $\mathbb{Z}_2$ symmetry which can be spontaneously broken in environments of low matter density (see Appendix \[app:veff\]). This allows the symmetron to obtain a non-vanishing effective mass in regions where the ambient matter density is below the critical density $\rho^*=\mu^2M_s^2$. The symmetron field has a vanishing vacuum expectation value in high density regions ($\rho>\rho^*$) and thus a vanishing force. Consequently, the parameter $\mu$ determines the scale of the symmetron-matter decoupling. ![A schematic diagram of the vacuum chamber overlaid on the field profile of a chameleon field around a spherical source object. The separation of the BEC components is greatly exaggerated.[]{data-label="Fig:setup"}](setupdiagram_test2.png){width="70mm"} We propose to use a BEC interferometer held near some source mass to constrain the chameleon and symmetron models (Fig. \[Fig:setup\]). The lowest order gravitational effect of the source mass is a gravitational redshift, which manifests as a position dependent global phase. The lowest order potential fifth force effect is a modification of this global phase by a position dependent value. This total global phase $\theta$ is given by (see Appendix \[app:chphase\]) $$\theta\left(r\right)=\frac{mc^{2}T}{2\hbar}\left[\frac{r_{s}}{r}-2\log{A\left(\Phi\left(r\right)\right)}\right]$$ where $r_s$ is the Schwarzschild radius of the source object, $m$ is the mass of each atom in the BEC, $T$ is the time and $A$ is the conformal factor defined as above. A BEC coherently split into parts would measure the phase gradient and thus field gradient in an interference measurement. The other contributions from the environment (e.g. 
the gravity of the Earth) could be subtracted with differential measurements or a dual interferometer scheme where measurements are performed near to and far from the source object. BEC-based interferometers are not a new concept, and have already been proposed and demonstrated (e.g. [@Shin2004; @Debs2011; @Berrada2013; @Muntinga2013; @McDonald2014], see also [@Cronin2009] and references therein). Coherent splitting of a BEC into spatially separated clouds has been implemented both with atom chips [@Shin2004; @Albiez2005; @Berrada2013] (chips printed with an electrode structure allowing for the generation of magnetic and radio-frequency fields very close to an atom cloud) and in free space [@Naik2018]. Recombination and interference of the separated clouds is typically achieved by turning the trapping potential off and letting the clouds expand into each other as they fall [@Cronin2009]. An alternative scheme has been recently realised, where the two condensate parts are brought into contact via Josephson tunnelling through a low potential barrier [@Berrada2016]. This acts as a beam splitting operation, and the interference contrast is projected onto a mean atom number difference between the two wells. The optimal sensitivity of a measurement maximised over all possible measurement schemes is given by the quantum Fisher information (QFI) through the quantum Cramér-Rao bound (QCRB) [@Braunstein1994; @Paris2009; @Safranek2016]. Since the QFI gives the best possible sensitivity in estimating a parameter, optimised over all possible forms of measurement, the QCRB trivially follows as $$\left(\Delta\kappa\right)^2\ge\frac{1}{N H\left(\kappa\right)}$$ where $\Delta\kappa$ is the absolute error in estimating the parameter $\kappa$ due to some measurement, $H\left(\kappa\right)$ is the QFI for estimating the parameter $\kappa$ and $N$ is the number of measurements performed. 
Gaussian states cover the majority of easily experimentally accessible states such as coherent states, thermal states and squeezed states. Calculating the QFI for Gaussian states is simple as Gaussian states have a straightforward description in terms of their first and second moments [@Monras2006; @Pinel2013; @Safranek2015]. Let $\theta_-$ be the accumulated phase difference between two arms of a BEC interferometer. The QFI for estimating $\theta_-$ with a fully condensed $\mathcal{N}_0$-atom BEC is given by $$H\left(\theta_-\right)=\mathcal{N}_0$$ which scales with the standard quantum limit (SQL) [@KokLovettQIP]. ![Constraints for the parameter space of the chameleon model $n=1$. The brown area corresponds to constraints from atom interferometry and the green area to those from Eöt-Wash experiments [@Sakstein2016; @Burrage2017]. The dotted line indicates the DE scale $\Lambda = 2.4 ~\text{meV}$. New constraints predicted in this work are coloured in blue, where dark blue corresponds to 1000 runs and light blue corresponds to 10000 runs.[]{data-label="Fig:chameleon_n1"}](chameleon_n1_newer_text.png){width="80mm"} We now consider some experimental limitations to the schemes proposed in this Letter and use these to calculate the expected sensitivity of our schemes to constraining the chameleon and symmetron models. Typical BEC experiments condense clouds consisting of $10^4-10^6$ atoms, although condensates of up to $10^8$ atoms have been demonstrated with sodium [@vdStam2007], and up to $10^9$ atoms have been demonstrated with hydrogen [@Fried1998; @Greytak2000]. For estimating the sensitivity of this detector, we assume an initial BEC with $10^6$ atoms. The maximum integration time of our proposed detector is set by the mutual coherence time of the components of the split BEC. Mutual coherence times up to 500 ms have been demonstrated with atom chips [@Jo2007; @Zhou2016], so we will estimate the integration time of our detector to be 500 ms. 
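A back-of-envelope version of the resulting phase sensitivity follows directly from the QCRB: with $H\left(\theta_-\right)=\mathcal{N}_0$, the SQL-limited error is $\Delta\theta_-\ge 1/\sqrt{N\mathcal{N}_0}$. The sketch below is our own illustration (not part of the proposal's analysis), evaluating this bound for the atom number and run counts quoted above:

```python
import math

def sql_phase_error(n_atoms, n_runs):
    """Quantum Cramer-Rao bound on the phase difference theta_-,
    with QFI H = n_atoms (fully condensed BEC, standard quantum limit)."""
    return 1.0 / math.sqrt(n_runs * n_atoms)

n_atoms = 1e6  # assumed initial condensate size, as quoted in the text
for n_runs in (1000, 10000):
    dtheta = sql_phase_error(n_atoms, n_runs)
    print(f"{n_runs:>5} runs: delta theta >= {dtheta:.1e} rad")
```

For $10^6$ atoms and $10^3$ runs this gives a phase error of order $3\times10^{-5}$ rad, which sets the scale of the field-gradient signals accessible to the scheme.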
![Constraints for the value of $M_c$ for positive $n$ chameleon models at $\Lambda = 2.4 ~\text{meV}$. The brown area corresponds to constraints from atom interferometry, the green area to those from Eöt-Wash experiments, and the violet area represents constraints from astrophysics [@Sakstein2016; @Burrage2017]. New constraints predicted in this work are coloured in blue, where dark blue corresponds to 1000 runs and light blue corresponds to 10000 runs.[]{data-label="Fig:chameleon_varn"}](chameleon_varn_newer.png){width="80mm"} We do not consider the effects of technical noise in the trapping potential or other sources of experimental noise. While we expect that these sources of noise will contribute substantially to the achievable sensitivity of any detector, any substantive analysis will strongly depend on the details of the experimental implementation which we also leave to future work. The expected new bounds on the chameleon and symmetron models from an implementation of our proposed schemes are presented in Figs. \[Fig:chameleon\_n1\] - \[Fig:symmetron\_exclusion\]. Together with the numbers given above, we use the same experimental dimensions as in [@Hamilton2015] for ease of comparison. Specifically, we consider a spherical vacuum chamber of radius $L=5$ cm and vacuum pressure $6\times10^{-10}$ Torr. The source object is an aluminium sphere with a radius of $R=9.5$ mm. The effective distance between the object and the BEC is 8.8 mm, and we assume that the two parts of the BEC are split by 100 $\mu$m. With clever trap positioning, the distance between the object and the BEC may eventually be limited by the strength of the van der Waals or Casimir-Polder forces, but these are not relevant at the 10 mm scale. Fig. \[Fig:chameleon\_n1\] shows the predicted new constraints for one of the most popular screening models - the chameleon with $n=1$. 
There it can be seen that the BEC interferometry scheme would be able to improve existing constraints for this model by up to 3 orders of magnitude and close the gap between former interferometry and Eöt-Wash experiment constraints on the DE scale $\Lambda = 2.4 ~\text{meV}$. This amounts to ruling out the simplest chameleon model as a model of dark energy. ![Constraints for the parameter space of the symmetron model. The brown area corresponds to constraints from atom interferometry, the green area to those from Eöt-Wash experiments, and the violet area represents constraints from exoplanet astrophysics [@Sakstein2016; @Burrage2017]. New constraints predicted in this work for a BEC interferometer are coloured in blue. Different shades of blue correspond to $\mu=10^{-4}$, $10^{-4.5}$, $10^{-5}$ and $10^{-5.5}$ eV in natural units respectively.[]{data-label="Fig:symmetron_exclusion"}](symmetron_exclusion_onlyground_text.png){width="80mm"} Figure \[Fig:chameleon\_varn\] shows constraints for the value of $M_c$ over different values of positive $n$ chameleon models and for $\Lambda=\Lambda_\text{DE}=2.4 ~\text{meV}$. Our scheme would improve existing interferometry constraints by more than 2 orders of magnitude and close the gap to Eöt-Wash for $n\le5$. The predicted constraints on the parameter space of the symmetron model are shown in Fig. \[Fig:symmetron\_exclusion\]. We expect that our proposed experiment would improve the existing constraints by between 16 and 26 orders of magnitude in $\lambda$ across the entire accessible range of $M_s$. For a summary of the accessible areas of and detailed constraints on the parameter space for both the chameleon and symmetron models, see Appendix \[app:plotanalysis\]. While we have only considered chameleon and symmetron screening models, it should be stressed that constraints for any other type of conformally coupled scalar field could be obtained in a similar manner, e.g. 
for galileons [@Dvali2000] or dilatons [@StringDilaton; @Dilaton]. To bring this proposal into reality, future work will focus on optimising the experimental implementation. Any subsequent implementation of our proposal will either discover $n=1$ chameleon fields at the cosmological energy density or completely rule them out, along with greatly improving the bounds on other screened scalar models. The authors thank C. Burrage, B. Elder and A. L. Báez-Camargo for helpful comments, and C. Burrage and J. Sakstein for providing their constraint plot files. The authors acknowledge financial support from the Austrian Science Fund (FWF) through project code W 1210-N25, the University of Nottingham, and the John Templeton Foundation through Grant No. 58745. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. Effective potentials for the chameleon and symmetron models {#app:veff} =========================================================== [0.45]{} ![$n=1$ chameleon effective potential for high (left) and low (right) ordinary matter densities plotted in green, with its components plotted in blue and orange.[]{data-label="Fig:veff_chameleon"}](ChameleonPotential_v2.jpeg "fig:"){width="\textwidth"} [0.45]{} ![$n=1$ chameleon effective potential for high (left) and low (right) ordinary matter densities plotted in green, with its components plotted in blue and orange.[]{data-label="Fig:veff_chameleon"}](ChameleonPotentialHigh.jpeg "fig:"){width="\textwidth"} ![Symmetron effective potential for high (blue) and low (orange) ordinary matter densities.[]{data-label="Fig:veff_symmetron"}](SymmetronPotential_v2.jpeg){width="45.00000%"} Fig. \[Fig:veff\_chameleon\] compares the chameleon effective potential $V_\text{eff}$ (in green) in high and low density environments for the $n=1$ chameleon. 
The effective potential is given to lowest order by $$V_\text{eff}\left(\Phi\right)=\frac{\Lambda^{4+n}}{\Phi^n}+\frac{\rho}{2M_c}\Phi+\mathcal{O}\left(\frac{\Phi^2}{M_c^2}\right).$$ The first two terms are plotted in blue and orange respectively for low (Fig. \[subFig:chameleonlow\]) and high (Fig. \[subFig:chameleonhigh\]) values of $\rho$. It is reasonable to ignore higher order terms in $\Phi/M_c$ as any fifth force effect measured on or near the Earth must be perturbative to be consistent with experimental observations. The effective mass of excitations of the chameleon field is given by $$m_c^2=\left|\frac{\partial^2V_\text{eff}}{\partial\Phi^2}\right|_{\Phi=\Phi_\text{min}}=2\Lambda^{5}\left(\frac{\rho}{2M_{c}\Lambda^{5}}\right)^{3/2}$$ for $n=1$, which clearly scales with $\rho$. Therefore, any chameleon force will be screened in high density environments. Fig. \[Fig:veff\_symmetron\] compares the symmetron effective potential for high (orange) and low (blue) density environments. The effective potential is given by $$V_\text{eff}\left(\Phi\right)=\frac{1}{2}\left(\frac{\rho}{M_s^2}-\mu_s^2\right)\Phi^2+\frac{\lambda_s}{4}\Phi^4,$$ which shows that the $\mathbb{Z}_2$ symmetry ($\Phi\rightarrow-\Phi$) is broken when the coefficient of the quadratic term in $\Phi$ is negative. The coefficient changes sign at the critical density $\rho^*=\mu_s^2M_s^2$. Above this density (Fig. \[Fig:veff\_symmetron\], orange curve), $V_\text{eff}$ has a single minimum at $\Phi=0$, so there is no fifth force. Below this density (Fig.
\[Fig:veff\_symmetron\], blue curve), the field acquires two degenerate, non-zero minima, around which excitations have the effective mass $$m_s^2=2\left(\mu_s^{2}-\frac{\rho}{M_{s}^{2}}\right).$$ Derivation of the lowest order effect of a conformally coupled screened scalar field on a bosonic field {#app:chphase} ======================================================================================================== We begin by modelling our BEC as an interacting massive scalar Bose field $\hat\Psi\left(\boldsymbol{x},t\right)$ in a covariant formalism to introduce the background metric in a natural way, following the approach of [@Fagnocchi2010; @Hartley2018a; @Hartley2019]. Note that we do not assume that the BEC has “relativistic” properties such as large excitation energies (i.e. comparable to the mass energy), high flow velocities (i.e. comparable to the speed of light) or a strong interaction strength, and will later explicitly make “non-relativistic” restrictions. Following the above references, we describe the evolution of the field operator $\hat\Psi$ with the Lagrangian density $$\label{eq:genlag} \mathcal{L}=-\sqrt{-g}\left\{\partial^{\mu}\hat{\Psi}^{\dagger}\partial_{\mu}\hat{\Psi}+\left(\frac{m^{2}c^{2}}{\hbar^{2}}+V\right)\hat{\Psi}^{\dagger}\hat{\Psi}+U\right\},$$ where $V$ is the external potential, $U$ is the interaction potential and $g_{\mu\nu}$ is the metric of the background (in general curved) spacetime with determinant $g$. As is standard in the BEC literature [@PitStrBEC; @PethSmBEC], we consider only the leading order $2$-particle contact interactions and approximate the interaction potential as $$U=\frac{\lambda}{2}\hat{\Psi}^\dagger\hat{\Psi}^\dagger\hat{\Psi}\hat{\Psi}.$$ The interaction strength $\lambda$ can be related to the s-wave scattering length $a_s$ by $$\lambda=8\pi a_s.$$ We can rewrite the field operator $\hat\Psi$ as $$\label{eq:nonrelfield} \hat\Psi=\hat\phi e^{imc^2t/\hbar}.$$ We will later make the assumption that time derivatives of $\hat\phi$ are small, i.e.
the excitations described by $\hat\phi$ have non-relativistic energies. The appropriate background metric near a sphere of radius $R$ and mass $M$ sourcing screening for the assumed screened scalar field has the line element $$\label{eq:lineelement} ds^2=e^{\zeta^2(r)}\left[-f\left(r\right)dt^2+f^{-1}\left(r\right)dr^2+r^2d\Omega^2\right]$$ where $f\left(r\right)=1-r_s/r$, $r_s=2GM/c^2$ is the Schwarzschild radius of the object and the conformal factor $A$ has been rewritten as $A^2\left(\Phi\right)=\exp\left[\zeta^2\left(\Phi\right)\right]$ for notational convenience. Eq. (\[eq:lineelement\]) reduces to the Schwarzschild metric when $\zeta^2\rightarrow0$. The gravitational effect of the Earth is ignored; it is assumed that this can be accounted for either with differential measurements with and without the mass, through a dual interferometer scheme, or simply by splitting the interferometer horizontally. We now convert this Lagrangian to a Hamiltonian density (for readability and ease of interpretation) and make the following assumptions:

1. $\left|\zeta^{2}\right|\ll1$,

2. $r_{s}\ll r$,

3. $\left|\partial_{t}\hat{\phi}\right|/c\ll\left|\partial_{i}\hat{\phi}\right|$ and

4. $\hbar^2\left|\partial_i\hat\phi^\dagger\partial_i\hat\phi\right|\ll m^2c^2\hat\phi^\dagger\hat\phi$,

where $i$ runs over spatial indices. To lowest order in $\zeta^2$ and $r_s/r$, the resulting Hamiltonian density is $$\label{eq:hamdens} \mathcal{H}=\frac{\hbar^{2}}{2m}\sum_{i}\partial_{i}\hat{\phi}^{\dagger}\partial_{i}\hat{\phi}+V_{eff}\hat{\phi}^{\dagger}\hat{\phi}+\frac{1}{2}\lambda_{NR}\hat{\phi}^{\dagger}\hat{\phi}^{\dagger}\hat{\phi}\hat{\phi},$$ where $$V_{eff}=V_{NR}+\frac{1}{2}mc^{2}\left[\zeta^{2}-\frac{r_{s}}{r_{avg}}\right].$$ The potentials have been rescaled in the form $$V_{NR}=\frac{V}{2m}\,,\,\lambda_{NR}=\frac{\lambda}{2m}$$ as these are the forms of the external potential and interaction strength that usually appear in the Gross-Pitaevskii equation (GPE) [@PitStrBEC; @PethSmBEC].
The interaction strength $\lambda_{NR}$ is usually written as $g$ (e.g. in [@PitStrBEC]), but we avoid this notation here to avoid confusion with the background spacetime metric. We have defined $r_{avg}$ as the mean distance of the BEC from the center of the source mass, and expanded $r_s/r$ as $$\frac{r_s}{r}=\frac{r_s}{r_{avg}+r'}=\frac{r_s}{r_{avg}}\left(1-\frac{r'}{r_{avg}}+\cdots\right).$$ We can neglect all terms except the first if the BEC trap geometry has a negligible extent along the direction of the source mass field gradient. For example, the BEC could be trapped with a cigar-shaped trapping potential oriented perpendicularly to that gradient. The total field $\hat\phi$ can be written in terms of momentum eigenmodes as [@PitStrBEC] $$\hat{\phi}\left(\boldsymbol{r},t\right)=\left[\Psi_{0}\left(\boldsymbol{r}\right)+\hat{\vartheta}\left(\boldsymbol{r},t\right)\right]e^{-i\mu t/\hbar},$$ where $\Psi_{0}$ corresponds to the momentum ground state, $\mu$ is the chemical potential and $\hat{\vartheta}$ contains all higher order modes. We make the Bogoliubov approximation and also assume that the excited modes of the field are negligibly occupied. If the potentials $V_{NR}$ and $\lambda_{NR}$ are stationary, then the equation of motion for $\Psi_{0}$ derived from the above Hamiltonian density is $$\left[-\frac{\hbar^{2}}{2m}\nabla^{2}+V_{eff}-\mu+\lambda_{NR}\left|\Psi_{0}\left(\boldsymbol{r}\right)\right|^{2}\right]\Psi_{0}\left(\boldsymbol{r}\right)=0.$$ This is the time-independent GPE with the potential replaced by the effective potential $V_{eff}$. Since the screened scalar field contribution to $V_{eff}$ is approximately constant across the width of the BEC, this GPE can be solved by splitting the chemical potential into $\mu=\mu_0+\mu_I$, where $\mu_0$ is the chemical potential when $V_{eff}\rightarrow V_{NR}$.
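The statement that a spatially constant contribution to $V_{eff}$ only shifts the chemical potential, leaving the condensate density unchanged, is easy to check numerically in the Thomas-Fermi limit (dropping the kinetic term); the trap, units and parameter values below are purely illustrative:

```python
import numpy as np

def tf_chemical_potential(V, dx, n_total, lam):
    """Find mu such that the 1D Thomas-Fermi density
    n(x) = max(mu - V(x), 0) / lam integrates to n_total (bisection)."""
    lo, hi = V.min(), V.max() + lam * n_total / dx
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        n = np.clip(mu - V, 0.0, None) / lam
        if n.sum() * dx < n_total:
            lo = mu
        else:
            hi = mu
    return mu, n

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
V = 0.5 * x**2                          # harmonic trap, arbitrary units
mu0, n0 = tf_chemical_potential(V, dx, 1000.0, 0.01)
offset = 3.7                            # constant screened-scalar shift
mu1, n1 = tf_chemical_potential(V + offset, dx, 1000.0, 0.01)
print(mu1 - mu0)                        # equals the offset: a pure phase shift
```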
The extra term is then given by $$\mu_I=\frac{1}{2}mc^{2}\left[\zeta^{2}-\frac{r_{s}}{r_{avg}}\right].$$ Thus, the lowest order effect on the BEC ground state is a shift in the chemical potential, i.e. a phase shift. Physically, this phase is the gravitational red-shift due to the source mass, and the lowest order contribution of the screened scalar field is a modification of this red-shift. It is also worth noting that this phase shift appears in both the ground state and all excited modes of the BEC in a basis independent way. Analysis of the constraint plots in the main text {#app:plotanalysis} ================================================= The constraints plotted in Figures 2-4 in the main text are derived from the quantum Cramér-Rao bound for estimating the phase difference in an interferometer. This bound is given by $$\Delta\theta_{-}\ge\frac{1}{\sqrt{NH\left(\theta_{-}\right)}}.$$ Assuming a null measurement, the bounds on the screening models are given by $$\frac{1}{\sqrt{NH\left(\theta_-\right)}}\ge\frac{mc^2T}{2\hbar}\left(\zeta^2\left(r_1\right)-\zeta^2\left(r_0\right)\right)$$ for phases measured at $r_0$ and $r_1$. Chameleon constraints, Figures 2 and 3 in the main text ------------------------------------------------------- There are three major sections in the BEC interferometer constraints in Figure 2: the negatively sloped section where $M_c/M_\text{pl}<10^{-10}$, the vertical lines bounding the region on the large $M_c$ side of the figure, and the positively sloped section between them. In the limit of an infinitely wide vacuum chamber, the $n=1$ chameleon field in the (non-perfect) vacuum has an effective mass $$m_{\infty}^{2}=2\Lambda^{5}\left(\frac{\rho_{\infty}}{2M_{c}\Lambda^{5}}\right)^{3/2}$$ where $\rho_\infty$ is the matter density of the vacuum.
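The $n=1$ expressions of this kind follow from minimising $V_\text{eff}$, since $\Phi_\text{min}=\sqrt{2M_c\Lambda^5/\rho}$ and the mass squared is $2\Lambda^5/\Phi_\text{min}^3$; a quick numerical cross-check in arbitrary (non-physical) units:

```python
import numpy as np

Lam, M_c, rho = 1.0, 50.0, 0.2          # arbitrary illustrative units

def v_eff(phi):
    """n = 1 chameleon effective potential (higher orders dropped)."""
    return Lam**5 / phi + rho * phi / (2.0 * M_c)

phi = np.linspace(1e-3, 100.0, 2_000_000)
phi_min_num = phi[np.argmin(v_eff(phi))]
phi_min_ana = np.sqrt(2.0 * M_c * Lam**5 / rho)   # from dV_eff/dphi = 0
m2_ana = 2.0 * Lam**5 * (rho / (2.0 * M_c * Lam**5)) ** 1.5

print(phi_min_num, phi_min_ana, m2_ana)
```

The numerical minimiser agrees with the analytic minimum, and the closed-form mass squared coincides with the curvature $2\Lambda^5/\Phi_\text{min}^3$ evaluated there.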
When the chameleon field is screened within the source mass, the constraint resulting from a null measurement is given by $$\frac{1}{\sqrt{NH\left(\theta_{-}\right)}}>\frac{mc^{2}T}{2\hbar}\sqrt{\frac{2\Lambda^{5}}{M_{c}}}\left(\frac{1}{\sqrt{\rho_{\infty}}}-\frac{1}{\sqrt{\rho_{obj}}}\right)Re^{m_{\infty}R/\hbar}\left|\frac{e^{-m_{\infty}r_{1}/\hbar}}{r_{1}}-\frac{e^{-m_{\infty}r_{0}/\hbar}}{r_{0}}\right|$$ where $\rho_\text{obj}$ is the density of the source object; this corresponds to the negatively sloped section of the Figure 2 constraints. For larger values of $M_c$, the infinite vacuum chamber approximation does not hold, as the Compton wavelength of the equilibrium chameleon becomes larger than the size of the vacuum chamber. In this case, the field equilibrium inside the vacuum chamber is instead described by [@Elder2016] $$\begin{aligned} \label{eqn:ChamBackground} \varphi_\infty\rightarrow\xi\left(n(n+1)\Lambda^{4+n}R^2\right)^{\frac{1}{n+2}} ,\end{aligned}$$ where $\xi=0.55$ is a fudge factor set by the chamber’s spherical geometry and vacuum density. The effective mass is then set by the radius of the vacuum chamber, $m_\infty\rightarrow\hbar/R_\text{vac}$, and the relevant constraint from a null measurement is $$\frac{1}{\sqrt{NH\left(\theta_{-}\right)}}>\frac{mc^{2}T}{2\hbar}\left(\xi\left[\frac{2\Lambda^{5}R_{\text{vac}}^{2}}{M_{c}^{3}}\right]^{1/3}-\sqrt{\frac{2\Lambda^{5}}{M_{c}\rho_{obj}}}\right)Re^{R/R_{\text{vac}}}\left|\frac{e^{-r_{1}/R_{\text{vac}}}}{r_{1}}-\frac{e^{-r_{0}/R_{\text{vac}}}}{r_{0}}\right|.$$ This corresponds to the positively sloped section of Figure 2.
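Which of the two chameleon masses is appropriate can be decided by comparing the equilibrium Compton wavelength with the chamber radius; a schematic sketch in natural units with $\hbar=c=1$ and placeholder parameter values (this selector is our paraphrase of the criterion described in the text, not a prescription from it):

```python
def m_infty_sq(lam5, m_c, rho_vac):
    """Equilibrium n = 1 chameleon mass squared in an infinite chamber,
    natural units (hbar = c = 1); all values are placeholders."""
    return 2.0 * lam5 * (rho_vac / (2.0 * m_c * lam5)) ** 1.5

def effective_mass(lam5, m_c, rho_vac, r_vac):
    """Use the infinite-chamber mass unless its Compton wavelength
    exceeds the chamber radius; then the equilibrium is set by the
    chamber and m_infty -> 1 / r_vac."""
    m = m_infty_sq(lam5, m_c, rho_vac) ** 0.5
    return m if 1.0 / m < r_vac else 1.0 / r_vac

print(effective_mass(1.0, 1e-2, 0.2, 5.0))  # heavy field: infinite chamber
print(effective_mass(1.0, 1e8, 0.2, 5.0))   # light field: chamber-limited
```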
The vertical section occurs when both the massive source and the vacuum in the chamber are screened, which leads to a $\Lambda$-independent field profile and $$\frac{1}{\sqrt{NH\left(\theta_{-}\right)}}>\frac{mc^{2}T}{2\hbar}\left(\frac{\rho_{obj}R^{3}}{3M_{c}^{2}}\right)e^{R/R_{\text{vac}}}\left|\frac{e^{-r_{1}/R_{\text{vac}}}}{r_{1}}-\frac{e^{-r_{0}/R_{\text{vac}}}}{r_{0}}\right|.$$ The horizontal boundaries in Figure 3 for small values of $n$ result from the source and the vacuum both being screened. For larger values of $n$, the background field profile given in Eq. (\[eqn:ChamBackground\]) is used. Symmetron constraints, Figure 4 in the main text ------------------------------------------------ The value of $\mu_s$ to which these constraints apply is limited by the geometry of the proposed experiment, as the Compton wavelength in low density regions is approximately $1/\mu_s$. For the field to evolve to its vacuum minimum within the chamber, the Compton wavelength must be smaller than the vacuum chamber radius. However, if the Compton wavelength is too small then the field is Yukawa suppressed. The value of $\mu_s$ that this proposed experiment can constrain is restricted by these two conditions to $$10^{-5.5}\text{ eV}\lesssim\mu_s\lesssim10^{-4}\text{ eV}$$ in natural units. An object is screened from the symmetron force when its density is above the critical density $\rho_*=\mu_s^2M_s^2$. The region in $M_s$ that our proposed experiment would constrain is the region where this critical density lies between the densities of the source object and the surrounding vacuum. These two density restrictions cause the sharp sides of the excluded regions in both our predicted excluded regions and the atom interferometry exclusion regions in Figure 4. The curve in the high $M_s$ section of the predicted excluded regions is the area where the critical density and the source object density become comparable.
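The two density conditions translate into a window of $M_s$ values for each $\mu_s$, namely $\rho_\text{vac}<\mu_s^2M_s^2<\rho_\text{obj}$; a minimal sketch with placeholder densities in matched units:

```python
def symmetron_m_s_window(mu_s, rho_obj, rho_vac):
    """Range of M_s for which the critical density rho* = mu_s^2 M_s^2
    lies between the vacuum and the source-object densities.
    All quantities must be supplied in matched (e.g. natural) units."""
    return (rho_vac / mu_s**2) ** 0.5, (rho_obj / mu_s**2) ** 0.5

# Placeholder densities; only the rho_vac < rho* < rho_obj logic matters.
m_lo, m_hi = symmetron_m_s_window(mu_s=1e-4, rho_obj=2.7e3, rho_vac=6.5e-14)
print(m_lo, m_hi)
```

Lowering $\mu_s$ shifts the whole window to larger $M_s$, which is why the differently shaded regions in Figure 4 slide across the $M_s$ axis.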
The peak in the low $M_s$ section of the $\mu_s=10^{-4}$ eV excluded region is caused by a resonance where the Compton wavelength of the symmetron field matches the distance from the object to the BEC. The wavefunctions of atoms in a BEC are (in the ideal case) spread over the width of the BEC and all overlap. As a first approximation, we therefore consider the BEC to be a region of uniform density rather than a collection of discrete objects. The density of the BEC lies between that of the vacuum and the source object. When the BEC density is below the critical density, it does not substantially modify the symmetron field profile. When the BEC density is above the critical density, there should in principle be a dip in the symmetron field profile. However, the Compton wavelength of the symmetron is far greater than the width of the BEC in this entire section of the parameter space, so the BEC again does not substantially affect the symmetron field profile. Hence, whether or not the BEC is screened does not play a role in determining the region of parameter space excluded by our proposed experiment.
The full bound is given by $$\begin{split}\frac{1}{\sqrt{NH\left(\theta_{-}\right)}}>\frac{mc^{2}T}{2\hbar} & \frac{\mu_s^{2}}{\lambda_s M_{s}^{2}}\left(1-\frac{\rho_{\infty}}{\mu_s^{2}M_{s}^{2}}\right)\times\\ & \left(2\Gamma\left[\frac{e^{-m_{out}r_{0}}}{r_{0}}-\frac{e^{-m_{out}r_{1}}}{r_{1}}\right]+\Gamma^{2}\left[\frac{e^{-2m_{out}r_{1}}}{r_{1}^{2}}-\frac{e^{-2m_{out}r_{0}}}{r_{0}^{2}}\right]\right) \end{split}$$ where $$\Gamma=Re^{m_{out}R}\frac{m_{in}R-\tanh\left(m_{in}R\right)}{m_{in}R+m_{out}R\tanh\left(m_{in}R\right)}$$ and $$m_{i}^{2}=2\left(\mu_s^{2}-\frac{\rho_{i}}{M_{s}^{2}}\right).$$
--- abstract: 'We present high resolution numerical simulations of the colliding wind system $\eta$ Carinae, showing accretion onto the secondary star close to periastron passage. Our hydrodynamical simulations include self gravity and radiative cooling. The smooth stellar winds collide and develop instabilities, mainly the non-linear thin shell instability, and form filaments and clumps. We find that a few days before periastron passage the dense filaments and clumps flow towards the secondary as a result of its gravitational attraction, and reach the zone where we inject the secondary wind. We run our simulations for the conventional stellar masses, $M_1=120 \rmModot$ and $M_2=30 \rmModot$, and for a high mass model, $M_1=170 \rmModot$ and $M_2=80 \rmModot$, that was proposed to better fit the history of giant eruptions. As expected, the simulation results show that the accretion process is more pronounced for a more massive secondary star.' author: - | Amit Kashi$^{1,2,3}$[^1]\ $^{1}$Physics Department, Ariel University, Ariel, POB 3, 40700, Israel\ $^{2}$Physics Department, Technion – Israel Institute of Technology, Technion City, Haifa 3200003, Israel\ $^{3}$Minnesota Institute for Astrophysics, University of Minnesota, 116 Church St. SE. Minneapolis, MN 55455, USA\ date: 'Accepted 2016 September 07. Received 2016 September 07; in original form 2016 July 21' title: Accretion at the periastron passage of Eta Carinae --- \[firstpage\] accretion, accretion discs — stars: winds, outflows — stars: individual ($\eta$ Car) — binaries: general — hydrodynamics INTRODUCTION {#sec:intro} ============ The binary system $\eta$ Carinae is composed of a very massive star, hereafter – the primary (@Damineli1996 [@DavidsonHumphreys1997]), and a hotter and less luminous evolved main sequence star (hereafter – the secondary).
The system is unique in several aspects, such as a highly eccentric orbit (@Daminelietal1997; @Smithetal2004) and strong winds (@PittardCorcoran2002; @Akashietal2006), which together lead to a strong interaction every 5.54 years during periastron passage, known as the spectroscopic event. During the event many bands and spectral lines show fast variability (e.g., @Smithetal2000; @DuncanWhite2003; @Whitelocketal2004; @Stahletal2005; @Nielsenetal2007, @Daminelietal2008a,[-@Daminelietal2008b]; @Martinetal2010; @Mehneretal2010,[-@Mehneretal2011],[-@Mehneretal2015]; @Davidson2012; @Hamaguchietal2007,[-@Hamaguchietal2016]), and the x-ray intensity drops for a duration of a few weeks (@Corcoranetal2015 and references therein). Observations of spectral lines across the 2014.6 event indicate weaker accretion onto the secondary close to periastron passage compared to previous events, hinting at a decrease in the mass-loss rate from the primary star [@Mehneretal2015]. [@Soker2005b] suggested that clumps of size $>0.1$ per cent of the binary separation will be accreted onto the secondary near periastron passages. Accretion was then used to model the spectroscopic events (@Akashietal2006; @KashiSoker2009a). [@KashiSoker2009b] performed a more detailed calculation, integrating over time and volume the density within the Bondi-Hoyle-Lyttleton accretion radius around the secondary, and found that accretion should take place close to periastron and that the secondary should accrete $\sim 2 \times 10^{-6} \rmModot$ each cycle. Other papers referred to a “collapse” of the colliding winds region at the spectroscopic event. This term has remained ambiguous since it was first suggested by [@Daminelietal2008a], and could be interpreted either as accretion, a shell-ejection event [@Falcetaetal2005], or other possibilities (see @Teodoroetal2012).
[@Parkinetal2009] did, however, consider a collapse on to the surface of the secondary star, and developed a model that gives accretion of $\sim 7 \times 10^{-8} \rmModot$ per cycle. [@Parkinetal2011] performed AMR simulations of the colliding winds, but did not obtain accretion. However, when performing stationary colliding winds simulations at the time of periastron, their results showed an unstable wind, mainly as a result of the non-linear thin shell instability [@Vishniac1994], and clumps were formed and reached up to a very close distance from the secondary. They also suggested that obtaining clumps that fall towards the secondary is resolution-dependent. [@Akashietal2013] conducted 3D hydrodynamical numerical simulations using the `VH-1` code to study accretion in $\eta$ Car. They found that a few days before periastron passage clumps of gas are formed due to instabilities in the colliding winds structure, and some of these clumps flow towards the secondary. The clumps came as close as one grid cell from the secondary wind injection zone, implying accretion. In their simulations, however, although the gravity of the secondary star was included, self-gravity of the wind was not, and the resolution was too low to see the accretion itself. [@Maduraetal2013] used an SPH simulation to model the colliding winds. Though suggesting that a collapse may occur, their results never showed any collapse or accretion. Recent numerical simulations of the periastron passages (e.g., @Maduraetal2015; @Clementeletal2015a, [-@Clementeletal2015b]) were interested in other aspects, and did not find accretion to take place near periastron passages. In this work we take a step forward, and use one of the best numerical tools available to run advanced simulations in order to test whether accretion takes place, and to what extent. In section \[sec:simulation\] we describe the numerical simulation.
Our results, showing accretion, are presented in section \[sec:results\], followed by a summary and discussion in section \[sec:summary\]. THE NUMERICAL SIMULATIONS {#sec:simulation} ========================= We use version 4.3 of the hydrodynamic code `FLASH`, originally described by [@Fryxell2000]. Our 3D Cartesian grid extends over $(x,y,z)= \pm 8 \AU$, with the secondary fixed at the center, orbited by the primary. Our initial conditions are set $50$ days before periastron. We place the secondary in the center of the grid and send the primary on a Keplerian orbit with eccentricity $e=0.9$. We use five levels of refinement with better resolution closer to the center. The length of the smallest cell is $1.18 \times 10^{11} \rm{cm}$ ($\simeq 1.7\rmRodot$). This finest resolution covers a sphere of radius $\simeq 82 \rmRodot$ centered at $(0,0,0)$. The next level (half the finest resolution) continues up to a radius of $\simeq 320 \rmRodot$. This level of resolution covers the apex of the colliding winds from $\simeq 20 \days$ before periastron and on. As shown below, the instabilities that lead to accretion start only a few days before periastron, namely within this level of resolution. The highest resolution allows us to follow in great detail the gas as it reaches the injection zone of the secondary wind and is accreted onto the secondary. Our resolution here is the same as that of the detailed periastron simulation of [@Parkinetal2011], but we simulate the periastron passage rather than only stationary stars at periastron. To solve the hydrodynamic equations we use the `FLASH` version of the split PPM solver [@ColellaWoodward1984]. As there are different arguments in the literature regarding the masses of the two stars, we use two sets of stellar masses: 1. A *conventional mass model*, where the primary and secondary masses are $M_1=120 \rmModot$ and $M_2=30 \rmModot$, respectively [@Hillieretal2001]. 2.
A *high mass model* with $M_1=170 \rmModot$ and $M_2=80 \rmModot$ (@KashiSoker2010, where the model is referred to as the ‘MTz model’; @KashiSoker2015). The orbital period is $P=2023$ days, therefore the semi-major axis is $a=16.64 \AU$ for the conventional mass model, and $a=19.73 \AU$ for the high mass model. For both models the stellar radii are taken to be $R_1=180 \rmRodot$ and $R_2=20 \rmRodot$. The stars are modeled by an approximation of an $n=3$ polytrope normalized to their respective masses (this is done mainly for visualization purposes). Gravity is modeled using the Multigrid Poisson solver. The mass loss rates and wind velocities are $\dot{M}_1=6 \times 10^{-4} \msyr$, $v_1=500 \kms$ and $\dot{M}_2=10^{-5} \msyr$, $v_2=3\,000 \kms$, respectively. The winds are injected radially at their terminal speeds from a narrow sphere around each star. In the process of injecting the winds we neglect the spins of the stars, but the orbital motion of the primary relative to the fixed grid is taken into account. For the winds the adiabatic index is set to $\gamma=5/3$. Our initial conditions at $t=-50 \days$ set the entire grid (except the stars themselves) filled with the smooth undisturbed primary wind. We let the secondary wind blow for $8$ days while the system is stationary, to allow the secondary wind to propagate and to create the colliding winds structure on one side, and fill the grid on the other side. We then let the primary proceed on its Keplerian trajectory around the secondary. Table \[table:parameters\] summarizes the values of the model properties we use.
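For reference, the binary separation at any time near periastron follows from Kepler's equation; a sketch using the conventional-mass-model parameters listed in Table \[table:parameters\] (Newton iteration on the eccentric anomaly; this is standard celestial mechanics, not code from the simulation itself):

```python
import math

P, e, a = 2023.0, 0.9, 16.64        # days, eccentricity, AU

def separation(t_days):
    """Binary separation at time t relative to periastron (t = 0)."""
    M = 2.0 * math.pi * t_days / P                  # mean anomaly
    E = M                                           # eccentric anomaly
    for _ in range(50):                             # Newton iteration
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
    return a * (1.0 - e * math.cos(E))

print(separation(0.0))     # periastron: a(1 - e) = 1.664 AU
print(separation(-50.0))   # separation at the start of the run
```

The separation shrinks by roughly a factor of four over the last 50 days before periastron, which is why the simulation only needs to start then.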
  ------------- -------------------------- -------------------------- --------------------------
  Parameter     Meaning                    Conventional mass model    High mass model
  $P$           Orbital period             $2023 \days$               $2023 \days$
  $e$           Eccentricity               $0.9$                      $0.9$
  $a$           Semi-major axis            $16.64 \AU$                $19.73 \AU$
  $M_1$         Primary mass               $120 \rmModot$             $170 \rmModot$
  $M_2$         Secondary mass             $30 \rmModot$              $80 \rmModot$
  $R_1$         Primary radius             $180 \rmRodot$             $180 \rmRodot$
  $R_2$         Secondary radius           $20 \rmRodot$              $20 \rmRodot$
  $v_1$         Primary wind velocity      $500 \kms$                 $500 \kms$
  $v_2$         Secondary wind velocity    $3\,000 \kms$              $3\,000 \kms$
  $\dot{M}_1$   Primary mass loss rate     $6 \times 10^{-4} \msyr$   $6 \times 10^{-4} \msyr$
  $\dot{M}_2$   Secondary mass loss rate   $10^{-5} \msyr$            $10^{-5} \msyr$
  ------------- -------------------------- -------------------------- --------------------------

\[table:parameters\] We include radiative cooling based on solar composition from [@SutherlandDopita1993]. The problem with radiative cooling is that it limits the time step to be considerably smaller than the hydrodynamic time step limit imposed by the Courant condition. The post-shocked primary wind is cooler and denser than the post-shocked secondary wind. Therefore, its cooling time is much shorter, only a few seconds. At the contact discontinuity the primary and secondary winds mix, and the combined properties of the gas are closer to those of the post-shocked primary wind as it is much denser. This causes the cells of the contact discontinuity to cool radiatively very fast. Then a new layer of cool and dense gas is formed, which in turn mixes with the next layer of post-shocked secondary wind, causing it to cool to a few $\times 10^6 \K$, where the cooling function reaches high values. The numerical effect then propagates until the entire secondary wind cools. Some codes have developed delicate treatments for this problem (e.g., @Blondin1994).
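A safeguard of the kind we adopt below, capping the energy radiated per cell per time step at a fixed fraction of the local thermal energy, can be sketched as follows (the cooling rates here are placeholder numbers, not the actual [@SutherlandDopita1993] cooling function):

```python
import numpy as np

COOLING_CAP = 3e-3   # at most 0.3 per cent of E_thermal removed per step

def apply_cooling(e_thermal, cooling_rate, dt):
    """Subtract radiative losses, capped cell-by-cell at a fixed
    fraction of the thermal energy to avoid runaway numerical cooling.
    cooling_rate holds per-cell energy-loss rates (placeholder values)."""
    de = np.minimum(cooling_rate * dt, COOLING_CAP * e_thermal)
    return e_thermal - de

e = np.array([1.0, 1.0, 1.0])          # thermal energy per cell
rate = np.array([1e-6, 0.5, 50.0])     # slow, fast and runaway cells
print(apply_cooling(e, rate, dt=0.1))  # the cap tames the last two cells
```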
To avoid runaway numerical cooling we therefore limit the cooling of the gas, in each grid cell and in a single time step, to no more than $0.3$ per cent of the thermal energy. We found that this value allows the post-shocked secondary wind to stay at a temperature of $\approx 10^8 \K$, as inferred from X-ray observations. When we changed the limit from $0.3$ per cent to $0.03$ per cent there were no large changes in the resulting wind collision region, but increasing the value to $3$ per cent resulted in too much numerical cooling. Neither the cooling time nor the hydrodynamic time is constant in space or time. Our method is a simplification that allows the code to run with reasonable time steps while still permitting different amounts of cooling at different locations and times, according to the cooling function.

RESULTS {#sec:results}
=======

The colliding winds structure develops instabilities even during the $8$ days when the system is stationary. The shear flow shows the Kelvin-Helmholtz instability, creating its characteristic waves. Close to the apex, where the two winds hit each other head-on, the non-linear thin-shell instability is obtained. The instabilities are not the result of injected large perturbations, but rather are seeded by numerical noise and grow with time. As the distance between the stars decreases, the gravity of the secondary affects the colliding winds region more strongly. As can be seen in Figure \[fig:density\_slices\], the colliding winds region becomes highly unstable [@Akashietal2013], and dense clumps and filaments form. Some of these clumps and filaments flow towards the secondary. Some of them enter the injection zone of the secondary wind at $t \approx - 4 \days$ (marked by a black circle in Figure \[fig:density\_slices\]); this starts the accretion phase. Figure \[fig:density\_slices\] shows density maps in the orbital plane ($z=0$), for the conventional mass model ($M_1=120 \rmModot$ and $M_2=30 \rmModot$), at different times of the simulation.
Times are given with respect to periastron. The secondary is at the center of the grid, and the primary orbits it from the upper part of the figure to the bottom-left until periastron, and then to the bottom-right. At periastron the primary is exactly to the left of the secondary. The white circle shows the radius of the secondary. The secondary wind is injected between the white and black circles at its terminal velocity. We take the accretion condition to be that the dense primary wind reaches the injection region of the secondary wind. It may well be sufficient that the dense primary wind reaches a somewhat larger radius, as the acceleration zone of the secondary wind is not modeled here. According to our condition, accretion starts at $t \approx - 4$ days (4 days before periastron). The accretion process is expected to substantially disturb the acceleration of the secondary wind. However, as in this run we do not treat the response of the secondary star to the accreted gas, we simply let the injected secondary wind keep pushing the gas away. Therefore, after the dense primary wind reaches the acceleration zone of the secondary wind, this run is no longer an adequate representation of the system (we discuss the treatment of accretion below). We present the flow after accretion takes place for illustrative purposes. In Figure \[fig:density\_slices\_zoom\] we show a closer view of the accretion flow structure from two different directions. It can be seen that filaments flow in from different directions. Figure \[fig:density\_3d\] shows a 3D view of the accretion for the conventional mass model at four different times before periastron. It can be seen that the flow is not at all smooth; rather, the shocked primary wind forms many clumps and filaments. At $t = -8 \days$ the colliding winds structure can still be seen in the back in yellow. At this time non-linear filaments and clumps are starting to form. The four panels show how the instabilities progress until dense clumps form.
The Plateau–Rayleigh instability might also take place and create clumps out of the filaments, though our resolution is not sufficient to show it clearly. The secondary wind cannot reverse the inflow of some clumps and filaments, and these are accreted onto the injection zone of the secondary wind. We repeat the simulation for the high mass model (i.e., $M_1=170 \rmModot$ and $M_2=80 \rmModot$), keeping the orbital period and wind properties the same as in the previous simulation. The orbit has the same period and eccentricity, but the semi-major axis is larger, and consequently so is the periastron distance. We show the results in Figure \[fig:density\_slices\_high\_mass\]. As the gravitational potential of the secondary is deeper for the high mass model, it attracts the filaments and clumps more easily. Accretion therefore starts $\approx 7$ days before periastron, about 3 days earlier than for the conventional mass model. An interesting feature obtained in both simulations studied here is the direction of the initial accretion filament. It could have been anticipated that the accretion would either come from the front, i.e., the direction of the primary, where material “climbs” over the saddle between the potential wells of the two stars and falls into the potential well of the secondary, or from the back, in Bondi-Hoyle-Lyttleton style. Contrary to those expectations, the gas approaches the secondary from the sides, in random directions, as can be seen in Figure \[fig:density\_spherical\]. This indicates that clump and filament formation is the dominant process that facilitates the accretion and determines its direction, rather than just the shape of the gravitational potential or the initial velocities of the flow.
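For reference, the orbital geometry quoted in this section follows from Kepler's laws. The sketch below (not part of the simulation code) recovers the semi-major axes from the masses and period, and gives the binary separation as a function of time from periastron by solving Kepler's equation with Newton iteration:

```python
import math

def semi_major_axis(m1_msun, m2_msun, period_days):
    """Kepler's third law in solar units: a[AU]^3 = (M1+M2)[Msun] * P[yr]^2."""
    period_yr = period_days / 365.25
    return ((m1_msun + m2_msun) * period_yr ** 2) ** (1.0 / 3.0)

def separation(t_days, a_au, e=0.9, period_days=2023.0):
    """Separation r(t) in AU, with t measured from periastron:
    solve M = E - e*sin(E) for the eccentric anomaly E, then
    r = a * (1 - e*cos(E))."""
    M = 2.0 * math.pi * t_days / period_days   # mean anomaly
    E = M                                       # initial guess
    for _ in range(100):                        # Newton iteration
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
    return a_au * (1.0 - e * math.cos(E))

a_conv = semi_major_axis(120, 30, 2023)   # ~16.6 AU (conventional model)
a_high = semi_major_axis(170, 80, 2023)   # ~19.7 AU (high mass model)
r_peri = separation(0.0, a_conv)          # a*(1-e) ~ 1.66 AU at periastron
```

With $e=0.9$ the separation changes rapidly near periastron, which is why the accretion onset times (a few days before periastron) are so sensitive to the model masses.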
Even though our simulation does not model the reaction of the secondary and its wind to the accreted mass, nor its likely feedback on the accreted gas, we measured the amount of mass that reaches the secondary and find it to be of the order of $\approx 10^{-6} \msyr$, in agreement with the calculations of [@KashiSoker2009b].

SUMMARY AND DISCUSSION {#sec:summary}
======================

We performed detailed `FLASH` simulations of the colliding winds system close to periastron passage. The colliding wind region is prone to instabilities that lead to the non-linear formation of clumps and filaments, which are then accreted onto the secondary star. The formation of filaments and clumps can occur without self-gravity, as a result of, e.g., thermal instability. The free-fall (collapse) time of each clump under self-gravity is much longer than the duration of clump formation, indicating that self-gravity does not play a significant role in the formation of the clumps. Compared to the simulations of [@Akashietal2013] we obtain a more clumpy flow, a result of better resolution and of modeling radiative cooling. It is important to emphasize that solving the colliding winds and accretion problem requires high resolution, to resolve the colliding wind structure and the clumps that form and then flow towards the accretor (the secondary star in our case). A delicate treatment of numerical cooling is also essential, otherwise runaway cooling occurs (see section \[sec:simulation\]). The periastron-accretion model advocates the formation of dense blobs (clumps) in the post-shock primary wind layer of the winds colliding region [@Soker2005a; @Soker2005b]. Based on these early suggestions, there have been a few interpretations of observations of spectral lines across the periastron passage of $\eta$ Car as being emitted or absorbed by blobs in the winds colliding region (e.g., @KashiSoker2009c; @Richardsonetal2016).
Our simulations confirmed that the dense clumps are crucial to the onset of the accretion process. X-ray observations of $\eta$ Car show flares each cycle as the two stars approach periastron (@Davidson2002; @Corcoran2005; @Corcoranetal2010,[-@Corcoranetal2015]). [@MoffatCorcoran2009] suggested that the flares are the result of clumps forming in the post-shocked primary wind, interacting with the colliding winds region and compressing the hot gas in the post-shocked secondary wind. We find that accretion occurs even for a smooth primary (and secondary) wind, without numerically seeding artificial clumps. The colliding winds region is compressed in some regions by the instabilities, and that may be the cause of the flares. Seeding clumps in the primary wind would have made accretion occur even more easily. However, we showed here that even under ‘rough’ conditions – a smooth primary wind and no artificial shut-down of the secondary wind – accretion does occur. Accretion is obtained both for the high mass model $(M_1,M_2)=(170 \rmModot, 80 \rmModot)$ and the conventional mass model $(M_1,M_2)=(120 \rmModot, 30 \rmModot)$. For the high mass model it is easier to obtain accretion, as the stronger gravity of the secondary attracts the clumps more easily and does not let the secondary wind drive them away. We note that had we turned off the secondary wind in response to clumps reaching its injection cells, the accretion rate would have increased, and accretion would have continued until long after periastron passage. As the orbital separation increases, the accretion rate decreases [@KashiSoker2009b], and the secondary wind is expected to resume. Since accretion occurs for the present-day parameters of $\eta$ Car, it certainly occurred during the 1840s Great Eruption, when the mass loss rate from the primary was larger by orders of magnitude. The results therefore strengthen the accretion model for the Great Eruption (@Soker2001 [@KashiSoker2010]).
When running preliminary further simulations for the conventional mass model, we found that from the beginning of the accretion phase up to 50 days after periastron, $\approx 10^{-7} \rmModot$ reaches the injection zone of the secondary wind. The amount of mass that is actually accreted is difficult to estimate, as it requires *modeling* the response of the secondary to the mass that is being accreted, specifically how the secondary wind is affected by accretion. The number stated above for the accreted mass was derived assuming a minimal response. Since we know from observations that the X-ray radiation shuts down, the secondary wind must be reduced significantly. We would therefore expect the accreted mass to be larger than the value above, and possibly closer to the results of [@KashiSoker2009b]. In this paper, however, we do not assume anything about the response of the secondary to accretion, and present pure hydrodynamical results which by themselves show that accretion does occur. In a future paper we intend to use simulations to model the response of the secondary to the gas accreted onto it. Doing so will allow us to quantify the accreted mass and its dependence on the primary mass loss rate and other parameters. This will hopefully lead to a better understanding of observations of $\eta$ Car during the spectroscopic event, and of the differences between the last events.

Acknowledgements {#acknowledgements .unnumbered}
================

I thank Kris Davidson and Noam Soker for very helpful discussions and suggestions. I also thank an anonymous referee for comments that improved the paper. I acknowledge support provided by the National Science Foundation through grant AST-1109394. This work used computing resources at the University of Minnesota Supercomputing Institute (MSI), and the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575.

Akashi, M. S., Kashi, A., & Soker, N.
2013, , 18, 23
Akashi, M., Soker, N., & Behar, E. 2006, , 644, 451
Blondin, J. M. 1994, , 435, 756
Clementel, N., Madura, T. I., Kruip, C. J. H., & Paardekooper, J.-P. 2015a, , 450, 1388
Clementel, N., Madura, T. I., Kruip, C. J. H., Paardekooper, J.-P., & Gull, T. R. 2015b, , 447, 2445
Colella, P., & Woodward, P. R. 1984, Journal of Computational Physics, 54, 174
Corcoran, M. F. 2005, , 129, 2018
Corcoran, M. F., Hamaguchi, K., Liburd, J. K., et al. 2015, arXiv:1507.07961
Corcoran, M. F., Hamaguchi, K., Pittard, J. M., Russell, C. M. P., Owocki, S. P., Parkin, E. R., & Okazaki, A. 2010, , 725, 1528
Damineli, A. 1996, , 460, L49
Damineli, A., Conti, P. S., & Lopes, D. F. 1997, , 2, 107
Damineli, A., Hillier, D. J., Corcoran, M. F., et al. 2008a, , 384, 1649
Damineli, A., Hillier, D. J., Corcoran, M. F., Stahl, O., Groh, J. H., Arias, J., Teodoro, M., & Morrell, N. 2008b, , 386, 2330
Davidson, K., Dufour, R. J., Walborn, N. R., & Gull, T. R. 1986, , 305, 867
Davidson, K. 2002, The High Energy Universe at Sharp Focus: Chandra Science, 262, 267
Davidson, K. 2012, Eta Carinae and the Supernova Impostors, 384, 43
Davidson, K., Martin, J., Humphreys, R. M., et al. 2005, , 129, 900
Davidson, K., & Humphreys, R. M. 1997, , 35, 1
Duncan, R. A., & White, S. M. 2003, , 338, 425
Falceta-Gonçalves, D., Jatenco-Pereira, V., & Abraham, Z. 2005, , 357, 895
Fryxell, B., Olson, K., Ricker, P., et al. 2000, , 131, 273
Hamaguchi, K., Corcoran, M. F., Gull, T., et al. 2007, , 663, 522
Hamaguchi, K., Corcoran, M. F., Gull, T. R., et al. 2016, , 817, 23
Hillier, D. J., Davidson, K., Ishibashi, K., & Gull, T. 2001, , 553, 837
Kashi, A., & Soker, N. 2009a, , 397, 1426
Kashi, A., & Soker, N. 2009b, , 14, 11
Kashi, A., & Soker, N. 2009c, , 394, 923
Kashi, A., & Soker, N. 2010a, , 723, 602
Kashi, A., & Soker, N. 2016, Research in Astronomy and Astrophysics, 16, 014
Madura, T. I., Clementel, N., Gull, T. R., Kruip, C. J. H., & Paardekooper, J.-P. 2015, , 449, 3780
Madura, T. I., Gull, T. R., Okazaki, A. T., et al. 2013, , 436, 3820
Martin, J. C., Davidson, K., Humphreys, R. M., & Mehner, A. 2010, , 139, 2056
Mehner, A., Davidson, K., Ferland, G. J., & Humphreys, R. M. 2010, ApJ, 710, 729
Mehner, A., Davidson, K., & Ferland, G. J. 2011, , 737, 70
Mehner, A., Davidson, K., Humphreys, R. M., et al. 2012, , 751, 73
Mehner, A., Davidson, K., Humphreys, R. M., et al. 2015, , 578, A122
Moffat, A. F. J., & Corcoran, M. F. 2009, , 707, 693
Nielsen, K. E., Corcoran, M. F., Gull, T. R., Hillier, D. J., Hamaguchi, K., Ivarsson, S., & Lindler, D. J. 2007, , 660, 669
Parkin, E. R., Pittard, J. M., Corcoran, M. F., Hamaguchi, K., & Stevens, I. R. 2009, , 394, 1758
Parkin, E. R., Pittard, J. M., Corcoran, M. F., & Hamaguchi, K. 2011, , 726, 105
Pittard, J. M., & Corcoran, M. F. 2002, , 383, 636
Richardson, N. D., Madura, T. I., St-Jean, L., et al. 2016, , 461, 2540
Smith, N., Morse, J. A., Collins, N. R., & Gull, T. R. 2004, , 610, L105
Smith, N., Morse, J. A., Davidson, K., & Humphreys, R. M. 2000, , 120, 920
Soker, N. 2001, , 325, 584
Soker, N. 2005a, , 619, 1064
Soker, N. 2005b, , 635, 540
Stahl, O., Weis, K., Bomans, D. J., et al. 2005, , 435, 303
Sutherland, R. S., & Dopita, M. A. 1993, , 88, 253
Teodoro, M., Damineli, A., Arias, J. I., et al. 2012, , 746, 73
Vishniac, E. T. 1994, , 428, 186
Whitelock, P. A., Feast, M. W., Marang, F., & Breedt, E. 2004, , 352, 447

\[lastpage\]

[^1]: E-mail: <kashi@ariel.ac.il>
---
abstract: |
    We compare the expressiveness of two extensions of monadic second-order logic (MSO) over the class of finite structures. The first, counting monadic second-order logic (CMSO), extends MSO with first-order modulo-counting quantifiers, allowing the expression of queries like “the number of elements in the structure is even”. The second extension allows the use of an additional binary predicate, not contained in the signature of the queried structure, that must be interpreted as an arbitrary linear order on its universe, obtaining order-invariant MSO. While it is straightforward that every CMSO formula can be translated into an equivalent order-invariant MSO formula, the converse had not yet been settled. Courcelle showed that for restricted classes of structures both order-invariant MSO and CMSO are equally expressive, but conjectured that, in general, order-invariant MSO is stronger than CMSO. We affirm this conjecture by presenting a class of structures that is order-invariantly definable in MSO but not definable in CMSO.
address:
- 'Mathematische Grundlagen der Informatik, RWTH Aachen, Germany'
- 'Department of Computer Science, University of Auckland, New Zealand'
author:
- 'T. Ganzow'
- 'S. Rubin'
title: 'Order-Invariant MSO is Stronger than Counting MSO in the Finite'
---

[Tobias Ganzow]{} [Sasha Rubin]{}

Introduction
============

Linear orders play an important role in descriptive complexity theory since certain results relating the expressive power of logics to complexity classes, e.g., the Immerman-Vardi Theorem that captures PTIME, only hold for classes of linearly ordered structures. Usually, the order only serves to systematically access all elements of the structure, and consequently to encode the configurations of a step-wise advancing computation of a Turing machine by tuples of elements of the structure.
In these situations we do not actually want to make statements about the properties of the order, but merely want to have an arbitrary linear order available to express the respective coding techniques. Furthermore, when actually working with finite structures in an algorithmic context, e.g., when evaluating queries in a relational database, we are in fact working on an implicitly ordered structure: although relations in a database are modelled as *sets* of tuples, the relations are nevertheless stored as *ordered sequences* of tuples in memory or on a disk. As this linear order is always available (though, as in the case of databases, it is implementation-dependent and may even change over time as tuples are inserted or deleted), we could allow queries to make use of an additional binary predicate that is interpreted as a linear order on the universe of the structure, but require the outcome of the query not to depend on the actual ordering, i.e., to be *order-invariant*. Precisely, given a $\tau$-structure $\StrA$, we allow queries built over an expanded vocabulary $\tau \dunion \{<\}$, and say that a query $\phi$ is *order-invariant* if $(\StrA,<_1) \models \phi \ \Longleftrightarrow\ (\StrA,<_2) \models \phi$ for all possible relations $<_1$ and $<_2$ linearly ordering $A$. Using Ehrenfeucht-Fraïssé games for MSO, one can see that MSO on sets (i.e., structures over an empty vocabulary) is too weak to express that the universe contains an even number of elements. However, this is possible if the universe is linearly ordered: simply use the MSO sentence stating that the maximal element is contained in the set of elements on even positions in the ordering. Obviously, such a sentence is order-invariant since rearranging the elements does not affect its truth value.
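The even-cardinality example can be checked mechanically. The sketch below is purely illustrative (structures over the empty vocabulary are represented by plain Python iterables, orderings by tuples): it evaluates the sentence “the $<$-maximal element lies on an even position” under every linear order of a small universe, confirming that its truth value depends only on the cardinality of the universe:

```python
from itertools import permutations

def max_on_even_position(order):
    """Evaluate the order-invariant sentence from the text: the
    <-maximal element is contained in the set of elements on even
    positions (positions counted 1, 2, ... along <)."""
    position = {a: i + 1 for i, a in enumerate(order)}
    maximal = order[-1]                 # last element along the ordering
    return position[maximal] % 2 == 0

def truth_values(universe):
    """Set of truth values of the sentence over all linear orders."""
    return {max_on_even_position(p) for p in permutations(universe)}

assert truth_values(range(4)) == {True}    # |A| even: holds for every order
assert truth_values(range(5)) == {False}   # |A| odd: fails for every order
```

A single truth value across all permutations is exactly the order-invariance condition $(\StrA,<_1) \models \phi \Leftrightarrow (\StrA,<_2) \models \phi$, here verified by brute force.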
Gurevich uses this observation to show that the property of Boolean algebras having an even number of atoms, although not definable in FO, is order-invariantly definable in FO (simulating the necessary MSO-quantification over sets of atoms by FO-quantification over the elements of the Boolean algebra). If we explicitly add modulo-counting to MSO, e.g., via modulo-counting first-order quantifiers such as “there exists an even number of elements $x$ such that …”, we obtain *counting monadic second-order logic* (CMSO), and the question naturally arises as to whether there are properties not expressible in CMSO that can be expressed order-invariantly in MSO. In fact, a second separation example due to Otto gives a hint in that direction. The class of structures presented in [@Otto:epsilon-invariance] even separates order-invariant FO from FO extended by arbitrary unary generalised quantifiers, in particular modulo-counting quantifiers, and exploits the idea of a part of the structure that is only meaningfully usable for queries in the presence of a linear order (or, as actually proven in the paper, in the presence of an arbitrary choice function). The expressiveness of CMSO has been studied, e.g., in [@Courcelle:MSO-I], where it is mainly compared to recognisability, and in [@Courcelle:MSO-X] it is shown that, on the class of forests, order-invariant MSO is no more expressive than CMSO. As pointed out in [@BenSeg:TameStructs], this can be generalised using results in [@Lapoire98] to classes of structures of bounded tree-width. But still, this left open Courcelle’s conjecture: that order-invariant MSO is strictly stronger than CMSO for general graphs [@Courcelle:MSO-X Conjecture 7.3]. In this paper, we present a suitable characterisation of CMSO-definability in terms of an Ehrenfeucht-Fraïssé game, and later, as the main contribution, we present a separating example showing that a special class of graphs is indeed definable by an order-invariant MSO sentence but not by a counting MSO sentence.
Preliminaries
=============

Throughout the paper $\setN$ denotes the set of non-negative integers and $\setN^+ := \setN-\{0\}$. Given a non-empty finite set $M = \{\seq[k]m\} \finsubseteq \setN^+$, let $\lcm(M) := \lcm(\seq[k]m)$ denote the least common multiple of all elements in $M$; additionally, we define $\lcm(\emptyset) = 1$. For sets $X$ and $Y$ as well as $M$ as before, we abbreviate that $|X| \equiv |Y| \pmod m$ for all $m \in M$ by the shorthand $|X| \equiv |Y| \pmod M$. We restrict our attention to finite $\tau$-structures with a nonempty universe over a countable relational vocabulary $\tau$, possibly with constants, and we will mainly deal with monadic second-order logic and some of its extensions. For more details concerning finite model theory, we refer to [@EF:FMT] or [@Libkin:FMT]. When comparing the expressiveness of two logics $\Logic$ and $\Logic'$, we say that *$\Logic'$ is at least as expressive as $\Logic$*, denoted $\Logic \subseteq \Logic'$, if for every $\phi\in\Logic[\tau]$ there exists a $\phi'\in\Logic'[\tau]$ such that $\Mod(\phi) = \Mod(\phi')$, where $\Mod(\phi)$ denotes the class of all finite $\tau$-structures satisfying $\phi$.

Counting MSO
------------

The notion of (modulo-)counting monadic second-order logic (CMSO) can be introduced in two different, but nonetheless equivalent, ways. The first view of CMSO is as an extension of MSO by modulo-counting first-order quantifiers. Let $\tau$ be a signature and $M \subseteq \setN^+$ a set of moduli; then

- every formula $\phi \in \MSO[\tau]$ is also a formula in $\CMSO^{(M)}[\tau]$, and

- if $\phi(x) \in \CMSO^{(M)}[\tau]$ and $m\in M$, then $\exists^{(m)}x. \phi(x) \in \CMSO^{(M)}[\tau]$.

If we do not restrict the set of modulo-counting quantifiers being used, we get the full language $\CMSO[\tau] = \CMSO^{(\setN^+)}[\tau]$. The semantics of formulae is as expected, and we have $\StrA \models \exists^{(m)}x.\phi(x)$ if and only if $\card{\{a\in A : \StrA\models\phi(a)\}} \equiv 0 \pmod m$.
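This semantics can be made concrete with a tiny evaluator (an illustrative sketch only, representing a structure simply by its universe and $\phi$ by a Python predicate):

```python
def counting_exists(universe, phi, m):
    """Semantics of the quantifier ∃^(m) x. φ(x):
    true iff |{a in A : φ(a)}| ≡ 0 (mod m)."""
    return sum(1 for a in universe if phi(a)) % m == 0

# Five of the elements 0..9 are even, so the count is ≡ 0 (mod 5)
# but not ≡ 0 (mod 2):
assert counting_exists(range(10), lambda a: a % 2 == 0, 5)
assert not counting_exists(range(10), lambda a: a % 2 == 0, 2)
```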
The quantifier rank $\qr(\psi)$ of a $\CMSO[\tau]$ formula $\psi$ is defined as for MSO-formulae, with the additional rule that $\qr\big(\exists^{(m)}x.\phi(x)\big) = 1+\qr(\phi)$, i.e., we do not distinguish between different kinds of quantifiers. In this paper we use an alternative but equivalent definition of CMSO, namely the extension of the language of MSO by monadic second-order predicates $C^{(m)}$ which hold true of a set $X$ if and only if $|X| \equiv 0 \pmod m$. As in the definition above, formulae of the fragment $\CMSO^{(M)}[\tau]$ may only use predicates $C^{(m)}$ where $m \in M$. The back-and-forth translation can be carried out along the following equivalences, which increase the quantifier rank by at most one in each step: $$\begin{aligned} \exists^{(m)}x.\phi(x)\ &\equiv\ \exists X(C^{(m)}(X) \land \forall x(Xx \liff \phi(x)))\quad\text{and}\\ C^{(m)}(X) &\equiv\ \exists^{(m)}x. Xx\,.\end{aligned}$$ Furthermore, the introduction of additional predicates $C^{(m,r)}$ (or, equivalently, additional modulo-counting quantifiers $\exists^{(m,r)}$) stating for a set $X$ that $|X|\equiv r \pmod m$ does not increase the expressive power, since they can be simulated as follows (with only a constant increase of quantifier rank): $$\begin{aligned} C^{(m,r)}(X)\ & \equiv\ \exists X_0 (\text{\qq{$X_0 \subseteq X$}} \land \text{\qq{$|X_0| = r$}} \land \text{\qq{$C^{(m)}(X\setminus X_0)$}})\,,\end{aligned}$$ where all subformulae are easily expressible in MSO. Later, we will introduce an Ehrenfeucht-Fraïssé game capturing the expressiveness of CMSO with this extended set of second-order predicates.

Order-invariance
----------------

Let $\tau$ be a relational vocabulary and $\phi \in \MSO[\tau \dunion \{<\}]$, i.e., $\phi$ may contain an additional relation symbol $<$. Then $\phi$ is called *order-invariant on a class $\ClsC$ of $\tau$-structures* if, and only if, $(\StrA,<_1) \models \phi\ \Longleftrightarrow\ (\StrA,<_2) \models \phi$ for all $\StrA \in \ClsC$ and all linear orders $<_1$ and $<_2$ on $A$.
Although, in general, it is undecidable whether a given MSO-formula is order-invariant in the finite, we will speak of the *order-invariant fragment of* MSO, denoted by $\iMSO{<}$, that contains all formulae that are order-invariant on the class of all finite structures. It is an easy observation that every $\CMSO$ formula is equivalent, over the class of all finite structures, to an order-invariant MSO formula, obtained by translating counting quantifiers in the following way: $$\begin{aligned} \cntexists{q}x.\phi(x) &\ :=\ \exists X \exists X_0 \ldots \exists X_{q-1}\\ & \qquad\left( \begin{aligned} & \forall x \left(Xx \liff \phi(x)\right) \land\ \text{``$\{X_0,\ldots,X_{q-1}\}$ is a partition of $X$''\ }\\ \land\ & \exists x\big(X_0x \land \forall y(Xy \limp x \leq y)\big) \land\ \exists x\big(X_{q-1}x \land \forall y(Xy \limp x \geq y)\big)\\ \land\ & \forall x \forall y \left(S_{\phi,<}(x,y) \limp \left( \bigland_{i=0}^{q-1}X_i x \liff X_{i+1\!\!\pmod q} y \right)\right) \end{aligned}\right)\end{aligned}$$ where $S_{\phi,<}$ defines the successor relation induced by an arbitrary order $<$ on the universe of the structure, restricted to the set $X$ of elements for which $\phi$ holds. Note that the quantifier rank of the translated formula is not constant but bounded by the parameter in the counting quantifier.

An Ehrenfeucht-Fraïssé game for CMSO
====================================

The Ehrenfeucht-Fraïssé game capturing the expressiveness of MSO, parameterised by the quantifier rank (cf. [@EF:FMT; @Libkin:FMT]), can be naturally extended to a game capturing the expressiveness of CMSO, parameterised by the quantifier rank and the set of moduli being used in the cardinality predicates or counting quantifiers. Viewing CMSO as MSO with additional quantifiers $\exists^{(m)}x.\phi(x)$ for all $m$ in a fixed set $M$ leads to a new type of move described, e.g., in the context of extending FO by modulo-counting quantifiers in [@Nurmonen:mod-quant].
Since a modulo-counting quantifier combines aspects of a first-order and a monadic second-order quantifier, in the sense that it makes a statement about the cardinality of a certain *set* of elements but, like a first-order quantifier, binds an *element* variable and makes a statement about that particular element, the move capturing modulo-counting quantification consists of two phases. First, Spoiler and Duplicator select sets of elements $S$ and $D$ in the structures such that $|S| \equiv |D| \pmod M$, and in the second phase, Spoiler and Duplicator select elements $a$ and $b$ such that $a\in S$ if and only if $b\in D$. After the move, reflecting the first-order nature of the quantifier, only the two selected elements $a$ and $b$ are remembered and contribute to the next position in the game, whereas the information about the chosen sets is discarded. We prefer viewing CMSO via second-order cardinality predicates, yielding an Ehrenfeucht-Fraïssé game that allows a much clearer description of winning strategies. Since we do not have additional quantifiers, we have exactly the same types of moves as in the game for MSO, and we merely modify the winning condition to take the new predicates into account. Towards this end, we first introduce a suitable concept of partial isomorphisms between structures. With any structure $\StrA$ and any set $M\finsubseteq\setN^+$ we associate the (first-order) power set structure $\StrA^M := \big(\Pot{A},(C^{(m,r)})_{\substack{m\in M\\0\leq r < m}}\big)$, where the predicates $C^{(m,r)}$ are interpreted in the obvious way. (Note that first-order predicates in the power set structure $\StrA^M$ naturally correspond to second-order predicates in $\StrA$.) Let $\StrA$ and $\StrB$ be $\tau$-structures, and let $M \finsubseteq \setN^+$ be a fixed set of moduli.
Then the mapping $(\seq[s]{A},\seq[t]{a}) \mapsto (\seq[s]{B},\seq[t]{b})$ is called a *twofold partial isomorphism between $\StrA$ and $\StrB$ with respect to $M$* if

1. $(\seq[t]a) \mapsto (\seq[t]b)$ is a partial isomorphism between $(\StrA,\seq[s]{A})$ and $(\StrB,\seq[s]{B})$ and

2. $(\seq[s]A) \mapsto (\seq[s]B)$ is a partial isomorphism between $\StrA^M$ and $\StrB^M$.

We propose the following game to capture the expressiveness of CMSO where the use of moduli is restricted to a (finite) set $M$ and the quantifier rank of formulae is at most $r$. Let $M \finsubseteq \setN^+$ and $r \in \setN$. The $r$-round (mod $M$) game $\Game_r^M(\StrA,\StrB)$ is played by Spoiler and Duplicator on $\tau$-structures $\StrA$ and $\StrB$. In each turn, Spoiler can choose between the following types of moves:

- *point move:* Spoiler selects an element in one of the structures, and Duplicator answers by selecting an element in the other structure.

- *set move:* Spoiler selects a set of elements $X$ in one of the structures, and Duplicator responds by choosing a set of elements $Y$ in the other structure.

After $r = s+t$ rounds, when the players have chosen sets $\seq[s]A$ and $\seq[s]B$ as well as elements $\seq[t]a$ and $\seq[t]b$ in an arbitrary order, Duplicator wins the game if, and only if, $(\seq[s]{A},\seq[t]{a}) \mapsto (\seq[s]{B},\seq[t]{b})$ is a twofold partial isomorphism between $\StrA$ and $\StrB$ with respect to $M$. First note that, although Duplicator is required to answer a set move $X$ with a set $Y$ such that $|X| \equiv |Y| \pmod M$ in order to win, we do not have to make this explicit in the rules of the moves, since these cardinality constraints are already imposed by the winning condition ($X$ and $Y$ would not define a twofold partial isomorphism if they did not satisfy the same cardinality predicates). Furthermore, for $M = \emptyset$ or $M=\{1\}$, the resulting game $\Game_r^M(\StrA,\StrB)$ corresponds exactly to the usual game for MSO.
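The cardinality side of the winning condition, i.e. that the chosen sets satisfy the same predicates $C^{(m,r)}$, amounts to a family of congruence checks. A sketch (an illustrative helper of our own, not from the paper):

```python
def same_cardinality_type(X, Y, M):
    """True iff the sets X and Y satisfy exactly the same predicates
    C^(m,r), i.e. |X| ≡ |Y| (mod m) for every modulus m in M."""
    return all(len(X) % m == len(Y) % m for m in M)

# 7 ≡ 19 (mod 2) and (mod 3), so a 7-element answer to a 19-element
# set move is consistent with M = {2, 3}; a 12-element answer is not.
assert same_cardinality_type(set(range(7)), set(range(19)), {2, 3})
assert not same_cardinality_type(set(range(7)), set(range(12)), {2, 3})
```

Note that for $M = \emptyset$ the check is vacuously true, matching the remark that the game then degenerates to the usual MSO game.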
Let $\StrA$ and $\StrB$ be $\tau$-structures, $r\in\setN$, and $M \finsubseteq \setN^+$. Then the following are equivalent:

(i) $\StrA \equiv_r^M \StrB$, i.e., $\StrA \models \phi$ if and only if $\StrB \models \phi$ for all $\phi \in \CMSO^{(M)}[\tau]$ with $\qr(\phi) \leq r$.

(ii) Duplicator has a winning strategy in the $r$-round (mod $M$) game $\Game_r^M(\StrA,\StrB)$.

To prove non-definability results, we can make use of the following standard argument. A class $\ClsC$ of $\tau$-structures is not definable in CMSO if, for every $r \in \setN$ and every $M \finsubseteq \setN^+$, there are $\tau$-structures $\StrA_{M,r}$ and $\StrB_{M,r}$ such that $\StrA_{M,r} \in \ClsC$, $\StrB_{M,r} \not\in \ClsC$, and $\StrA_{M,r} \equiv_r^M \StrB_{M,r}$. The following lemma, stating that the CMSO-theory of a disjoint union can be deduced from the CMSO-theories of its components, can either be proved, as carried out in [@Courcelle:MSO-I Lemma 4.5], by giving an effective translation of sentences talking about the disjoint union of two structures into a Boolean combination of sentences each talking about the individual structures, or by using a game-oriented view showing that winning strategies for Duplicator in the games on two pairs of structures can be combined into a winning strategy on the pair of disjoint unions of the structures.

\[lem:cmso-disjoint-union\] Let $\StrA_1, \StrA_2, \StrB_1,$ and $\StrB_2$ be $\tau$-structures such that $\StrA_1 \equiv_r^M \StrB_1$ and $\StrA_2 \equiv_r^M \StrB_2$. Then $\StrA_1 \dunion \StrA_2 \equiv_r^M \StrB_1 \dunion \StrB_2$.

Consider the game on $\StrA := \StrA_1 \dunion \StrA_2$ and $\StrB := \StrB_1 \dunion \StrB_2$. A point move by Spoiler in $\StrA$ (resp., in $\StrB$) is answered by Duplicator according to her winning strategy in either $\Game^M_r(\StrA_1,\StrB_1)$ or $\Game^M_r(\StrA_2,\StrB_2)$.
A set move $S \subseteq A$ (analogous for $S \subseteq B$) is decomposed into two subsets $S_1 := S \cap A_1$ and $S_2 := S \cap A_2$, and is answered by Duplicator by the set $D := D_1 \cup D_2$ consisting of the sets $D_1$ and $D_2$ chosen according to her winning strategies as responses to $S_1$ and $S_2$ in the respective games $\Game_r^M(\StrA_1,\StrB_1)$ and $\Game_r^M(\StrA_2,\StrB_2)$. Since $A_1$ and $A_2$ as well as $B_1$ and $B_2$ are disjoint, we have $\card{S} = \card{S_1} + \card{S_2}$ and $\card{D} = \card{D_1} + \card{D_2}$. Furthermore, $\card{S_1} \equiv \card{D_1} \pmod M$ and $\card{S_2} \equiv \card{D_2} \pmod M$ as the sets $D_1$ and $D_2$ are chosen according to Duplicator’s winning strategies in the games on $\StrA_1$ and $\StrB_1$, and $\StrA_2$ and $\StrB_2$, respectively. Since $\equiv\pmod M$ is a congruence relation with respect to addition, we have that $\card{S} \equiv \card{D} \pmod M$. It is easily verified that the sets and elements chosen according to this strategy indeed define a twofold partial isomorphism between $\StrA$ and $\StrB$. As a direct corollary we obtain the following result that will be used in the inductive step in the forthcoming proofs. \[cor:cmso-disjoint-union-exp\] Let $\StrA_1, \StrA_2, \StrB_1,$ and $\StrB_2$ be $\tau$-structures, such that $\StrA_1 \equiv_r^M \StrB_1$ and $\StrA_2 \equiv_r^M \StrB_2$. Then $(\StrA_1 \dunion \StrA_2,A_1) \equiv_r^M (\StrB_1 \dunion \StrB_2, B_1)$. We consider the following $\tau \dunion \{P\}$-expansions of the given structures: $\StrA'_1 := (\StrA_1,A_1)$, $\StrB'_1 := (\StrB_1,B_1)$, $\StrA'_2 := (\StrA_2,\emptyset)$, and $\StrB'_2 := (\StrB_2,\emptyset)$. 
It is immediate that (i) $\StrA_1 \equiv_r^M \StrB_1$ implies $(\StrA_1,A_1) \equiv_r^M (\StrB_1,B_1)$, and (ii) $\StrA_2 \equiv_r^M \StrB_2$ implies $(\StrA_2,\emptyset) \equiv_r^M (\StrB_2,\emptyset)$ since Duplicator can obviously win the respective games on the expanded structures using the same strategies as in the games proving the equivalences on the left-hand side. The claim follows by applying the previous lemma to the $\tau \dunion \{P\}$-expansions. It is well known that MSO exhibits a certain weakness regarding the ability to specify cardinality constraints on sets, i.e., structures over an empty vocabulary. A proof of this fact using games can be found in [@Libkin:FMT]. By adapting this proof, we show that this is still the case for CMSO. \[lem:cmso-set-equiv\] Let $\StrA$ and $\StrB$ be $\emptyset$-structures, $M\finsubseteq\setN^+$, and $r\in\setN$. Then $\StrA \equiv_r^M \StrB$ if $|A|,|B| \geq (2^{r+1}-4)\lcm(M)$ and $|A| \equiv |B| \pmod M$. We prove by induction on the number of rounds that Duplicator wins the (mod $M$) $r$-round game $\Game_r^M(\StrA,\StrB)$. For $r=0$ and $r=1$ the claim is obviously true. Let $r>1$, assume that the claim holds for $r-1$, and consider the first move of the $r$-round game. We assume that Spoiler makes his move in $\StrA$ since the reasoning in the other case is completely symmetric. If Spoiler makes a set move $S\subseteq A$, we consider the following cases: (1) $|S| < (2^r-4)\cdot\lcm(M)$ (or $|A-S| < (2^r-4)\cdot\lcm(M)$). Then Duplicator selects a set $D \subseteq B$ such that $|D| = |S|$ (or $|B-D|=|A-S|$), and hence $S \isom D$ and $A-S \equiv_{r-1}^M B-D$ (or $A-S \isom B-D$ and $S \equiv_{r-1}^M D$). (2) $|S|,|A-S| \geq (2^r-4)\cdot\lcm(M)$. Then Duplicator selects a set $D \subseteq B$ such that $|D| \equiv |S| \pmod M$ and $|D|,|B-D| \geq (2^r-4)\cdot\lcm(M)$.
In fact, she chooses for $D$ half of the elements and chooses $\ell < \lcm(M)$ additional ones to fulfil the cardinality constraints $|D| \equiv |S| \pmod M$. Then, for the set $B-D$ of non-selected elements, we have $$\begin{aligned} |B-D| &\geq \frac{1}{2}\big((2^{r+1}-4)\lcm(M)\big) - \ell \geq (2^r-2)\lcm(M) - \lcm(M)\\ &\geq (2^r - 4)\lcm(M) \end{aligned}$$ for all $\ell$ satisfying $0 \leq \ell < \lcm(M)$. Since $|D| = |B-D| + 2\ell$, obviously $|D| \geq (2^r-4)\lcm(M)$ as well. Thus, in both cases, by the induction hypothesis we get $S \equiv_{r-1}^M D$ and $A-S \equiv_{r-1}^M B-D$. Hence, by Corollary \[cor:cmso-disjoint-union-exp\] $(A,S) \equiv_{r-1}^M (B,D)$, i.e., Duplicator has a winning strategy in the remaining $(r-1)$-round game from position $(S,D)$. If Spoiler makes a point move $s \in A$, Duplicator answers by choosing an arbitrary element $d \in B$. Similar to Case 1 above, we observe that $(\{s\},s) \isom (\{d\},d\,)$ and $A-\{s\} \equiv_{r-1}^M B-\{d\}$ by the induction hypothesis. Thus, by Lemma \[lem:cmso-disjoint-union\], $(A,s) \equiv_{r-1}^M (B,d)$ implying that Duplicator has a winning strategy for the remaining $r-1$ rounds from position $(s,d)$. The Separating Example ====================== We will first give a brief description of our example showing that order-invariant MSO is strictly more expressive than CMSO. We consider a property of two-dimensional grids, namely that the vertical dimension divides the horizontal dimension. This property is easily definable in MSO for grids that are given as directed graphs with two edge relations, one for the horizontal edges pointing rightwards, and one for the vertical edges pointing upwards, by defining a new relation of diagonal edges combining one step rightwards and one step upwards wrapping around from the top border to the bottom border but not from the right to the left border.
Note that there is a path following those diagonal edges starting from the bottom-left corner of the grid and ending in the top-right corner if, and only if, the vertical dimension divides the horizontal dimension of the grid. Thus, for our purposes, we have to weaken the structure in the sense that we hide information that remains accessible to order-invariant MSO-formulae but not to CMSO-formulae. An appropriate loss of information is achieved by replacing the two edge relations with their reflexive symmetric transitive closure, i.e., we consider grids as structures with two equivalence relations which provide a notion of *rows* and *columns* of the grid. Obviously, notions like corner and border vertices as well as the notion of an order on the rows and columns that were important for the MSO-definition of the divisibility property are lost, but clearly, all these notions can be regained in presence of an order. First, the order allows us to uniquely define an element (the $<$-least element) to be the bottom-left corner of the grid, and second, the order induces successor relations on the set of columns and the set of rows, from which both horizontal and vertical successor vertices of any vertex can be deduced. Since the divisibility property is obviously invariant with respect to the ordering of the rows or columns, this allows for expressing it in order-invariant MSO. In the course of this section we will develop the arguments showing that CMSO fails to express this property on the following class of grid-like structures.
A *cliquey $(k,\ell)$-grid* is a $\{\sim_h,\sim_v\}$-structure that is isomorphic to $\StrG_{k\ell} := (\{0,\dots,k-1\}\times \{0,\dots,\ell-1\}, \sim_h, \sim_v)$, where $$\begin{aligned} \sim_h\ & := \{((x,y),(x',y')) : x=x'\} \text{ and}\\ \sim_v\ & := \{((x,y),(x',y')) : y=y'\}\,, \end{aligned}$$ , $\sim_h$ consists of exactly $k$ equivalence classes (called *rows*), each containing $\ell$ elements, and $\sim_v$ consists of exactly $\ell$ equivalence classes (called *columns*), each containing $k$ elements, such that every equivalence class of $\sim_h$ intersects every equivalence class of $\sim_v$ in exactly one element and vice versa. A *horizontally coloured cliquey $(k,\ell)$-grid*, denoted $\ColG_{k\ell}$, is the expansion of the $\{\sim_v\}$-reduct of the cliquey grid $\StrG_{k\ell}$ by unary predicates $\{P_1,\dots,P_k\}$, where the information of $\sim_h$ is retained in the $k$ new predicates (in the following referred to as *colours*) such that each set $P_i$ corresponds to exactly one former equivalence class. Note that the same class of grid-like structures has already been used by Otto in a proof showing that the number of monadic second-order quantifiers gives rise to a strict hierarchy over finite structures [@Ot95c]. The class is first-order definable by a sentence $\psi_\text{grid}$ stating that - $\sim_v$ and $\sim_h$ are equivalence relations, and - every pair consisting of one equivalence class of $\sim_h$ and $\sim_v$ each has exactly one element in common as these properties are sufficient to enforce the desired grid-like structure. Note that even the second property is first-order definable since every equivalence class is uniquely determined by each of its elements. The following two lemmata justify the introduction of the notion of horizontally coloured cliquey grids for use in the forthcoming proofs. 
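As an illustrative aside (not part of the original argument), the defining properties of $\StrG_{k\ell}$ are easy to check mechanically in the coordinate representation above; the following Python sketch, with names of our own choosing, encodes the rows (classes of $\sim_h$) and columns (classes of $\sim_v$) and verifies that every row meets every column in exactly one element.

```python
def cliquey_grid(k, l):
    """Equivalence classes of G_{k,l}: rows fix the first coordinate,
    columns fix the second, exactly as in the definition above."""
    rows = [{(x, y) for y in range(l)} for x in range(k)]
    cols = [{(x, y) for x in range(k)} for y in range(l)]
    return rows, cols

rows, cols = cliquey_grid(3, 5)
assert len(rows) == 3 and all(len(r) == 5 for r in rows)   # k rows of l elements
assert len(cols) == 5 and all(len(c) == 3 for c in cols)   # l columns of k elements
# every row intersects every column in exactly one element
assert all(len(r & c) == 1 for r in rows for c in cols)
```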
\[lem:disjoint-grid-comp\] Let $\ColG_{k\ell_1}$, $\ColG_{k\ell_2}$, $\ColG_{k\ell'_1}$, and $\ColG_{k\ell'_2}$ be horizontally coloured cliquey grids such that $\ColG_{k\ell_1} \equiv_r^M \ColG_{k\ell'_1}$ and $\ColG_{k\ell_2} \equiv_r^M \ColG_{k\ell'_2}$. Then $\ColG_{k,\ell_1+\ell_2} \equiv_r^M \ColG_{k,\ell'_1+\ell'_2}$. Note that, since there are no horizontal edges in horizontally coloured cliquey grids and the vertical dimension of all grids is $k$, $\ColG_{k,\ell_1+\ell_2}$ is the disjoint union of the two smaller horizontally coloured cliquey grids $\ColG_{k\ell_1}$ and $\ColG_{k\ell_2}$, and of course, the same holds for $\ColG_{k,\ell'_1+\ell'_2}$. Thus, the claim follows by Lemma \[lem:cmso-disjoint-union\]. \[lem:col-to-noncol\] Let $\ColG_{k\ell} \equiv_r^M \ColG_{k\ell'}$. Then $\StrG_{k\ell} \equiv_r^M \StrG_{k\ell'}$. For each fixed vertical dimension $k$, there exists a one-dimensional quantifier-free interpretation of a cliquey grid in its respective horizontally coloured counterpart since we can define the horizontal equivalence relation $\sim_h$ in terms of the colours as follows: $$x \sim_h y\ \equiv\ \biglor_{i=1}^k P_i x \land P_i y\,.$$ Actually, the argument implies that Duplicator wins a game on cliquey grids using the same strategy that is winning in the corresponding game on coloured grids since a strategy preserving the colours of selected elements especially preserves the equivalence relation $\sim_h$. Before stating the main lemma, we will first prove a combinatorial result which will later help Duplicator in synthesising her winning strategy and introduce the following weakened notion of equality between numbers. Two numbers $a, b \in \setN$ are called *threshold $t$ equal (mod $M$)*, denoted $a =^M_t b$, if (i) $a = b$ or (ii) $a,b \geq t$ and $a \equiv b \pmod M$.
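For illustration, the relation $=^M_t$ just defined can be sketched in a few lines of Python (the function name `threshold_equal` is our own):

```python
def threshold_equal(a, b, t, M):
    """Decide a =^M_t b: either a = b, or both are at least t
    and a is congruent to b modulo every m in M."""
    return a == b or (a >= t and b >= t and all(a % m == b % m for m in M))

assert threshold_equal(7, 7, 100, {2, 3})       # equal numbers, regardless of the threshold
assert threshold_equal(12, 18, 10, {2, 3})      # both >= 10, congruent mod 2 and mod 3
assert not threshold_equal(12, 17, 10, {2, 3})  # 12 and 17 differ mod 2
assert not threshold_equal(3, 9, 10, {2, 3})    # congruent, but 3 is below the threshold
```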
Intuitively, $a =^M_t b$ means that the numbers are equal if they are small, or that they are at least congruent modulo all $m \in M$ if they are both at least as large as the threshold $t$. \[lem:combinatorics\] Let $p, t \in \setN$, $M \finsubseteq \setN^+$, and $T \geq p \cdot (t + \lcm(M) - 1)$. Then for all sets $A$ and $B$ with $\card{A} =^M_T \card{B}$ and for every equivalence relation $\eqrel_A$ on $A$ of index at most $p$ there exists an equivalence relation $\eqrel_B$ on $B$ and a bijection $g \colon \quot{A}{\eqrel_A} \to \quot{B}{\eqrel_B}$ satisfying $\card{\{a' \in A : a \eqrel_A a'\}} =^M_t \card{g(\{a' \in A : a \eqrel_A a'\})}$ for all $a\in A$. We let $\{\seq[p']a\}$, where $p' \leq p$ denotes the index of $\eqrel_A$, be the set of class representatives of $\quot{A}{\eqrel_A}$, and we let $[a]_{\eqrel_A} := \{ a'\in A : a' \eqrel_A a\}$ denote the equivalence class of $a$ in $A$. Note that we will usually omit the subscript $\eqrel_A$ if it is clear from the context and instead reserve the letters $a$ and $b$ for elements denoting equivalence classes in $A$ and $B$, respectively. Furthermore, a set will be called *small* in the following if it contains less than $t$ elements and *large* otherwise. The equivalence relation $\eqrel_B$ on $B$ is constructed by partitioning the set into $p'$ disjoint non-empty subsets $\{\seq[p']{B}\}$ as follows. If $\card{A} = \card{B}$, for each class $[a_i]$, we choose a set $B_i$ with exactly $\card{[a_i]}$ many elements. If $\card{A}, \card{B} \geq T$, we have to distinguish between the treatment of small and large classes. Since $\card{A} \geq T \geq p \cdot (t + \lcm(M) - 1)$, $\lcm(M) \geq 1$, and the index of $\eqrel_A$ is at most $p$, at least one of the equivalence classes contains at least $t$ elements, i.e., it is large, and without loss of generality, it is assumed that this is the case for $[a_1]$.
For each small class $[a_i]$, we choose a set $B_i$ with exactly $\card{[a_i]}$ many elements. If $[a_i]$ is large, we choose a set $B_i$ containing $t + \ell$ many elements where $\ell$ is the smallest non-negative integer such that $\card{[a_i]} \equiv \card{B_i} \pmod M$. The number of elements selected according to these rules is at most $p \cdot (t + \lcm(M) - 1) \leq T \leq \card{B}$. Since $[a_1]$ is large by assumption, any possibly remaining elements in $B$, that have not been assigned to one of the subsets $\seq[p']{B}$ yet, can be safely added to $B_1$ without violating the condition that $\card{[a_1]} \equiv \card{B_1} \pmod M$. This partitioning uniquely defines the equivalence relation $\eqrel_B := \bigcup_{i=1}^{p'} (B_i \times B_i)$ on $B$. By selecting an arbitrary element of each $B_i$ we get a set of class representatives $\{\seq[p']b\}$ which directly yields the bijection $g \colon [a_i] \mapsto [b_i]$ for all $1 \leq i\leq p'$ satisfying $\card{[a]} =^M_t \card{g([a])}$ for all $a\in A$ by construction. The following lemma extends the results on -equivalence of *large enough sets* to *large enough grids* by giving a sufficient condition on the sizes of two grids for the existence of a winning strategy for Duplicator in an $r$-round (mod $M$) game on the two structures. Due to the inductive nature of the proof that involves, in each step, a construction of equivalence classes as in the above lemma, we need as a criterion for the size, for fixed $p \in \setN$ and $M \finsubseteq \setN^+$, a function $f_{p,M} : \setN \to \setN$ such that, for all $r \in \setN^+$ and $t = f_{p,M}(r-1)$, we can choose $T = f_{p,M}(r)$ in the previous lemma. One function satisfying, for all $r \in \setN^+$, the inequality $f_{p,M}(r) \geq p \cdot (f_{p,M}(r-1) + \lcm(M) - 1)$ derived from the condition imposed on $T$ is $f_{p,M}(r) = 2\cdot(p^r-1)\cdot\lcm(M)$. \[lem:cmso-grid-equiv\] Let $M \finsubseteq \setN^+$, $r \in \setN$ and $k > 1$ be fixed. 
Then for $f(r) := f_{2^k,M}(r) = (2^{kr+1}-2)\lcm(M)$, as given above, $\StrG_{k\ell_1} \equiv_r^M \StrG_{k\ell_2}$ if $\ell_1 =^M_{f(r)} \ell_2$. As motivated by Lemma \[lem:col-to-noncol\], we consider the $r$-round (mod $M$) game on the corresponding horizontally coloured cliquey grids $\ColG_{k\ell_1}$ and $\ColG_{k\ell_2}$, and we show by induction on the number of rounds that Duplicator has a winning strategy in this game. Intuitively, the proof proceeds as follows. Spoiler’s set move induces an equivalence relation on the set of columns forming the grid he plays in, and the previous lemma implies that Duplicator is able to construct an equivalence relation on the columns of the other grid which is similar in the sense that corresponding equivalence classes satisfy certain cardinality constraints. Since the grids can be regarded as disjoint unions of these equivalence classes, we can argue by induction that corresponding subparts of the two grids, being similar enough, cannot be distinguished during the remaining $r-1$ rounds of the game. The case where $\ell_1 = \ell_2$ is trivial since grids of the same dimensions are isomorphic. Thus, we assume in the following that $\ell_1,\ell_2 \geq f(r)$ and $\ell_1 \equiv \ell_2 \pmod M$. The claim is obviously true for $r=0$, hence we assume that it holds for $r-1$ and proceed with the inductive step. As before, we assume without loss of generality that Spoiler makes his moves in $\StrG_{k\ell_1}$ since the other case is symmetric. A *coloured $k$-column* is a $\{\sim_v,P_1,\dots,P_k\}$-structure isomorphic to $\ColC_k := \ColG_{k,1}$, such that a coloured grid can be regarded as a disjoint union of columns. Given a subset $S$ of vertices of a grid and one of its coloured $k$-columns $\StrC$ with universe $C$, the *colour-type of $\StrC$ induced by $S$* is defined as the isomorphism type of the expansion $(\StrC,S \cap C)$ denoted by $\isotype(\StrC,S)$. 
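To make the bookkeeping concrete, here is an illustrative Python sketch (our own encoding: an element of a coloured grid is a pair $(i, c)$ of colour $i$ and column index $c$) computing the colour-type of a column induced by a set $S$, and the resulting grouping of columns:

```python
def colour_type(k, col, S):
    """Colour-type of column `col`: the set of colours whose element in
    this column belongs to S (determines the isomorphism type (C, S ∩ C))."""
    return frozenset(i for i in range(k) if (i, col) in S)

def column_classes(k, l, S):
    """Group the columns of a coloured (k x l) grid by colour-type under S."""
    classes = {}
    for col in range(l):
        classes.setdefault(colour_type(k, col, S), []).append(col)
    return classes

S = {(0, 0), (0, 1), (1, 1)}
cls = column_classes(2, 4, S)
assert cls[frozenset({0})] == [0]      # column 0: only colour 0 selected
assert cls[frozenset({0, 1})] == [1]   # column 1: both colours selected
assert cls[frozenset()] == [2, 3]      # columns 2 and 3: nothing selected
assert len(cls) <= 2 ** 2              # index of the induced relation is at most 2^k
```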
Given a set $\mathcal F$ of $k$-columns, each subset $S$ of all of their vertices gives rise to an equivalence relation $\eqrel_S$ on $\mathcal F$ by virtue of $\StrC_1 \eqrel_S \StrC_2$ if, and only if, $\isotype(\StrC_1,S) = \isotype(\StrC_2,S)$. Note that the index of $\eqrel_S$ is at most $2^k$. Assume, Spoiler performs a set move and chooses a subset $S$ in $\ColG_{k\ell_1} = \StrC_1 \dunion \cdots \dunion \StrC_{\ell_1}$. As described above, $S$ induces an equivalence relation $\eqrel_S$ with at most $2^k$ equivalence classes on the set $\mathcal F = \{\StrC_1,\dots,\StrC_{\ell_1}\}$ of columns forming the grid. For $p = 2^k$, $t = f(r-1)$ and $M$ as given, by the previous lemma, there is an equivalence relation $\eqrel'_S$ on the set $\mathcal F' = \{\StrC'_1,\dots,\StrC'_{\ell_2}\}$ of columns on the Duplicator’s grid $\ColG_{k\ell_2}$ since $\ell_1,\ell_2 \geq f(r)$. Furthermore, there is a bijection $g$ mapping equivalence classes of columns in one grid to the other. Given that the index of both $\eqrel_S$ and $\eqrel'_S$ is $p' \leq p=2^k$, we can assume $\{\seq[p']{\StrC}\}$ and $\{\seq[p']{\StrC'}\}$ to be the sets of class representatives of $\eqrel_S$ and $\eqrel'_S$, respectively. Duplicator now selects the unique set $D$ of elements such that $\isotype(\StrC,S) = \isotype(\StrC',D)$ for all $1 \leq i\leq p'$, $\StrC \in [\StrC_i]$ and $\StrC' \in g([\StrC_i])$. For each $1 \leq i\leq p'$, we let $\indStr{\StrC_i} := \ColG_{k\ell_1}\!\restr{[\StrC_i]}$ and $\indStr{\StrC'_i} := \ColG_{k\ell_2}\!\restr{[\StrC'_i]}$ denote the substructures of the grids $\ColG_{k\ell_1}$ and $\ColG_{k\ell_2}$ induced by the sets of columns $[\StrC_i]$ and $[\StrC'_i]$, respectively. By construction, we have $\card{[\StrC_{i}]} =^M_{f(r-1)} \card{[\StrC'_{i}]}$ for all $i$. 
Thus, depending on whether $[\StrC_{i}]$ (and hence $[\StrC'_{i}]$) are small or large with respect to the threshold $f(r-1)$, either $\indStr{\StrC_{i}} \isom \indStr{\StrC'_{i}}$ or $\indStr{\StrC_{i}} \equiv_{r-1}^M \indStr{\StrC'_{i}}$ by the induction hypothesis. Since $S$ and $D$ induce the same colour-types on the columns in $[\StrC_{i}]$ and $[\StrC'_{i}]$, respectively, we have $$\big(\indStr{\StrC_{i}},S \cap \univ{\indStr{\StrC_{i}}}\big) \equiv_{r-1}^M \big(\indStr{\StrC'_{i}},D \cap \univ{\indStr{\StrC'_{i}}}\big)$$ for all $i$, where $\univ{\cdot}$ denotes the universe of the respective structure. Thus, iterating Lemma \[lem:cmso-disjoint-union\] yields that Duplicator has a winning strategy in the remaining rounds of the game $\Game_{r-1}^M(\ColG_{k\ell_1},\ColG_{k\ell_2})$ from position $(S,D)$. If Spoiler makes a point move $s$, say in column $\StrC_1$ of the grid $\ColG_{k\ell_1}$, Duplicator picks an arbitrary element $d$ of the same colour in her grid, say in column $\StrC'_1$. As the substructures consisting of just the columns containing the chosen elements are isomorphic, i.e., $\big(\StrC_{1},s\big) \isom \big(\StrC'_{1},d\big)$, and by the induction hypothesis we have $\StrC_{2} \dunion \cdots \dunion \StrC_{\ell_1} \equiv^M_{r-1} \StrC'_{2}\dunion\cdots\dunion\StrC'_{\ell_2}$, Duplicator can win the remaining $(r-1)$-round game from position $(s,d)$ by Lemma \[lem:cmso-disjoint-union\]. Now we have the necessary tools available to prove the main theorem. $\CMSO \subsetneq \oiMSO$.
We show that the class $\ClsC := \{\,\StrG_{k\ell}\ \colon\ k | \ell\,\}$ is not definable in CMSO but order-invariantly definable in MSO by the sentence $\psi_\text{grid} \land \phi$, where $$\begin{aligned} \phi &= \exists \min\exists c \left(\begin{aligned} & \forall x (\min \leq x) \land \neg\exists z (E_h(c,z) \lor E_v(c,z))\\ \land\ & \forall T \big(\forall x\forall y (Tx \land \phi_\text{diag}(x,y) \limp Ty) \land T\min{} \limp Tc\big) \end{aligned}\right)\ , \end{aligned}$$ and $$\begin{aligned} \phi_\text{diag}(x,y) &= \big(\exists z (E_v(x,z) \land E_h(z,y))\big)\\ &\qquad \lor \big(\neg\exists z E_v(x,z) \land \exists z (z \sim_h \min{} \land z \sim_v x \land E_h(z,y))\big)\ ,\\ E_h(x,y) &= x \sim_h y \land \exists x_0\exists y_0 \left(\begin{aligned} & x_0 \sim_h \min{} \land y_0 \sim_h \min{}\\ \land\ & x \sim_v x_0 \land y \sim_v y_0 \land x_0 < y_0\\ \land\ & \forall z_0 ( z_0 \sim_h \min{} \limp z_0 \leq x_0 \lor z_0 \geq y_0)\quad , \!\!\!\!\!\!\! \end{aligned}\right)\\ E_v(x,y) &= x \sim_v y \land \exists x_0\exists y_0 \left(\begin{aligned} & x_0 \sim_v \min{} \land y_0 \sim_v \min{}\\ \land\ & x \sim_h x_0 \land y \sim_h y_0 \land x_0 < y_0\\ \land\ & \forall z_0 ( z_0 \sim_v \min{} \limp z_0 \leq x_0 \lor z_0 \geq y_0)\quad . \!\!\!\!\!\!\! \end{aligned}\right) \end{aligned}$$ As hinted above, the horizontal and vertical edge relations ($E_h$ and $E_v$, respectively) are defined using the successor relation which is induced by an arbitrary ordering on the row (and column) containing the minimal element ($\min$) which itself serves as the lower left corner of the grid. $\phi_\text{diag}$ defines diagonal steps through the grid that wrap around from the top to the bottom row. Finally, $\phi$ states that the pair consisting of the lower left corner ($\min$) and the upper right corner ($c$) of the grid is contained in the transitive closure of $\phi_\text{diag}$.
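The reachability condition expressed by $\phi$ admits a direct simulation; the following illustrative Python sketch (our own coordinate convention: $k$ rows indexed from the bottom, $\ell$ columns, walk starting at the bottom-left corner) follows the diagonal steps of $\phi_\text{diag}$, wrapping from the top row back to the bottom row:

```python
def diagonal_walk_reaches_top_right(k, ell):
    """Follow the phi_diag steps from (row 0, column 0) and report
    whether the walk ends in the top-right corner (row k-1, column ell-1)."""
    row, col = 0, 0
    while col < ell - 1:
        row = (row + 1) % k  # one step up, wrapping from the top row to the bottom row
        col += 1             # one step right (no wrap-around)
    return row == k - 1

# the walk reaches the top-right corner exactly when k divides ell
for k in range(2, 6):
    for ell in range(1, 25):
        assert diagonal_walk_reaches_top_right(k, ell) == (ell % k == 0)
```

After $m$ steps the walk is at row $m \bmod k$ and column $m$, so it ends in the top row exactly when $k$ divides $\ell$, as claimed.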
Obviously, there is such a sawtooth-shaped path starting at $\min$ and ending exactly in the upper right corner if, and only if, $k | \ell$. The second step consists in showing that $\ClsC$ is not definable in CMSO. Towards this goal, we show that for any choice of $r \in \setN$ and $M \finsubseteq \setN^+$, we can find $k,\ell_1,\ell_2 \in \setN$, such that $\StrG_{k\ell_1} \in \ClsC$, $\StrG_{k\ell_2} \not\in \ClsC$, and $\StrG_{k\ell_1} \equiv_r^M \StrG_{k\ell_2}$ which contradicts the CMSO-definability of $\ClsC$. Let $r \in \setN$ and $M \finsubseteq \setN^+$ be fixed. We choose $s \geq r+1$ such that $2^s \notdiv \lcm(M)$. Let $k = 2^s$, $\ell_1 = 2^{kr+1}\lcm(M)$, and $\ell_2 = \ell_1 + \lcm(M)$. Obviously, $\ell_1$ and $\ell_2$ satisfy the conditions of Lemma \[lem:cmso-grid-equiv\], and thus $\StrG_{k\ell_1} \equiv_r^M \StrG_{k\ell_2}$. Furthermore, $\ell_1 = k \cdot 2^{2^s\cdot r-s+1}\lcm(M)$, hence $k \mid \ell_1$ and $\StrG_{k\ell_1} \in \ClsC$. On the other hand, $k \notdiv \ell_2 = \ell_1+\lcm(M)$ by the choice of $s$, thus $\StrG_{k\ell_2} \not\in \ClsC$. Conclusion ========== We have provided a characterisation of the expressiveness of CMSO in terms of an Ehrenfeucht–Fraïssé game that naturally extends the known game capturing MSO-definability, and we have presented a class of structures that are shown, using the proposed game characterisation, to be undefinable by a CMSO-sentence yet definable by an order-invariant MSO-sentence. This establishes that order-invariant MSO is strictly more expressive than counting MSO in the finite. Modifying the separating example by considering a variant of cliquey grids where the two separate equivalence relations are unified into a single binary relation and considering, e.g., the class of such grids where the horizontal dimension exactly matches the vertical dimension, we can also confirm Courcelle’s original conjecture: CMSO-definability is strictly weaker than order-invariant MSO-definability for general graphs.
The separating query being essentially a transitive closure query, i.e., the only place where monadic second-order quantification is used is in the definition of the transitive closure of a binary relation, we can conclude that the same class of structures yields a separation of $\mathInvLogic{(D)TC^1}{<}$ from $\text{(D)TC}^1$ (the extension of first-order logic by a (deterministic) transitive closure operator on binary relations) and even from $\text{(D)TC}^1$ extended with modulo-counting predicates since $\text{(D)TC}^1 \subseteq \MSO$. Finding separating examples concerning higher arity (D)TC or even full (D)TC requires further investigation since, in general, $\MSO \subsetneq \DTC^2$. Following an opposite line of research, it would be interesting to identify further classes of graphs, besides classes of graphs of bounded tree-width, on which $\oiMSO$ is no more expressive than CMSO. [Cou96]{} Michael Benedikt and Luc Segoufin. Towards a characterization of order-invariant queries over tame structures. In [*Proceedings of the 14th Annual Conference on Computer Science Logic, CSL 2005*]{}, pages 276–291, 2005. Bruno Courcelle. The monadic second-order logic of graphs [I]{}: [R]{}ecognizable sets of finite graphs. Information and Computation, 85(1):12–75, 1990. Bruno Courcelle. The monadic second-order logic of graphs [X]{}: Linear orderings. Theoretical Computer Science, 160:87–143, 1996. Heinz-Dieter Ebbinghaus and J[ö]{}rg Flum. Finite Model Theory. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1995. Denis Lapoire. Recognizability equals monadic second-order definability for sets of graphs of bounded tree-width. In [*Proceedings of the 15th Annual Symposium on Theoretical Aspects of Computer Science, STACS 1998*]{}, pages 618–628, 1998. Leonid Libkin. Elements of Finite Model Theory. Springer, 2004. Juha Nurmonen. Counting modulo quantifiers on finite structures. Information and Computation, 160(1-2):62–87, 2000. Martin Otto. A note on the number of monadic quantifiers in monadic [$\Sigma^1_1$]{}. Information Processing Letters, 53(6):337–339, March 1995. Martin Otto.
Epsilon-logic is more expressive than first-order logic over finite structures. Journal of Symbolic Logic, 65(4):1749–1757, 2000.
--- abstract: | We derive [*a priori*]{} $C^2$ estimates for a class of complex Monge-Ampère type equations on Hermitian manifolds. As an application we solve the Dirichlet problem for these equations under the assumption of existence of a subsolution; the existence result, as well as the second order boundary estimates, is new even for bounded domains in ${\hbox{\bbbld C}}^n$. [*Mathematical Subject Classification (2010):*]{} 58J05, 58J32, 32W20, 35J25, 53C55. address: 'Department of Mathematics, Ohio State University, Columbus, OH 43210' author: - Bo Guan and Wei Sun title: | On a Class of Fully Nonlinear Elliptic Equations\ on Hermitian Manifolds --- Introduction ============ Let $(M^n,\omega)$ be a compact Hermitian manifold of dimension $n \geq 2$ with smooth boundary $\partial M$ and $\chi$ a smooth real $(1,1)$ form on $\bar M := M \cup \partial M$. Define for a function $u \in C^2 (M)$, $$\chi_u = \chi + \frac{\sqrt{-1}}{2} \partial {\bar{\partial}}u$$ and set $$[\chi] = \big\{\chi_u: \, u \in C^2 (M)\big\}, \;\; [\chi]^+ = \big\{\chi' \in [\chi]: \chi' > 0\}.$$ In this paper we are concerned with the equation for $1 \leq \alpha \leq n$, $$\label{CH-I10} \begin{aligned} \chi_u^n = & \psi \chi_u^{n-\alpha} \wedge \omega^{\alpha} \;\; \mbox{in $M$, }. \end{aligned} $$ We require $\chi_{u} > 0$ so that equation  is elliptic; we call such functions [*admissible*]{} or $\chi$-[*plurisubharmonic*]{}. Consequently, we assume $\psi > 0$ on ${\bar{M}}$; equation  becomes degenerate when $\psi \geq 0$. When $\alpha = n$ this is the complex Monge-Ampère equation which plays extremely important roles in complex geometry and analysis, especially in Kähler geometry, and has received extensive study since the fundamental work of Yau [@Yau78] (see also [@Aubin78]) on compact Kähler manifolds and that of Caffarelli, Kohn, Nirenberg and Spruck [@CKNS] for the Dirichlet problem in strongly pseudoconvex domains in ${\hbox{\bbbld C}}^n$. 
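As an illustrative aside (this computation is not taken from the text): at a point where $\omega$ is the identity and $\chi_u$ is diagonal with eigenvalues $\lambda_1, \dots, \lambda_n$, comparing the coefficients of the volume form in the equation gives $n!\,\sigma_n(\lambda) = \psi\,(n-\alpha)!\,\alpha!\,\sigma_{n-\alpha}(\lambda)$, i.e. $\psi = \binom{n}{\alpha}\,\sigma_n(\lambda)/\sigma_{n-\alpha}(\lambda)$, where $\sigma_j$ denotes the $j$-th elementary symmetric polynomial. The Python sketch below (all names are our own) models the diagonal $(1,1)$-forms as commuting square-zero generators and checks this identity on an example:

```python
from itertools import combinations
from fractions import Fraction
from math import comb, prod

def wedge(f, g):
    """Multiply forms written in commuting square-zero generators e_i,
    modelling the diagonal (1,1)-forms (sqrt(-1)/2) dz_i ∧ dz̄_i.
    A form is a dict {frozenset of generator indices: coefficient}."""
    out = {}
    for I, a in f.items():
        for J, b in g.items():
            if not (I & J):  # e_i ∧ e_i = 0
                out[I | J] = out.get(I | J, 0) + a * b
    return out

def wedge_power(f, m):
    out = {frozenset(): 1}
    for _ in range(m):
        out = wedge(out, f)
    return out

n, alpha = 4, 2
lam = [1, 2, 3, 4]                                  # sample eigenvalues of chi_u
chi_u = {frozenset([i]): lam[i] for i in range(n)}  # chi_u = sum_i lam_i e_i
omega = {frozenset([i]): 1 for i in range(n)}       # omega = sum_i e_i
top = frozenset(range(n))                           # the volume-form component

lhs = wedge_power(chi_u, n)[top]                                    # n! * sigma_n(lam)
rhs = wedge(wedge_power(chi_u, n - alpha), wedge_power(omega, alpha))[top]

def sigma(j):
    return sum(prod(lam[i] for i in c) for c in combinations(range(n), j))

psi = Fraction(lhs, rhs)  # the pointwise value of psi forced by the equation
assert psi == Fraction(comb(n, alpha) * sigma(n), sigma(n - alpha))
```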
For $\alpha =1$ equation  also arises naturally in geometric problems; it was posed by Donaldson [@Donaldson99a] in connection with moment maps and is closely related to the Mabuchi energy [@Chen04], [@Weinkove06], [@SW08]. Donaldson’s problem assumes $M$ is closed, both $\omega$, $\chi$ are Kähler and $\psi$ is constant. It was studied by Chen [@Chen04], Weinkove [@Weinkove04], [@Weinkove06], Song and Weinkove [@SW08] using parabolic methods. In [@SW08] Song and Weinkove give a necessary and sufficient solvability condition. Their result was extended by Fang, Lai and Ma [@FLM11] to all $1 \leq \alpha < n$. In this paper we study the Dirichlet problem for equation  on Hermitian manifolds. Given $\psi \in C^{\infty} ({\bar{M}})$ and $\varphi \in C^{\infty} (\partial M)$, we wish to find a solution $u \in C^{\infty} ({\bar M})$ of equation  satisfying the boundary condition $$\label{CH-I10b} u = \varphi \;\; \mbox{on $\partial M$}.$$ The Dirichlet problem for the complex Monge-Ampère equation in ${\hbox{\bbbld C}}^n$ was studied by Caffarelli, Kohn, Nirenberg and Spruck [@CKNS] on strongly pseudoconvex domains. Their result was extended to Hermitian manifolds by Cherrier and Hanani [@CH99], [@Hanani96a], and by the first author [@Guan98b] to arbitrary bounded domains in ${\hbox{\bbbld C}}^n$ under the assumption of existence of a subsolution. See also the more recent papers [@GL10], [@Zhang10], and related work of Tosatti and Weinkove [@TWv10a], [@TWv10b] who completely extended the zero order estimate of Yau [@Yau78] on closed Kähler manifolds to the Hermitian case. In [@LiSY04] Li treated the Dirichlet problem for more general fully nonlinear elliptic equations in ${\hbox{\bbbld C}}^n$ but needed to assume the existence of a [*strict*]{} subsolution. Li’s result does not cover equation  as it fails to satisfy some of the key structure conditions in [@LiSY04].
In this paper we prove the following existence result which is new even in the case when $M$ is a bounded domain in ${\hbox{\bbbld C}}^n$ and $\chi = 0$; we assume $2 \leq \alpha \leq n-2$ as the cases $\alpha = 1$ and $\alpha = n-1$ were considered in [@GL12] and [@GL3], while for the complex Monge-Ampère equation ($\alpha = n$) it was proved in [@GL10]. \[gl-thm20\] Let $\psi \in C^{\infty} ({\bar{M}})$, $\psi > 0$ and $\varphi \in C^{\infty} (\partial M)$. There exists a unique admissible solution $u \in C^{\infty} ({\bar{M}})$ of the Dirichlet problem -, provided that there exists an admissible subsolution ${\underline}{u} \in C^2 ({\bar{M}})$: $$\label{CH-I20b} \left\{ \begin{aligned} & \chi_{{\underline}{u}}^n \geq \psi \chi_{{\underline}{u}}^{n-\alpha} \wedge \omega^{\alpha} \;\; \mbox{on ${\bar{M}}$} \\ & {\underline}{u} = \varphi \;\; \mbox{on $\partial M$}. \end{aligned} \right.$$ In order to solve the Dirichlet problem - one needs to derive [*a priori*]{} $C^2$ estimates up to the boundary for admissible solutions. The most difficult step is probably the second order estimates on the boundary. \[gl-thm20bc2\] Suppose $\psi \in C^1 ({\bar{M}})$, $\psi > 0$ and $\varphi \in C^4 (\partial M)$ and ${\underline}{u} \in C^2 ({\bar{M}})$ is an admissible subsolution satisfying . Let $u \in C^3 ({\bar{M}})$ be an admissible solution of the Dirichlet problem -. Then $$\label{cma-37} \max_{\partial M} |\nabla^2 u| \leq C$$ where $C$ depends on $|u|_{C^1 ({\bar{M}})}$, $\min \psi^{-1}$, $|{\underline}{u}|_{C^2 ({\bar{M}})}$ and $\min \{c_1: c_1 \chi_{{\underline}u} \geq \omega\}$, as well as other known data. This estimate is new for domains in ${\hbox{\bbbld C}}^n$. Note that $\partial M$ is assumed to be smooth and compact in Theorem \[gl-thm20bc2\], but otherwise is completely arbitrary. In general, the Dirichlet problem - is not always solvable in an arbitrary smooth bounded domain in ${\hbox{\bbbld C}}^n$ without the subsolution assumption. 
In the theory of nonlinear elliptic equations, many well known classical results assume certain geometric conditions on the boundary of the underlying domain; see e.g. [@Serrin69], [@CKNS], [@CNS1] and [@CNS3]. In [@GS93], [@Guan98a] and [@Guan98b], J. Spruck and the first author were able to solve the Dirichlet problem for real and complex Monge-Ampère equations on arbitrary smooth bounded domains assuming the existence of a subsolution. Their work was motivated by applications to geometric problems and had been found useful in some important problems such as the proof by P.-F. Guan [@GuanPF02],  [@GuanPF08] of the Chern-Levine-Nirenberg conjecture [@CLN69], and work on the Donaldson conjectures [@Donaldson99] on geodesics in the space of Kähler metrics; we refer the reader to [@PSS12] for recent progress and further references on this fast-developing subject. On a closed Kähler manifold $(M, \omega)$, Fang, Lai and Ma [@FLM11] proved second and zero order estimates for equation  when $\chi$ is also Kähler and $\psi$ is constant. We extend their second order estimates to Hermitian manifolds and for general $\chi$ and $\psi$. Technically the major difficulty is to control extra third order terms which occur due to the nontrivial torsion of the Hermitian metric. This was done in [@GL12], [@GL3] for $\alpha = 1$ and $\alpha = n-1$; the case $2 \leq \alpha \leq n-2$ is considerably more complicated. In order to solve the Dirichlet problem we also need global gradient estimates. Following [@SW08] and [@FLM11] let $$\label{CH-I20} \mathscr{C}_{\alpha} (\omega) = \big\{[\chi]: \, \exists \, \chi' \in [\chi]^+, \, n\chi'^{n-1} > (n-\alpha) \psi \chi'^{n-\alpha -1}\wedge \omega^\alpha\big\}.$$ \[cmate-main-1\] Let $u\in C^4(M) \cap C^2 (\bar M) $ be an admissible solution of equation (\[CH-I10\]) where $\psi \in C^2(\bar M)$, $\psi >0$. Suppose that $\chi \in \mathscr{C}_{\alpha} (\omega)$. 
Then there are constants $C_1, C_2$ depending on $|u|_{C^0(\bar M)}$ such that $$\label{CH-I30g} \sup_M |\nabla u| \leq C_1 (1 + \sup_{\p M}|\nabla u|),$$ $$\label{CH-I30s} \sup_M \Delta u \leq C_2(1 + \sup_{\p M} \Delta u).$$ In particular, if $M$ is closed ($\partial M = \emptyset$) then $ |\nabla u| \leq C_1$ and $|\Delta u | \leq C_2$ on $M$. The cone $\mathscr{C}_{\alpha} (\omega)$ was first introduced by Song and Weinkove [@SW08] ($\alpha=1$) and Fang, Lai and Ma [@FLM11] who derived the estimate on a closed Kähler manifold $(M, \omega)$ when $\chi$ is also Kähler and $$\psi = c_{\alpha} := \frac{\int_M \chi^n}{\int_M \chi^{n-\alpha} \wedge \omega^{\alpha}},$$ which is a Kähler class invariant. As in  [@SW08],  [@FLM11] the constant $C_2$ in Theorem \[cmate-main-1\] is independent of gradient bounds, i.e. $C_2$ is independent of $C_1$. The subsolution assumption implies $[\chi] \in \mathscr{C}_{\alpha} (\omega)$. On a closed manifold, a subsolution must be a solution or the equation has no solution. This is a consequence of the maximum principle and a concavity property of equation (\[CH-I10\]). The gradient estimate is crucial to the proof of Theorem \[gl-thm20\] and is also new when $\omega$ and $\chi$ are Kähler. Indeed, deriving gradient estimates for fully nonlinear equations on complex manifolds turns out to be a rather challenging and mostly open question. Only very recently were Dinew and Kolodziej [@DK] able to prove the gradient estimate using scaling techniques and Liouville type theorems for the complex Hessian equation $$\label{CH-I10H} \begin{aligned} \omega^n = & \psi \chi_u^{n-\alpha} \wedge \omega^{\alpha} \end{aligned} $$ on closed Kähler manifolds which is consequently solvable due to the earlier work of Hou, Ma and Wu [@HMW10]. The proof of Theorem \[cmate-main-1\] is carried out in Sections \[cmate-C1\] and \[cmate-C2\] where we derive the estimates for $|\nabla u|$ and $\Delta u$, the gradient and Laplacian of $u$, respectively. 
In Section \[gblq-B\] we establish the boundary estimates for second derivatives. These estimates allow us to derive global estimates for all (real) second derivatives as in Section 5 in [@GL10] and apply the Evans-Krylov theorem since equation   becomes uniformly elliptic. Theorem \[gl-thm20\] may then be proved by the continuity method. These steps are all well understood so we shall omit them. In section \[cmate-P\] we recall some formulas on Hermitian manifolds. Preliminaries {#cmate-P} ============= Let $g$ and $\nabla$ denote the Riemannian metric and Chern connection of $(M, \omega)$. The torsion and curvature tensors of $\nabla$ are defined by $$\label{cma-K95} \begin{aligned} T (u, v) \,& = \nabla_u v - \nabla_v u - [u,v], \\ R (u, v) w \,& = \nabla_u \nabla_v w - \nabla_v \nabla_u w - \nabla_{[u,v]} w, \end{aligned}$$ respectively. Following the notations in [@GL10], in local coordinates $z = (z_1, \ldots, z_n)$ we have $$\label{cma-K70} \left\{ \begin{aligned} g_{i {\bar{j}}} \,& = g \Big(\frac{\partial}{\partial z_i}, \frac{\partial}{\partial {\bar{z}}_j}\Big), \;\; \{g^{i{\bar{j}}}\} = \{g_{i{\bar{j}}}\}^{-1}, \\ T^k_{ij} \,& = \Gamma^k_{ij} - \Gamma^k_{ji} = g^{k{\bar{l}}} \Big(\frac{\partial g_{j{\bar{l}}}}{\partial z_i} - \frac{\partial g_{i{\bar{l}}}}{\partial z_j}\Big), \\ R_{i{\bar{j}}k{\bar{l}}} \,& = - g_{m {\bar{l}}} \frac{\partial \Gamma_{ik}^m}{\partial {\bar{z}}_j} = - \frac{\partial^2 g_{k{\bar{l}}}}{\partial z_i \partial {\bar{z}}_j} + g^{p{\bar{q}}} \frac{\partial g_{k{\bar{q}}}}{\partial z_i} \frac{\partial g_{p{\bar{l}}}}{\partial {\bar{z}}_j}. \end{aligned} \right.$$ Recall that for a smooth function $v$, $v_{i{\bar{j}}} = v_{{\bar{j}}i} = \partial_i {\bar{\partial}}_j v$, $v_{i{\bar{j}}k} = \partial_k v_{i{\bar{j}}} - \Gamma_{ki}^l v_{l{\bar{j}}}$ and $$v_{i{\bar{j}}k{\bar{l}}} = {\bar{\partial}}_l v_{i{\bar{j}}k} - {\overline}{\Gamma_{lj}^q} v_{i{\bar{q}}k}.$$ We have (see e.g. 
[@GL12]), $$\label{gblq-B145} \left\{ \begin{aligned} v_{i {\bar{j}}k} - v_{k {\bar{j}}i} = \,& T_{ik}^l v_{l{\bar{j}}}, \\ v_{i {\bar{j}}{\bar{k}}} - v_{i {\bar{k}}{\bar{j}}} = \,& {\overline}{T_{jk}^l} v_{i{\bar{l}}}, \end{aligned} \right.$$ $$\label{gblq-B147} \left\{ \begin{aligned} v_{i{\bar{j}}k{\bar{l}}} - v_{i{\bar{j}}{\bar{l}}k} = \,& g^{p{\bar{q}}} R_{k{\bar{l}}i{\bar{q}}} v_{p{\bar{j}}} - g^{p{\bar{q}}} R_{k {\bar{l}}p {\bar{j}}} v_{i{\bar{q}}}, \\ v_{i {\bar{j}}k {\bar{l}}} - v_{k {\bar{l}}i {\bar{j}}} = \,& g^{p{\bar{q}}} (R_{k{\bar{l}}i{\bar{q}}} v_{p{\bar{j}}} - R_{i{\bar{j}}k{\bar{q}}} v_{p{\bar{l}}}) + T_{ik}^p v_{p{\bar{j}}{\bar{l}}} + {\overline}{T_{jl}^q} v_{i{\bar{q}}k} - T_{ik}^p {\overline}{T_{jl}^q} v_{p{\bar{q}}}. \end{aligned} \right.$$ Let $u \in C^4 (M)$ be an admissible solution of equation . As in [@GL10] and [@GL12], we denote ${\mathfrak{g}}_{i{\bar{j}}} = \chi_{i{\bar{j}}} + u_{i{\bar{j}}}$, $\{{\mathfrak{g}}^{i{\bar{j}}}\} = \{{\mathfrak{g}}_{i{\bar{j}}}\}^{-1}$ and $w = {\mbox{tr}}\chi + \Delta u$. Note that $\{{\mathfrak{g}}_{i{\bar{j}}}\}$ is positive definite. Assume at a fixed point $p \in M$ that $g_{i{\bar{j}}} = \delta_{ij}$ and ${\mathfrak{g}}_{i{\bar{j}}}$ is diagonal. 
Then $$\label{gblq-R155a} \begin{aligned} u_{i {\bar{i}}k {\bar{k}}} - u_{k {\bar{k}}i {\bar{i}}} = \,& R_{k{\bar{k}}i{\bar{p}}} u_{p{\bar{i}}} - R_{i{\bar{i}}k{\bar{p}}} u_{p{\bar{k}}} + 2 {\mathfrak{Re}}\{{\overline}{T_{ik}^j} u_{i{\bar{j}}k}\} - T_{ik}^p {\overline}{T_{ik}^q} u_{p{\bar{q}}}, \end{aligned}$$ and therefore, $$\label{gblq-R155} \begin{aligned} {\mathfrak{g}}_{i {\bar{i}}k {\bar{k}}} - {\mathfrak{g}}_{k {\bar{k}}i {\bar{i}}} = \,& R_{k{\bar{k}}i{\bar{i}}} {\mathfrak{g}}_{i{\bar{i}}} - R_{i{\bar{i}}k{\bar{k}}} {\mathfrak{g}}_{k{\bar{k}}} + 2 {\mathfrak{Re}}\{{\overline}{T_{ik}^j} {\mathfrak{g}}_{i{\bar{j}}k}\} - |T_{ik}^j|^2 {\mathfrak{g}}_{j{\bar{j}}} - G_{i{\bar{i}}k{\bar{k}}} \end{aligned}$$ where $$\begin{aligned} G_{i{\bar{i}}k{\bar{k}}} = \,& \chi_{k {\bar{k}}i {\bar{i}}} - \chi_{i {\bar{i}}k {\bar{k}}} + R_{k{\bar{k}}i{\bar{p}}} \chi_{p{\bar{i}}} - R_{i{\bar{i}}k{\bar{p}}} \chi_{p{\bar{k}}} + 2 {\mathfrak{Re}}\{{\overline}{T_{ik}^j} \chi_{i{\bar{j}}k}\} - T_{ik}^p {\overline}{T_{ik}^q} \chi_{p{\bar{q}}}. \end{aligned}$$ Let $S_k (\lambda)$ denote the $k$-th elementary symmetric polynomial of $\lambda \in {\hbox{\bbbld R}}^n$ $$S_k (\lambda) = \sum_{1 \leq i_1 < \cdots < i_k \leq n} \lambda_{i_1} \cdots \lambda_{i_k}.$$ In local coordinates we can write equation  in the form $$\label{cmate-M10'} F ({\mathfrak{g}}_{i{\bar{j}}}) := \Big(\frac{S_n (\lambda_* ({\mathfrak{g}}_{i{\bar{j}}}))} {S_{n-\alpha} (\lambda_* ({\mathfrak{g}}_{i{\bar{j}}} ))}\Big)^{\frac{1}{\alpha}} = \Big(\frac{\psi}{C^\a_n}\Big)^{\frac{1}{\alpha}}$$ or equivalently, $$\label{cmate-M10} C^\a_n \psi^{-1} = S_\alpha (\lambda^* ({\mathfrak{g}}^{i{\bar{j}}}))$$ where $\lambda_* (A)$ and $\lambda^* (A)$ denote the eigenvalues of a Hermitian matrix $A$ with respect to $\{g_{i{\bar{j}}}\}$ and to $\{g^{i{\bar{j}}}\}$, respectively. Unless otherwise indicated we shall use $S_\alpha$ to denote $S_\alpha(\lambda^* ({\mathfrak{g}}^{i\bar j}))$ when no possible confusion would occur. 
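Although not needed for the proofs, the elementary symmetric polynomials and the one-homogeneity of $F$ used repeatedly below are easy to check numerically. A minimal sketch in Python (the function names are ours, not from the paper):

```python
def elem_sym(lams):
    """Return [S_0, S_1, ..., S_n], the elementary symmetric
    polynomials of lams, read off from the expansion prod_i (x + lam_i)."""
    n = len(lams)
    S = [1.0] + [0.0] * n
    for lam in lams:
        for k in range(n, 0, -1):
            S[k] += lam * S[k - 1]
    return S

def F(lams, alpha):
    """F = (S_n / S_{n-alpha})^(1/alpha), defined on positive eigenvalues."""
    n = len(lams)
    S = elem_sym(lams)
    return (S[n] / S[n - alpha]) ** (1.0 / alpha)

lam = [1.0, 2.0, 3.0]
assert elem_sym(lam) == [1.0, 6.0, 11.0, 6.0]
# F is homogeneous of degree one: F(t * lam) = t * F(lam)
assert abs(F([2.0 * x for x in lam], 2) - 2.0 * F(lam, 2)) < 1e-12
```

The homogeneity is immediate from $S_k(t\lambda) = t^k S_k(\lambda)$, so the ratio $S_n/S_{n-\alpha}$ scales like $t^{\alpha}$ and its $\alpha$-th root like $t$.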
We shall also occasionally write $F (\chi_u) := F ({\mathfrak{g}}_{i{\bar{j}}})$ and $F (\chi_{{\underline}u}) := F ({\underline}u_{i{\bar{j}}} + \chi_{i{\bar{j}}})$, etc. Differentiating equation twice at a point $p$ where $g_{i{\bar{j}}} = \delta_{ij}$ and ${\mathfrak{g}}_{i{\bar{j}}}$ is diagonal, we obtain $$\label{cmate-eq148} C^\a_n \partial_l (\psi^{-1}) = - \sum_i S_{\a -1;i} ({\mathfrak{g}}^{i{\bar{i}}})^2 {\mathfrak{g}}_{i{\bar{i}}l}$$ and $$\label{cmate-eq149'} \begin{aligned} C^\a_n {\bar{\partial}}_l\partial_l (\psi^{-1}) = \,& - \sum_i S_{\alpha -1;i}({\mathfrak{g}}^{i\bar i})^2 {\mathfrak{g}}_{i\bar il\bar l} + \sum_{i,j} S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 {\mathfrak{g}}^{j\bar j} ({\mathfrak{g}}_{i\bar j l} {\mathfrak{g}}_{j\bar i\bar l} + {\mathfrak{g}}_{j\bar i l} {\mathfrak{g}}_{i\bar j\bar l}) \\ & + \sum_{i \neq j} S_{\alpha -2;ij} ({\mathfrak{g}}^{i\bar i})^2({\mathfrak{g}}^{j\bar j})^2 \big({\mathfrak{g}}_{i \bar i l}{\mathfrak{g}}_{j \bar j\bar l} - {\mathfrak{g}}_{j\bar i l} {\mathfrak{g}}_{i\bar j\bar l}\big) \end{aligned}$$ where for $\{i_1,\cdots,i_s\} \subseteq \{1,\cdots,n\}$, $$S_{k;i_1\cdots i_s} (\lambda) = S_k (\lambda|_{\lambda_{i_1} = \cdots = \lambda_{i_s}= 0}).$$ We need the following inequality from [@GLZ]; see also Proposition 2.2 in [@FLM11], $$\label{glz} \sum^n_{i=1} \frac{S_{\a -1;i}(\lambda)}{\lambda_i} \xi_i \bar\xi_i + \sum_{i,j} S_{\a -2;ij}(\lambda) \xi_i\bar\xi_j \geq \sum_{i,j} \frac{S_{\a -1;i}(\lambda)S_{\a -1;j}(\lambda)} {S_\a (\lambda)}\xi_i\bar\xi_j \geq 0$$ for $\lambda = (\lambda_1, \ldots, \lambda_n)$, $\lambda_i > 0$ and $(\xi_1, \ldots, \xi_n) \in {\hbox{\bbbld C}}^n$. Apply to $\lambda_i = {\mathfrak{g}}^{i{\bar{i}}}$, $\xi_i = ({\mathfrak{g}}^{i\bar i})^2 {\mathfrak{g}}_{i \bar i l}$ and sum over $l$. 
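For the reader's convenience, we note why the last inequality in the display above holds: since $\lambda_i > 0$ implies $S_\alpha (\lambda) > 0$, the middle expression is a perfect square, $$\sum_{i,j} \frac{S_{\alpha -1;i}(\lambda) S_{\alpha -1;j}(\lambda)}{S_\alpha (\lambda)}\, \xi_i \bar\xi_j = \frac{1}{S_\alpha (\lambda)} \Big|\sum_i S_{\alpha -1;i}(\lambda)\, \xi_i\Big|^2 \geq 0.$$ 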
We see that $$\label{cmate-C2-E2} \sum_{i,l} S_{\alpha-1;i}({\mathfrak{g}}^{i\bar i})^3{\mathfrak{g}}_{i\bar il}{\mathfrak{g}}_{i\bar i\bar l} + \sum_{i\neq j} \sum_{l} S_{\alpha-2;ij} ({\mathfrak{g}}^{i\bar i})^2({\mathfrak{g}}^{j\bar j})^2 {\mathfrak{g}}_{i\bar il} {\mathfrak{g}}_{j\bar j\bar l} \geq 0.$$ Note also that $$\sum_{i \neq j} (S_{\alpha -1;i} - S_{\alpha-2;ij} {\mathfrak{g}}^{j\bar j}) ({\mathfrak{g}}^{i\bar i})^2 {\mathfrak{g}}^{j\bar j} {\mathfrak{g}}_{j\bar i l} {\mathfrak{g}}_{i\bar j\bar l} \geq 0.$$ We obtain from , $$\label{cmate-eq149} \begin{aligned} \sum_i S_{\alpha -1;i}({\mathfrak{g}}^{i\bar i})^2 {\mathfrak{g}}_{i\bar il\bar l} \geq \,& \sum_{i,j} S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 {\mathfrak{g}}^{j\bar j} {\mathfrak{g}}_{i\bar j l} {\mathfrak{g}}_{j\bar i\bar l} - C. \end{aligned}$$ Let ${\underline}u \in C^2 ({\bar{M}})$, $\chi_{{\underline}u}> 0$ such that $$\label{cmate-eq14} n \chi_{{\underline}u}^{n-1} > (n-\alpha) \psi \chi_{{\underline}u}^{n-\alpha -1} \wedge \omega^\alpha.$$ Thus there is $\epsilon > 0$ such that $$\label{cmate-eq151} \epsilon\omega \leq \chi_{{\underline}u} \leq \epsilon^{-1} \omega.$$ The key ingredient of our estimates in the following sections is the following lemma. \[cmate-2alternative\] There exist constants $N, \theta > 0$ such that when $w \geq N$ at a point $p$ where $g_{i{\bar{j}}} = \delta_{ij}$ and ${\mathfrak{g}}_{i{\bar{j}}}$ is diagonal, $$\label{GSn-P50} \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 ({\underline}u_{i\bar i} - u_{i\bar i}) \geq \theta \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 + \theta$$ and, equivalently, $$\label{GSn-P50'} \sum_{i, j} F^{i{\bar{j}}} ({\underline}u_{i\bar j} - u_{i\bar j}) \geq \theta \sum_{i, j} F^{i{\bar{j}}} g_{i\bar j} + \theta. $$ Here and in the rest of this paper, $$F^{i\bar j} = \frac{\p F}{\p {\mathfrak{g}}_{i\bar j}} ({\mathfrak{g}}_{i{\bar{j}}}).$$ It is well known that $\{F^{i\bar j}\}$ is positive definite. 
An equivalent form of Lemma \[cmate-2alternative\] and its proof are given in [@FLM11] (Theorem 2.8); see also [@Guan2012b] where it is proved for more general fully nonlinear equations. So we shall omit the proof here. The gradient estimates {#cmate-C1} ====================== In this section we establish the [*a priori*]{} gradient estimates. Suppose $\chi \in \mathscr{C}_{\alpha} (\omega)$ and let $u \in C^3 (M) \cap C^1 ({\bar{M}})$ be an admissible solution of . There is a uniform constant $C > 0$ such that $$\label{cmate-C1-1} \sup_{{\bar{M}}} |\nabla u | \leq C(1 + \sup_{\partial M}|\nabla u|).$$ Let ${\underline}u \in C^2 ({\bar{M}})$, $\chi_{{\underline}u}> 0$ satisfy and consider $\phi = Ae^\eta$ where $$\eta = {\underline}u - u + \sup_M (u - {\underline}u)$$ and $A$ is a constant to be determined. Suppose the function $e^{\phi}|\nabla u|^2$ attains its maximal value at an interior point $p\in M$. Choose local coordinates around $p$ such that $g_{i\bar j} = \delta_{ij}$ and ${\mathfrak{g}}_{i\bar j}$ is diagonal at $p$. At $p$ we have $$\label{cmate-C0-1} \frac{\partial_i(|\nabla u|^2)}{|\nabla u|^2} + \partial_i \phi = 0, \;\; \frac{\bar\partial_i(|\nabla u|^2)}{|\nabla u|^2} + \bar\partial_i\phi = 0$$ and $$\label{cmate-C0-2} \frac{\bar\partial_i\partial_i(|\nabla u|^2)}{|\nabla u|^2} - \frac{\partial_i(|\nabla u|^2)\bar\partial_i(|\nabla u|^2)}{|\nabla u|^4} + \bar\partial_i\partial_i\phi \leq 0.$$ By direct computation, $$\label{cmate-C0-3} \partial_i (|\nabla u|^2) = \sum_k (u_k u_{i \bar k} + u_{ki} u_{\bar k}),$$ $$\label{cmate-C0-4} \begin{aligned} \bar \partial_i \partial_i (|\nabla u|^2) \,& = \sum_k (u_{k\bar i} u_{\bar k i} + u_{ki} u_{\bar k\bar i} + u_{ki\bar i} u_{\bar k} + u_k u_{\bar k i\bar i}) \\ \,& = \sum_k (u_{ki} u_{\bar k\bar i} + u_{i\bar i k} u_{\bar k} + u_{i\bar i\bar k} u_k + R_{i\bar i k\bar l} u_l u_{\bar k}) \\ + \sum_k \,& \Big|u_{\bar k i} - \sum_l T^k_{il} u_{\bar l}\Big|^2 - \sum_k \Big|\sum_l T^k_{il} u_{\bar l}\Big|^2. 
\end{aligned}$$ Therefore, by and , $$\label{cmate-C0-41} \begin{aligned} \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 \bar\partial_i\partial_i(|\nabla u|^2) \geq \,& \sum_{i,k} S_{\alpha - 1;i}({\mathfrak{g}}^{i\bar i})^2 |u_{ki}|^2 \\ - C |\nabla u|^2 - C |\nabla u|^2 \sum_{i} \,& S_{\alpha - 1;i}({\mathfrak{g}}^{i\bar i})^2. \end{aligned}$$ From (\[cmate-C0-1\]) and (\[cmate-C0-3\]), $$\label{cmate-C0-5} \begin{aligned} \big|\partial_i(|\nabla u|^2)\big|^2 = \,& \Big|\sum_k u_{ki}u_{\bar k}\Big|^2 - 2|\nabla u|^2 \sum_k \mathfrak{Re}\big\{ u_k u_{i \bar k} \phi_{\bar i}\big\} - \Big|\sum_k u_k u_{i \bar k}\Big|^2 \\ \leq \,& |\nabla u|^2 \sum_k |u_{ki}|^2 - 2 |\nabla u|^2 \sum_k \mathfrak{Re}\big\{ u_k u_{i \bar k} \phi_{\bar i}\big\} \end{aligned}$$ by Schwarz inequality. Combining (\[cmate-C0-2\]), and (\[cmate-C0-5\]) we derive $$\label{cmate-C0-F} \begin{aligned} \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 (\phi_{i{\bar{i}}} - C) + \frac{2}{|\nabla u|^2} \sum_{i,k} S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 \mathfrak{Re}\big\{ u_k u_{i \bar k} \phi_{\bar i}\big\} \leq \,& C. \end{aligned}$$ Next, $$\partial_i\phi = \phi \, \partial_i \eta, \;\; \bar\partial_i \partial_i \phi = \phi \, (|\partial_i\eta|^2 + \bar\partial_i\partial_i\eta).$$ Therefore, $$\label{cmate-C0-6} \begin{aligned} 2 \phi^{-1} \sum_k \mathfrak{Re} \big\{u_k u_{i \bar k} \phi_{\bar i}\big\} \geq \,& 2 {\mathfrak{g}}_{i\bar i} \mathfrak{Re}\big\{u_i\eta_{\bar i}\big\} - \frac{1}{2} |\nabla u|^2 |\eta_i|^2 - C \end{aligned}$$ and $$\label{cmate-C0-7} \begin{aligned} \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 \eta_{i \bar i} \,& + \frac{1}{2} \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 |\eta_i|^2 \\ \leq \,& - \frac{2}{|\nabla u|^2} \sum_i S_{\alpha -1; i} {\mathfrak{g}}^{i\bar i} \mathfrak{Re}\big\{u_i\eta_{\bar i}\big\} + \frac{C}{\phi} \\ + C \Big(\frac{1}{\phi} \,& + \frac{1}{|\nabla u|^2}\Big) \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2. 
\end{aligned}$$ For $N > 0$ sufficiently large so that Lemma \[cmate-2alternative\] holds, we consider two cases: ([**a**]{}) $w > N$ and ([**b**]{}) $w \leq N$. Without loss of generality we can assume that $|\nabla u| > |\nabla \underline u|$ at $p$ or otherwise we are done. Note that $$\label{cmate-C0-8} - \frac{2}{|\nabla u|^2} \sum_i S_{\alpha -1; i} {\mathfrak{g}}^{i\bar i} \mathfrak{Re}\big\{u_i\eta_{\bar i}\big\} \leq 4 \sum_i S_{\alpha -1; i} {\mathfrak{g}}^{i\bar i} = 4 \alpha S_\alpha.$$ In case ([**a**]{}) we have by Lemma \[cmate-2alternative\] $$\label{cmate-C1-20} \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 \eta_{i \bar i} \geq \theta + \theta \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2.$$ So if $S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 \geq K$ for some $i$ and $K$ sufficiently large we derive a bound $|\nabla u| \leq C$ from and when $A$ is sufficiently large. Suppose that $S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 \leq K$ for all $i$ and assume ${\mathfrak{g}}_{1\bar 1} \leq \cdots \leq {\mathfrak{g}}_{n\bar n}$. Note that $$ \prod_{i = 1}^{\alpha} {\mathfrak{g}}^{i\bar i} \geq \frac{S_{\alpha}}{C_n^{\alpha}} = \frac{1}{\psi}.$$ We have $$\frac{{\mathfrak{g}}^{1\bar 1}}{\psi} \leq ({\mathfrak{g}}^{1\bar 1})^2 \prod_{i = 2}^{\alpha} {\mathfrak{g}}^{i\bar i} \leq S_{\alpha -1;1} ({\mathfrak{g}}^{1\bar 1})^2 \leq K.$$ Therefore, for all $1 \leq i \leq n$, $$S_{\alpha -1;i} \leq C_n^{\alpha -1} ({\mathfrak{g}}^{1\bar 1})^{\alpha -1} \leq C_n^{\alpha -1} (K \psi)^{\alpha -1} \leq K'.$$ By Schwarz inequality, $$\label{cmate-C0-8'} \begin{aligned} - 2 \sum_i S_{\alpha -1; i} {\mathfrak{g}}^{i\bar i} \mathfrak{Re}\big\{u_i\eta_{\bar i}\big\} \leq \,& 4 \sum_i S_{\alpha -1; i} + \frac{1}{4} |\nabla u|^2 \sum_i S_{\alpha -1; i} ({\mathfrak{g}}^{i\bar i})^2 |\eta_i|^2 \\ \leq \,& \frac{1}{4} |\nabla u|^2 \sum_i S_{\alpha -1; i} ({\mathfrak{g}}^{i\bar i})^2 |\eta_i|^2 + C. 
\end{aligned}$$ From , and we obtain $$\frac{\theta}{C} - \frac{1}{\phi} - \frac{1}{|\nabla u|^2} \leq 0.$$ This gives a bound for $|\nabla u|$ when $A$ is chosen sufficiently large. In case ([**b**]{}) we have $$\label{cmate-C0-F5} \begin{aligned} \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 |\eta_i|^2 \,& \geq |\nabla \eta|^2 \min_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 \geq \frac{|\nabla \eta|^2}{w^{\alpha + 1}} \geq \frac{|\nabla \eta|^2}{N^{\alpha + 1}}. \end{aligned}$$ Substituting this into (\[cmate-C0-7\]), we derive from and , $$\label{cmate-C0-F6} \begin{aligned} \frac{|\nabla \eta|^2}{2N^{\alpha + 1}} \leq \,& 5 \alpha S_\alpha + \frac{C}{\phi} + \Big(\frac{C}{\phi} + \frac{C }{|\nabla u|^2} - \epsilon\Big) \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2. \end{aligned}$$ This gives a bound $|\nabla u|\leq C$. Boundary estimates for second derivatives {#gblq-B} ========================================= In this section we prove Theorem \[gl-thm20bc2\]. Throughout this section we assume that $\varphi$ is extended smoothly to ${\bar{M}}$ and that ${\underline}{u} \in C^2 ({\bar{M}})$ is a subsolution satisfying . As in [@GL10] and [@GL3] we follow the idea of [@GS93], [@Guan98a], [@Guan98b] to use ${\underline}{u} - u$ in the construction of barrier functions. To derive let us consider a boundary point $0 \in \partial M$. We use coordinates around $0$ such that $\frac{\p}{\p x_n}$ is the interior normal direction to $\partial M$ at 0 and $g_{i{\bar{j}}} (0) = \delta_{ij}$. For convenience we set $$t_{2k-1} = x_k,\; t_{2k} = y_k, \, 1 \leq k \leq n-1; \; t_{2n-1} = y_n, \; t_{2n} = x_n .$$ Since $u - \varphi = 0$ on $\partial M$, one derives $$\label{cma-70} |u_{t_{\alpha} t_{\beta}}(0)| \leq C, \;\;\;\; \alpha, \beta < 2n$$ where $C$ depends on $|u|_{C^1 ({\bar{M}})}$, $|{\underline}{u}|_{C^1 ({\bar{M}})}$, and geometric quantities of $\partial M$. 
To estimate $u_{t_{\alpha} x_n} (0)$ for $\alpha \leq 2n$, we shall employ a barrier function of the form $$\label{cma-E85} v = (u - {\underline}{u}) + t \sigma - T \sigma^2 \;\; \mbox{in $\Omega_{\delta} = M \cap B_{\delta}$}$$ where $t, T$ are positive constants to be determined, $B_{\delta}$ is the (geodesic) ball of radius $\delta$ centered at $0$, and $\sigma$ is the distance function to $\partial M$. Note that $\sigma$ is smooth in $M_{\delta_0} := \{z \in M: \sigma (z) < \delta_0\}$ for some $\delta_0 > 0$. \[cma-lemma-20\] There exists $c_0 > 0$ such that for $T$ sufficiently large and $t, \delta$ sufficiently small, $v \geq 0$ and $$\label{eq-100} \sum_{i,j} F^{i{\bar{j}}} v_{i{\bar{j}}} \leq - c_0 \Big(1 + \sum_{i,j} F^{i{\bar{j}}} g_{i{\bar{j}}}\Big) \;\; \mbox{in} \;\; \Omega_{\delta}. $$ The proof is very similar to that of Lemma 5.1 in [@GL3]; for completeness we include it here. First of all, since $\sigma$ is smooth and $\sigma = 0$ on $\partial M$, for fixed $t$ and $T$ we may require $\delta$ to be so small that $v \geq 0$ in $\Omega_{\delta}$. Next, note that $$\sum_{i,j} F^{i{\bar{j}}} \sigma_{i{\bar{j}}} \leq C_1 \sum_{i,j} F^{i{\bar{j}}} g_{i{\bar{j}}}$$ for some constant $C_1 > 0$ under control. Therefore, $$\label{eq-110} \sum_{i,j} F^{i{\bar{j}}} v_{i{\bar{j}}} \leq \sum_{i,j} F^{i{\bar{j}}} (u_{i{\bar{j}}} -{\underline}{u}_{i{\bar{j}}}) + C_1 (t + T \sigma) \sum_{i,j} F^{i{\bar{j}}} g_{i{\bar{j}}} - 2 T \sum_{i,j} F^{i{\bar{j}}} \sigma_i \sigma_{{\bar{j}}}.$$ Fix $N > 0$ sufficiently large so that Lemma \[cmate-2alternative\] holds. At a fixed point in $\Omega_{\delta}$, we consider two cases: (a) $w \leq N$ and (b) $w > N$. In case (a) let $\lambda_1 \leq \cdots \leq \lambda_n$ be the eigenvalues of $\{{\mathfrak{g}}_{i{\bar{j}}}\}$. We see from equation  that there is a uniform lower bound $\lambda_1 \geq c_1 > 0$. 
Consequently, $c_2 I \leq \{F^{i{\bar{j}}}\} \leq \frac{1}{c_2} I$ for some constant $c_2 > 0$ depending on $N$ and $c_1$, and hence $$\label{cma-E95} \sum_{i,j} F^{i{\bar{j}}} \sigma_i \sigma_{{\bar{j}}} \geq c_2 |\nabla \sigma|^2 = \frac{c_2}{4}.$$ Since $F$ is homogeneous of degree one, by , and , $$\label{cma-E90} \sum_{i,j} F^{i{\bar{j}}} v_{i{\bar{j}}} \leq F ({\mathfrak{g}}_{i{\bar{j}}}) + (C_1 (t + T \sigma) - \epsilon) \sum_{i,j} F^{i{\bar{j}}} g_{i{\bar{j}}} - \frac{c_2 T}{2} \leq - \frac{\epsilon}{2} \Big(1 + \sum_{i,j} F^{i{\bar{j}}} g_{i{\bar{j}}}\Big)$$ if we fix $T$ sufficiently large and require $t$ and $\delta$ small to satisfy $C_1 (t + T \delta) \leq \epsilon/2$. Suppose now that $w > N$. By Lemma \[cmate-2alternative\] and , we may further require $t$ and $\delta$ to satisfy $C_1 (t + T \delta) \leq \theta/2$ so that holds. Using Lemma \[cma-lemma-20\] we may derive as in [@GL10] (but see [@GL3] for some corrections) the estimates $|u_{t_{\alpha} x_n} (0)| \leq C$ (and therefore $|u_{x_n t_{\alpha}} (0)| \leq C$) for $\alpha < 2n$; we shall omit the proof here. It remains to prove $${\mathfrak{g}}_{n\bar n} (0) \leq C .$$ The proof below uses an idea of Trudinger [@Trudinger95]. Let $T_C \partial M$ be the complex tangent bundle and $$T^{1,0}\partial M = T^{1,0} M \bigcap T_C \partial M = \{\xi\in T^{1,0}M : d \sigma(\xi) = 0\}.$$ Let $\hat \chi_u$ and $\hat \omega$ denote the restrictions to $T_C \partial M$ of $\chi_u$ and $\omega$ respectively. As in [@GL3] we only have to show that $$m_0 := \min_{\partial M} \frac{n \hat \chi^{n-1}_u}{\psi (n - \alpha) \hat \chi^{n - \alpha - 1}_u \wedge \hat \omega^\alpha} > 1 .$$ Suppose that $m_0$ is reached at a point $0 \in \partial M$. Let $\tau_1, \cdots, \tau_{n-1}$ be a local frame of vector fields in $T^{1,0} \partial M$ around 0 such that $g(\tau_\beta, \bar\tau_\gamma) = \delta_{\beta\gamma}$ for $1 \leq \beta, \gamma \leq n - 1$ and $\tau_\beta = \frac{\partial} {\partial z_{\beta}}$ at $0$. 
We extend $\tau_1, \ldots, \tau_{n-1}$ by their parallel transports along geodesics normal to $\partial M$ so that they are smoothly defined in a neighborhood of $0$. Denote $\tilde{u}_{\beta \bar{\gamma}} = u_{\tau_{\beta} \bar{\tau}_{\gamma}}$ and $\tilde{{\mathfrak{g}}}_{\beta \bar{\gamma}} = \tilde{u}_{\beta \bar{\gamma}} + \chi (\tau_{\beta}, \bar{\tau}_{\gamma})$, $1 \leq \beta, \gamma \leq n - 1$, etc. On $\partial M$ we have $$\frac{n\hat \chi^{n-1}_u}{\psi(n-\alpha)\hat\chi^{n-\alpha-1}_u \wedge \hat\omega^\alpha} = \frac{C^\alpha_n}{\psi} \frac{S_{n-1} (\tilde{{\mathfrak{g}}}_{\beta \bar{\gamma}})} {S_{n-\alpha-1}(\tilde{{\mathfrak{g}}}_{\beta \bar{\gamma}})}.$$ Define, for a positive definite $(n-1)\times(n-1)$ Hermitian matrix $\{r_{\beta\bar\gamma}\}$, $$G[r_{\beta \bar\gamma}] := \left(\frac{S_{n-1}(\lambda (r_{\beta \bar\gamma}))} {S_{n-\alpha-1} (\lambda (r_{\beta \bar\gamma}))}\right)^{\frac{1}{\alpha}},$$ where $\lambda (r_{\beta \bar\gamma})$ denotes the ordinary eigenvalues of $\{r_{\beta \bar\gamma}\}$ (with respect to the identity matrix $I$), and let $$G^{\beta\bar\gamma}_0 = \frac{\partial G}{\partial r_{\beta \bar\gamma}} [{\mathfrak{g}}_{\beta \bar\gamma}(0)] .$$ Note that $G$ is concave and homogeneous of degree one. Therefore, $$\label{cma-701} \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 r_{\beta \bar\gamma} \geq G [r_{\beta \bar\gamma}]$$ for any $\{r_{\beta \bar\gamma}\}$. In particular, since $u_{\beta \bar\gamma}(0) = \underline u_{\beta \bar\gamma} (0) + (u- \underline u)_{x_n} (0) \sigma_{\beta\bar\gamma} (0)$, we have $$\label{cma-702} \begin{aligned} G[{\mathfrak{g}}_{\beta \bar\gamma} (0)] = \,& \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 {\mathfrak{g}}_{\beta \bar\gamma}(0) \\ = \,& \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 (\chi_{\beta \bar\gamma}(0) + \underline u_{\beta \bar\gamma}(0)) + (u - \underline u)_{x_n} (0) \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 \sigma_{\beta \bar\gamma}(0). 
\end{aligned}$$ We shall need the following elementary lemma. \[cma-lemma-30\] Let $$A = \left[ \begin{aligned} B \;\;\; & \; C \\ \bar{C}' \;\; & a_{n{\bar{n}}} \end{aligned} \right]$$ be a positive definite Hermitian matrix. Then $$\label{eq-101} G^{\alpha} (B) \geq (1+ c_0) \frac{S_{n}(\lambda (A))} {S_{n-\alpha} (\lambda (A))}$$ where $c_0 > 0$ depends on the lower and upper bounds of the eigenvalues of $A$. It is straightforward to verify that $$\left[ \begin{aligned} I \;\;\;\; & 0 \\ \bar{C}' B^{-1} \;\; & 1 \end{aligned} \right] \; \left[ \begin{aligned} B \;\;\; & \; C \\ \bar{C}' \;\; & a_{n{\bar{n}}} \end{aligned} \right] \; \left[ \begin{aligned} I \;\; & B^{-1} C \\ 0 \;\; & \;\;\; 1 \end{aligned} \right] = \left[ \begin{aligned} B \;\; & \;\;\;\;\;\; 0 \\ \;\; 0 \;\; & a_{n{\bar{n}}} - \bar{C}' B^{-1} C \end{aligned} \right].$$ So $$\det A = (a_{n{\bar{n}}} - \bar{C}' B^{-1} C) \det B.$$ We now claim $$S_{n-\alpha} (\lambda (A)) \geq (a_{n{\bar{n}}} - \bar{C}' B^{-1} C) S_{n-\alpha-1} (\lambda (B)) + S_{n-\alpha} (\lambda (B)).$$ To see this we can assume $B$ is diagonal and consider a submatrix of $A$ of the form $$A_J = \left[ \begin{aligned} B_J \;\;\; & \; C_J \\ \bar{C_J}' \;\; & a_{n{\bar{n}}} \end{aligned} \right].$$ We have $$\bar{C}' B^{-1} C \geq \bar{C_J}' B_J^{-1} C_J \geq 0$$ since $B$ is positive definite and $\bar{C}'$ is the conjugate transpose of $C$. Therefore, $$\det A_J = (a_{n{\bar{n}}} - \bar{C_J}' B_J^{-1} C_J) \det B_J \geq (a_{n{\bar{n}}} - \bar{C}' B^{-1} C) \det B_J.$$ The claim and now follow easily. We continue the proof of . 
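The determinant factorization $\det A = (a_{n{\bar{n}}} - \bar{C}' B^{-1} C) \det B$ used above is easy to verify numerically. A minimal sketch in Python for the smallest nontrivial case, where $B$ is $1 \times 1$ (the numerical values are illustrative):

```python
# Hermitian positive definite A = [[b, c], [conj(c), a_nn]];
# here B = [b] is 1x1 and C = [c], so the Schur complement is a scalar.
b, a_nn = 2.0, 3.0
c = 1.0 + 1.0j
det_A = (b * a_nn - c * c.conjugate()).real          # 2*3 - |1+i|^2 = 4
schur = (a_nn - c.conjugate() * (1.0 / b) * c).real  # a_nn - conj(C)' B^{-1} C = 2
det_B = b
# det A = (a_nn - conj(C)' B^{-1} C) det B
assert abs(det_A - schur * det_B) < 1e-12
```

Positive definiteness of $A$ is equivalent to $B > 0$ together with a positive Schur complement, which is why the factor $a_{n{\bar{n}}} - \bar{C}' B^{-1} C$ above is positive.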
Suppose that for some small $\theta_0 > 0$ to be determined later, $$- \sum_{\beta,\gamma < n} (u - \underline u)_{x_n} (0) G^{\beta \bar\gamma}_0 \sigma_{\beta \bar\gamma}(0) \leq \theta_0 \sum_{\beta,\gamma < n} G^{\beta\bar\gamma}_0 (\chi_{\beta\bar\gamma}(0) + \underline u_{\beta\bar\gamma}(0)).$$ Then, $$\begin{aligned} G[{\mathfrak{g}}_{\beta\bar\gamma}(0)] \geq \,& (1 - \theta_0) \sum_{\beta,\gamma < n} G^{\beta\bar\gamma}_0 (\chi_{\beta\bar\gamma}(0) + \underline u_{\beta\bar\gamma}(0)) \\ \geq \,& (1 - \theta_0) \, G[\chi_{\beta \bar\gamma}(0) + \underline u_{\beta \bar\gamma}(0)] \\ \geq \,& (1 - \theta_0) (1 + c_0) F (\chi_{{\underline}u}) \\ \geq \,& (1 - \theta_0) (1 + c_0) \left(\frac{\psi(0)}{C^\alpha_n}\right)^{\frac{1}{\alpha}}. \end{aligned}$$ The second and fourth inequalities follow from and , respectively, while the third from Lemma \[cma-lemma-30\]. Choosing $\theta_0$ small enough, we obtain $$m_0 = \frac{C_n^\alpha}{\psi(0)} G[{\mathfrak{g}}_{\beta\bar\gamma}(0)]^{\alpha} \geq 1 + \frac{\theta_0}{2}.$$ Suppose now that $$- (u - \underline u)_{x_n} (0) \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 \sigma_{\beta \bar\gamma} (0) > \theta_0 \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 (\chi_{\beta \bar\gamma}(0) + \underline u_{\beta \bar\gamma}(0)).$$ On $\partial M$, $\tilde u_{\beta\bar\gamma} = \tilde \varphi_{\beta\bar\gamma} + (u - \varphi)_\nu \tilde \sigma_{\beta\bar\gamma}$ where $$\nu = \sum^{2n}_{k = 1} \nu^k \frac{\p}{\p t_k}$$ is the interior unit normal vector field to $\partial M$. We have $|\nu^k| \leq C\rho$ for $k < 2n$ and $|(u - \varphi)_{t_k}| \leq C \rho$ since $\nu^k (0) = 0$ for $k < 2n$ and $u = \varphi$ on $\partial M$. 
Define $$\begin{aligned} \varPhi = \,& \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 (\tilde \chi_{\beta \bar\gamma} + \tilde\varphi_{\beta \bar\gamma}) + (u - \varphi)_{x_n} \nu^{2n} \sum_{\beta,\gamma < n} G^{\beta\bar\gamma}_0 \tilde \sigma_{\beta \bar\gamma} - \Big(\frac{m_0 \psi}{C^\alpha_n}\Big)^{\frac{1}{\alpha}} \\ := \,& - (u - \varphi)_{x_n} \eta + Q \end{aligned}$$ where $\eta$ and Q are smooth. Note that $\varPhi(0) = 0$ and $$\begin{aligned} \eta (0) = - \,& \nu^{2n} (0) \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 \sigma_{\beta \bar\gamma} (0) \\ > \,& \frac{\theta_0}{(u - {\underline}u)_{x_n} (0)} \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 (\chi_{\beta \bar\gamma}(0) + {\underline}u_{\beta \bar\gamma}(0)) \\ \geq \,& \frac{\theta (1 + \epsilon) \psi(0)}{C^\alpha_n (u - \underline u)_{x_n} (0) } \geq c_2 > 0. \end{aligned}$$ On $\partial M$, $$\begin{aligned} \varPhi = \,& \sum_{\beta,\gamma < n}G^{\beta \bar\gamma}_0 \tilde {\mathfrak{g}}_{\beta \bar\gamma} - \sum_{k < 2n}(u - \varphi)_{t_k} \nu^{k} \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 \tilde \sigma_{\beta \bar\gamma} - \Big(\frac{m_0 \psi}{C^\alpha_n}\Big)^{\frac{1}{\alpha}} \\ \geq & \sum_{k < 2n}(u - \varphi)_{t_k} \nu^{k} \sum_{\beta,\gamma < n} G^{\beta \bar\gamma}_0 \tilde \sigma_{\beta \bar\gamma} \geq - C \rho^2 \end{aligned}$$ since by $$\sum_{\beta,\gamma < n}G^{\beta \bar\gamma}_0 \tilde {\mathfrak{g}}_{\beta \bar\gamma} \geq G [\tilde {\mathfrak{g}}_{\beta \bar\gamma}] \geq \Big(\frac{m_0 \psi}{C^\alpha_n}\Big)^{\frac{1}{\alpha}}.$$ We calculate $$\begin{aligned} & \sum_{i,j} F^{i\bar j} \varPhi_{i\bar j} \leq - \eta \sum_{i,j} F^{i\bar j} (u_{x_n})_{i\bar j} + C \sum_{i,j} F^{i\bar j} g_{i\bar j} \\ + \sum_{i,j} F^{i\bar j} \,& (u - \varphi)_{x_n z_i} (u - \varphi)_{x_n {\bar{z}}_j}. 
\end{aligned}$$ As in [@CKNS] (see also [@GL3]), $$\sum_{i,j} F^{i\bar j} (u - \varphi)_{x_n z_i} (u - \varphi)_{x_n {\bar{z}}_j} \leq \sum_{i,j} F^{i\bar j} (u - \varphi)_{y_n z_i} (u - \varphi)_{y_n {\bar{z}}_j} + C \sum_{i,j} F^{i\bar j} g_{i\bar j} + C.$$ On the other hand, differentiating equation with respect to $x_n$, we see that $$\label{cmate-bd-E1} \begin{aligned} - \sum_{i,j} F^{i\bar j} (u_{x_n})_{i\bar j} \leq \,& 2 \Big|\sum_{i,j,l} F^{i\bar j} {\mathfrak{g}}_{i\bar l} \overline{\Gamma^l_{nj}}\Big| + C \sum_{i,j} F^{i\bar j} g_{i\bar j} + C. \end{aligned}$$ At a fixed point choose a unitary $A = \{a_{ij}\}_{n \times n}$ which diagonalizes $\{{\mathfrak{g}}_{i{\bar{j}}}\}$. We have $$\label{cmate-bd-E2} \begin{aligned} \sum_{i,j,l} F^{i\bar j} {\mathfrak{g}}_{i\bar l} \overline{\Gamma^l_{nj}} = \,& \sum_{i,j,l,s,t,p,q} a^{is} f_s \delta_{st} \bar a^{jt} a_{ip} \lambda_p \delta_{pq} \bar a_{lq} \overline{\Gamma^l_{nj}} \\ = & \sum_{q} f_q \lambda_q \sum_{j,l} \bar a^{jq} \bar a_{lq} {\overline}{\Gamma^l_{nj}} \leq C \psi . \end{aligned}$$Therefore, $$- \sum_{i,j} F^{i\bar j} (u_{x_n})_{i\bar j} \leq C \sum_{i,j} F^{i\bar j} g_{i\bar j} + C. $$ Applying Lemma \[cma-lemma-20\] we derive $$\sum_{i,j} F^{i{\bar{j}}} (Av + B\rho^2 + \Phi - |(u - \varphi)_{y_n}|^2)_{i{\bar{j}}} \leq 0 \;\; \mbox{in $M\cap B_\delta(0)$}$$ and $Av + B\rho^2 + \Phi - |(u - \varphi)_{y_n}|^2 \geq 0$ on $\partial (M \cap B_\delta(0))$ when $A \gg B \gg 1$. By the maximum principle, $Av + B\rho^2 + \Phi - |(u - \varphi)_{y_n}|^2 \geq 0$ in $M\cap B_\delta(0)$, and therefore $\Phi_{x_n}(0) \geq - C$. This gives $$u_{n\bar n} (0) \leq C.$$ We now have positive lower and upper bounds for all eigenvalues of $\{{\mathfrak{g}}_{i{\bar{j}}} (0)\}$. By Lemma \[cma-lemma-30\], $$G[{\mathfrak{g}}_{\beta \bar \gamma} (0))] \geq (1 + c_0) F ({\mathfrak{g}}_{i{\bar{j}}} (0))$$ for some $c_0 > 0$. 
It follows that $$m_0 = \frac{C_n^\alpha}{\psi(0)} G[{\mathfrak{g}}_{\beta \bar \gamma} (0)]^{\alpha} \geq 1 + c_0.$$ The proof of is therefore complete. The second order estimates {#cmate-C2} ========================== Suppose $\chi \in \mathscr{C}_{\alpha} (\omega)$ and let $u \in C^4(M) \cap C^2({\bar{M}})$ be an admissible solution of equation . Then there is a uniform constant $C>0$ such that $$\label{cmate-C2-1} \sup_{{\overline}M} \Delta u \leq C(1 + \sup_{\partial M} \Delta u).$$ Let $\phi$ be a function to be determined later and assume that $w e^{\phi}$ reaches its maximum at some point $p \in M $ where $w = \Delta u + {\mbox{tr}}\chi$. Choose local coordinates around $p$ such that $g_{i\bar j}(p) = \delta_{ij}$ and ${\mathfrak{g}}_{i\bar j}$ is diagonal. At $p$ we have $$\label{cmate-C2-F1} \frac{\partial_l w}{w} + \partial_l\phi = 0, \;\; \frac{\bar\partial_l w}{w} + \bar\partial_l\phi = 0$$ and $$\label{cmate-C2-F} \frac{\bar\partial_l\partial_l w}{w} - \frac{\bar\partial_l w\partial_l w}{w^2} + \bar\partial_l\partial_l\phi \leq 0.$$ By and Schwarz inequality, $$\label{cmate-C2-2} \begin{aligned} \big|\partial_l w \big|^2 = \,& \Big|\sum_i {\mathfrak{g}}_{i\bar i l} \Big|^2 = \Big|\sum_i ({\mathfrak{g}}_{l\bar ii} - T^i_{li} {\mathfrak{g}}_{i\bar i}) + \lambda_l\Big|^2 \\ \leq \,& w \sum_i {\mathfrak{g}}^{i\bar i} \big| {\mathfrak{g}}_{l\bar ii} - T^i_{li} {\mathfrak{g}}_{i\bar i}\big|^2 - 2 w \mathfrak{Re} \big\{\phi_l \bar{\lambda_l} \big\} - \big|\lambda_l\big|^2 \end{aligned}$$ where $$\lambda_l = \sum_i \Big(\chi_{i\bar il} - \chi_{l\bar ii} + \sum_j T^j_{li}\chi_{j\bar i}\Big).$$ Next, by and , $$\label{cmate-C2-E1} \begin{aligned} \sum_l S_{\alpha-1;l} ({\mathfrak{g}}^{l\bar l})^2 \bar\partial_l\partial_l w = \, & \sum_{i,l} S_{\alpha-1;l} ({\mathfrak{g}}^{l\bar l})^2 {\mathfrak{g}}_{i\bar i l\bar l} \\ \geq \,& \sum_{i,l} S_{\alpha-1 ;l} ({\mathfrak{g}}^{l\bar l})^2 {\mathfrak{g}}_{l\bar li\bar i} - 2 \sum_{i,j,l} S_{\alpha-1 ;l} ({\mathfrak{g}}^{l\bar l})^2 \mathfrak{Re} 
\big\{\overline{T^j_{li}}{\mathfrak{g}}_{l\bar ji}\big\} \\ & + \sum_{i,j,l} S_{\alpha-1 ;l} ({\mathfrak{g}}^{l\bar l})^2 T^j_{li} \overline{T^j_{li}} {\mathfrak{g}}_{j\bar j} - C w \sum_l S_{\alpha-1;l} ({\mathfrak{g}}^{l\bar l})^2 \\ \geq \,& \sum_{i,j,l} S_{\a -1;i}({\mathfrak{g}}^{i{\bar{i}}})^2{\mathfrak{g}}^{j{\bar{j}}} \big|{\mathfrak{g}}_{i{\bar{j}}l} - T^j_{il}{\mathfrak{g}}_{j{\bar{j}}}\big|^2 \\ & - C w \sum_l S_{\alpha-1;l} ({\mathfrak{g}}^{l\bar l})^2 - C. \end{aligned}$$ It follows from (\[cmate-C2-F\]), (\[cmate-C2-2\]) and (\[cmate-C2-E1\]) that $$\label{cmate-C2-F2} \begin{aligned} 0 \geq \,& w \sum_i S_{\alpha -1;i}({\mathfrak{g}}^{i\bar i})^2 \phi_{i\bar i} + 2 \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 \mathfrak{Re} \big\{\phi_i \bar {\lambda_i}\big\} \\ & - C w \sum_i S_{\alpha -1;i}({\mathfrak{g}}^{i\bar i})^2 - C. \end{aligned}$$ Let $\phi = e^{A\eta}$ with $\eta = {\underline}u - u + \sup_M (u - {\underline}u)$, where ${\underline}u \in C^2 ({\bar{M}})$ satisfies $\chi_{{\underline}u}> 0$ and , and $A$ is a positive constant to be determined. So $$\phi_i = A \phi \eta_i, \;\; \phi_{i{\bar{i}}} = A \phi \eta_{i{\bar{i}}} + A^2 \phi |\eta_i|^2.$$ Applying Schwarz inequality again, $$\label{cmate-C2-3} \begin{aligned} 2 \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 {\mathfrak{Re}}\big\{\phi_i \bar{\lambda_ i}\big\} = \,& 2 A \phi \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i \bar i})^2 \mathfrak{Re} \big\{\eta_i \bar{\lambda_ i}\big\} \\ \geq - w A^2 \phi \sum_i \,& S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2 |\eta_i|^2 - \frac{C \phi}{w} \sum_i S_{\alpha -1;i} ({\mathfrak{g}}^{i\bar i})^2. \end{aligned}$$ Finally, by (\[cmate-C2-F2\]) and (\[cmate-C2-3\]), $$\label{cmate-C2-F3} \begin{aligned} w A \sum_i S_{\alpha -1;i}({\mathfrak{g}}^{i\bar i})^2 \eta_{i\bar i} \leq \,& \frac{C}{\phi} + C \Big(\frac{1}{w} + \frac{w}{\phi}\Big) \sum_i S_{\alpha -1;i}({\mathfrak{g}}^{i\bar i})^2. 
\end{aligned}$$ From Lemma \[cmate-2alternative\], this gives a bound $w \leq C$ at $p$ for $A$ sufficiently large. [99]{} T. Aubin, [*Équations du type Monge-Ampère sur les variétés kählériennes compactes*]{}, (French) Bull. Sci. Math. (2) [**102**]{} (1978), 63–95. L. A. Caffarelli, L. Nirenberg and J. Spruck, [*The Dirichlet problem for nonlinear second-order elliptic equations I. Monge-Ampère equations*]{}, [ Comm. Pure Applied Math.]{} [**37**]{} (1984), 369–402. L. A. Caffarelli, J. J. Kohn, L. Nirenberg and J. Spruck, [*The Dirichlet problem for nonlinear second-order elliptic equations II. Complex Monge-Ampère and uniformly elliptic equations*]{}, [ Comm. Pure Applied Math.]{} [**38**]{} (1985), 209–252. L. A. Caffarelli, L. Nirenberg and J. Spruck, [*The Dirichlet problem for nonlinear second-order elliptic equations III: Functions of eigenvalues of the Hessians*]{}, [ Acta Math.]{} [**155**]{} (1985), 261–301. X.-X. Chen, [*A new parabolic flow in Kähler manifolds*]{}, Comm. Anal. Geom. [**12**]{} (2004), 837–852. S. S. Chern, H. I. Levine, L. Nirenberg, [*Intrinsic norms on a complex manifold*]{}, 1969 Global Analysis (Papers in Honor of K. Kodaira) pp. 119–139, Univ. Tokyo Press, Tokyo. P. Cherrier, [*Equations de Monge-Ampère sur les variétés hermitiennes compactes*]{}, Bull. Sci. Math. [**111**]{} (1987), 343–385. P. Cherrier and A. Hanani, [*Le problème de Dirichlet pour des équations de Monge-Ampère en métrique hermitienne*]{}, Bull. Sci. Math. [**123**]{} (1999), 577–597. S. Dinew and S. Kolodziej, [*Liouville and Calabi-Yau type theorems for complex Hessian equations*]{}, arXiv: 1203.3995. S. K. Donaldson, [*Symmetric spaces, Kähler geometry and Hamiltonian dynamics*]{}, Northern California Symplectic Geometry Seminar, 13–33, Amer. Math. Soc. Transl. Ser. 2, [**196**]{}, Amer. Math. Soc., Providence, RI, 1999. S. K. Donaldson, [*Moment maps and diffeomorphisms*]{}, Asian J. Math. [**3**]{} (1999), 1–16. H. Fang, M.-J. Lai and X.-N. 
Ma, [*On a class of fully nonlinear flows in Kähler geometry*]{} J. Reine Angew. Math. [**653**]{} (2011), 189–220. B. Guan, [*The Dirichlet problem for Monge-Ampère equations in non-convex domains and spacelike hypersurfaces of constant Gauss curvature*]{}, Trans. Amer. Math. Soc. [**350**]{} (1998), 4955–4971. B. Guan, [*The Dirichlet problem for complex Monge-Ampère equations and regularity of the pluri-complex Green function*]{}, Comm. Anal. Geom. [**6**]{} (1998), 687–703. [*A correction*]{}, [**8**]{} (2000), 213–218. B. Guan, [*Second order estimates for fully nonlinear elliptic equations on Kähler manifolds*]{}, preprint 2012. B. Guan and Q. Li, [*Complex Monge-Ampère equations and totally real submanifolds*]{}, Adv. Math. [**225**]{} (2010), 1185-1223. B. Guan and Q. Li, [*A Monge-Ampère type fully nonlinear equation on Hermitian manifolds*]{}, Disc. Cont. Dynam. Syst. B [**17**]{} (2012), 1991–1999. B. Guan and Q. Li, [*The Dirichlet problem for a Complex Monge-Ampère type equation on Hermitian Manifolds*]{}, ArXiv:1210.5526. B. Guan and J. Spruck, [*Boundary value problem on ${\hbox{\bbbld S}}^n$ for surfaces of constant Gauss curvature*]{}, [ Annals of Math.]{} [**138**]{} (1993), 601–624. P.-F. Guan, [*Extremal functions related to intrinsic norms*]{}, Ann. of Math. [**156**]{} (2002), 197–211. P.-F. Guan, [*Remarks on the homogeneous complex Monge-Ampère equation*]{}, Complex Analysis, Trends in Math., Springer Basel AG. (2010), 175–185. P.-F. Guan, Q. Li and X. Zhang, [*A uniqueness theorem in Kähler geometry*]{}, Math. Ann. [**345**]{} (2009), 377–393. A. Hanani, [*Équations du type de Monge-Ampère sur les variétés hermitiennes compactes*]{}, J. Funct. Anal. [**137**]{} (1996), 49–75. Z. Hou, X.-N. Ma and D.-M. Wu, [*A second order estimate for complex Hessian equations on a compact Kähler manifold*]{}, Math. Res. Lett. [**17**]{} (2010), 547-561. S.-Y. 
Li, [*On the Dirichlet problems for symmetric function equations of the eigenvalues of the complex Hessian*]{}, Asian J. Math. [**8**]{} (2004), 87–106. D. H. Phong, J. Song and J. Sturm, [*Complex Monge-Ampère equations*]{}, Surveys in Differential Geometry, vol. [**17**]{}, 327–411 (2012). J. Serrin, [*The problem of [D]{}irichlet for quasilinear elliptic differential equations with many independent variables*]{}, [Philos. Trans. Royal Soc. London]{} [**264**]{} (1969), 413–496. J. Song and B. Weinkove, [*On the convergence and singularities of the J-flow with applications to the Mabuchi energy*]{}, Comm. Pure Appl. Math. [**61**]{} (2008), 210–229. V. Tosatti and B. Weinkove, [*Estimates for the complex Monge-Ampère equation on Hermitian and balanced manifolds*]{}, Asian J. Math. [**14**]{} (2010), 19–40. V. Tosatti and B. Weinkove, [*The complex Monge-Ampère equation on compact Hermitian manifolds*]{}, J. Amer. Math. Soc. [**23**]{} (2010), 1187–1195. N. S. Trudinger, [*On the Dirichlet problem for Hessian equations*]{}, Acta Math. [**175**]{} (1995), 151–164. B. Weinkove, [*Convergence of the J-flow on Kähler surfaces*]{}, Comm. Anal. Geom. [**12**]{} (2004), 949–965. B. Weinkove, [*On the J-flow in higher dimensions and the lower boundedness of the Mabuchi energy*]{}, J. Differential Geom. [**73**]{} (2006), 351–358. S.-T. Yau, [*On the Ricci curvature of a compact Kähler manifold and the complex Monge-Ampère equation. I.*]{} Comm. Pure Appl. Math. [**31**]{} (1978), 339–411. X.-W. Zhang, [*A priori estimate for complex Monge-Ampère equation on Hermitian manifolds*]{}, Int. Math. Res. Notices [**2010**]{} (2010), 3814–3836.
--- abstract: 'Using a new technique to improve the sensitivity to weak Quasi-Periodic Oscillations (QPO) we discovered a new QPO peak at about 1100 Hz in the March 1996 outburst observations of 4U 1608–52, simultaneous with the $\sim 600 - 900$ Hz peak previously reported from these data. The frequency separation between the upper and the lower QPO peak varied significantly from $232.7 \pm 11.5$ Hz on March 3, to $293.1 \pm 6.6$ Hz on March 6. This is the first case of a variable kHz peak separation in an atoll source.' address: | $^{1}$Astronomical Institute “Anton Pannekoek”, University of Amsterdam and Center for High-Energy Astrophysics, Kruislaan 403, NL-1098 SJ Amsterdam, the Netherlands\ $^{2}$Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Paseo del Bosque S/N, 1900 La Plata, Argentina\ $^{3}$Physics Department, University of Alabama in Huntsville, Huntsville, AL 35899, USA\ $^{4}$Massachusetts Institute of Technology, Center for Space Research, Room 37-627, Cambridge, MA 02139, USA\ $^{5}$Space Radiation Laboratory, California Institute of Technology, MC 220-47, Pasadena CA 91125, USA\ $^{6}$Astrophysics, University of Oxford, Nuclear and Astrophysics Laboratory, Keble Road, Oxford OX1 3RH, United Kingdom\ $^{7}$Laboratory for High Energy Astrophysics, Goddard Space Flight Center, Greenbelt, MD 20771, USA\ $^{8}$Departments of Physics and Astronomy, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA\ $^{9}$Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA author: - 'M. Méndez$^{1,2}$, M. van der Klis$^{1}$, J. van Paradijs$^{1,3}$, W. H. G. Lewin$^{4}$, B. A. Vaughan$^{5}$, E. Kuulkers$^{6}$, W. Zhang$^{7}$, F. K. Lamb$^{8}$, D. 
Psaltis$^{9}$' title: 'A varying kHz peak separation in 4U 1608–52' --- Introduction {#introduction .unnumbered} ============ Observations with the Rossi X-ray Timing Explorer (RXTE) have so far led to the discovery of kilohertz quasi-periodic oscillations (kHz QPOs) in about a dozen low-mass X-ray binaries (LMXBs; see van der Klis 1997 [@vanderklis97a] for a review). In most cases the power spectra of these sources show twin kHz peaks that move up and down in frequency together, keeping a constant separation (e.g., Wijnands et al. 1997 [@wijnandsetal97]). In several sources a third peak has been found near a frequency equal to the separation of the twin peaks, or twice that value, suggesting a beat-frequency interpretation for the kHz QPOs [@strohmayer96b]. Here we present the results of the analysis of two RXTE observations of the atoll source 4U 1608–52 on 1996 March 3 and 6, during an outburst. Using a new technique to increase the sensitivity to weak QPOs, we detected a second peak at $\sim 1100~$Hz, simultaneous with the $\sim 600 - 900$ Hz peak discovered by Berger et al. (1996) [@berger96] in the same data. Detection of the second QPO {#detection-of-the-second-qpo .unnumbered} =========================== We divided the high-time-resolution data into segments of 64 s and calculated a power spectrum for each segment. During both observations the strong peak previously reported by Berger et al. (1996) [@berger96] was well detected in each segment. Its frequency varied from $\sim 820$ to $\sim 890$ Hz on March 3, and between $\sim 650$ and $\sim 870$ Hz on March 6. We fitted the centroid frequency of the strong peak in the power spectrum of each segment, and we then shifted the frequency scale of each individual spectrum to a frame of reference where the position of the strong peak was constant in time. Finally, we averaged these shifted power spectra.
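The “shift and average” procedure lends itself to a compact numerical sketch. The snippet below uses purely synthetic data — the segment count, frequency grid, noise level, and Lorentzian peak model are illustrative assumptions, not the actual RXTE pipeline — and assumes the per-segment peak centroids have already been fitted:

```python
import numpy as np

def shift_and_average(freqs, spectra, peak_freqs, ref_freq):
    """Shift each power spectrum so its fitted QPO peak lands at ref_freq,
    then average the aligned spectra (out-of-range bins clamp to the edges)."""
    shifted = [np.interp(freqs + (f0 - ref_freq), freqs, spec)
               for spec, f0 in zip(spectra, peak_freqs)]
    return np.mean(shifted, axis=0)

# synthetic demonstration: a Lorentzian peak drifting from segment to segment
rng = np.random.default_rng(0)
freqs = np.linspace(400.0, 1400.0, 2001)          # Hz, 0.5 Hz bins
peak_freqs = rng.uniform(650.0, 870.0, size=50)   # peak wanders as in the data

def lorentzian(f, f0, w=10.0):
    return 1.0 / (1.0 + ((f - f0) / w) ** 2)

spectra = [2.0 + 5.0 * lorentzian(freqs, f0)
           + 0.1 * rng.standard_normal(freqs.size) for f0 in peak_freqs]

naive = np.mean(spectra, axis=0)  # peak smeared over the ~220 Hz drift range
aligned = shift_and_average(freqs, spectra, peak_freqs, ref_freq=800.0)
```

In the aligned average the peak keeps its intrinsic width, while the naive average smears it over the full drift range — the widening that, as explained next, degrades the signal-to-noise ratio of a weak companion peak.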
If the frequency separation between the two peaks were constant, then this “shift and average” procedure to compensate for the frequency change of the strong peak would also compensate for the frequency change of the weak peak, optimizing chances to detect it. The improvement in the sensitivity comes from the fact that the signal-to-noise ratio $S/N$ of a QPO peak of given rms amplitude is inversely proportional to the square root of its width [@vanderklis89], and the motion of the peak, if uncorrected, makes it much wider, reducing $S/N$. In Fig. \[psfig\] we show both power spectra calculated using the above method. In both cases a second QPO peak can be seen at a frequency $\sim 200 - 300$ Hz above that of the strong peak of Berger et al. (1996) [@berger96]. We fitted each power spectrum with a function consisting of a constant level, representing the Poisson noise, the Very Large Event contribution [@zhang95i; @zhang95ii], and two Lorentzians. The $1\sigma$ error bars from the fits indicate the second peak to be $4.3 \sigma$ and $4.4 \sigma$ significant on March 3, and March 6, respectively. An $F$-test to the $\chi^{2}$ of the fits with and without this peak yields a probability of $7.1 \times 10^{-10}$ on March 3, and $5.3 \times 10^{-8}$ on March 6, for the null hypothesis that the peak is not present in the data. Considering the number of trials implied by the number of independent frequencies analyzed [@vanderklis89], these probabilities increase to $1.4 \times 10^{-7}$ and $1.1 \times 10^{-5}$, respectively. Interestingly, $\Delta \nu$ changed from $232.7 \pm 11.5$ Hz on March 3 to $293.1 \pm 6.6$ Hz on March 6, a change of $60.4 \pm 13.3$ Hz. We tested the significance of this result by fitting both power spectra simultaneously, but forcing the distance between the peaks to be the same in both of them. 
Applying an $F$-test to the $\chi^{2}$ of this fit and the fit where all parameters were free, we get a probability of $2.4 \times 10^{-3}$ for the hypothesis that the peak separation did not change between the two observations: the difference in the frequency separation between March 3 and 6 is significant at the $3.2\sigma$ level. In Fig. \[freq\_rate\] we plot QPO frequency versus count rate for 1996 March 3 and 6. In both cases frequency is positively correlated with the source count rate; however, there is no simple function that fits both observations simultaneously. A similar effect has been observed when different sources, spanning a very large range of luminosities, are plotted together in a single frequency-luminosity diagram: each source shows a positive correlation, along lines which are more or less parallel (e.g., van der Klis et al. 1997a [@vanderklisetal97a]). Behavior similar to that seen in 4U 1608–52 was also observed in 4U 0614+091 [@ford97a; @mendez97a]. The fact that this is observed in individual sources shows that a difference in neutron star properties such as mass or magnetic field strength cannot be the full explanation for the differences observed in the frequency-luminosity relations. Conclusions {#conclusions .unnumbered} =========== We have found, for the first time, the second peak in 4U 1608–52 expected on the basis of comparison to other kHz QPO sources. We see the separation between the two peaks vary. In the case of Sco X–1, where $\Delta \nu$ also varies [@vanderklisetal97b], it has been argued that the variations can be attributed to near-Eddington accretion. This explanation cannot apply to 4U 1608–52, as its luminosity was $1.3 \times 10^{37}$ erg s$^{-1}$ and $9.4 \times 10^{36}$ erg s$^{-1}$ on March 3 and March 6, respectively [@mendez97b], less than 10% of $L_{\rm Edd}$. The dependence of the QPO frequency on count rate is complex.
Although for each individual observation frequency and count rates are correlated, no simple relation fits both observations simultaneously. The frequency of the QPO did not change much while the source count rate dropped by 20%. van der Klis, M. 1997, in Proc. of the NATO ASI “The many faces of neutron stars”, Lipari, Italy, in press (astro-ph/9710016) Wijnands, R. A. D., van der Klis, M., van Paradijs, J., Lewin, W. H. G., Lamb, F. K., Vaughan, B. A., & Kuulkers, E. 1997 ApJ, 479, L141 Strohmayer, T. E., Zhang, W., Smale, A., Day, C., Swank, J., Titarchuk, L., & Lee, U. 1996, IAU Circ. 6387 Berger, M. et al. 1996, ApJ, 469, L13 van der Klis, M. 1989, in Timing Neutron Stars, ed. H. Ögelman & E. P. J. van den Heuvel (NATO ASI Ser C262), 27 Zhang, W. 1995, XTE/PCA Internal Memo, 5–23–95 Zhang, W., Jahoda, K., Swank, J.H., Morgan, E. H., & Giles, A. B. 1995, ApJ, 449, 930 van der Klis, M., et al. 1997a, in preparation Ford, E., et al. 1997, ApJ, 475, L123 Méndez, M., van der Klis, M., van Paradijs, J., Lewin, W. H. G., Lamb, F. K., Vaughan, B. A., Kuulkers, E., & Psaltis, D. 1997a, ApJ, 485, L37 van der Klis, M., Wijnands, R. A. D., Horne, K., & Chen, W. 1997b, ApJ, 481, L97 Méndez, M., van der Klis, M., van Paradijs, J., Lewin, W. H. G., Vaughan, B. A., Kuulkers, E., Lamb, F. K., & Psaltis, D. 1997b, ApJ, submitted
--- abstract: 'We scrutinize the XENON1T electron recoil excess in the scalar-singlet-extended dark matter effective field theory. We confront it with various astrophysical and laboratory constraints both in a general setup and in the more specific, recently proposed, variant with leptophilic $Z_2$-odd mediators. The latter also provide mass to the light leptons via suppressed $Z_2$ breaking, a structure that fits well with the nature of the observed excess, while the discrete symmetry leads to non-standard dark-matter interactions. We find that the excess can be explained by neutrino–electron interactions, linked with the neutrino and electron masses, while dark-matter–electron scattering does not lead to statistically significant improvement. We analyze the parameter space preferred by the anomaly and find severe constraints that can only be avoided in certain corners. Potentially problematic bounds on electron couplings from Big-Bang Nucleosynthesis can be circumvented via a late phase transition in the new scalar sector.' author: - Giorgio Arcadi - Andreas Bally - Florian Goertz - 'Karla Tame-Narvaez' - Valentin Tenorth - Stefan Vogl bibliography: - 'xenon1Tee.bib' title: | EFT Interpretation of XENON1T Electron Recoil Excess:\ Neutrinos and Dark Matter --- Introduction and Setup {#sec:intro} ====================== Recently, the XENON1T collaboration reported new results from an analysis of low-energy electronic recoil data [@Aprile:2020tmw]. In the energy range of 1–5 keV, the collaboration observes an excess of events that could point towards new physics. An interpretation of the data in terms of solar axions or a neutrino magnetic moment finds substantial statistical improvements over the background model with significances above $3 \sigma$. A more mundane explanation is also possible.
With the current understanding of the experiment, a contamination with tritium, which could contribute to the excess via its beta decays, cannot be excluded, and it is clearly premature to celebrate the discovery of physics beyond the Standard Model (SM). Nevertheless, the excess has attracted attention and it is of great interest to scrutinize alternative explanations and identify independent experimental probes that could confirm or refute them. As the excess is observed in a handful of bins above the threshold, only theories that predict a highly localized energy deposit or an IR-dominated recoil energy spectrum can account for the observation. The proposed solutions include, but are not limited to, new interactions of neutrinos [@Boehm:2020ltd; @AristizabalSierra:2020edu; @Khan:2020vaf; @Ge:2020jfn; @Chao:2020yro; @Gao:2020wfr], the absorption of keV-scale dark matter (DM) [@Takahashi:2020bpq; @Alonso-Alvarez:2020cdv; @Bloch:2020uzh; @Okada:2020evk; @He:2020wjs], scattering induced by a new $U(1)$ [@Lindner:2020kko; @Bally:2020yid], semi-relativistic or boosted DM [@Kannike:2020agf; @Fornal:2020npv; @Su:2020zny; @Cao:2020bwd; @Alhazmi:2020fju], axions [@DiLuzio:2020jjp; @Gao:2020wer], inelastic DM scattering [@Harigaya:2020ckz; @Bell:2020bes; @Lee:2020wmh; @Bramante:2020zos; @Baek:2020owl], a large neutrino magnetic moment [@Babu:2020ivd; @Shoemaker:2020kji; @Miranda:2020kwy] or more exotic explanations [@McKeen:2020vpf; @Zu:2020idx; @Jho:2020sku; @Ko:2020gdg; @Cacciapaglia:2020kbf]. It is also an interesting possibility that the energy deposit is in the form of photons, since the XENON1T analysis does not differentiate between photons and electron recoils [@Paz:2020pbc]. However, nuclear excitations, which naturally lead to such a signal, are not a viable solution since the energy scale does not match the observations [@Arcadi:2019hrw; @McCabe:2015eia].
In this article, we characterize the XENON1T excess in an effective field theory (EFT) description of a dark sector, recently proposed in [@Goertz:2019wtt], which naturally includes the appropriate ingredients for its explanation, namely modified neutrino interactions with electrons via a potentially light new scalar sector, coupling most prominently to the light fermion generations. The overarching framework is provided by the scalar-mediator extended DM EFT [@Alanne:2017oqj; @Alanne:2020xcb], which corresponds to a systematic EFT incarnation of simplified DM models, with a modest number of new free parameters. In the variant employed here, a spontaneously broken $Z_2$ symmetry is included, which on the one hand leads to interesting DM signatures [@Goertz:2019wtt], and on the other allows one to address the smallness of first-generation fermion masses via small symmetry-breaking effects, as detailed below. This automatically induces non-trivial couplings of the new scalar with these fermions and thereby allows one to relate the XENON1T excess with the observed electron and neutrino masses. While the setup features in general a suppression of the DM direct-detection (DD) cross section due to the $Z_2$ symmetry, the leptophilic variant we will consider below is completely unconstrained by DD limits via nucleon interactions and thus invites searches employing electronic recoil. The paper is organized as follows. In the remainder of this section, we detail the setup sketched above. In Section \[sec:XE\] we provide our fit to the XENON1T electron recoil excess, first assuming neutrino-electron scattering as its origin and then exploring in addition the option of DM scattering. Here, we will also examine whether the correct relic abundance can be achieved consistently with the tentative XENON1T signal.
Subsequently, in Section \[sec:Cons\], we will confront the explanation with stringent limits on new electron and neutrino interactions from terrestrial and astrophysical constraints, considering also the fully agnostic case of freely modified couplings in a simple EFT and characterizing viable regions. In Section \[sec:PT\], we will show how our extended scalar sector allows us to circumvent stringent constraints from Big-Bang Nucleosynthesis (BBN) via a late phase transition, before presenting our conclusions in Section \[sec:Conc\]. General Setup ------------- We consider the leptophilic variant of the framework recently put forward in [@Goertz:2019wtt], which corresponds to the SM field content augmented with a fermionic DM singlet $\chi$ and a real, CP-even scalar mediator ${\cal S}$, with the assumption that ${\cal S}$ and the right-handed first lepton generation are odd under a $Z_2$ parity. In this model, the first-generation leptons obtain masses from the vacuum expectation value (vev) of ${\cal S}$. Here we will extend the scalar sector by assuming two different $Z_2$ symmetries, one shared by the neutrinos, the other by the electron, that are then broken by the vevs of two distinct scalars ${\cal S}_{\nu,e}$, allowing us to simultaneously address tiny neutrino masses and small charged-lepton masses. As we will show in Section \[sec:PT\], this richer scalar sector can also lead to a delayed coupling of the electrons to the lighter mediator in the thermal evolution of the universe, which will make it possible to address the recoil excess with moderately sizable electron couplings, while avoiding bounds from physics of the early universe.
The corresponding Lagrangian reads [@Goertz:2019wtt] $$\begin{aligned} \label{eq:LEFT} \mathcal{L}_{\rm eff}^{{\cal S} \chi}& = & {\cal L}_{\rm SM^\prime} + \frac{1}{2} \left( \partial_\mu {\cal S}_\ell \partial^\mu {\cal S}_\ell - \mu_\ell^2 {\cal S}_\ell ^2 \right) + \bar \chi i \dslash\chi- m_\chi \bar \chi \chi \nonumber \\ &-& \frac{\lambda_\ell}{4} {\cal{S}}_\ell^4 - \lambda_{\nu e}\, {\cal{S}}_\nu^2 {\cal{S}}_e^2\ - \lambda_{H{\cal S}_\ell} |H|^2 {\cal S}_\ell^2\\ &-& \frac{1}{\Lambda} \Big[(y_\nu^{\cal S})_{ij}\, \bar{L}_{\mathrm L}^i H \nu^j_{\mathrm R}\, {\cal S}_\nu + (y_e^{\cal S})_i\, \bar{L}_{\mathrm L}^i H e_{\mathrm R}\, {\cal S}_e + \mathrm{h.c.}\Big]\nonumber\\[1mm] &-& \frac{y_\chi^{\ell} {\cal S}_\ell^2 + y^H_\chi |H|^2}{\Lambda}\ \bar{\chi}_{\mathrm L} \chi_{\mathrm R} + \mathrm{ h.c.}\,,\nonumber\end{aligned}$$ where a summation over $\ell=\nu,e$ is understood and $L_{\mathrm{L}}$ are the left-handed $SU(2)_{\mathrm L}$ lepton doublets, $e_{\mathrm R}, \nu^e_{\mathrm R}$ are the right-handed electron and right-handed neutrinos, while $H$ is the Higgs doublet. The latter develops a vev, $|\langle H \rangle| \equiv v/\sqrt 2 \simeq 174$GeV, triggering electroweak symmetry breaking (EWSB). In unitary gauge, the Higgs field is expanded around the vev as $H \simeq 1/\sqrt2 (0, v + h)^T$, where $h$ is the physical Higgs boson with mass $m_h \approx 125$GeV. Finally, ${\cal L}_{\rm SM^\prime}$ denotes the SM Lagrangian without the Yukawa couplings of the electron (and the neutrinos), see Eq.  below. Importantly, also the mediators develop small vevs $|\langle {\cal S}_\ell \rangle| \equiv v_\ell \ll v $, which break the $Z_2^{\ell}$ symmetries, $\ell=\nu, e$, carried by all the right-handed neutrinos and the right-handed electron, respectively, and thereby generate masses for the light leptons. The mixing with the Higgs via the $|H|^2 {\cal S}_\ell^2$ operators has to be small and this effect will not be considered in the following. 
Note that the conventional DM interaction ${\cal S}_\ell\, \bar{\chi}\chi$ is still generated with coefficient $\sim 2 y_\chi^\ell v_\ell/\Lambda$ which will remain relevant for our analysis. Finally, we take the coefficient of the operator $|H|^2 \bar{\chi}\chi$ to be negligibly small, such as to evade direct detection constraints and limits from invisible Higgs decays [@Fedderke:2014wda]. Fermion Masses, Scalar Mixing, and Free Parameters -------------------------------------------------- In the following we will study the fermion and scalar mass spectrum of the setup and summarize its relevant free parameters. The fermion mass terms after electroweak and $Z_2^\ell$ breaking read $${\cal L} \supset - \sum_{\ell=e,\nu} \bar \ell_L \frac{v}{\sqrt 2} \left( Y_\ell^H + \frac{v_\ell}{\Lambda} Y_\ell^{\cal S} \right) \ell_R \equiv - \sum_{\ell=e,\nu} \bar \ell_L M^\ell \ell_R \,, \label{eq:Lm}$$ where $\ell_{L,R}=e_{L,R},\nu_{L,R}$ are three-vectors in flavor space and the Yukawa matrices $$\label{eq:Yukawas} \begin{split} Y_e^H = \begin{pmatrix} 0 & y_{12}^e & y_{13}^e \\ 0 & y_{22}^e & y_{23}^e \\ 0 & y_{32}^e & y_{33}^e \end{pmatrix}\,,\quad \quad Y_e^{\cal S} = \begin{pmatrix} ({y_e^{\cal S}})_1 & 0 & 0 \\ ({y_e^{\cal S}})_2 & 0 & 0\\ ({y_e^{\cal S}})_3 & 0 & 0 \end{pmatrix}\,, \\ Y_\nu^H = \text{\bf 0}\,,\hspace{1cm} Y_\nu^{\cal S} = \begin{pmatrix} ({y_\nu^{\cal S}})_{11} & ({y_\nu^{\cal S}})_{12} & ({y_\nu^{\cal S}})_{31} \\ ({y_\nu^{\cal S}})_{21} & ({y_\nu^{\cal S}})_{22} & ({y_\nu^{\cal S}})_{32}\\ ({y_\nu^{\cal S}})_{31} & ({y_\nu^{\cal S}})_{32} & ({y_\nu^{\cal S}})_{33} \end{pmatrix}\,, \end{split}$$ reflect the $Z_2^\ell$ assignments. Without breaking of the latter symmetry via $v_{ \ell}>0$, the electron and the neutrinos would remain massless, corresponding to vanishing eigenvalues of $Y_\ell^H$. 
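The statement that the $Z_2^\ell$ texture leaves the electron massless before symmetry breaking can be checked directly: any $3\times 3$ Yukawa matrix with a vanishing first column, as in $Y_e^H$ above, has a zero singular value, i.e. one massless eigenstate. The random complex entries below are purely illustrative, not a fit to lepton data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Y_e^H texture dictated by the Z2 assignment: the first column is forbidden
Y_eH = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Y_eH[:, 0] = 0.0

# physical charged-lepton masses are proportional to the singular values
sv = np.linalg.svd(Y_eH, compute_uv=False)  # returned in descending order
# sv[-1] vanishes: one state stays exactly massless until S_e acquires a vev
```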
On the other hand, a small breaking of $v_\nu \sim {\cal O}({\rm eV})$ and $v_e \sim {\cal O}({\rm MeV})$ is sufficient to generate $m_\nu \sim 0.1$ eV and $m_e \sim 0.5$ MeV with natural $({y_\ell^{\cal S}}) \lesssim {\cal O}(1)$ and $\Lambda \gtrsim 1\,$TeV. We note that to explain the XENON1T excess in light of various constraints, it will later be necessary to deviate somewhat from these natural scales, leaving only a partial explanation of the light-fermion masses. We now perform a rotation to the mass basis $$\begin{split} M^\nu &= U_L^\nu\, M_{\rm diag}^\nu U_R^{\nu\, \dagger}, \ \, M_{\rm diag}^\nu\!={\rm diag}(m_{\nu^1},m_{\nu^2},m_{\nu^3})\,,\\ M^e &= U_L^e\, M_{\rm diag}^e U_R^{e\, \dagger}, \ \, M_{\rm diag}^e\!={\rm diag}(m_e,m_\mu,m_\tau)\,, \end{split}$$ with $U_L^e = U_L^\nu\, V_{\rm PMNS}$, to obtain the couplings of the physical leptons to the Higgs boson and the scalar mediators [@Goertz:2019wtt] $${\cal L} \supset - \sum_{\ell=e,\nu} \bar \ell_L \left( \frac{\hat{Y}_\ell^H + v_\ell/\Lambda\, \hat{Y}_\ell^{\cal S}}{\sqrt 2}\, h + \frac{v\, \hat{Y}_\ell^{\cal S}}{\sqrt 2 \Lambda}\, {\cal S}_\ell \right) \ell_R\,,$$ where $\hat{Y}_\ell^s = U_L^{\ell\, \dagger} Y_\ell^s U_R^\ell,\, s=H,{\cal S};\,\ell=\nu,e$, and (with some abuse of notation) we denote the mass eigenstates by the same spinors $\ell=e,\nu$.
The Yukawa matrices in the mass basis can be expressed as $$\begin{split} \hat Y_\ell^{\cal S} =& \frac{\sqrt 2 \Lambda}{v v_\ell} \, M_{\rm diag}^\ell \, U_R^{\ell\, \dagger} C_\ell^S\, U_R^\ell\,, \\[2mm] \hat Y_\ell^H =& \frac{\sqrt 2}{v} \,M_{\rm diag}^\ell\, U_R^{\ell\, \dagger} C_\ell^H \,U_R^\ell \,, \end{split}$$ where $C_e^S = {\rm diag}(1,0,0)\,, C_\nu^S ={\rm diag}(1,1,1) \,, C_e^H = {\rm diag}(0,1,1)\,, C_\nu^H = \text{\bf 0}$, and the unitary rotations of the left-handed lepton fields drop out since they share the same $Z_2^{\ell}$ charges and their couplings (with a fixed right-handed lepton) are thus aligned with the corresponding mass terms. While this is not true for the right handed leptons, which could introduce flavor-changing neutral currents (FCNCs), here we just chose the Yukawa matrices starting from $M_{\rm diag}^\ell$, such that $U_R^e = {\bf 1}$, avoiding FCNCs [@Goertz:2019wtt]. We thus finally arrive at $$\label{eq:YukFin2} \begin{split} \hat Y_\nu^{\cal S} = & \, \frac{\sqrt 2 \Lambda}{v v_\nu} \,{\rm diag}(m_{\nu^1},m_{\nu^2},m_{\nu^3})\,,\\ \hat Y_e^{\cal S} = & \, \frac{\sqrt 2 \Lambda}{v v_e} \,{\rm diag}(m_e,0,0)\,,\\ \hat Y_\nu^H = & \ \text{\bf 0}\,,\\ \hat Y_e^H = &\, \frac{\sqrt 2}{v} \,{\rm diag}(0,m_\mu,m_\tau)\,. \end{split}$$ In consequence, the $\mu$ and $\tau$ interact with the Higgs boson as in the SM while the electron couples instead only to ${\cal S}_e$, with strength determined by the free parameter $v_e$, which can be traded for ${\rm y}_e^{\cal S}/\Lambda \equiv (\hat Y_e^{\cal S})_{11}/\Lambda$. Similarly, $v_\nu$ can be traded e.g. for ${\rm y}_1^{\cal S}/\Lambda$, where ${\rm y}_i^{\cal S}/\Lambda \equiv (\hat Y_\nu^{\cal S})_{ii}/\Lambda$. 
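For orientation, the $Z_2$-breaking vevs required to reproduce the physical masses follow from inverting $(\hat Y_\ell^{\cal S})_{11} = \sqrt{2}\,\Lambda\, m_\ell/(v\, v_\ell)$ from the expressions above; the benchmark $\Lambda = 1$ TeV and unit Yukawa are illustrative choices:

```python
import math

v_H  = 246.0      # GeV, electroweak vev (v = sqrt(2) * 174 GeV)
Lam  = 1.0e3      # GeV, illustrative EFT cutoff
m_e  = 0.511e-3   # GeV, electron mass
m_nu = 1.0e-10    # GeV, i.e. ~0.1 eV neutrino mass

def breaking_vev(m_lep, y=1.0):
    # invert y = sqrt(2) * Lam * m_lep / (v_H * v_ell) for v_ell
    return math.sqrt(2.0) * Lam * m_lep / (v_H * y)

v_e  = breaking_vev(m_e)    # ~3e-3 GeV  -> the O(MeV) scale quoted in the text
v_nu = breaking_vev(m_nu)   # ~6e-10 GeV -> the O(eV) scale quoted in the text
```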
In addition to fermion mixing, the scalar potential term $\sim \lambda_{\nu e}$ leads to a mixing between the scalar singlets after they obtain their vevs $|\langle {\cal S}_\ell \rangle| = v_\ell$, described by an angle $\theta$ as $$\begin{aligned} \label{eq:rotation} \left( \begin{array}{c} s \\ S \end{array} \right) = \left( \begin{array}{cc} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{array} \right) \left( \begin{array}{c} {\cal S}_\nu \\ {\cal S}_e \end{array} \right),\end{aligned}$$ with $$\label{eq:mixinghS} \tan 2\theta=\frac{4 \, \lambda_{\nu e} v_\nu v_e}{M_\nu^2 - M_e^2}\,,$$ where $M_\ell^2 = \mu_\ell^2 + 3 \lambda_\ell v_\ell^2 + 2\lambda_{\nu e}\, v_\nu^2 v_e^2 /v_\ell^2$. The resulting physical masses read $$m_{s/S}^2 = \frac{1}{2}(M_\nu^2 + M_e^2) \pm \frac{M_\nu^2 - M_e^2}{2\,\cos 2\theta} \,.$$ Note that, as suggested by the lightness of neutrino masses compared to the charged-lepton masses, we will always assume $v_\nu \ll v_e$ and accordingly $M_\nu \ll M_e $. This leads to $m_s \approx M_\nu,\ m_S \approx M_e $, as well as $\cos \theta \approx 1$, $\sin \theta \ll 1$. The mixing will induce suppressed couplings between the electron and the light mediator $s$ (as well as between the neutrinos and the heavy $S$), given by $$\begin{split} \mathcal{L}_s =& - s \frac{v}{\sqrt 2\Lambda} \left(c_\theta {\rm y}_i^{\cal S}\, \bar{\nu}_{\mathrm L}^i \nu^i_{\mathrm R} + s_\theta {\rm y}_e^{\cal S}\, \bar{e}_{\mathrm L} e_{\mathrm R} \right),\\[2mm] \mathcal{L}_S =& - S \frac{v}{\sqrt 2 \Lambda} \left(c_\theta {\rm y}_e^{\cal S}\, \bar{e}_{\mathrm L} e_{\mathrm R} - s_\theta {\rm y}_i^{\cal S}\, \bar{\nu}_{\mathrm L}^i \nu^i_{\mathrm R} \right)\,, \end{split}$$ where ${c_\theta\equiv\cos\theta},\, {s_\theta\equiv\sin\theta}$.
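The mixing formulas can be cross-checked against a direct diagonalization. Assuming the off-diagonal mass-squared entry is $2\lambda_{\nu e} v_\nu v_e$ (which reproduces the quoted $\tan 2\theta$), and using arbitrary benchmark numbers with $M_\nu \ll M_e$:

```python
import numpy as np

M_nu2, M_e2 = 0.5, 40.0   # M_nu^2 << M_e^2, arbitrary units
Delta = 0.8               # = 2 * lambda_nue * v_nu * v_e (assumed off-diagonal)

M2 = np.array([[M_nu2, Delta],
               [Delta, M_e2]])

# tan 2θ = 4 λ vν ve / (Mν² − Me²) = 2Δ / (Mν² − Me²); principal branch, θ small
theta = 0.5 * np.arctan(2.0 * Delta / (M_nu2 - M_e2))

# closed-form masses from the text
m_s2 = 0.5 * (M_nu2 + M_e2) + (M_nu2 - M_e2) / (2.0 * np.cos(2.0 * theta))
m_S2 = 0.5 * (M_nu2 + M_e2) - (M_nu2 - M_e2) / (2.0 * np.cos(2.0 * theta))

eig = np.linalg.eigvalsh(M2)  # ascending eigenvalues, should match (m_s², m_S²)
```

In the small-mixing limit one indeed recovers $m_s^2 \approx M_\nu^2$ and $m_S^2 \approx M_e^2$, as stated above.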
Before moving on, we are now in a position to summarize the free parameters of our setup, relevant for our study, which are

- [the mediator masses $m_{s,S}\approx M_{\nu,e}$]{}
- [the ${\cal S}_e-$Yukawa coupling ${\rm y}_e^{\cal S}/\Lambda$]{}
- [the ${\cal S}_\nu-$Yukawa coupling ${\rm y}_1^{\cal S}/\Lambda$]{}
- [the mixing portal $\lambda_{\nu e}$,]{}

as well as, in the dark sector,

- [the DM mass $m_{\chi}$]{}
- [the bi-quadratic DM portal coupling $y_{\chi}^{\cal S}/\Lambda$]{},

where the remaining Yukawa couplings are given by .

Fitting the XENON1T Excess {#sec:XE}
==========================

Modified Neutrino Interactions
------------------------------

![Comparison between an exemplary differential event rate for a scalar with $m_s=60$ eV and $\sqrt{y^s_e y^s_\nu }=7.9 \times 10^{-7}$ and the data as reported by [@Aprile:2020tmw]. The full differential event rate is shown in blue while the pure signal (background) contribution is depicted in orange (red).[]{data-label="fig:event_rate"}](fit_neutrino.pdf){width="48.00000%"} We start by assuming that the $\chi$-scattering induced electron recoil is negligible. In this case, the excess can be explained by modified neutrino scattering with electrons, mediated in our model by $s$ and $S$. As we will see, observational constraints prefer $m_{s}\ll m_{S}$, such that neutrino-electron scattering can to good approximation be described by $s$-exchange alone.
The differential cross section for the new-physics signal reads [@Cerdeno:2016sfi] $$\frac{d\sigma_{\nu e}}{d E_r}= \frac{(y^s_{e} y^s_{\nu})^2}{4\pi (2 m_e E_r +m^2_s)^2}\frac{m_e^2 E_r}{E_\nu^2}\ \,,$$ where $m_e$ is the electron mass, $E_\nu$ the energy of the incoming neutrino, $E_r$ the electron recoil energy, and we denoted the couplings of the electrons and the first neutrino to the light (and heavy) mediators $s$ (and $S$) by $$\begin{split} y^s_e \equiv s_\theta \frac{v}{\sqrt 2\Lambda} {\rm y}_e^{\cal S} \,,\quad y^s_\nu \equiv c_\theta \frac{v}{\sqrt 2\Lambda} {\rm y}_1^{\cal S}\,,\\ y^S_e \equiv c_\theta \frac{v}{\sqrt 2\Lambda} {\rm y}_e^{\cal S} \,,\quad y^S_\nu \equiv - s_\theta \frac{v}{\sqrt 2\Lambda} {\rm y}_1^{\cal S}\,. \end{split}$$ The differential event rate is then obtained by convolving the differential cross section with the incident neutrino flux $\phi_{\nu}$ and weighting by the number of electrons per unit mass $N_e$, $$\frac{d R}{d E_r}=N_e \int d E_\nu \frac{d\sigma_{\nu e}}{d E_r}\frac{d\phi_{\nu}}{d E_\nu}\,.$$ At the energies relevant for the XENON1T excess the neutrino flux is dominated by $pp$ neutrinos from the Sun. We use the observed value of the $pp$-flux from [@Vitagliano:2019yzm] and employ the parameterization of the spectrum from [@Bahcall:1997eg]. Here we have assumed a universal interaction between $s$ and the different neutrino flavors such that oscillation effects do not affect the scattering rate; we comment on this further below. To make contact with the observed rate, experimental effects have to be included. The limited detector resolution is taken into account via a Gaussian smearing function with an energy-dependent resolution. As suggested in [@XENON:2019dti] we take the ansatz $$\sigma(E)/E=\frac{a}{\sqrt{E}}+b$$ and assume that the resolution varies between $\approx 30\%$ at $E_r=1$ keV and $\approx 6\%$ at 30 keV.
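The convolution of cross section and flux above can be sketched numerically. This is a toy illustration only: the cross section is the one quoted above, but the $pp$-spectrum shape and all normalizations below are placeholder assumptions (the actual analysis uses the measured flux of [@Vitagliano:2019yzm], the parameterization of [@Bahcall:1997eg], and the full detector response).

```python
# Toy numerical sketch of the neutrino-electron event rate:
# dR/dE_r ~ \int dE_nu (dsigma/dE_r)(E_nu, E_r) dphi/dE_nu.
# Flux shape and normalizations are placeholders, not the paper's inputs.

import math

M_E   = 511.0e3          # electron mass [eV]
M_S   = 60.0             # mediator mass [eV], as in the exemplary fit
G_EFF = 7.9e-7           # sqrt(y_e^s y_nu^s), best-fit value from the text

def dsigma_dEr(E_nu, E_r):
    """Differential cross section (natural units, arbitrary normalization)."""
    num = (G_EFF**2)**2 * M_E**2 * E_r
    den = 4.0 * math.pi * (2.0 * M_E * E_r + M_S**2)**2 * E_nu**2
    return num / den

def E_nu_min(E_r):
    """Minimum neutrino energy able to produce recoil E_r off a free electron."""
    return 0.5 * (E_r + math.sqrt(E_r**2 + 2.0 * M_E * E_r))

def pp_flux(E_nu, E_max=420.0e3):
    """Toy pp spectrum shape, vanishing at the ~420 keV endpoint."""
    if E_nu <= 0.0 or E_nu >= E_max:
        return 0.0
    return E_nu**2 * (E_max - E_nu)**2   # arbitrary normalization

def rate(E_r, n_steps=2000, E_max=420.0e3):
    """Trapezoidal convolution of cross section and flux."""
    lo = E_nu_min(E_r)
    if lo >= E_max:
        return 0.0
    h = (E_max - lo) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        E_nu = lo + i * h
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * dsigma_dEr(E_nu, E_r) * pp_flux(E_nu)
    return total * h
```

The $1/(2 m_e E_r + m_s^2)^2$ pole makes the unsmeared rate rise steeply towards low recoil energies, which is the characteristic shape fitted to the excess.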
Finally, the detector efficiency reported in the experimental paper [@Aprile:2020tmw] is applied. We adopt the best-fit background model from the experimental publication but allow the normalization to vary within the $1\sigma$ allowed range. In order to assess the impact of a light scalar on electron-neutrino scattering we perform a $\chi^2$ analysis of the signal and background model in light of the observed data. We find that a coupling of $$\label{eq:fit} \sqrt{y^s_e y^s_\nu }\approx7.9 \times 10^{-7}$$ is preferred, with very little dependence on $m_s$ for masses smaller than $\approx 20$ keV. An exemplary comparison between the signal associated with the best-fit point for $m_s= 60$ eV and the data is shown in Fig. \[fig:event\_rate\]. Our results are in good qualitative agreement with those in [@Boehm:2020ltd; @Khan:2020vaf; @AristizabalSierra:2020edu], which study a related set-up, and we will confront them with a comprehensive set of complementary experimental constraints in Section \[sec:Cons\].

DM Scattering and Relic Abundance
---------------------------------

![Comparison between the best-fit differential event rate for a DM particle with $m_\chi=10$ GeV and $\sigma_{e\chi}= 1.25 \times 10^{-39}\, \mbox{cm}^2$ and the data. The style is similar to Fig. \[fig:event\_rate\] and for better visibility we also show the signal rate enhanced by a factor of 5 as an orange dashed line. []{data-label="fig:event_rate_DM"}](fit_DM.pdf){width="48.00000%"} DM can scatter on the electrons in the detector and could account for the excess. Beyond that, our model in any case contains a DM candidate, and it is thus interesting to check whether the correct relic abundance can be achieved simultaneously with an explanation of the XENON1T excess. These observables are correlated with each other also in the case of the neutrino explanation, via the mediator couplings to SM fermions.
A naive estimate of the maximum recoil energy possible in non-relativistic DM-electron collisions leads to $$\begin{aligned} E_{r,max}= \frac{2\mu_{\chi,e}^2}{m_e}v^2_{max}\approx 2 \times 10^{-6} m_e \end{aligned}$$ where $\mu_{\chi,e}$ is the reduced mass of the system and $v_{max}$ the maximal velocity of the DM. For $m_{\chi}\gg m_e$, and after taking into account that the velocity is limited by the local escape velocity of our galaxy, $v_{esc}=\mathcal{O}(10^{-3}\,c)$, this leads to an estimate of $E_{r,max} \approx 1$ eV, far below the energy scale required to account for the signal. However, it is crucial to note that the electrons form part of a bound system, the xenon atom. The momentum of a bound electron is therefore not fixed to zero but follows a distribution set by the atomic wave functions. The typical momentum of the bound electron is expected to be $\mathcal{O}(\alpha_{em} m_e)$, which is still small but allows for a larger energy transfer in the DM-electron scattering process [@Roberts:2015lga]. The differential event rate is given by $$\begin{aligned} \frac{d R}{d E_r}=\frac{n_{Xe} \rho_\chi}{m_\chi}\frac{d \langle \sigma_{\chi e}\rangle}{d E_r}\end{aligned}$$ where $n_{Xe}$ is the number of xenon atoms per unit mass in the detector and $\rho_\chi \approx 0.3\, \mbox{GeV}/\mbox{cm}^3$ the local DM density. For the velocity-averaged differential cross section we rely on the results of [@Roberts:2016xfw; @Roberts:2019chv]. In the heavy mediator limit[^1] it can be parametrized as $$\begin{aligned} \frac{ d \langle \sigma_{\chi e}\rangle}{d E_r} =\frac{\sigma_{\chi e}}{2 m_e}\int d v \frac{f(v)}{v}\int dq a_0^2 q K(E_r,q) \end{aligned}$$ where $\sigma_{\chi e}$ is the cross section for scattering on a free electron with a momentum transfer $a_0^{-1}=\alpha_{em} m_e$, while $f(v)$ denotes the velocity distribution of the DM at Earth. The atomic physics is encoded in the excitation factor $K$, which was originally computed in [@Roberts:2016xfw].
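The free-electron estimate at the start of this paragraph is a two-line computation; the sketch below reproduces it numerically (the escape-velocity scale $v_{max}\sim 10^{-3}\,c$ is the assumption quoted in the text).

```python
# Back-of-the-envelope check of E_r,max = 2 mu^2 v_max^2 / m_e
# for non-relativistic DM scattering on a *free* electron at rest.

M_E = 511.0e3  # electron mass [eV]

def e_r_max(m_chi_ev, v_max=1.0e-3):
    """Maximum recoil energy [eV] for DM of mass m_chi_ev (in eV),
    with v_max in units of c (galactic escape-velocity scale ~1e-3)."""
    mu = m_chi_ev * M_E / (m_chi_ev + M_E)  # reduced mass
    return 2.0 * mu**2 * v_max**2 / M_E

# For m_chi >> m_e the reduced mass saturates at m_e, so
# E_r,max -> 2 m_e v_max^2 ~ 1 eV, far below the keV-scale signal.
```

This makes explicit why the bound-electron momentum distribution, encoded in the excitation factor $K$, is essential for keV-scale recoils.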
In order to estimate the implications of a DM signal we consider the averaged cross sections reported in [@Roberts:2019chv] and perform a fit to the signal using the same assumptions about the detector as before. The best-fit recoil rate we found is shown in Fig. \[fig:event\_rate\_DM\]. As can be seen, the signal rises very steeply at low energies, such that the peak occurs at $\approx 1.5$ keV instead of the $\approx 2.5$ keV needed to reproduce the data. It is interesting to note that the fit shows some improvement if a small DM signal is added. The best fit corresponds to $m_\chi=10$ GeV and $\sigma_{\chi e}\approx1.25 \times 10^{-39} \mbox{cm}^2$, which could for instance be explained by an MeV-scale mediator with an $\mathcal{O}(1)$ coupling to DM and $y^S_e\approx10^{-5}$. However, the statistical improvement only amounts to marginally more than $1\sigma$. Therefore, the DM-electron-scattering hypothesis does not provide a convincing explanation of the observation and we do not entertain this possibility further; similar conclusions were reached for example in [@Bloch:2020uzh]. A better fit with DM requires a flatter recoil spectrum. This could for instance be achieved if a relativistic or semi-relativistic DM sub-population [@Fornal:2020npv; @Kannike:2020agf; @Alhazmi:2020fju] contributes to the signal or if the interaction has additional momentum dependence [@Bloch:2020uzh]. Nevertheless, it is interesting to ask whether the observed DM relic density can be accounted for in our framework. Assuming production via freeze-out, the correct relic abundance can be achieved if the thermally averaged annihilation cross-section is $\mathcal{O}(10^{-26}\,{\mbox{cm}}^3\,{\mbox{s}}^{-1})$.
In our model the main annihilation channels for the DM are into $e^+ e^-$ and $SS$ final states.[^2] The cross-section for the former channel can be estimated as (see, e.g., [@Alanne:2020xcb]): $$\begin{aligned} & \langle \sigma v \rangle_{ee} \approx \frac{1}{8\pi}\, \frac{v^2 v_e^2}{\Lambda^4}\, \frac{(y_\chi^e)^2 (y^{\cal S}_e)^2 m_\chi^2}{(m_S^2-4 m_\chi^2)^2}\ v_\chi^2 \nonumber\\ & \approx 10^{-5} \sigma_v^0 {\left(\frac{v_e}{5\,\mbox{GeV}}\right)}^2 {\left(\frac{1\,\mbox{GeV}}{m_\chi}\right)}^2 {\left(\frac{10\,\mbox{TeV}}{\Lambda}\right)}^4 (y_\chi^e)^2 (y^{\cal S}_e)^2 \,,\end{aligned}$$ where $v_\chi$ is the DM velocity (the cross-section is p-wave suppressed), while $\sigma_v^0=2 \times 10^{-26}\,{\mbox{cm}}^3\,{\mbox{s}}^{-1}$. A similar estimate for the cross-section into the $SS$ final state leads to: $$\langle \sigma v \rangle (\chi \chi \rightarrow SS) \approx 10^{-3}\, \sigma_v^0\, {\left(\frac{10\,{\rm TeV}}{\Lambda}\right)}^2 (y_\chi^e)^2. \label{eq:xsec_XXSS}$$ While the former cross section is far too small if the XENON1T excess is to be explained consistently, the latter could in principle lead to a viable scenario, although only if $\Lambda$ is lowered to the TeV scale; even then, values $y_\chi^e \gtrsim 1$ would be required. An alternative production mechanism that is more easily realized within the setup at hand is freeze-in. In this case $y_\chi^\ell \ll 1$ and the DM interactions are so weak that thermal equilibrium was never reached in the early Universe. The relic density can then be built up from a negligible initial value, by $S S (ss) \rightarrow \chi \chi$ inverse annihilation processes and, for sufficiently light $\chi$, $S \rightarrow \chi \chi$ decays. Since it is realized via a $D=5$ operator, the annihilation process leads to a UV-dominated rate. Hence the relic density is sensitive to the largest temperature and we need to specify our assumed value for the reheating temperature $T_R$.
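The suppression of the $e^+e^-$ channel quoted in the first estimate above can be checked by plugging in numbers. The sketch below works in the limit $m_S \ll m_\chi$, so that $(m_S^2-4m_\chi^2)^2 \approx 16\,m_\chi^4$; the thermal value $v_\chi^2 \sim 0.1$ at freeze-out and the unit-conversion factor are assumptions of this sketch, not taken from the text.

```python
# Rough numerical check of the <sigma v>_ee estimate, in the limit
# m_S << m_chi. Assumes v_chi^2 ~ 0.1 at freeze-out.

import math

GEV2_TO_CM3S = 1.17e-17   # 1 GeV^-2 (cross section) times c  ->  cm^3/s
SIGMA_V0     = 2.0e-26    # canonical thermal cross section [cm^3/s]

def sigma_v_ee(m_chi, v_e, lam, y_chi=1.0, y_e=1.0, v_chi_sq=0.1, v_h=246.0):
    """p-wave <sigma v> for chi chi -> e+ e- [cm^3/s]; masses/vevs in GeV."""
    pref = (v_h**2 * v_e**2) / (8.0 * math.pi * lam**4)
    return pref * (y_chi * y_e)**2 / (16.0 * m_chi**2) * v_chi_sq * GEV2_TO_CM3S

ratio = sigma_v_ee(m_chi=1.0, v_e=5.0, lam=1.0e4) / SIGMA_V0
# ratio comes out at the ~1e-5 level for O(1) couplings, confirming that
# this channel alone is far too small for a thermal relic.
```

This is precisely why the text turns to the $SS$ channel with a lowered $\Lambda$, or to freeze-in, for the relic abundance.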
In order not to exceed the validity of our EFT we limit ourselves to $T_R$ below the new physics scale $\Lambda$.

![Isocontours of correct DM relic density assuming production through freeze-in and considering the assignments of model parameters for BM1 (red) and BM2 (black). The reheating temperature $T_R$ has been set to 100 GeV.[]{data-label="fig:relic_FI"}](relic.png){width=".45\textwidth"}

We compute the relic density with the freeze-in module of the public code micrOMEGAs 5 [@Belanger:2018mqt], which takes the full momentum dependence of the annihilation and decay rates into account. Fig. \[fig:relic\_FI\] shows isocontours of $\Omega_\chi h^2 =0.12$ in the $(m_\chi,y_\chi^e)$ plane for the two benchmark models described in the next section, assuming $y_\chi^\nu=y_\chi^e$. In our computation we have adopted $T_R=100\,\mbox{GeV}$. The DM relic density depends, besides $(m_\chi,y_\chi^e)$, on $m_S$ and $v_e$. The values of these two parameters are comparable for our benchmarks ($m_S\sim 5$ MeV, $v_e\sim 5$ GeV), so that the two contours in Fig. \[fig:relic\_FI\] are rather close to each other and align in the high and low mass limit.

Terrestrial and Astrophysical Constraints {#sec:Cons}
=========================================

![Constraints in the $m_{S/s}-y_e^{S/s}$-plane from [@Knapen:2017xzo] and our own analysis, including our two BM points. For a discussion of the various limits and their shading see the main text.[]{data-label="fig:exclusions_e"}](exclusion_e.png){width=".45\textwidth"}

![Constraints in the $m_{s}-y_\nu^{s}$-plane including our two BM points.
Note that the heavier mediator has too small couplings to appear.[]{data-label="fig:exclusions_nu"}](exclusion_nu.png){width=".45\textwidth"}

We now confront the neutrino-scattering explanation of the XENON1T excess with various experimental constraints (in our specific setup and in a more general EFT), which are collected in Figs \[fig:exclusions\_e\] and \[fig:exclusions\_nu\] for the scalar couplings to electrons and neutrinos, respectively.

[**Bounds on the combination $y^s_e y^s_\nu$:**]{} Neutrino-electron scattering has long been a staple signature in experiments aiming to observe solar and reactor neutrinos. These experiments probe very similar physics and place an upper bound on the combination of couplings relevant for the XENON1T signal. Conventionally, bounds on new physics that leads to a recoil spectrum peaking at low energies are interpreted in terms of a neutrino magnetic moment $\mu_\nu$. Currently, the best limits are from Borexino and GEMMA and stand at $2.9 \times10^{-11} \mu_B$ [@Borexino:2017fbd; @Beda:2013mta]. This is right on the edge of the values preferred by the XENON1T excess, $\mu_\nu = (1.4 \-- 2.9)\times 10^{-11} \mu_B$, but does not exclude the magnetic moment interpretation [@Aprile:2020tmw]. This observation is highly relevant for the light-scalar mediation scenario under consideration here. In the energy range where the XENON1T signal is observed, the recoil energy distributions of events induced by solar or reactor neutrinos interacting via a light scalar ($m_s \lesssim E_r$) or via a magnetic moment are essentially indistinguishable. Consequently, an interpretation of the Borexino data in our model will lead to a constraint that is just on the upper boundary of the preferred region. With the signal and the expected exclusion so close to each other, the exact position of the bound will depend on the details of the experimental data and the statistical procedure.
A naive phenomenologist's recast is therefore unlikely to allow a clear comparison. Thus, we refrain from quoting an explicit limit derived from a reinterpretation of Borexino and GEMMA data and just note that the bound is expected to be closely aligned with the upper edge of the preferred values of $\sqrt{y_e ^sy_\nu^s}$.

[**Bounds on electron couplings:**]{} Beyond-the-SM forces coupling to the electron can be tested very rigorously with terrestrial precision experiments. In the mass range of interest here the most stringent constraints come from the anomalous magnetic moment of the electron $a_e$, since both the experimental measurement and the SM prediction are extremely precise. At $3\sigma$ the deviation of $a_e$ from the SM expectation is limited to $\delta a_e \lesssim 1.4 \times 10^{-12}$ [@Hanneke_2008; @Giudice:2012ms]. A new scalar contributes [@Jackiw:1972jz] $$\delta a^s_e = \frac{(y_e^s)^2}{4\pi^2} \frac{m_e^2}{m_s^2} I_S\left(\frac{m_e^2}{m_s^2}\right)\,,$$ where the loop function is given by $$I_S(r)=\int_0^1 dz \frac{z^2(2-z)}{1-z+z^2 r}\,.$$ For $m_s\ll m_e$ this leads to $y^s_e \lesssim 10^{-5}$, while the limit relaxes for $m_s\geq m_e$, cf. Fig. \[fig:exclusions\_e\]. Weaker terrestrial constraints can be derived from $e^+e^-$ colliders through the process $e^+e^- \to \gamma s$. They have the largest impact close to $m_s \sim {\cal O}(1)$ GeV, see for example [@Knapen:2017xzo]. In addition, there are a number of bounds on $s,S$ from astrophysical and cosmological observations. If the mass of the mediator is comparable to or smaller than the core temperature of a star, the emission of the scalars can contribute to the energy loss and change the properties and dynamics of these astrophysical systems. Strong limits can be derived from red giants (RG) and horizontal branch stars (HB).
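The $(g-2)_e$ limit quoted above can be verified by integrating the loop function $I_S$ numerically and inverting $\delta a_e^s < 1.4\times10^{-12}$ for $y_e^s$; for $m_s \ll m_e$ one has $r\,I_S(r) \to 3/2$, so the bound flattens at $y_e^s \sim 6\times10^{-6}$, i.e. the $10^{-5}$ level quoted in the text. A minimal sketch:

```python
# Numerically evaluate I_S(r) (Simpson's rule) and invert the
# delta a_e < 1.4e-12 constraint for the coupling y_e^s.

import math

def loop_I_S(r, n=20000):
    """I_S(r) = int_0^1 dz z^2 (2-z) / (1 - z + z^2 r)."""
    h = 1.0 / n
    def f(z):
        return z**2 * (2.0 - z) / (1.0 - z + z**2 * r)
    s = f(0.0) + f(1.0)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

def y_e_bound(m_s_ev, da_max=1.4e-12, m_e_ev=511.0e3):
    """Largest y_e^s compatible with delta a_e < da_max."""
    r = (m_e_ev / m_s_ev)**2
    return math.sqrt(da_max * 4.0 * math.pi**2 / (r * loop_I_S(r)))
```

As stated in the text, the bound indeed relaxes once $m_s$ approaches $m_e$, since $r\,I_S(r)$ decreases away from its light-mediator plateau.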
We adopt the results of [@Hardy:2016kme; @Knapen:2017xzo], where plasma mixing is considered to be the main production mechanism of the light scalars; for a more recent analysis of the impact of stellar cooling on new physics in other models see [@Capozzi:2020cbu]. In principle, for $m_{s,S}\ll 10$ keV the RG bound excludes couplings larger than $y_e^s \approx 10^{-15}$. Clearly, such a small $y_e^s$ would prevent a solar-neutrino interpretation of the XENON1T excess for all reasonable values of $y_\nu^s$. The bounds from observations of HB stars are less severe at low masses but take over for $m_{s,S}\gtrsim 10$ keV. However, it is conceivable that these constraints can be circumvented in the presence of additional new physics, such as an environment-dependent mass for the scalar similar to the chameleon mechanism considered in cosmology [@Khoury:2003aq; @Joyce:2014kja]. First attempts to realize such a solution for theories that explain the XENON1T signal appear promising [@DeRocco:2020xdt]. Following this reasoning, we consider these astrophysical bounds less robust than the direct laboratory bounds discussed before and in consequence draw them as lines, removing the shading from the disfavored regions. Another constraint for mediator masses up to ${\cal O}(10)$ MeV is set by the supernova (SN) SN1987A, as additional light degrees of freedom would rapidly cool the SN, in contrast to observation [@Raffelt:1996wa]. Due to the very high density of the SN core, the scalar mediator can be trapped before actually leaving the core. We consider the limits from [@Knapen:2017xzo], where only the resonant production via mixing with the longitudinal component of the photon is included and direct production through Compton scattering or electron-ion recoil is neglected. This is possible for $m_s < \omega_p \sim 20$ MeV, where $\omega_p$ is the photon plasma frequency [@Chen_2018].
The trapping regime for resonant production is also included by balancing the production and absorption rates, with the requirement that the scalar be re-absorbed within a range of $R\approx10$ km. In this trapping regime, the decay $s \to e^+e^-$ determines the bound for masses from the $e^+e^-$ threshold up to $m_s \approx 30$ MeV. In addition, there are bounds from Big Bang Nucleosynthesis (BBN) on additional light degrees of freedom entering thermal equilibrium with $e$ and $\gamma$. On top of an increase of $N_{eff}$, the entropy release from $e^+e^-$ annihilation is diluted in that case. This leads to a lower photon temperature during BBN and therefore a higher baryon-to-photon ratio, which causes a decrease of the deuterium abundance [@Knapen:2017xzo]. For $m_s \lesssim 1$ MeV the BBN bound is largely flat and requires $y^s_e\lesssim 10^{-9}$. Even though the BBN bound is quite robust, it can be circumvented in our setup. A late-time phase transition in the new physics sector can prevent the mixing of $s$ and $S$ in the early Universe and thus remove the coupling between the lighter scalar and the electrons at the relevant temperatures. We will comment more on this possibility in the next section.

[**Bounds on neutrino couplings:**]{} New physics interacting with neutrinos is harder to test than in the case of electrons and we expect the bounds to be less constraining. Robust terrestrial constraints arise from searches for new meson decays such as $K^-/D^-/\pi^-\to e^- s \nu$ [@Berryman:2018ogk]. Alternatively, also decays to $\mu^-$ can be considered. We show the strongest combination of those in Fig. \[fig:exclusions\_nu\], assuming a flavor-universal coupling. In the case of flavor non-universality, the bounds for electron couplings are slightly stronger. For $m_h> m_s \gtrsim 1$ GeV, limits on the invisible Higgs decay width via $h\to s \nu\nu$ give the strongest bound on $y_\nu^s$ [@Berryman:2018ogk]. In Fig.
\[fig:exclusions\_e\] we use the latest ATLAS result of BR$(h\to \text{inv.}) < 0.13$ [@ATLAS:2020cjb]. The observation of MeV-scale neutrinos originating from SN1987A constrains the neutrino self-interaction [@Shalgar:2019rqe]. Scattering of the SN neutrinos with the C$\nu$B via the new mediator shifts their energy to significantly lower values and potentially brings them below the detection threshold. In addition, the SN neutrinos get deflected, which delays their arrival on Earth. An early bound was derived in [@Kolb:1987qy]; we show the one from [@Shalgar:2019rqe] in Fig. \[fig:exclusions\_nu\], where the recent limits on the neutrino masses were used. The model could also have an impact on the amount of radiation in the Universe, which can be tested by BBN. In particular, the right-handed neutrinos are dangerous since, if fully thermalized, each of them would contribute $\Delta N_{eff}=1$, while the upper bound stands at $\approx 0.2$ [@Cyburt:2015mya]. Therefore, only the region of parameter space where the right-handed neutrinos do not reach thermal equilibrium before the left-handed ones decouple from the SM bath is allowed by cosmology. Even if the initial population of $\nu_R$ is negligible, they can be produced in neutrino-antineutrino scattering via t-channel $s$ exchange. A good estimate for thermalization can be obtained by requiring that the production rate $\gamma$ exceeds the Hubble rate $H$ prior to neutrino decoupling, which happens at about $2\--3$ MeV. In our model the thermally averaged production rate reads $$\begin{aligned} \gamma \approx\langle \sigma v \rangle \times n_{\nu} \approx \frac{(y^s_\nu)^4 \, }{512\pi}T,\end{aligned}$$ where $n_{\nu}$ is the equilibrium number density of neutrinos and $\langle \sigma v \rangle$ is the thermally averaged $\nu_R$ production cross section. By equating the rate and $H$ we find $y^s_{\nu}\lesssim 6.3 \times 10^{-5}$ for $m_{s}\ll 2$ MeV. For larger masses the bound weakens.
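The thermalization estimate above amounts to equating $\gamma = (y_\nu^s)^4\,T/(512\pi)$ with the radiation-era Hubble rate $H = 1.66\sqrt{g_*}\,T^2/M_{\rm Pl}$. The sketch below assumes $g_* = 10.75$ and a decoupling temperature $T = 3$ MeV, which are illustrative choices; the result lands in the same ballpark as the $y_\nu^s \lesssim 6.3\times10^{-5}$ quoted in the text, with the precise number depending on ${\cal O}(1)$ factors in the averaged cross section.

```python
# Sketch: solve gamma(T) = H(T) for the coupling y_nu^s at which
# right-handed neutrinos would thermalize before nu decoupling.
# g_* = 10.75 and T = 3 MeV are assumptions of this sketch.

import math

M_PL = 1.22e19   # Planck mass [GeV]

def y_nu_thermalization_bound(T=3.0e-3, g_star=10.75):
    """Coupling at which gamma = H at temperature T [GeV]."""
    hubble = 1.66 * math.sqrt(g_star) * T**2 / M_PL
    return (512.0 * math.pi * hubble / T) ** 0.25

# Evaluates to a few times 1e-5 for the assumed inputs.
```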
The contribution of $s$ is less pronounced than in the case of electrons since the absence of a $\nu_R$ bath prevents the direct production of $s$. We note that this bound can be avoided if additional mass terms for the right-handed neutrinos make them too heavy to contribute to $N_{eff}$. This can be realized rather straightforwardly in our setup by increasing $v_\nu$ such as to generate a more sizable Dirac mass term that then leads to viable neutrino masses via see-saw suppression in the presence of large Majorana masses for the right-handed neutrinos. This would provide a hybrid explanation for the smallness of neutrino masses, which would however require a refined analysis that goes beyond the scope of this work. Finally, there are constraints from the CMB. If the interaction rate of neutrinos is high enough, they cannot be treated as a free-streaming gas and the impact of their interactions has to be included in the Boltzmann equations governing the evolution of the primordial perturbations. For a heavy mediator this leads to an upper bound on the interaction strength of $\left(y^s_\nu/m_s\right)^2\leq (0.06 \,\mbox{GeV})^{-2}$ [@Archidiacono:2013dua]. In order for this estimate to be valid we need $m_s\gg 10$ eV and, therefore, the limit becomes unreliable towards the lower end of the mass range considered here. [^3]

[**Benchmark models:**]{} In order to confront our model for the XENON1T excess with these astrophysical and laboratory constraints, let us define two different benchmarks (BMs) that both deliver a good fit to the anomaly as in Eq. . While we require roughly natural scales for the model, we are mainly guided by the goal of avoiding the most severe experimental bounds.
The BMs are defined by the independent input parameters

| BM  | $M_\nu$  | $M_e$  | ${\rm y}_\nu^{\cal S}$ | ${\rm y}_e^{\cal S}$ | $\Lambda$ | $\lambda_{\nu e}$ |
|-----|----------|--------|------------------------|----------------------|-----------|-------------------|
| BM1 | 18.5 keV | 5 MeV  | $1\times10^{-4}$       | 0.005                | 10 TeV    | $3\times10^{-4}$  |
| BM2 | 60 eV    | 10 MeV | 0.06                   | 0.005                | 10 TeV    | 0.001             |

which lead to the vevs $(v_\nu,v_e)=(26.5\,{\rm keV},5.3\,{\rm GeV})$ and $(v_\nu,v_e)=(50\,{\rm eV},5.9\,{\rm GeV})$ for BM1 and BM2, respectively, as well as to the (derived) physical couplings

| BM  | $y^s_\nu$          | $y^s_e$             | $y^S_\nu$          | $y^S_e$          |
|-----|--------------------|---------------------|--------------------|------------------|
| BM1 | $1.8\times10^{-6}$ | $-4.5\times10^{-7}$ | $8.3\times10^{-9}$ | $9\times10^{-5}$ |
| BM2 | 0.001              | $-5\times10^{-10}$  | $6\times10^{-9}$   | $8\times10^{-5}$ |

with mixing angles $s_\theta^1= -5\times10^{-3}$ and $s_\theta^2=-6\times10^{-6}$. We note that in defining these benchmarks we followed two different assumptions regarding the neutrino masses, which are both consistent with the values above. 1) We assumed an ‘inverted’ neutrino-mass hierarchy with $m_{\nu^3} \ll m_{\nu^1} \sim m_{\nu^2} \sim 0.05\,$eV. In this case both $\nu^{1,2}$ actually couple to the mediator $s$ with similar strength $y^s_\nu$, while the interaction of the lightest neutrino is negligible, see Eq. . Since $\nu^{1,2}$ contain almost all the electron-flavor content and couple universally to $s$, basically no flux from the Sun will be lost when considering neutrino-electron scattering in XENON1T and the analysis as described above remains valid.
2) A ‘normal’ hierarchy with $m_{\nu^1} \sim m_{\nu^2} \sim 0.05\,{\rm eV} \ll m_{\nu^3}$ would also be consistent with the same BMs, where we now assume that both chiralities of the heaviest state $\nu^3$ are even under the $Z_2^{\nu}$ symmetry, such that it does not couple to $s$ (while again the electron-neutrino content is almost entirely in the universally coupling eigenstates). For both BMs, we arrive at a prediction for the strength of the anomaly of $$\label{eq:res} y^s_e y^s_\nu \approx - (5-7) \times 10^{-13}\,,$$ in line with the best-fit value obtained before in Eq. .[^4] Finally, the couplings associated with BM1 (BM2) are displayed in red (black) in the landscape of collected bounds on $y_e^{s/S}$ and $y_\nu^s$ in Fig. \[fig:exclusions\_e\] and Fig. \[fig:exclusions\_nu\], respectively. A few comments are in order. First, a value of $v_e$ well above the electron mass leads to a coupling of electrons to the heavy mediator $S$ ($y_e^S\sim 10^{-4}$) that just evades the precision bounds for the corresponding mediator mass of $m_S \sim 10$ MeV [@Knapen:2017xzo]. On the other hand, the coupling to the potentially dangerously light $s$ is suppressed by $s_\theta$, pushing the resulting interaction [*just*]{} into the window above the SN1987A exclusion region but below the $(g-2)_e$ limit for BM1, while BM2 can even evade BBN constraints without further ingredients (at the price of a higher neutrino coupling). The BBN constraint for electrons in BM1 can be avoided via a late phase transition, generating the vev $v_\nu>0$ below $T\approx 150$ keV, as we discuss in the next section.

![Constraints in the $y_e^s-y_\nu^s$-plane for a $60$ eV mediator (as in BM2) and the $1\sigma$ preferred region from our fit to the XENON1T excess.
The BBN bound on the electron coupling, indicated by the hatched region, can be circumvented by a late-time phase transition.[]{data-label="fig:exclusions_e_nu_60eV"}](exclusion_e_nu_60eV.png){width=".45\textwidth"}

![Constraints in the $y_e^s-y_\nu^s$-plane for a $20$ keV mediator (as in BM1) and the $1\sigma$ preferred region from our fit to the XENON1T excess. The BBN bound on the electron coupling, indicated by the hatched region, can be circumvented by a late-time phase transition.[]{data-label="fig:exclusions_e_nu_20keV"}](exclusion_e_nu_20keV.png){width=".45\textwidth"}

[**Free EFT description:**]{} Finally, we confront the general EFT resolution of the XENON1T anomaly via scalar couplings to electrons and neutrinos, in the coupling plane, with the constraints discussed above. Being agnostic, here we just employ the effective Lagrangian (omitting kinetic and potential terms) $$\label{eq:EFT2} \mathcal{L}_{\rm eff} = -\frac{\sqrt 2}{v} \Big[y_\nu^s \bar{L}_{\mathrm L}^1 H \nu^1_{\mathrm R}\, s + y_e^s \bar{L}_{\mathrm L}^1 H e_{\mathrm R}\, s + \mathrm{h.c.}\Big]\,,$$ which can be obtained from Eq.  by neglecting the second scalar singlet, while coupling the remaining one to both electrons and neutrinos and removing the $Z_2$ symmetries as well as the vev of the mediator. In consequence, all fermion masses are solely induced by the Higgs and $y_{\nu,e}^s$ are now completely free couplings. In particular, Eq.  corresponds to a subset of operators of the general framework of [@Alanne:2017oqj; @Alanne:2020xcb]. In Fig. \[fig:exclusions\_e\_nu\_60eV\] we show the constraints and best-fit region in the $y_e^s-y_\nu^s$-plane for a mediator mass of $60$ eV and in Fig. \[fig:exclusions\_e\_nu\_20keV\] for $20$ keV, respectively. For comparison we also add the coupling values used in the two BMs above. There are two regions in the couplings preferred by the XENON1T fit that potentially remain viable, but both need extra mechanisms to avoid bounds from BBN in the early Universe.
The one around $y_e^s =\mathcal{O}(10^{-9})$ is excluded by the neutrino BBN bound. As discussed before, this could be avoided by additional mass terms for the right-handed neutrinos. The other region, around $y_e^s =\mathcal{O}(10^{-6})$, is under pressure from the electron BBN bound. However, here a late phase transition can remove the interaction of the light mediator and electrons during the relevant age of the Universe and make this region potentially viable.

Avoiding BBN Bounds via a Late Phase Transition {#sec:PT}
===============================================

As discussed in the previous section, without further ado, BM1 would be excluded by BBN bounds on the electron coupling. However, in this section we will demonstrate how our scenario naturally realizes a late $Z_2$-breaking phase transition, delaying the coupling of the electron to the light mediator until after BBN has completed. The scalar potential of our model can lead to a rich cosmological history in which the $Z_2^{\ell}$ symmetries are broken in a stepwise fashion [@Carena:2019une]. For simplicity we neglect mixing between the scalars $S_\nu$, $S_e$ and the Higgs doublet $H$ by setting $\lambda_{HS_\ell}$ to zero. The tree-level scalar potential is then given by $$V_{\textrm{tree}}=\frac{1}{2}\mu_\nu^2S_\nu^2+\frac{1}{2}\mu_e^2S_e^2+\lambda_{\nu e}S_\nu^2S_e^2+\frac{1}{4}\lambda_\nu S_\nu^4+\frac{1}{4}\lambda_eS_e^4\,.$$ To study the cosmological evolution of this potential we add the one-loop thermal corrections given by [@Quiros:1999jp] $$V_{\textrm{thermal}}=\frac{T^4}{2\pi^2}\big(J_B(\frac{m_s^2}{T^2})+J_B(\frac{m_S^2}{T^2})\big)\,,$$ where $J_B(\alpha)=\int_0^{\infty}x^2\ln(1-e^{-\sqrt{x^2+\alpha}})dx$ is the thermal correction for bosonic degrees of freedom. Working in the high-temperature limit, the thermal corrections take the analytic form $J_B(\alpha)=-\frac{\pi^4}{45}+\frac{\pi^2}{12}\alpha+O(\alpha^{3/2})$.
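Applying this high-temperature expansion, the temperature at which the $S_e$ vev turns on can be checked numerically for BM1. The sketch below assumes the tree-level relations $\lambda_e = M_e^2/(2 v_e^2)$ and $\mu_e^2 = -\lambda_e v_e^2$ (mixing neglected) and uses the thermal-mass result $T_{c2}^2 = -12\mu_e^2/(2\lambda_{\nu e}+3\lambda_e)$ following from the $\alpha$-linear term of the expansion.

```python
# Cross-check of the BM1 phase-transition temperature, assuming
# tree-level relations mu_e^2 = -lam_e v_e^2 and lam_e = M_e^2/(2 v_e^2).

import math

# BM1 inputs (GeV)
M_e, v_e, lam_nue = 5.0e-3, 5.3, 3.0e-4

lam_e = M_e**2 / (2.0 * v_e**2)      # quartic fixed by mass and vev
mu_e_sq = -lam_e * v_e**2            # negative mass-squared parameter

T_c2 = math.sqrt(-12.0 * mu_e_sq / (2.0 * lam_nue + 3.0 * lam_e))
# T_c2 comes out near 0.5 GeV, i.e. the "around 500 MeV" quoted for BM1.
```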
Note that, since the mixing between $S_\nu$ and $S_e$ is small, we can take approximately $m_s\approx M_\nu$ and $m_S\approx M_e$. Under these approximations, the critical temperature $T_{c2}$ at which a second minimum $(\langle S_\nu\rangle, \langle S_e\rangle)=(0,v_e)$, degenerate with the $Z_2 ^\nu \times Z_2^e$ preserving vacuum $(\langle S_\nu\rangle, \langle S_e\rangle)=(0,0)$, forms is given by $$T_{c2}=\sqrt{\frac{-12\mu_e^2}{2\lambda_{\nu e}+3\lambda_e}}\,.$$ A second phase transition appears once the temperature has dropped to $T_{c1}$, at which a nonzero vev of $S_\nu$ forms, with $$T_{c1}^2=\frac{12\big(2\lambda_{\nu e}\mu_e^2-\lambda_e\mu_\nu^2\big)}{\lambda_e(2\lambda_{\nu e}+3\lambda_\nu)-2\lambda_{\nu e}(2\lambda_{\nu e}+3\lambda_e)}\,.$$ For BM1 the first phase transition occurs around $500$ MeV while the second phase transition occurs at $150$ keV. At this temperature most of the photon heating is complete and the electron density has already dropped significantly, so the thermalization rates are becoming exponentially suppressed.

Conclusions {#sec:Conc}
===========

We have investigated the excess in low-energy electron recoil events reported by the XENON1T collaboration. Our work is based on the $Z_2$-symmetric extended DM EFT which connects neutrino mass generation and DM [@Goertz:2019wtt]. We find that conventional DM-electron scattering only allows for a marginally better fit than the background-only hypothesis, since the signal spectrum peaks at lower energies than observed experimentally. Therefore, DM does not provide a convincing explanation of the data. However, the new neutrino and electron couplings induced by the neutrino mass mechanism embedded in the model predict a significant neutrino-electron scattering cross section.
Including this interaction in the fit improves it considerably, and we find that a light scalar with an average electron-neutrino coupling of $\sqrt{y^s_e y^s_\nu}\approx 5 \times 10^{-13}$ is preferred by more than 2$\sigma$. These observations motivated us to scrutinize the parameter space of the model in more detail and compare it to limits from various other observations. In general, the parameter space that allows for a successful explanation of the XENON1T excess is rather constrained. While limits from terrestrial experiments can be avoided comparatively easily, bounds from cosmology are more constraining. In particular, BBN bounds on a light scalar coupling to electrons are very severe. Interestingly, the model under consideration here naturally allows for a late phase transition in the early Universe which prevents the scalar from coupling to electrons during BBN. However, in such a scenario additional contributions to the right-handed neutrino masses are required in order to avoid their thermalization prior to BBN. Once this is taken into account we find solutions that comply with cosmological bounds. Still, there remains a strong tension with astrophysical bounds that rely on stellar cooling arguments. All considered, a new physics explanation of the excess is a tantalizing possibility, but in light of stringent constraints from other observations this potential sign of physics beyond the Standard Model should be taken with a grain of salt. Luckily, the upcoming run of XENONnT will be able to weigh in on this question in the near future and either strengthen the excess or rule it out conclusively. Acknowledgments {#sec:Ack} =============== We thank Tommi Alanne and Simone Blasi for helpful discussions. VT acknowledges support by the IMPRS-PTFS and KMTN acknowledges support from the research training group “Particle Physics Beyond the Standard Model” (Graduiertenkolleg 1940).
[^1]: A light mediator leads to a much stronger energy dependence of the signal and is expected to provide a worse fit of the signal than a heavy mediator. [^2]: Here we neglect the corresponding contributions involving the light mediator $s$ for simplicity, which will not lead to qualitative changes. [^3]: Alternative limits on very light mediators are also available [@Archidiacono:2013dua] but they only become applicable at even smaller masses. [^4]: Moreover, both BMs satisfy the positive-definiteness condition $M_\nu M_e > 2 \lambda_{\nu e} v_\nu v_e$, ensuring a proper potential minimum.
--- abstract: 'In the present paper, we make a detailed analysis of the QCD corrections to the electroweak $\rho$ parameter by applying the principle of maximum conformality (PMC). As a comparison, we show that under the conventional scale setting, we have $\Delta\rho|_{\rm N^3LO} = \left(8.257^{+0.045}_{-0.012}\right) \times10^{-3}$ by varying the scale $\mu_{r}\in[M_{t}/2$, $2M_{t}]$. By defining a ratio, $\Delta R=\Delta\rho/3X_t-1$, which shows the relative importance of the QCD corrections, it is found that its scale error is $\sim \pm9 \%$ at the two-loop level, which changes to $\sim\pm4\%$ at the three-loop level and $\sim \pm 2.5\%$ at the four-loop level, respectively. These facts explain why the conventional scale uncertainty constitutes an important source of error in estimating the $\rho$ parameter. On the other hand, by applying the PMC scale setting, the four-loop estimation $\Delta\rho|_{\rm N^3LO}$ is almost fixed to $8.228\times10^{-3}$, which indicates that the conventional scale error has been eliminated. We observe that the pQCD convergence for the $\rho$ parameter is also greatly improved due to the elimination of the divergent renormalon terms. As applications of the present QCD improved $\rho$ parameter, we show that the shifts of the $W$-boson mass and the effective leptonic weak-mixing angle due to $\Delta\rho$ can be reduced to $\delta M_{W}|_{\rm N^3LO} =0.7$ MeV and $\delta \sin^2{\theta}_{\rm eff}|_{\rm N^3LO}=-0.4\times10^{-5}$.' address: ' Department of Physics, Chongqing University, Chongqing 401331, P.R. China' author: - 'Sheng-Quan Wang' - 'Xing-Gang Wu' - 'Jian-Ming Shen' - 'Hua-Yong Han' - Yang Ma title: 'The QCD improved electroweak parameter $\rho$' --- Introduction ============ The $\rho$ parameter, defined as the ratio between the strengths of the charged and neutral currents [@rho1], plays an important role in electroweak physics.
A precise determination of $\rho$ can further improve the accuracy of the electroweak precision observables (EWPOs); for example, it provides strong indirect constraints on the top-quark mass $M_t$ [@rhopredi; @CDF; @D0] and the Higgs mass $M_{H}$ [@disin; @dimw; @higglom]. Thus, it is helpful to derive a more accurate $\rho$ parameter for precision tests of the standard model (SM) and for finding new physics beyond the SM. The $\rho$ parameter can be schematically written as $$\rho=1+\Delta\rho \;.$$ At the Born level, $\rho|_{\rm Born}=1$, and the shift of $\rho$ caused by loop corrections can be defined as $\Delta\rho={\Pi_{Z}(0) / M^{2}_{Z}}-{\Pi_{W}(0) / M^{2}_{W}}$, where $\Pi_{Z}(0)$ and $\Pi_{W}(0)$ are the transverse parts of the $Z^0$-boson and $W$-boson self-energies at zero momentum transfer. At present, the one-loop QCD corrections [@rho1], the two-loop QCD corrections [@rhoqcdtwo1; @rhoqcdtwo2; @rhoqcdtwo3], the three-loop QCD corrections [@rhoqcdthr1; @rhoqcdthr2; @rhoqcdthr3; @rhoqcdthr4], and the four-loop QCD corrections [@rhoqcdfo1; @rhoqcdfo2; @rhoqcdfo3; @rhoqcdfo4] to the $\rho$ parameter have been computed in the literature. All those improvements in loop calculations provide a great opportunity to derive a more accurate QCD estimation of $\rho$. Under the conventional scale setting, there are renormalization scheme and scale ambiguities for a fixed-order pQCD correction. That is, conventionally, one always takes $\mu_{r}=Q$ ($Q$ being the typical momentum flow of the process) as the central value, and then varies the scale within a certain region, e.g. $\mu_r\in [Q/2, 2Q]$, to ascertain the scale uncertainty. More specifically, we shall show that the conventional scale uncertainty for the $\rho$ parameter is still large even at the four-loop level; thus, it is important to find a reliable way to suppress, or even eliminate, such large scale uncertainty.
The principle of maximum conformality (PMC) [@pmc1; @pmc2; @pmc3; @pmc4; @pmc5; @pmc6; @pmc8; @pmc9; @pmc11] has been designed to eliminate the renormalization scale ambiguity in a systematic way. By applying the PMC scale setting, all the non-conformal terms in the perturbative QCD series are summed into the running coupling, and one obtains a unique, scale-fixed and scheme-independent prediction at any finite order. We shall try to eliminate the renormalization scale ambiguity for $\Delta\rho$ by using the PMC $R_\delta$-scheme [@pmc6; @pmc11]. The PMC scales are formed by absorbing into the running coupling the $\{\beta_{i}\}$-terms that govern the behavior of the running coupling via the renormalization group equation. Those $\{\beta_{i}\}$-terms that are related to quark mass renormalization, etc., should be kept separate during the PMC scale setting. To avoid confusion when applying PMC, one can first transform expressions in terms of the $\overline{\rm MS}$ quark mass into those in terms of the on-shell quark mass [@higrr], and then apply PMC. In the present paper, we shall explain this treatment in detail. The remaining parts of the paper are organized as follows. In Sec.II, we give our calculation technology for $\Delta\rho$ and show how to deal with it within the framework of PMC. In Sec.III, we present our numerical results for $\Delta\rho$, and also present the application of $\Delta\rho$ to both the shift of the $W$-boson mass $\delta M_{W}|_{\rm N^3LO}$ and the shift of the effective leptonic weak-mixing angle $\delta \sin^2{\theta}_{\rm eff}|_{\rm N^3LO}$ up to four-loop QCD corrections. The final section is reserved for a summary.
Calculation technology for the QCD corrections to the $\rho$ parameter ====================================================================== For the conventional scale setting, the renormalization scale $\mu_r$ is fixed to an initial value $\mu^{\rm init}_r$, which is usually chosen as the typical momentum transfer of the process. For PMC, in contrast, the value of $\mu^{\rm init}_{r}$ is arbitrary. Thus, in order to apply PMC properly, we shall first transform the four-loop expression for $\Delta\rho$ derived in Refs.[@rhoqcdfo1; @rhoqcdfo2; @rhoqcdfo3; @rhoqcdfo4] into one with full initial-scale dependence, in which both the singlet and non-singlet contributions are taken into consideration. Then, we transform the $\Delta\rho$ parameter expressed in terms of the $\overline{\rm MS}$ quark masses into the one expressed in terms of the on-shell quark masses. This transformation is important for separating out the right $\{\beta_{i}\}$-terms that govern the behavior of the running coupling. The relation between the $\overline{\rm MS}$ quark mass and the on-shell quark mass up to the three-loop level can be found in Refs.[@pomstwo1; @pomstwo2; @pomstwo3; @pomstwo4; @pomsthr1; @pomsthr2; @pomsthr3]. After this transformation, all remaining $\{\beta_i\}$-terms rightly pertain to the running coupling and the PMC scales can be readily determined.
More explicitly, we write down $\Delta\rho$ up to order ${\cal O}(a_s^4)$ as follows: $$\begin{aligned} \Delta\rho&=&3X_{t}\bigg[1+c_{1,0}(\mu^{\rm init}_{r})a_{s}(\mu^{\rm init}_{r})+ \bigg(c_{2,0}(\mu^{\rm init}_{r}) +c_{2,1}(\mu^{\rm init}_{r})n_{f}\bigg) a_{s}^{2}(\mu^{\rm init}_{r})+\bigg(c_{3,0}(\mu^{\rm init}_{r})\nonumber\\ &&\quad\quad +c_{3,1}(\mu^{\rm init}_{r})n_{f} +c_{3,2}(\mu^{\rm init}_{r}) n_{f}^{2}\bigg) a_{s}^{3}(\mu^{\rm init}_{r}) +\mathcal{O}\bigg(a_{s}^{4}\bigg)\bigg], \label{rhocij}\end{aligned}$$ where $X_{t}=(G_{F}M^{2}_{t})/(8\sqrt{2}\pi^{2})$ stands for the one-loop result [@rho1], $a_{s}(\mu^{\rm init}_{r})=\alpha_{s}(\mu^{\rm init}_{r})/4\pi$ and $G_{F}$ is the Fermi constant. The coefficients $c_{i,j}(\mu^{\rm init}_{r})$ are given in the Appendix. The $n_{f}$-series in Eq.(\[rhocij\]) can be unambiguously associated with the $\{\beta_{i}\}$-terms that rightly govern the running behavior of the coupling constant via the $R_\delta$-scheme [@pmc6; @pmc11]. In the $R_\delta$-scheme, an arbitrary constant $-\delta$ is subtracted in addition to the standard subtraction $\ln 4 \pi - \gamma_E$ of the $\overline{\rm MS}$-scheme. The $\delta$-subtraction defines an infinite set of new $\overline{\rm MS}$-like renormalization schemes. The $\beta$-function of the coupling constant within any $R_\delta$-scheme is the same as the usual $\overline{\rm MS}$ one. All $R_\delta$-schemes are connected to each other by a scale-displacement relation, e.g. for the schemes with $\delta_1$ and $\delta_2$, their coupling constants are related by $$a_s(\mu_{\delta_1}) = a_s(\mu_{\delta_2}) + \sum_{n=1}^\infty \frac{1}{n!} { \frac{{\rm d}^n a_s(\mu_{r})}{({\rm d} \ln \mu^2_{r})^n} |_{\mu_{r} =\mu_{\delta_2}} (-\delta)^n},$$ where $\ln\mu^2_{\delta_1}/\mu^2_{\delta_2}=-\delta$.
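As a quick numerical cross-check (a sketch, not part of the paper's derivation), the one-loop prefactor $3X_t$ can be evaluated with the inputs quoted in Sec. III, $G_F=1.16638\times10^{-5}\,{\rm GeV}^{-2}$ and $M_t=173.3$ GeV; it reproduces the leading-order value $9.411\times10^{-3}$ listed in Table \[tablerho\]:

```python
import math

G_F = 1.16638e-5   # Fermi constant in GeV^-2 (value quoted in Sec. III)
M_t = 173.3        # top-quark pole mass in GeV

# X_t = G_F * M_t^2 / (8 * sqrt(2) * pi^2), the one-loop building block
X_t = G_F * M_t**2 / (8.0 * math.sqrt(2.0) * math.pi**2)
print(3.0 * X_t)   # ~9.411e-3, the leading-order Delta rho
```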
At each perturbative order, the running behavior of the coupling constant is controlled by such displacement relation, which inversely determines the $\{\beta_i\}$-terms that pertain to a specific perturbative order. By collecting all those $\{\beta_i\}$-terms for the same order, one can obtain the general pattern of non-conformal $\{\beta_i\}$-terms at each perturbative order. More specifically, according to the $R_\delta$-scheme, we can rewrite Eq.(\[rhocij\]) as $$\begin{aligned} \Delta\rho&=&3X_{t}\bigg[1+r_{1,0}(\mu^{\rm init}_{r})a_{s}(\mu^{\rm init}_{r}) + \bigg( r_{2,0}(\mu^{\rm init}_{r}) + \beta_{0}r_{2,1}(\mu^{\rm init}_{r})\bigg) a_{s}^{2}(\mu^{\rm init}_{r})+ \nonumber\\ && \quad\quad \bigg( r_{3,0}(\mu^{\rm init}_{r}) +\beta_{1}r_{2,1}(\mu^{\rm init}_{r})+2\beta_{0}r_{3,1}(\mu^{\rm init}_{r})+ \beta_{0}^{2}r_{3,2}(\mu^{\rm init}_{r})\bigg) a_{s}^{3}(\mu^{\rm init}_{r})+\mathcal{O}\bigg(a_{s}^{4}\bigg)\bigg], \label{rhorij}\end{aligned}$$ where $\beta_{0} = 11-{2\over 3}n_{f}$, $\beta_{1} = 102-{38\over 3} n_{f}$, and the coefficients $r_{i,j}(\mu^{\rm init}_{r})$ can be derived from the coefficients $c_{i,j}(\mu^{\rm init}_{r})$ defined in Eq.(\[rhocij\]), which are also given in the Appendix. The $r_{i,0}$ with $i=1,2,3$ are conformal coefficients, and the $r_{i,j}$ with $1\leq j< i\leq 3$ are non-conformal ones that should be absorbed into the running coupling. After absorbing all those non-conformal terms into the running coupling, we finally obtain the scheme-independent conformal series for $\Delta\rho$, i.e. $$\begin{aligned} \Delta\rho&=&3X_{t}\bigg[1+r_{1,0}(\mu^{\rm init}_{r})a_{s}(Q_{1}) + r_{2,0}(\mu^{\rm init}_{r}) a_{s}^{2}(Q_{2}) \nonumber\\ && + r_{3,0}(\mu^{\rm init}_{r})a_{s}^{3}(Q_{3}) +\mathcal{O}\bigg(a_{s}^{4}\bigg)\bigg]. \label{rhorijPMC}\end{aligned}$$ Here $Q_{i}$ with $i=(1,2,3)$ are PMC scales.
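At the two-loop level, the translation from the $n_f$-series of Eq. (\[rhocij\]) to the $\{\beta_i\}$-series of Eq. (\[rhorij\]) is purely algebraic: substituting $n_f=\frac{3}{2}(11-\beta_0)$ gives $r_{2,1}=-\frac{3}{2}c_{2,1}$ and $r_{2,0}=c_{2,0}+\frac{33}{2}c_{2,1}$. A minimal sketch verifying this identity (with made-up coefficients, not the Appendix values):

```python
from fractions import Fraction as F

def nf_to_beta0(c20, c21):
    """Rewrite c20 + c21*n_f as r20 + r21*beta0, using beta0 = 11 - (2/3)*n_f,
    i.e. n_f = (3/2)*(11 - beta0)."""
    return c20 + F(33, 2) * c21, -F(3, 2) * c21

# Hypothetical two-loop coefficients, purely for illustration:
c20, c21 = F(7), F(-2)
r20, r21 = nf_to_beta0(c20, c21)

# The rewriting is an identity in n_f:
for nf in range(3, 7):
    beta0 = F(11) - F(2, 3) * nf
    assert c20 + c21 * nf == r20 + r21 * beta0
print(r20, r21)
```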
At each perturbative order, there are new types of $\{\beta_i\}$-terms, so we should introduce new PMC scales at each perturbative order so as to absorb all the $\{\beta_i\}$-terms into the running coupling consistently [@pmc8]. The PMC scales $Q_{1}$ and $Q_{2}$ can be written as $$\begin{aligned} Q_{1} &=& \mu^{\rm init}_{r}\exp\bigg({1\over 2}{-r_{2,1}+{1\over 2}{\partial\beta\over \partial a_{s}}r_{3,2}\over r_{1,0}-{1\over 2}{\partial\beta\over \partial a_{s}}r_{2,1}}\bigg) , \label{rhorijPMCscale1} \\ Q_{2} &=& \mu^{\rm init}_{r}\exp\bigg(-{1\over 2}{r_{3,1}\over r_{2,0}}\bigg), \label{rhorijPMCscale2}\end{aligned}$$ where $\beta=-a^{2}_{s}\sum^{\infty}\limits_{i=0} \beta_{i} a^{i}_{s}$. There are no higher-order $\{\beta_{i}\}$-terms to determine $Q_{3}$, so we set its value to $\mu^{\rm init}_{r}$. This treatment causes residual scale dependence, which, however, can be highly suppressed [^1]. Numerical results and discussions ================================= For the numerical calculations, we take the top-quark pole mass $M_{t}=173.3$ GeV [@toppole], which is compatible with the $\overline{\rm MS}$ mass $\overline{m}_t(\overline{m}_t)= 163.3$ GeV [@toptt]. The $W$-boson mass is $M_{W}=80.385$ GeV and the $Z^0$-boson mass is $M_{Z}=91.1876$ GeV [@pdg]. The Fermi constant is $G_{F}=1.16638\times10^{-5}{\rm GeV}^{-2}$. To be consistent, for the estimation of $\Delta\rho$ up to a given QCD loop correction, we use different values of $\Lambda_{\rm QCD}$ determined from the world average $\alpha_s(M_{Z})=0.1184$ [@pdg]: we use $\Lambda^{(n_f=5)}_{\rm QCD}=0.213$ GeV and $\Lambda^{(n_f=6)}_{\rm QCD}=0.0904$ GeV for three-loop $\alpha_s$ running; $\Lambda^{(n_f=5)}_{\rm QCD}=0.231$ GeV and $\Lambda^{(n_f=6)}_{\rm QCD}=0.0938$ GeV for two-loop $\alpha_s$ running; $\Lambda^{(n_f=5)}_{\rm QCD}=0.0899$ GeV and $\Lambda^{(n_f=6)}_{\rm QCD}=0.0437$ GeV for one-loop $\alpha_s$ running.
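As a consistency check of these inputs (a sketch, not from the paper), the quoted one-loop $\Lambda^{(n_f=5)}_{\rm QCD}=0.0899$ GeV should reproduce the world average $\alpha_s(M_Z)=0.1184$ when evolved with the one-loop running formula:

```python
import math

def alpha_s_1loop(mu, Lam, nf):
    """One-loop running coupling, alpha_s(mu) = 4*pi / (beta0 * ln(mu^2/Lambda^2)),
    with beta0 = 11 - (2/3)*nf in the normalization used in the text."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (beta0 * math.log(mu**2 / Lam**2))

M_Z = 91.1876
print(alpha_s_1loop(M_Z, 0.0899, 5))   # ~0.1184, the world average
```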
As a subtle point, as shown by Eqs.(\[rhorijPMCscale1\],\[rhorijPMCscale2\]), the PMC scales are themselves perturbative series; thus the PMC scales are improved order by order as more QCD loop corrections are included in the estimation. For example, to determine the PMC scale $Q_1$ for $\Delta\rho$ up to the three-loop level, we have only the $\beta_0$ term to determine its value; while for $\Delta\rho$ up to the four-loop level, we have both the $\beta_0$ and $\beta_1$ terms to determine its value. Within the PMC scale $Q_1$, the $\beta_1$ terms are $\alpha_s$-suppressed in comparison to the leading $\beta_0$ terms, and such a difference is further exponentially suppressed; thus, its value changes only slightly. The QCD improved electroweak $\rho$ parameter ---------------------------------------------

  -------------------------- ----------- --------- ---------- ----------- --------- ----------
  $\mu^{\rm init}_{r}$        $M_{t}/2$   $M_{t}$   $2M_{t}$   $M_{t}/2$   $M_{t}$   $2M_{t}$
  $\Delta R_{\rm NLO}$        -0.109      -0.098    -0.091     -0.131      -0.131    -0.131
  $\Delta R_{\rm N^{2}LO}$    -0.121      -0.118    -0.112     -0.127      -0.127    -0.127
  $\Delta R_{\rm N^{3}LO}$    -0.124      -0.123    -0.118     -0.126      -0.126    -0.126
  -------------------------- ----------- --------- ---------- ----------- --------- ----------

  : Initial scale dependence for the ratio $\Delta R_i$ under the conventional scale setting ($\mu_r\equiv\mu^{\rm init}_r$, first three columns) and the PMC scale setting (last three columns), where $i$=NLO, $\rm N^{2}LO$ and $\rm N^{3}LO$ stand for the QCD corrections to the $\Delta\rho$ parameter up to the two-loop, three-loop, and four-loop levels, respectively. []{data-label="tableunscale"}

After the PMC scale setting, we obtain a prediction for $\Delta\rho$ that is much more stable against scale changes.
To show this point more clearly, we define a parameter: $$\Delta R_i=\frac{\Delta\rho|_{i}}{3X_{t}} -1 ,$$ where $i$=NLO, N$^{2}$LO, N$^{3}$LO stand for the QCD corrections to the $\Delta\rho$ parameter up to the two-loop, three-loop, and four-loop levels, respectively. The scale dependence of $\Delta R_i$ under the conventional scale setting and the PMC scale setting is shown in Table \[tableunscale\], where three typical initial scales $\mu^{\rm init}_{r}=M_{t}/2$, $M_{t}$ and $2M_{t}$ are adopted. We can see from Table \[tableunscale\] that under the conventional scale setting, the QCD correction $\Delta R_i$ shows a strong dependence on the choice of (initial) scale $\mu^{\rm init}_{r}$. For example, its conventional scale error is $\sim \pm9 \%$ for $\mu^{\rm init}_{r}\in[M_{t}/2$, $2M_{t}]$ at the two-loop level, which changes to $\sim\pm4\%$ at the three-loop level and $\sim \pm 2.5\%$ at the four-loop level. Thus, the scale uncertainty constitutes a systematic error for $\Delta\rho$. In contrast, after applying the PMC scale setting, the value of $\Delta R_i$ is almost unchanged for $\mu^{\rm init}_{r}\in[M_{t}/2$, $2M_{t}]$ even at the two-loop level. In fact, by using the formulas (\[rhorijPMCscale1\]), (\[rhorijPMCscale2\]), it is found that the PMC scales themselves are almost fixed, i.e. for any $\mu^{\rm init}_{r}$, $$Q_1 \simeq 26.2 {\rm GeV} \; {\rm and}\; Q_2\simeq 84.6 {\rm GeV}\;.$$ The conformal coefficients $r_{i,0}$, as shown in the Appendix, are also independent of $\mu^{\rm init}_{r}$. ![The $\Delta\rho$ parameter versus the initial renormalization scale $\mu^{\rm init}_{r}$ under the conventional scale setting. The solid, dashed and dotted lines stand for QCD corrections up to NLO/two-loop, $\rm N^{2}LO$/three-loop and $\rm N^{3}LO$/four-loop, respectively.[]{data-label="Plot:rhoco"}](Rhocon.eps){width="50.00000%"} ![The $\Delta\rho$ parameter versus the initial renormalization scale $\mu^{\rm init}_{r}$ under the PMC scale setting.
The solid, dashed and dotted lines stand for QCD corrections up to NLO/two-loop, $\rm N^{2}LO$/three-loop and $\rm N^{3}LO$/four-loop, respectively.[]{data-label="Plot:rhopm"}](Rhopmc.eps){width="50.00000%"} We present the dependence of $\Delta\rho$ on $\mu^{\rm init}_{r}$ before and after the PMC scale setting in Figs. \[Plot:rhoco\] and \[Plot:rhopm\], where the solid, dashed and dotted lines stand for QCD corrections up to the NLO/two-loop, $\rm N^{2}LO$/three-loop and $\rm N^{3}LO$/four-loop levels, respectively. Fig.\[Plot:rhoco\] shows that as one includes higher and higher orders, the scale uncertainty is reduced to a certain degree: I) By setting $\mu_r=M_t$, we obtain $\Delta\rho\simeq8.49\times10^{-3}$, $8.30\times10^{-3}$, and $8.26\times10^{-3}$ at the two-loop, three-loop and four-loop levels, respectively; II) By setting $\mu_r=M_t/2$, we obtain $\Delta\rho\simeq8.39\times10^{-3}$, $8.27\times10^{-3}$, and $8.24\times10^{-3}$ at the two-loop, three-loop and four-loop levels, respectively. Those results agree with the conventional wisdom that by carrying out a calculation to sufficiently high order, one finally achieves the desired convergent and scale-invariant estimation. It is often argued that by varying the scale, one can estimate contributions from higher-order terms under the conventional scale setting. However, this procedure can only partly estimate the higher-order contributions, since it only partly exposes the $\{\beta_i\}$-dependent non-conformal terms, not the entire perturbative series [@pmc8]. More explicitly, by varying $\mu^{\rm init}_{r}\in [M_t/2,2M_t]$, Table \[tableunscale\] shows that the central value of $\Delta R_{\rm N^2LO}$ is not within the error of $\Delta R_{\rm NLO}$, and the central value of $\Delta R_{\rm N^3LO}$ is also not within the error of $\Delta R_{\rm N^2LO}$.
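The entries of Table \[tableunscale\] can be reproduced directly from the quoted $\Delta\rho$ values and the one-loop value $3X_t=9.411\times10^{-3}$; a minimal cross-check at $\mu^{\rm init}_r=M_t$ (a sketch, not part of the paper):

```python
three_Xt = 9.411e-3   # leading-order Delta rho, i.e. 3 X_t

# (Delta rho at N3LO, expected Delta R) for the conventional and PMC settings:
checks = [(8.257e-3, -0.123),   # conventional scale setting
          (8.228e-3, -0.126)]   # PMC scale setting
for drho, expected in checks:
    dR = drho / three_Xt - 1.0
    assert abs(dR - expected) < 5e-4
    print(round(dR, 3))
```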
On the other hand, after applying the PMC scale setting, we obtain $\Delta\rho\simeq8.17\times10^{-3}$, $8.22\times10^{-3}$, and $8.23\times10^{-3}$ up to two-loop, three-loop and four-loop QCD corrections, respectively. Fig.\[Plot:rhopm\] shows that the $\Delta\rho$ with QCD corrections up to NLO, $\rm N^{2}LO$ and $\rm N^{3}LO$ is almost flat versus the initial scale $\mu^{\rm init}_{r}$. After the PMC scale setting, $\Delta\rho$ quickly approaches its steady value as more and more loop corrections are included. One may even estimate that $\Delta\rho=8.23\times10^{-3}$ could be the final pQCD estimation even after including corrections up to infinite order. To show how the theoretical prediction changes when more and more loop corrections are included, we define a ratio $$\kappa_{i}=\left|\frac{\Delta R_i - \Delta R_{i-1}}{\Delta R_{i-1}}\right|,$$ where $i={\rm N^2LO}$, ${\rm N^3LO}$, respectively. This ratio shows exactly how a ‘newly’ available higher-order correction deviates from the ‘known’ lower-order estimation. Under the conventional scale setting, we have $$\begin{aligned} & & \kappa_{\rm N^2LO}=11\%,\;\;\kappa_{\rm N^3LO}=2\%\;{\rm for}\;\mu^{\rm init}_{r}=M_t/2 \\ & & \kappa_{\rm N^2LO}=20\%,\;\;\kappa_{\rm N^3LO}=4\%\;{\rm for}\;\mu^{\rm init}_{r}=M_t \\ & & \kappa_{\rm N^2LO}=23\%,\;\;\kappa_{\rm N^3LO}=5\%\;{\rm for}\;\mu^{\rm init}_{r}=2M_t\end{aligned}$$ After the PMC scale setting, in contrast, we have $$\kappa_{\rm N^2LO}\simeq 3\%,\;\;\kappa_{\rm N^3LO}\simeq 0.8\%\;{\rm for}\;\mu^{\rm init}_{r}\in[M_t/2,2M_t].$$ Moreover, we note that: - The PMC scale at each perturbative order is determined by absorbing particular $\{\beta_i\}$-terms into the running coupling.
Unlike the guesswork of the conventional scale setting, the PMC scales and hence the PMC estimations are highly independent of the choice of initial scale $\mu^{\rm init}_r$. Thus, the conventional renormalization scale ambiguity is solved. - A comparison within Table \[tableunscale\] indicates that after absorbing the non-conformal terms into the running coupling by PMC, the remaining conformal terms provide slightly positive contributions from the two-loop level on, so $\Delta\rho|_i$ increases as more loop corrections are included. Under the conventional scale setting, by contrast, the combination of both the conformal and the non-conformal terms always provides negative contributions, so $\Delta\rho|_i$ decreases as more loop corrections are included. - Another important feature of the PMC scale setting is that its final estimation is a conformal series and is renormalization-scheme independent [@pmc8]; thus the renormalization scale and scheme dependences of the conventional scale setting are eliminated at the same time. - For any scale-setting method, we need to finish a full higher-order calculation so as to estimate the magnitude of the conformal terms. The PMC provides a systematic way to estimate the unknown conformal contributions via the extended renormalization group equations [@pmcext]. As a byproduct, a comparison within Table \[tableunscale\] shows that setting $\mu_r\sim M_t/2$ in the conventional scale setting reproduces the estimation obtained under the PMC scale setting. Thus, the effective momentum flow for the whole process is $\sim M_t/2$ rather than the conventionally suggested $M_t$; equivalently, it is the scale $\sim M_t/2$ that rightly eliminates the large logarithms and yields a more convergent and correct pQCD estimation. Other examples also show that the conventional choice of scale is really guesswork.
Ref.[@higrr] indicates that the effective momentum flow for the $H\to\gamma\gamma$ decay is $\sim2M_H$ rather than $M_H$. Ref.[@Q6] argues that after including the first- and second-order corrections to several deep inelastic sum rules due to heavy flavor contributions, the effective scale $\mu_{r}$ for the deep inelastic sum rules should be $\sim 6.5 m_Q$ rather than $m_Q$ ($m_Q$ being the heavy quark mass). All these indicate that it is clearly artificial to guess a scale $Q$ (we do not even know whether it is the central scale or not) and to study its uncertainty by simply varying $\mu_{r}\in [Q/2, 2\,Q]$, as the conventional scale setting does. After the PMC scale setting, there is residual scale dependence for the final terms proportional to $a^3_{s}(Q_{3})$, since we have no $\{\beta_{i}\}$-terms to determine $Q_3$. We can estimate the magnitude of such residual scale dependence following the spirit of PMC scale setting. That is, as suggested in Ref.[@jpsi], we rewrite the coupling constant $a_{s}(Q_{3})$ at the four-loop level as follows, $$\begin{aligned} a_{s}(Q_{3})=a_{s}(\mu^{\rm init}_{r})+\beta_{0} \ln\left({(\mu^{\rm init}_{r})^2 \over Q^2_{3}}\right) a^{2}_{s}(\mu^{\rm init}_{r}). \label{rhorijasq3}\end{aligned}$$ Since the log term $\ln\left({\mu^{\rm init}_{r}/Q_{3}}\right)^{2}$ largely compensates the scale changes at the $\mathcal{O}(a_{s}^{3})$ level, we obtain, as expected, a very small residual scale dependence by varying $\mu^{\rm init}_{r}\in[M_t/2,2M_t]$.
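This compensation can be illustrated with a toy one-loop example (a made-up setup: $n_f$ thresholds and the actual PMC coefficients are ignored, and $Q_3=M_t$ is chosen purely for illustration). For any $\mu^{\rm init}_r\in[M_t/2,2M_t]$, the right-hand side of Eq. (\[rhorijasq3\]) stays within about one percent of the exact $a_s(Q_3)$:

```python
import math

Lam, nf = 0.0899, 5                 # one-loop Lambda_QCD^(5) quoted in the text, GeV
beta0 = 11.0 - 2.0 * nf / 3.0

def a_s(mu):
    """Exact one-loop running, a_s = alpha_s/(4*pi) = 1/(beta0 * ln(mu^2/Lambda^2))."""
    return 1.0 / (beta0 * math.log(mu**2 / Lam**2))

M_t = 173.3
Q3 = M_t                            # illustrative choice of the undetermined scale
for mu in (M_t / 2.0, M_t, 2.0 * M_t):
    # Right-hand side of Eq. (rhorijasq3): the log term compensates the mu choice
    approx = a_s(mu) + beta0 * math.log(mu**2 / Q3**2) * a_s(mu) ** 2
    print(mu, approx / a_s(Q3))     # ratio stays close to 1 across the range
```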
  ------------------------------------- ---------- ----------- --------------- --------------- ---------- ----------- --------------- ---------------
                                         $\rm LO$   $\rm NLO$   $\rm N^{2}LO$   $\rm N^{3}LO$   $\rm LO$   $\rm NLO$   $\rm N^{2}LO$   $\rm N^{3}LO$
  $\Delta\rho|_{i} (\times10^{-3})$      9.411      8.483       8.305           8.257           9.411      8.175       8.217           8.228
  $\delta\rho|_{i} (\times10^{-3})$      -          $-0.928$    $-0.178$        $-0.048$        -          $-1.236$    $0.042$         $0.011$
  $K_{i}$                                -          $9.8\%$     $2.1\%$         $0.6\%$         -          $13\%$      $0.5\%$         $0.1\%$
  ------------------------------------- ---------- ----------- --------------- --------------- ---------- ----------- --------------- ---------------

  : The parameter $\Delta\rho$, the shift $\delta\rho$, and the $K$ factor before (left four columns) and after (right four columns) the PMC scale setting. $\Delta\rho|_{i}$ with $i$=LO, NLO, N$^{2}$LO and N$^{3}$LO denote the QCD corrections up to the one-loop, two-loop, three-loop, and four-loop levels, respectively. The $\delta\rho|_{i}$ and $K_i$ stand for the shift of $\Delta\rho|_{i}$ and the $K$ factor at the two-loop, three-loop or four-loop level, respectively. $\mu^{\rm init}_r=M_t$. []{data-label="tablerho"}

Finally, we present the QCD correction to $\Delta\rho$ at each perturbative order before and after the PMC scale setting in Table \[tablerho\], where $\Delta\rho|_{i}$, with $i$=LO, NLO, N$^{2}$LO, or N$^{3}$LO, denotes $\Delta\rho$ with QCD corrections up to the one-loop, two-loop, three-loop, or four-loop level, respectively. After the PMC scale setting, we obtain a conformal series for $\Delta\rho$, and because of the elimination of renormalons (together with large log-terms), the pQCD convergence is greatly improved.
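The shifts $\delta\rho|_i$ and the ratios $|\delta\rho|_{i}/\Delta\rho|_{i-1}|$ quoted in Table \[tablerho\] follow directly from the $\Delta\rho|_i$ values; a quick cross-check (a sketch, in units of $10^{-3}$):

```python
conv = [9.411, 8.483, 8.305, 8.257]   # LO..N3LO, conventional scale setting
pmc  = [9.411, 8.175, 8.217, 8.228]   # LO..N3LO, PMC scale setting

def shifts_and_ratios(seq):
    """Order-by-order shifts delta_rho_i and ratios |delta_rho_i / Delta_rho_{i-1}|."""
    d = [seq[i] - seq[i - 1] for i in range(1, len(seq))]
    k = [abs(di / seq[i]) for i, di in enumerate(d)]
    return d, k

d, k = shifts_and_ratios(conv)
print([round(x, 3) for x in d])   # [-0.928, -0.178, -0.048], as in the table
print([f"{x:.1%}" for x in k])    # close to the quoted 9.8%, 2.1%, 0.6% (up to rounding)
```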
To show this point more clearly, we define a $K$ factor, whose value at each perturbative order is given by $$K_i=\left|\frac{\Delta\rho|_{i}-\Delta\rho|_{i-1}}{\Delta\rho|_{i-1}}\right| =\left|\frac{\delta\rho|_{i}}{\Delta\rho|_{i-1}}\right|,$$ where the shift of $\Delta\rho|_{i}$ is defined as $\delta\rho|_{i}= \left(\Delta\rho|_{i}-\Delta\rho|_{i-1}\right)$, and its values are presented in Table \[tablerho\]. The $K_i$ with $i$=NLO, N$^{2}$LO, or N$^{3}$LO stands for the $K$ factor at the two-loop, three-loop, or four-loop level, respectively. The results for $K_i$ are also presented in Table \[tablerho\]. The values of the $K$ factors decrease much faster after the PMC scale setting, which agrees with the above observation that the pQCD convergence is greatly improved after the PMC scale setting. Moreover, we obtain $\Delta\rho|_{\rm N^3LO} = \left(8.257^{+0.045}_{-0.012}\right) \times10^{-3}$ for $\mu_r\in[M_t/2,2M_t]$ under the conventional scale setting; at the same time, $\Delta\rho|_{\rm N^3LO}$ is almost fixed to $8.228\times10^{-3}$ after the PMC scale setting. It is noted that several ways to absorb the $\{\beta_i\}$-terms into the running coupling have been suggested, such as the PMC-I approach (based on the PMC-BLM correspondence) [@pmc2], the PMC $R_\delta$ approach [@pmc6], and the seBLM approach [@seblm; @seblm0]. A detailed comparison of those approaches can be found in Ref. [@higbbgg]. At present, we observe the same conclusions as those of Ref. [@higbbgg]. It is noted that the values of $\Delta\rho$ up to the four-loop level agree with each other among those approaches. Because of the different choices of effective $\{\beta_i\}$-series, however, there are differences at each perturbative order.
More explicitly, if setting the initial scale as $\mu^{\rm init}_{r}=M_{t}$, we obtain two effective scales for those approaches as follows $$\begin{aligned} R_{\delta}~{\rm approach} : Q_1&=&26.2 {\rm GeV}, Q_2=84.6 {\rm GeV}, \nonumber \\ {\rm PMC-I}\; {\rm approach} : Q_1&=&26.3 {\rm GeV}, Q_2=83.5 {\rm GeV}, \nonumber \\ {\rm seBLM\;approach} : Q_1&=&26.1 {\rm GeV}, Q_2=263.7 {\rm GeV}. \nonumber\end{aligned}$$ The scales $Q_{1,2}$ are almost the same for both the $R_{\delta}$ approach and the PMC-I approach. Under the seBLM approach, the effective scale $Q_{1}$ is the same as that of the two PMC approaches, but it has a larger $Q_2$. Applications of the QCD improved $\rho$ parameter for $\delta M_W$ and $\delta\sin^{2}\theta^{\rm lept}_{\rm eff}$ ------------------------------------------------------------------------------------------------------------------ A lot of effort has been devoted to predicting the values of two important EWPOs, $M_{W}$ and $\sin^{2}\theta^{\rm lept}_{\rm eff}$, within the SM, both theoretically and experimentally [@unmw; @unsin; @expersin; @ilc]. At present, the total experimental uncertainties for $M_{W}$ and $\sin^{2}\theta^{\rm lept}_{\rm eff}$ are $\delta M_{W} =15$ MeV [@pdg] and $\delta\sin^{2}\theta^{\rm lept}_{\rm eff} =16\cdot10^{-5}$ [@expersin]. Recent improvements in Higgs measurements at the LHC [@higgs1; @higgs2; @higgsmass1; @higgsmass2] also allow us to determine the EWPOs with high precision [@elefit1; @elefit2; @elefit3; @elefit4; @elefit5]. At the future International Linear Collider (ILC), it is estimated that experimental uncertainties as small as $\delta M_{W}=6$ MeV and $\delta\sin^{2}\theta^{\rm lept}_{\rm eff}=13\cdot10^{-5}$ can be achieved [@ilc] [^2].
On the other hand, theoretically, the dominant shifts of $M_{W}$ and $\sin^{2}\theta^{\rm lept}_{\rm eff}$ are due to $\Delta\rho$ through the following formulas [@rhoqcdfo2] $$\begin{aligned} \delta M_{W}|_{i} &=& {M_{W}\over 2}{c^{2}_{W}\over c^{2}_{W}-s^{2}_{W}}\delta\rho|_{i} \nonumber\\ &=& {M_{W}\over 2}{c^{2}_{W}\over c^{2}_{W}-s^{2}_{W}} \left(\Delta\rho|_{i}-\Delta\rho|_{i-1}\right) \label{rhoshidm}\end{aligned}$$ and $$\begin{aligned} \delta\sin^{2}\theta^{\rm lept}_{\rm eff}|_{i} &=& -{c^{2}_{W}s^{2}_{W}\over c^{2}_{W}-s^{2}_{W}}\delta\rho|_{i} \nonumber\\ &=& -{c^{2}_{W}s^{2}_{W}\over c^{2}_{W}-s^{2}_{W}} \left(\Delta\rho|_{i}-\Delta\rho|_{i-1}\right), \label{rhoshids}\end{aligned}$$ where $c_{W}=M_{W}/M_{Z}$ and $s^{2}_{W}=1-c^{2}_{W}$, and $i$=NLO, N$^{2}$LO, or N$^{3}$LO, respectively.

  ----------------------------------------------------------------- --------- --------------- --------------- --------- --------------- ---------------
                                                                     NLO       $\rm N^{2}LO$   $\rm N^{3}LO$   NLO       $\rm N^{2}LO$   $\rm N^{3}LO$
  $\delta M_{W}|_{i}$ (MeV)                                          $-52.3$   $-10.0$         $-2.7$          $-69.7$   $+2.4$          $+0.7$
  $\delta\sin^{2}\theta^{\rm lept}_{\rm eff}|_{i}(\times10^{-5})$    $+29.0$   $+5.6$          $+1.5$          $+38.6$   $-1.3$          $-0.4$
  ----------------------------------------------------------------- --------- --------------- --------------- --------- --------------- ---------------

  : The shifts $\delta M_{W}$ and $\delta\sin^{2}\theta^{\rm lept}_{\rm eff}$ due to the QCD improved $\rho$ parameter before (left three columns) and after (right three columns) the PMC scale setting, where the symbols NLO, N$^{2}$LO and N$^{3}$LO stand for the shifts due to the QCD corrections up to the two-loop, three-loop, and four-loop levels, respectively. $\mu^{\rm init}_r=M_t$. []{data-label="tabledws"}

A QCD-improved $\rho$ parameter leads to improved estimations of $M_{W}$ and $\sin^{2}\theta^{\rm lept}_{\rm eff}$, which will allow a more confident comparison with the experimental results and help in searching for new physics beyond the SM through those EWPOs.
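As a numerical sketch of Eqs. (\[rhoshidm\]) and (\[rhoshids\]) (not part of the paper's text), the NLO entries of Table \[tabledws\] under the conventional scale setting follow from $\delta\rho|_{\rm NLO}=-0.928\times10^{-3}$ and the quoted boson masses:

```python
M_W, M_Z = 80.385, 91.1876
cW2 = (M_W / M_Z) ** 2            # c_W^2 = M_W^2 / M_Z^2
sW2 = 1.0 - cW2                   # s_W^2 = 1 - c_W^2

drho = -0.928e-3                  # delta rho at NLO, conventional scale setting
dMW  = 0.5 * M_W * cW2 / (cW2 - sW2) * drho   # Eq. (rhoshidm), in GeV
dsin = -cW2 * sW2 / (cW2 - sW2) * drho        # Eq. (rhoshids)

print(round(dMW * 1e3, 1))        # -52.3 (MeV)
print(round(dsin * 1e5, 1))       # 29.0 (x 10^-5)
```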
We present the shifts of $M_{W}$ and $\sin^{2}\theta^{\rm lept}_{\rm eff}$ caused by the QCD corrections to $\Delta\rho$ before and after the PMC scale setting in Table \[tabledws\]. At the N$^{3}$LO level, the shifts are $\delta M_{W}|_{\rm N^3LO}=-2.7$ MeV and $\delta\sin^{2}\theta^{\rm lept}_{\rm eff}|_{\rm N^3LO}=1.5\times10^{-5}$ under the conventional scale setting; their magnitudes are reduced by about a factor of four after the PMC scale setting, owing to the more convergent pQCD series, i.e. $$\begin{aligned} \delta M_{W}|_{\rm N^3LO} &=& 0.7 \; {\rm MeV} \label{mw}\end{aligned}$$ and $$\begin{aligned} \delta\sin^{2}\theta^{\rm lept}_{\rm eff}|_{\rm N^3LO} &=& -0.4\times10^{-5} . \label{theta}\end{aligned}$$

Summary
=======

We have applied the PMC to analyze the electroweak $\rho$ parameter up to four-loop QCD corrections. After the PMC scale setting, we obtain a more accurate estimate of $\Delta\rho$ with better pQCD convergence, and in turn better estimates of the shifts of the two EWPOs, $\delta M_{W}$ and $\delta\sin^{2}\theta^{\rm lept}_{\rm eff}$. More specifically,

- We obtain $\Delta\rho|_{\rm N^3LO} = \left(8.257^{+0.045}_{-0.012}\right)\times10^{-3}$ for $\mu_r\in[M_t/2,2M_t]$ under the conventional scale setting, while $\Delta\rho|_{\rm N^3LO}$ is almost fixed at $8.228\times10^{-3}$ after the PMC scale setting. This shows that the conventional scale uncertainty can be eliminated and the pQCD convergence greatly improved by applying the PMC scale setting. Thus, it provides another good example of achieving the optimal renormalization scales of a process via the PMC, as illustrated in detail in Ref. [@pmc5].

- In comparison with the results under the conventional scale setting, after applying the PMC scale setting we obtain a more stable prediction for the shift of the $W$-boson mass, $\delta M_{W}$, and for the shift of the effective leptonic weak-mixing angle, $\delta\sin^{2}\theta^{\rm lept}_{\rm eff}$.
Moreover, as shown by Eqs.(\[mw\],\[theta\]), the QCD improved shifts $\delta M_{W}|_{\rm N^3LO}$ and $\delta\sin^{2}\theta^{\rm lept}_{\rm eff}|_{\rm N^3LO}$ are well below the precision anticipated even for the future ILC experiment. Thus, we shall have a good chance to test the SM with high precision.

- To apply the PMC scale setting to higher-order pQCD calculations, it is more convenient to use the expressions for $\Delta\rho$ in the on-shell renormalization scheme, e.g. via the pole top-quark mass, such that there is no ambiguity in dealing with the $n_f$ series of the process, i.e. only those $n_{f}$-terms that rightly determine the running behavior of the coupling should be absorbed into the running coupling.

[**Acknowledgments**]{}: This work was supported in part by Natural Science Foundation of China under Grant No.11275280, by the Program for New Century Excellent Talents in University under Grant No.NCET-10-0882, and by the Fundamental Research Funds for the Central Universities under Grant No.CQDXWL-2012-Z002.\

Appendix: the coefficients $c_{i,j}(\mu^{\rm init}_{r})$ and $r_{i,j}(\mu^{\rm init}_{r})$ {#appendix-the-coefficients-c_ijmurm-init_r-and-r_ijmurm-init_r .unnumbered}
==========================================================================================

By using the expressions of Refs.[@rhoqcdfo1; @rhoqcdfo2; @rhoqcdfo3; @rhoqcdfo4], including both the singlet and non-singlet contributions, the coefficients $c_{i,j}(\mu^{\rm init}_{r})$ for $\Delta\rho$ up to the four-loop level are $$\begin{aligned} c_{1,0}(\mu^{\rm init}_{r})&=&-{8\over 3}-{8\pi^{2}\over 9},\\ c_{2,0}(\mu^{\rm init}_{r})&=&-404.981+ 125.836 \ln{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}},\\ c_{2,1}(\mu^{\rm init}_{r})&=&28.5794 - 7.62643 \ln{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}},\\ c_{3,0}(\mu^{\rm init}_{r})&=&-20372.1 + 10076.4 \ln{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}} \nonumber\\ &&- 1384.2 \ln^{2}{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}}, \\ c_{3,1}(\mu^{\rm
init}_{r})&=&2843.1 - 1313.62 \ln{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}} \nonumber\\ && +167.782 \ln^{2}{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}},\\ c_{3,2}(\mu^{\rm init}_{r})&=&-73.558 + 38.1059 \ln{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}} \nonumber\\ && - 5.0843 \ln^{2}{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}},\end{aligned}$$ where $M_t$ stands for the top-quark pole mass. The coefficients $r_{i,j}(\mu^{\rm init}_{r})$ for $\Delta\rho$ can be divided into two types, one is the conformal type, which includes $$\begin{aligned} r_{1,0}(\mu^{\rm init}_{r})&=&-{8\over 3}-{8\pi^{2}\over 9},\\ r_{2,0}(\mu^{\rm init}_{r})&=&66.5793 ,\\ r_{3,0}(\mu^{\rm init}_{r})&=&1925.76,\end{aligned}$$ and the other is the non-conformal type, which includes $$\begin{aligned} r_{2,1}(\mu^{\rm init}_{r})&=&-42.8691 + 11.4396 \ln{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}},\\ r_{3,1}(\mu^{\rm init}_{r})&=&95.5017 - 66.5793 \ln{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}},\\ r_{3,2}(\mu^{\rm init}_{r})&=&-165.506 + 85.7382 \ln{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}} \nonumber\\ && - 11.4396 \ln^{2}{M^{2}_{t}\over (\mu^{\rm init}_{r})^{2}}.\end{aligned}$$ [99]{} M.J.G. Veltman, Nucl. Phys. B [**123**]{}, 89 (1977). C. Campagnari and M. Franklin, Rev. Mod. Phys. [**69**]{}, 137 (1997). F. Abe [*et al.*]{}, CDF Collaboration, Phys. Rev. Lett. [**74**]{}, 2626 (1995). S. Abachi [*et al.*]{}, D0 Collaboration, Phys. Rev. Lett. [**74**]{}, 2632 (1995). M. Awramik, M. Czakon, A. Freitas, and G. Weiglein, Phys. Rev. Lett. [**93**]{}, 201805 (2004). M. Awramik, M. Czakon, A. Freitas, and G. Weiglein, Phys. Rev. D [**69**]{}, 053006 (2004). M.J.G. Veltman, Acta Phys. Pol. B [**8**]{}, 475 (1977). A. Djouadi and C. Verzegnassi, Phys. Lett. B [**195**]{}, 265 (1987). A. Djouadi, Nuovo Cimento A [**100**]{}, 357 (1988). B.A. Kniehl, J. H. Kuhn, and R.G. Stuart, Phys. Lett. B [**214**]{}, 621 (1988). L. Avdeev, J. Fleischer, S. Mikhailov and O. Tarasov, Phys. Lett. B [**336**]{}, 560 (1994). L. Avdeev, J. 
Fleischer, S. Mikhailov and O. Tarasov, Phys. Lett. B [**349**]{}, 597, Erratum (1995). K.G. Chetyrkin, J.H. Kuhn, and M. Steinhauser, Phys. Lett. B [**351**]{}, 331 (1995). K.G. Chetyrkin, J.H. Kuhn, M. Steinhauser, Phys. Rev. Lett. [**75**]{}, 3394 (1995). K.G. Chetyrkin, M. Faisst, J.H. Kuhn, P. Maierhofer, and C. Sturm, Phys. Rev. Lett. [**97**]{}, 102003 (2006). Y. Schroder and M. Steinhauser, Phys. Lett. B [**622**]{}, 124 (2005). R. Boughezal and M. Czakon, Nucl. Phys. B [**755**]{}, 221 (2006). M. Faisst, P. Maierhofer, and C. Sturmb, Nucl. Phys. B [**766**]{}, 246 (2007). S.J. Brodsky and X.G. Wu, Phys. Rev. Lett. [**109**]{}, 042002 (2012). S.J. Brodsky and X.G. Wu, Phys. Rev. D [**85**]{}, 034038 (2012). S.J. Brodsky and X.G. Wu, Phys. Rev. D [**85**]{}, 114040 (2012). S.J. Brodsky and X.G. Wu, Phys. Rev. D [**86**]{}, 014021 (2012). S.J. Brodsky and X.G. Wu, Phys. Rev. D [**86**]{}, 054018 (2012). S.J. Brodsky and L.D. Giustino, Phys. Rev. D [**86**]{}, 085026 (2012). X.G. Wu, S.J. Brodsky, and M. Mojaza, Prog. Part. Nucl. Phys. [**72**]{}, 44 (2013). M. Mojaza, S.J. Brodsky, and X.G. Wu, Phys. Rev. Lett. [**110**]{}, 192001 (2013). S.J. Brodsky, M. Mojaza, and X.G. Wu, Phys. Rev. D [**89**]{}, 014027 (2014). S.Q. Wang, X.G. Wu, X.C. Zheng, G. Chen, and J.M. Shen, arXiv:1311.5106. N. Gray, D. J. Broadhurst, W. Grafe, and K. Schilcher, Z. Phys. C [**48**]{}, 673 (1990). F. Jegerlehner and M.Y. Kalmykov, Acta Phys. Pol. B [**34**]{}, 5335 (2003). M. Faisst, J.H. Kuhn, and O. Veretin, Phys. Lett. B [**589**]{}, 35 (2004). D. Eiras and M. Steinhauser, JHEP [**0602**]{}, 010 (2006). K.G. Chetyrkin and M. Steinhauser, Phys. Rev. Lett. [**83**]{}, 4001 (1999). K.G. Chetyrkin and M. Steinhauser, Nucl. Phys. B [**573**]{}, 617 (2000). K. Melnikov and T.V. Ritbergen, Phys. Lett. B [**482**]{}, 99 (2000). X.C. Zheng, X.G. Wu, S.Q. Wang, J.M. Shen, and Q.L. Zhang, JHEP [**1310**]{}, 117 (2013). E. Braaten and Y.Q. Chen, Phys. Rev. D [**57**]{}, 4236 (1998). 
ATLAS and CMS Collaborations, ATLAS-CONF-2012-095, CMS-PAS-TOP-12-001. S. Alekhin, A. Djouadi, and S. Moch, Phys. Lett. B [**716**]{}, 214 (2012). J. Beringer [*et al.*]{}, Particle Data Group, Phys. Rev. D [**86**]{}, 010001 (2012). H.J. Lu and S.J. Brodsky, Phys. Rev. D [**48**]{}, 3310 (1993). J. Blumlein and W.L. van Neerven, Phys. Lett. B [**450**]{}, 417 (1999). S.Q. Wang, X.G. Wu, X.C. Zheng, J.M. Shen, and Q.L. Zhang, Nucl. Phys. B [**876**]{}, 731 (2013). S.V. Mikhailov, JHEP [**0706**]{}, 009 (2007). A.L. Kataev and S.V. Mikhailov, Theor. Math. Phys. [**170**]{}, 139 (2012). S.Q. Wang, X.G. Wu, X.C. Zheng, J.M. Shen and Q.L. Zhang, Eur. Phys. J. C [**74**]{}, 2825 (2014) The ALEPH, DELPHI, L3, OPAL, SLD Collaborations, the LEP Electroweak Working Group, the SLD Electroweak and Heavy Flavour Working Groups, Phys. Rept. [**427**]{}, 257 (2006). G. Aarons [*et al.*]{}, ILC Collaboration, arXiv:0709.1893. M. Awramik, M. Czakon, A. Freitas, and G. Weiglein, Phys. Rev. D [**69**]{}, 053006 (2004). M. Awramik, M. Czakon, and A. Freitas, JHEP [**0611**]{}, 048 (2006). G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**716**]{}, 1 (2012). S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B[**716**]{}, 30 (2012). \[ATLAS Collaboration\], ATLAS-CONF-2013-014 (2013). \[CMS Collaboration\], CMS-PAS-HIG-13-005 (2013). M. Baak [*et al.*]{}, Eur. Phys. J. C [**72**]{}, 2205 (2012). M. Baak and R. Kogler, arXiv:1306.0571. A. Akhundov, A. Arbuzov, S. Riemann, and T. Riemann, arXiv:1302.1395. J. Erler, arXiv:1209.3324. O. Eberhardt [*et al.*]{}, Phys. Rev. Lett. [**109**]{}, 241802 (2012). J.A. Aguilar-Saavedra [*et al.*]{}, ECFA/DESY LC Physics Working Group Collaboration, hep-ph/0106315. [^1]: There is another type of residual scale dependence for the already determined PMC scales, which are smaller and are highly exponentially suppressed [@pmc12]. 
This is the reason why the large-$\beta_0$ approximation suggested in the literature provides a good approximation for setting the scale in certain processes [@beta0].

[^2]: Through its GigaZ program, $\delta\sin^{2}\theta^{\rm lept}_{\rm eff}$ can be further improved to $1.3\cdot10^{-5}$ [@elefit6].
---
abstract: 'A new dynamical model for the propagation of strangelets through the terrestrial atmosphere is proposed.'
---

[**STRANGELETS IN TERRESTRIAL ATMOSPHERE**]{}

[**Shibaji Banerjee$^a$[[^1]]{}, Sanjay K. Ghosh$^a$[[^2]]{}, Sibaji Raha$^a$[[^3]]{} and Debapriyo Syam$^b$**]{}

\(a) Department of Physics, Bose Institute,\ 93/1, A. P. C. Road, Calcutta 700 009, INDIA\ (b) Department of Physics, Presidency College,\ 86/1, College Street, Calcutta 700 073, INDIA\

Strangelets are small lumps of strange quark matter (SQM) which consist of roughly equal numbers of up, down and strange quarks. Since Witten's suggestion [@wit] in 1984 that these strangelets, and not ordinary nuclear matter, represent the true ground state of QCD, this area of research has expanded considerably. The existence of stable or metastable lumps of SQM would have numerous implications for physics and astrophysics. Most importantly, they can account for the cosmological dark matter problem to a large extent, if not entirely [@ars]. Moreover, the mere existence of strangelets may be one of the clearest pieces of evidence for the quark-gluon plasma (QGP) phase transition that is believed to have occurred in the early universe or inside neutron stars. To establish the existence of strangelets, it is, of course, necessary to detect them. The obvious place to search for them would be in the cosmic rays. To this end, it is necessary to understand the propagation of strangelets through the earth’s atmosphere, as these would have to traverse the whole atmospheric depth before arriving at the detectors. In this letter, we attempt to provide a realistic dynamical model which describes the propagation of strangelets. The fact that SQM is absolutely stable does not contradict the ordinary experience that matter is, for the most part, made of ordinary nuclear matter.
This is because, in order for ordinary matter to change to SQM, a large number of $u$ and $d$ quarks need to change to $s$ quarks, which would require a very high order weak interaction and can therefore be considered highly improbable [@wit]. On the other hand, the existence of ordinary nuclear matter shows that quark matter consisting of only $u$ and $d$ quarks is unstable [@wit]. Introduction of a third flavour reduces the energy relative to a two-flavour system since another Fermi well is now present, and this makes the system stable. In the earlier works [@fgm], calculations along the lines of the Fermi gas model seemed to suggest that although strangelets with high atomic numbers would be absolutely stable, the same could not be said for strangelets with low A; the surface effect, which must be taken into account for small strangelets, would make the energy per baryon increase with decreasing baryon number, rendering the SQM unstable. In later calculations [@shm1; @shm2], the Fermi gas model was replaced by the spherical MIT bag model and the stability of strangelets with low A was examined in the light of this model. Calculations by various authors [@an1; @an2] show that for low $s$-quark masses, shell-like structures occur for A = 6, 18, 24, 42, 54, 60, 84, 102 etc. These values may change somewhat with the change in the strangeness fraction [@an1; @an2]. Strangelets are expected to possess a small positive electric charge. They would be neutral if the ground-state composition consisted of equal numbers of quarks of the three flavours, which is the most favourable state. Actually, however, the $s$ quark is heavier than the other two flavours and so their number is slightly less than that of the $u$ or $d$ quarks. Hence the fortuitous cancellation $\frac{2}{3}n_{u}-\frac{1}{3}n_{d}-\frac{1}{3}n_{s}$ does not occur exactly, as a result of which we are left with a small residual positive charge.
It can also be argued from mere experience that they cannot be negatively charged because, although overall charge neutrality could be maintained by covering them up with a cloud of positrons, they would eat up every piece of ordinary matter in their path, since it is energetically favorable for nuclear matter to convert to SQM. (For a recent survey of the physics of SQM, see [@mad].) On the other hand, a small positive charge would be helpful, as in that case ordinary nuclei will be electrostatically repelled when the strangelets are moving slowly; when they move fast enough (highly energetic), nothing can prevent them from absorbing neutrons and becoming more and more tightly bound. It is generally assumed that the strangelets which come to the upper layer of the atmosphere have baryon number $A\sim 1000$ or more. This assumption is usually made on the basis of the following experimental and theoretical considerations. Some of the reports which suggest direct candidates for SQM are tabulated below.

  Event                                  Mass              Charge
  -------------------------------------- ----------------- --------
  Counter experiment [@tab1]             $A\sim 350-450$   14
  Exotic Track [@tab2]                   $A\sim 460$       20
  Price's Event [@tab3]                  $A > 1000$        46
  Balloon Experiments [@tab41; @tab42]   $A\sim 370$       14
  -------------------------------------- ----------------- --------

The theoretical expectation is usually taken from the generally accepted strangelet mass number quoted above. It is often remarked that the observation of these candidates at such large atmospheric depths ($\sim 500$ $g/cm^2$) requires unusual penetrability of these lumps, which means that their cross sections should be very small and hence the geometric size much smaller than typical nuclear size. This, however, is not borne out by the models of SQM, where the density of SQM is seen to be not much larger than that of ordinary nuclear matter [@wit].
Wilk *et al.* [@wlk1; @wol; @wlk2; @wlk3] proposed a mechanism by which the strangelets are able to cover great atmospheric depths. They assumed that the mass, and hence the cross section, of the strangelets incident on the upper layers of the atmosphere with initial masses of the order of $10^3$ a.m.u. decreases rapidly due to their collisions with air molecules, in which a mass equal to that of the nucleus of an atmospheric atom is ripped off from the strangelet in every encounter. They also proposed that there should exist a critical mass $m_{crit}$ such that when the mass of the strangelet evolving out of an initially large strangelet drops below it, the strangelet simply evaporates into neutrons; this is what would fix the lower limit of the altitude up to which a strangelet would be able to penetrate. Let us recapitulate the basic conclusions that can be derived from the earlier works. Firstly, strangelets observed at mountain altitudes typically have mass around 300 to 450 and charge between 10 and 20. However, the experimental results obtained to date are inconclusive and hence do not impose a strict bound on the mass and charge of strangelets that can be observed in future experiments. Secondly, although the correlation between penetrability and geometric cross sections is usually valid for ordinary nuclei, the same cannot be easily extrapolated to the case of strangelets, since these are very tightly bound massive objects and are not expected to break up as a result of nuclear collisions. Indeed, in a typical interaction between a strangelet and the nucleus of an atmospheric atom, it is more probable for the strangelet to absorb neutrons, so that the colliding nucleus, and not the strangelet, is likely to break up most of the time. Hence the scheme proposed in [@wlk1], namely that the mass of a strangelet decreases in every encounter, seems to be unrealistic. We, in this letter, propose an alternative scheme based on the following premises: 1.
The collision of a lump of SQM with ordinary matter results in the *absorption* of the neutrons from the colliding nucleus, as a result of which the mass of the strangelet increases in every collision and it becomes more tightly bound. 2. The initial masses of the strangelets are assumed to be small in order to obtain final baryon numbers which are nearly equal to the observed ones at mountain altitudes. The discussion above indicates that it is quite possible to have stable lumps of SQM with low mass numbers. This would also facilitate a somewhat larger flux in the cosmic rays. 3. The speed, and hence the kinetic energy of these particles, must be such that they would arrive at a distance of 25 km above the sea level, surmounting the geomagnetic effects. We start with such an altitude since the atmospheric density above 25 km is low enough to be neglected. The charge of the strangelet is also fixed by this assumption, corresponding to a certain strangeness fraction. The simple assumptions proposed above give a picture more or less in accord with the observation of the propagation of the strangelets in the atmosphere, which can give useful indications of the type of things to be expected in an actual experiment. The description of the model is given next. We consider a situation in which a strangelet with a low baryon number enters the upper layers ($\sim 25$ km above sea level) of the atmosphere. To arrive at this point, a charged particle must possess a speed determined by the formula (see, *e.g.*, [@menzel]) $$\frac {pc}{Ze} \geq \frac{M}{r_{o}^{2}} \frac{\cos^{4}\vartheta}{\left ( \sqrt{1+\cos^{3}\vartheta}+1\right )^2}$$ where $M$ is the magnetic dipole moment of the Earth, $r_o$ the radius of the Earth and $\vartheta$ is the (geomagnetic) latitude of the point of observation ($\sim 30^o$, which might represent a location in north-eastern India). $p$ and $Ze$ represent the momentum and charge, respectively, of the particle.
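The cutoff formula above is straightforward to evaluate. In the sketch below, the factor $\mu_0 c/4\pi$, inserted so that $M/r_o^2$ comes out as a rigidity in volts, and the numerical values of the dipole moment and the Earth's radius are our assumed normalizations for illustration:

```python
import math

# Sketch of the geomagnetic cutoff: minimum rigidity pc/(Ze), in volts,
# needed to reach the top of the atmosphere at geomagnetic latitude lat_deg.
# The mu0*c/(4*pi) normalization and the numerical inputs are assumptions
# of this sketch, not values prescribed by the text.

MU0_C_OVER_4PI = 1e-7 * 2.99792458e8  # (T m/A) * (m/s)
M_DIPOLE = 8.1e22                     # Earth's dipole moment, J/T
R_EARTH = 6.371e6                     # m

def cutoff_rigidity(lat_deg):
    """Cutoff rigidity pc/(Ze) in volts at geomagnetic latitude lat_deg."""
    c = math.cos(math.radians(lat_deg))
    geom = c ** 4 / (math.sqrt(1.0 + c ** 3) + 1.0) ** 2
    return MU0_C_OVER_4PI * M_DIPOLE / R_EARTH ** 2 * geom

print(cutoff_rigidity(0.0) / 1e9)   # equator: ~10 GV
print(cutoff_rigidity(30.0) / 1e9)  # ~6.5 GV at the latitude quoted above
```

For a charge-2 strangelet at $\vartheta\sim 30^o$ this corresponds to a momentum of order $10$ GeV/$c$, comparable to the initial conditions adopted in the text.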
The magnetic field of the Earth is taken to be equivalent to that of a magnetic dipole of moment $M=8.1\times 10^{22} J/T$, located near the centre of the earth, the dipole axis pointing south. We have fixed the mass, initial speed and charge to be 64 amu, $6.6\times10^{7}$ m/s and 2 (in units of the electron charge), respectively, at the initial altitude of 25 km, for the purpose of the present work. In the course of its journey, the strangelet comes in contact with air molecules, mainly $N_2$. During such collisions, the strangelet absorbs neutrons from some of these molecules, as a result of which it becomes more massive. The effect of such encounters is summarized in the formula $$\frac{dm_S}{dh}= \frac{f\times m_{N}}{\lambda}$$ where $m_S$ is the mass of the strangelet, $m_{N}$ the total mass of the neutrons in the atmospheric atom, $\lambda$ the mean free path of the strangelet in the atmosphere and $h$ the path length traversed. (It should be emphasized here that the strangelets would preferentially absorb neutrons, as protons would be Coulomb repelled. Nonetheless, some protons could still be absorbed initially, when the relative velocity between the strangelet and the air molecule is large, leading to an increase in both the mass and the charge of the strangelet. We do not address this issue in the present work for the sake of simplicity, although it can be readily seen that the rate of increase in mass would obviously be faster than that in charge.) In the above equation, $\lambda$ depends both on $h$, which determines the density of air molecules, and on the instantaneous mass of the strangelet, which sets the interaction cross section. The mean free path decreases as lower altitudes are reached, since the atmosphere becomes more dense and the collision frequency increases. Finally, the factor $f$ determines the fraction of neutrons that are actually absorbed out of the total number of neutrons in the colliding nucleus.
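Before turning to $f$, the size of $\lambda(h)$ entering eq. (2) can be illustrated with a simple model. The exponential atmosphere (scale height $\sim 8$ km), the geometric cross section with nuclear radii $R=r_0 A^{1/3}$, $r_0=1.2$ fm, and the treatment of each $N_2$ molecule as a single scatterer are our assumptions for illustration, not the paper's exact inputs:

```python
import math

# Order-of-magnitude sketch of the mean free path lambda = m_air/(rho(h)*sigma)
# entering eq. (2), for a strangelet colliding with N2 molecules.
# Exponential atmosphere and geometric cross sections are assumed.

AMU = 1.66054e-27   # kg
M_AIR = 28.0 * AMU  # mass of one N2 molecule, kg
R0 = 1.2e-15        # nuclear radius parameter, m

def mean_free_path(h_m, A_strangelet=64.0, A_air=14.0):
    """Mean free path (m) at altitude h_m for a strangelet of baryon number A."""
    rho = 1.225 * math.exp(-h_m / 8.0e3)  # air density, kg/m^3
    sigma = math.pi * (R0 * A_strangelet ** (1 / 3) + R0 * A_air ** (1 / 3)) ** 2
    return M_AIR / (rho * sigma)

print(mean_free_path(25.0e3))  # a few km near the top of the atmosphere
print(mean_free_path(3.6e3))   # a few hundred metres at mountain altitude
```

The rapid shortening of $\lambda$ at low altitude is what drives the mass growth discussed above.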
The expression for this factor has been determined by geometric considerations [@gosset] and is given by $$f=\frac{3}{4}{\left(1-\nu\right)}^{1/2}{\left(\frac{1-\mu}{\nu}\right)}^{2} -\frac{1}{3}{\left[3{\left(1-\nu\right)}^{1/2}-1\right]} {\left(\frac{1-\mu}{\nu}\right)}^{3}$$ In eqn(3), $\mu=\frac{b}{R_1+R_2}$ and $\nu=\frac{R_1}{R_1+R_2}$, where $b$ is the impact parameter. $R_1$ and $R_2$ are the radii of the strangelet and the nucleus of the atmospheric atom, respectively. $f$ is initially small but grows larger and reaches the limiting value 1 as the strangelet grows more massive. The above considerations lead us to a set of differential equations of the form $$\frac{d\vec{v}}{dt}=-\vec{g}+\frac{q}{m_S}\left ( \vec{v} \times \vec{B} \right ) - \frac{\vec{v}}{m_S}\frac{dm_S}{dt}$$ In eqn(4), $-\vec{g}$ represents the acceleration due to gravity, $\vec{B}$ is the terrestrial magnetic field, $q=Ze$ and $\vec{v}$ represents the velocity of the strangelet. These equations were solved by the $4^{th}$-order Runge–Kutta method for the set of initial conditions described above. The results are shown in Figs. 1–5. Figure 1 shows the variation of altitude with time, the zero of time being at 25 km. The time required to reach a place about 3.6 km above sea level (the height of a typical north-east Indian peak like ‘Sandakphu’, where an experiment to detect strangelets in cosmic rays using a large detector array is being set up [@saha]) is indicated in the figure. The next figure (Fig. 2) shows the variation of the mass of the strangelet with time, and figure 3 shows the variation of the strangelet mass with altitude. It can be seen from the figures that the expected mass at the aforementioned altitude comes out to be about 340 amu or so. Figure 4 shows the variation of the mean free time with altitude.
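A minimal one-dimensional version of this integration, i.e. vertical fall with eq. (2) for the mass growth and the $-(\vec{v}/m_S)\,dm_S/dt$ drag of eqn(4), but without the magnetic term and with a constant absorption fraction $f$, can be sketched as follows; these simplifications, together with the exponential atmosphere and geometric cross sections, are our assumptions for illustration:

```python
import math

# 1-D sketch of eqn(4): vertical fall with gravity and the momentum-
# conservation drag -(v/m) dm/dt from neutron accretion (eq. (2)).
# The magnetic term is dropped; constant f, an exponential atmosphere
# and geometric cross sections are assumptions of this sketch.

AMU = 1.66054e-27                  # kg
G = 9.81                           # m/s^2
RHO_SEA, H_SCALE = 1.225, 8.0e3    # kg/m^3, m
M_AIR = 28.0 * AMU                 # N2 molecule mass, kg
R0 = 1.2e-15                       # m

def derivs(state, f=0.5):
    h, v, m = state                # altitude (m), downward speed (m/s), mass (kg)
    A = m / AMU
    sigma = math.pi * (R0 * A ** (1 / 3) + R0 * 14.0 ** (1 / 3)) ** 2
    rho = RHO_SEA * math.exp(-h / H_SCALE)
    dmdt = f * (14.0 * AMU) * rho * sigma / M_AIR * v   # eq. (2) times dh/dt
    dvdt = G - (v / m) * dmdt      # gravity is negligible at these speeds
    return (-v, dvdt, dmdt)

def rk4_step(state, dt):
    k1 = derivs(state)
    k2 = derivs([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = derivs([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = derivs([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state = [25.0e3, 6.6e7, 64.0 * AMU]   # initial conditions from the text
while state[0] > 3.6e3:
    state = rk4_step(state, 1e-5)
print(state[2] / AMU)        # mass has grown to a few hundred amu
print(state[1] / 6.6e7)      # speed well below its initial value
```

Since $m_S v$ is nearly conserved once gravity is negligible, the slowdown follows directly from the mass growth.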
Finally, figure 5 shows the variation of $\beta=v/c$ with time, showing the expected decrease of the speed with time, which also justifies the adequacy of the nonrelativistic treatment that we have applied. In this letter, we have proposed a dynamical model of the propagation of strangelets through the atmosphere of the Earth. Although the model is based on simple assumptions, it is realistic enough to include the difference in the interaction process which is expected when SQM, and not ordinary nuclei, collide with the atmospheric nuclei. The effects of Earth’s gravitational and magnetic fields are also included in the equations of motion so that meaningful information can be extracted directly from the resulting trajectory. The main conclusion of the model is that the exotic cosmic ray events with very small $Z/A$ ratios at mountain altitudes could result from SQM droplets (strangelets) which need not be too large initially. Thus the flux of strangelets in the cosmic rays may indeed be appreciable enough to make their detection by a large area detector at mountain altitudes a real possibility. The works of SB and SKG were supported in part by the Council of Scientific & Industrial Research, Govt. of India, New Delhi. [99]{} E. Witten, *Phys. Rev.* **D30**, 272 (1984) J. Alam, S. Raha and B. Sinha, *Ap. J.* (in press) E. Farhi and R. L. Jaffe, *Phys. Rev.* **D30**, 2379 (1984) E. Farhi and R. L. Jaffe, *Phys. Rev.* **D30**, 1307 (1986) E. P. Gilson and R. L. Jaffe, *Phys. Rev. Lett* **71**, 332 (1993) M. G. Mustafa and A. Ansari, *Phys. Rev.* **D53**, 5136 (1996) M. G. Mustafa and A. Ansari, *Phys.Rev.* **C55**, 2005 (1995) Jes Madsen,*astro-ph* **9809032**; to appear in **Hadrons in Dense Matter and Hadrosynthesis**, *Lecture Notes in Physics*, Springer Verlag, Heidelberg. M. Kasuya *et al*., *Phys. Rev.* **D47**, 2153 (1993) M. Ichimura *et al*., *Nuovo Cim.* **A106**, 843 (1993) T. Saito, *Proc.$24^{th}$ ICRC Rome* **1**, 898 (1995) O. 
Miyamura, *Proc.$24^{th}$ ICRC Rome* **1**, 890 (1995) J. N. Capdeville *et al*., *Proc.$24^{th}$ ICRC Rome* **1**, 910 (1995) G. Wilk and Z. Wlodarczyk, *J. Phys.* **G22**, L105 (1996) E. Gadysz-Dziadus and Z.Wlodarczyk, *J. Phys.* **G23**, 2057 (1996) G. Wilk and Z. Wlodarczyk, *Nucl. Phys. (Proc. Suppl.)* **52B**, 215 (1997) G. Wilk and Z. Wlodarczyk, *hep-ph* **9606401** **Fundamental Formulas of Physics, Vol. 2**, *Donald H. Menzel (Ed)*, p. 560, Dover Publications, New York (1960) J. Gosset *et al*., *Phys. Rev.* **C16**, 629 (1977) **Science at High Altitudes**, *S. Raha, P. K. Ray and B. Sinha (Eds.)*, Allied Publishers, New Delhi (1998) [^1]: email: phys@boseinst.ernet.in [^2]: email: phys@boseinst.ernet.in [^3]: email: sibaji@boseinst.ernet.in
---
abstract: 'The relativistic six-quark amplitudes of the nonstrange baryonia with the open charm are calculated. The poles of these amplitudes determine the masses of baryonia. 9 masses of baryonia are predicted.'
author:
- 'S.M. Gerasyuta'
- 'E.E. Matskevich'
title: '**Nonstrange baryonia with the open charm**'
---

Hadron spectroscopy has always played an important role in revealing the mechanisms underlying the dynamics of strong interactions. A heavy hadron containing a single heavy quark is particularly interesting: the light degrees of freedom (quarks, antiquarks and gluons) circle around the nearby static heavy quark, so such a system behaves as the QCD analog of the familiar hydrogen atom bound by the electromagnetic interaction. In Refs. [@1; @2] a relativistic generalization of the three-body Faddeev equations was obtained in the form of dispersion relations in the pair energy of two interacting quarks. The mass spectrum of $S$-wave baryons including $u$, $d$, $s$ quarks was calculated by a method based on isolating the leading singularities in the amplitude. We searched for the approximate solution of the integral three-quark equations by taking into account two-particle and triangle singularities, all the weaker ones being neglected. In this approximation, with all the smooth functions of the subenergy variables (as compared with the singular part of the amplitude) evaluated at the middle point of the physical region of the Dalitz plot, the problem reduces to solving a system of simple algebraic equations. In Ref. [@3] the relativistic six-quark equations are found in the framework of the coupled-channel formalism. The dynamical mixing between the subamplitudes of the hexaquark is considered. The six-quark amplitudes of dibaryons are calculated. The poles of these amplitudes determine the masses of dibaryons.
We calculated the contribution of six-quark subamplitudes to the hexaquark amplitudes. In the present paper the six-quark equations for the nonstrange baryonia with the open charm are found. The nonstrange baryonia $B\bar B_c$ are constructed without the mixing of the quarks and antiquarks. The six-quark amplitudes of baryonia are constructed. The relativistic six-quark equations are obtained in the form of dispersion relations over the two-body subenergy. Approximate solutions of these equations are obtained using the method based on the extraction of the leading singularities of the amplitude. The paper is devoted to the calculated baryonium mass spectrum (Table \[tab1\]). In conclusion, the status of the considered model is discussed. The model in question has only the two parameters of the previous paper [@4]: the gluon coupling constant $g_0=0.314$ and the cutoff parameter $\Lambda_q=11$. We used the cutoff $\Lambda_{qc}=5.18$, which is determined by $M=4100\, MeV$ (the threshold is equal to $4130\, MeV$). The quark masses of the model are $m_q=495\, MeV$ and $m_c=1655\, MeV$. The estimated theoretical error on the $S$-wave hexaquark masses is $1\, MeV$; this estimate follows from the choice of the model parameters. We consider 9 baryonia with the content $qqQ\bar q\bar q\bar q$ and the spin-parities $J^P=0^-$, $1^-$, $2^-$. The isospins are equal to $\frac{1}{2}$, $\frac{3}{2}$, $\frac{5}{2}$ (Table \[tab1\]). We have predicted the masses of baryonia containing the $c$ quark using the coupled-channel formalism. We believe that the prediction for the $S$-wave charmed baryonia is based on relativistic kinematics and dynamics, which allow us to take into account the relativistic corrections.
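Since the predicted masses lie below the corresponding $\Sigma_c\bar\Delta$ and $\Sigma^*_c\bar\Delta$ thresholds, the states are bound. A quick check with approximate PDG baryon masses (our illustrative inputs, not part of the model) shows the size of the binding involved:

```python
# Binding check: the predicted baryonium masses of Table [tab1] lie below
# the Sigma_c Delta-bar and Sigma*_c Delta-bar thresholds.  The baryon
# masses below are approximate PDG values in MeV, used only for illustration.

baryon_mass = {"N": 939.0, "Delta": 1232.0, "Lambda_c": 2286.0,
               "Sigma_c": 2455.0, "Sigma*_c": 2518.0}

def binding_energy(predicted_mass, b1, b2):
    """Threshold mass minus predicted mass; positive means a bound state."""
    return baryon_mass[b1] + baryon_mass[b2] - predicted_mass

# e.g. the I = 1/2; 5/2, J = 0 state at 3305 MeV against Sigma*_c Delta-bar:
print(binding_energy(3305.0, "Sigma*_c", "Delta"))  # 445.0 MeV of binding
```

With these inputs, every mass in Table \[tab1\] sits a few hundred MeV below the $\Sigma_c\bar\Delta$ and $\Sigma^*_c\bar\Delta$ thresholds.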
| Quark content | $I$ | $J$ | Baryonium | Mass (MeV) |
|---|---|---|---|---|
| $uuc\,\, \bar u\bar u\bar u$, $ddc\,\, \bar d\bar d\bar d$, $uuu\,\, \bar u\bar u\bar c$, $ddd\,\, \bar d\bar d\bar c$; $uuc\,\, \bar d\bar d\bar d$, $ddc\,\, \bar u\bar u\bar u$, $ddd\,\, \bar u\bar u\bar c$, $uuu\,\, \bar d\bar d\bar c$ | $\frac{1}{2}$; $\frac{5}{2}$ | 0 | $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$ | 3305 |
| | | 1 | $\Sigma_c \bar \Delta$, $\Delta \bar \Sigma_c$, $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$ | 3293 |
| | | 2 | $\Sigma_c \bar \Delta$, $\Delta \bar \Sigma_c$, $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$ | 3303 |
| $uuc\,\, \bar u\bar u\bar d$, $ddc\,\, \bar u\bar d\bar d$, $uud\,\, \bar u\bar u\bar c$, $udd\,\, \bar d\bar d\bar c$; $uuc\,\, \bar u\bar d\bar d$, $ddc\,\, \bar u\bar u\bar d$, $udd\,\, \bar u\bar u\bar c$, $uud\,\, \bar d\bar d\bar c$ | $\frac{1}{2}$; $\frac{3}{2}$ | 0 | $\Sigma_c \bar N$, $N \bar \Sigma_c$, $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$ | 3317 |
| | | 1 | $\Sigma_c \bar N$, $N \bar \Sigma_c$, $\Sigma_c \bar \Delta$, $\Delta \bar \Sigma_c$, $\Sigma^*_c \bar N$, $N \bar \Sigma^*_c$, $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$ | 3316 |
| | | 2 | $\Sigma_c \bar \Delta$, $\Delta \bar \Sigma_c$, $\Sigma^*_c \bar N$, $N \bar \Sigma^*_c$, $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$ | 3329 |
| $udc\,\, \bar u\bar u\bar u$, $udc\,\, \bar d\bar d\bar d$, $uuu\,\, \bar u\bar d\bar c$, $ddd\,\, \bar u\bar d\bar c$ | $\frac{3}{2}$ | 0 | $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$ | 3338 |
| | | 1, 2 | $\Sigma_c \bar \Delta$, $\Delta \bar \Sigma_c$, $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$, $\Lambda_c \bar \Delta$, $\Delta \bar \Lambda_c$ | 3309 |
| $udc\,\, \bar u\bar u\bar d$, $udc\,\, \bar u\bar d\bar d$, $uud\,\, \bar u\bar d\bar c$, $udd\,\, \bar u\bar d\bar c$ | $\frac{1}{2}$ | 0 | $\Sigma_c \bar N$, $N \bar \Sigma_c$, $\Lambda_c \bar N$, $N \bar \Lambda_c$, $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$ | 3331 |
| | | 1 | $\Sigma_c \bar N$, $N \bar \Sigma_c$, $\Sigma_c \bar \Delta$, $\Delta \bar \Sigma_c$, $\Sigma^*_c \bar N$, $N \bar \Sigma^*_c$, $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$, $\Lambda_c \bar N$, $N \bar \Lambda_c$, $\Lambda_c \bar \Delta$, $\Delta \bar \Lambda_c$ | 3331 |
| | | 2 | $\Sigma_c \bar \Delta$, $\Delta \bar \Sigma_c$, $\Sigma^*_c \bar N$, $N \bar \Sigma^*_c$, $\Sigma^*_c \bar \Delta$, $\Delta \bar \Sigma^*_c$, $\Lambda_c \bar \Delta$, $\Delta \bar \Lambda_c$ | 3361 |

: $qqQ\bar q\bar q\bar q$, $q=u, d$, $Q=c$. Parameters of the model: cutoff $\Lambda=11.0$, $\Lambda_{qc}=5.18$, gluon coupling constant $g=0.314$. Quark masses $m_q=495\, MeV$, $m_c=1655\, MeV$.[]{data-label="tab1"}

Only a few $Qq$ quark pairs enter these states; therefore the baryonium masses cannot increase much as the cutoff $\Lambda_{qc}$ is decreased. A degeneracy of the baryonium masses with the spin-parities $J^P=0^-$, $1^-$ was obtained. We also could not obtain bound states of baryonia with $J^P=3^-$. The baryonium state $\Sigma_c \bar \Delta$ $(uuc \, \bar d \bar d \bar d)$ for the spin-parities $J^P=0^-$, $1^-$, $2^-$ is calculated with nine subamplitudes: seven $\alpha_1$ (similar to $\alpha_1^{1^{uu}}$) and two $\alpha_2^{1^{uu}1^{\bar d \bar d}}$, $\alpha_2^{0^{uc}1^{\bar d \bar d}}$. The baryonium $\Sigma_c \bar \Delta$ $(uuc \, \bar u \bar d \bar d)$ consists of 16 subamplitudes for the spin-parities $J^P=0^-$, $1^-$: 12 $\alpha_1$ and 4 $\alpha_2$: $\alpha_2^{1^{uu}1^{\bar d \bar d}}$, $\alpha_2^{1^{uu}0^{\bar u \bar d}}$, $\alpha_2^{0^{uc}1^{\bar d \bar d}}$, $\alpha_2^{0^{uc}0^{\bar u \bar d}}$. For the case of the spin-parity $J^P=2^-$ the subamplitude $\alpha_2^{0^{uc}0^{\bar u \bar d}}$ is absent.
The states with spin-parities $J^P=0^-$, $1^-$, $2^-$ $(udc \, \bar u \bar u \bar u)$ are constructed with 13 subamplitudes: 10 $\alpha_1$ and 3 $\alpha_2$: $\alpha_2^{0^{ud}1^{\bar u \bar u}}$, $\alpha_2^{0^{uc}1^{\bar u \bar u}}$, $\alpha_2^{0^{dc}1^{\bar u \bar u}}$. The baryonium $udc \, \bar u \bar u \bar d$ for the spin-parities $J^P=0^-$, $1^-$ takes into account 23 subamplitudes: 17 $\alpha_1$ and 6 $\alpha_2$; $\alpha_2^{0^{ud}0^{\bar u \bar d}}$, $\alpha_2^{0^{uc}0^{\bar u \bar d}}$, $\alpha_2^{0^{dc}0^{\bar u \bar d}}$, $\alpha_2^{0^{ud}1^{\bar u \bar u}}$, $\alpha_2^{0^{uc}1^{\bar u \bar u}}$, $\alpha_2^{0^{dc}1^{\bar u \bar u}}$. For the case $J^P=2^-$ the subamplitudes $\alpha_2^{0^{ud}0^{\bar u \bar d}}$, $\alpha_2^{0^{uc}0^{\bar u \bar d}}$, $\alpha_2^{0^{dc}0^{\bar u \bar d}}$ are absent. The system of equations of the baryonium $\Sigma_c \bar \Delta$ $(uuc \, \bar d \bar d \bar d)$ for the spin-parity $J^P=1^-$ (as the example) was constructed: $$\begin{aligned} %1 \label{1} \alpha_1^{1^{uu}}&=&\lambda+2\, \alpha_1^{0^{uc}} I_1(1^{uu}0^{uc}) +6\, \alpha_1^{1^{u\bar d}} I_1(1^{uu}1^{u\bar d}) +6\, \alpha_1^{0^{u\bar d}} I_1(1^{uu}0^{u\bar d})\, , \\ &&\nonumber\\ %2 \label{2} \alpha_1^{0^{uc}}&=&\lambda+\alpha_1^{1^{uu}} I_1(0^{uc}1^{uu}) +\alpha_1^{0^{uc}} I_1(0^{uc}0^{uc}) +3\, \alpha_1^{1^{u\bar d}} I_1(0^{uc}1^{u\bar d}) +3\, \alpha_1^{0^{u\bar d}} I_1(0^{uc}0^{u\bar d}) \nonumber\\ &&\nonumber\\ &+&3\, \alpha_1^{1^{c\bar d}} I_1(0^{uc}1^{c\bar d}) +3\, \alpha_1^{0^{c\bar d}} I_1(0^{uc}0^{c\bar d})\, , \\ &&\nonumber\\ %3 \label{3} \alpha_1^{1^{\bar d\bar d}}&=&\lambda+2\, \alpha_1^{1^{\bar d\bar d}} I_1(1^{\bar d\bar d}1^{\bar d\bar d}) +4\, \alpha_1^{1^{u\bar d}} I_1(1^{\bar d\bar d}1^{u\bar d}) +4\, \alpha_1^{0^{u\bar d}} I_1(1^{\bar d\bar d}0^{u\bar d}) +2\, \alpha_1^{1^{c\bar d}} I_1(1^{\bar d\bar d}1^{c\bar d}) \nonumber\\ &&\nonumber\\ &+&2\, \alpha_1^{0^{c\bar d}} I_1(1^{\bar d\bar d}0^{c\bar d})\, , \\ &&\nonumber\\ %4 \label{4} 
\alpha_1^{1^{u\bar d}}&=&\lambda+\alpha_1^{1^{uu}} I_1(1^{u\bar d}1^{uu})+\alpha_1^{0^{uc}} I_1(1^{u\bar d}0^{uc}) +2\, \alpha_1^{1^{\bar d\bar d}} I_1(1^{u\bar d}1^{\bar d\bar d}) +3\, \alpha_1^{1^{u\bar d}} I_1(1^{u\bar d}1^{u\bar d}) \nonumber\\ &&\nonumber\\ &+&3\, \alpha_1^{0^{u\bar d}} I_1(1^{u\bar d}0^{u\bar d}) +\alpha_1^{1^{c\bar d}} I_1(1^{u\bar d}1^{c\bar d})+\alpha_1^{0^{c\bar d}} I_1(1^{u\bar d}0^{c\bar d}) +2\, \alpha_2^{1^{uu}1^{\bar d\bar d}} I_2(1^{u\bar d}1^{uu}1^{\bar d\bar d}) \nonumber\\ &&\nonumber\\ &+&2\, \alpha_2^{0^{uc}1^{\bar d\bar d}} I_2(1^{u\bar d}0^{uc}1^{\bar d\bar d})\, , \\ &&\nonumber\\ %5 \label{5} \alpha_1^{0^{u\bar d}}&=&\lambda+\alpha_1^{1^{uu}} I_1(0^{u\bar d}1^{uu})+\alpha_1^{0^{uc}} I_1(0^{u\bar d}0^{uc}) +2\, \alpha_1^{1^{\bar d\bar d}} I_1(0^{u\bar d}1^{\bar d\bar d}) +3\, \alpha_1^{1^{u\bar d}} I_1(0^{u\bar d}1^{u\bar d}) \nonumber\\ &&\nonumber\\ &+&3\, \alpha_1^{0^{u\bar d}} I_1(0^{u\bar d}0^{u\bar d}) +\alpha_1^{1^{c\bar d}} I_1(0^{u\bar d}1^{c\bar d})+\alpha_1^{0^{c\bar d}} I_1(0^{u\bar d}0^{c\bar d}) +2\, \alpha_2^{1^{uu}1^{\bar d\bar d}} I_2(0^{u\bar d}1^{uu}1^{\bar d\bar d}) \nonumber\\ &&\nonumber\\ &+&2\, \alpha_2^{0^{uc}1^{\bar d\bar d}} I_2(0^{u\bar d}0^{uc}1^{\bar d\bar d})\, , \\ &&\nonumber\\ %6 \label{6} \alpha_1^{1^{c\bar d}}&=&\lambda+2\, \alpha_1^{1^{\bar d\bar d}} I_1(1^{c\bar d}1^{\bar d\bar d}) +2\, \alpha_1^{1^{u\bar d}} I_1(1^{c\bar d}1^{u\bar d}) +2\, \alpha_1^{0^{u\bar d}} I_1(1^{c\bar d}0^{u\bar d}) +2\, \alpha_1^{1^{c\bar d}} I_1(1^{c\bar d}1^{c\bar d}) \nonumber\\ &&\nonumber\\ &+&2\, \alpha_1^{0^{c\bar d}} I_1(1^{c\bar d}0^{c\bar d}) +4\, \alpha_2^{0^{uc}1^{\bar d\bar d}} I_2(1^{c\bar d}0^{uc}1^{\bar d\bar d})\, , \\ &&\nonumber\\ %7 \label{7} \alpha_1^{0^{c\bar d}}&=&\lambda+2\, \alpha_1^{1^{\bar d\bar d}} I_1(0^{c\bar d}1^{\bar d\bar d}) +2\, \alpha_1^{1^{u\bar d}} I_1(0^{c\bar d}1^{u\bar d}) +2\, \alpha_1^{0^{u\bar d}} I_1(0^{c\bar d}0^{u\bar d}) +2\, \alpha_1^{1^{c\bar d}} I_1(0^{c\bar 
d}1^{c\bar d}) \nonumber\\ &&\nonumber\\ &+&2\, \alpha_1^{0^{c\bar d}} I_1(0^{c\bar d}0^{c\bar d}) +4\, \alpha_2^{0^{uc}1^{\bar d\bar d}} I_2(0^{c\bar d}0^{uc}1^{\bar d\bar d})\, , \\ &&\nonumber\\ %8 \label{8} \alpha_2^{1^{uu}1^{\bar d\bar d}}&=&\lambda+2\, \alpha_1^{0^{uc}} I_4(1^{uu}1^{\bar d\bar d}0^{uc}) +2\, \alpha_1^{1^{\bar d\bar d}} I_4(1^{uu}1^{\bar d\bar d}1^{\bar d\bar d}) +4\, \alpha_1^{1^{u\bar d}} I_3(1^{uu}1^{\bar d\bar d}1^{u\bar d}) \nonumber\\ &&\nonumber\\ &+&4\, \alpha_1^{0^{u\bar d}} I_3(1^{uu}1^{\bar d\bar d}0^{u\bar d}) +4\, \alpha_2^{0^{uc}1^{\bar d\bar d}} I_6(1^{uu}1^{\bar d\bar d}0^{uc}1^{\bar d\bar d})\, , \\ &&\nonumber\\ %9 \label{9} \alpha_2^{0^{uc}1^{\bar d\bar d}}&=&\lambda+\alpha_1^{1^{uu}} I_4(0^{uc}1^{\bar d\bar d}1^{uu}) +\alpha_1^{0^{uc}} I_4(0^{uc}1^{\bar d\bar d}0^{uc}) +2\, \alpha_1^{1^{\bar d\bar d}} I_4(1^{\bar d\bar d}0^{uc}1^{\bar d\bar d}) \nonumber\\ &&\nonumber\\ &+&2\, \alpha_1^{1^{u\bar d}} I_3(0^{uc}1^{\bar d\bar d}1^{u\bar d}) +2\, \alpha_1^{0^{u\bar d}} I_3(0^{uc}1^{\bar d\bar d}0^{u\bar d}) +2\, \alpha_1^{1^{c\bar d}} I_3(0^{uc}1^{\bar d\bar d}1^{c\bar d}) \nonumber\\ &&\nonumber\\ &+&2\, \alpha_1^{0^{c\bar d}} I_3(0^{uc}1^{\bar d\bar d}0^{c\bar d}) +2\, \alpha_2^{1^{uu}1^{\bar d\bar d}} I_6(0^{uc}1^{\bar d\bar d}1^{uu}1^{\bar d\bar d}) +2\, \alpha_2^{0^{uc}1^{\bar d\bar d}} I_6(0^{uc}1^{\bar d\bar d}0^{uc}1^{\bar d\bar d})\, .\end{aligned}$$ We used the functions $I_1$, $I_2$, $I_3$, $I_4$, $I_6$ similar to the paper [@3]. The poles of the reduced amplitudes $\alpha_l$ correspond to the bound states and determine the masses of the charmed baryonia. In Table \[tab1\] the calculated masses of nonstrange baryonia with the open charm are shown. We predict the mass of lowest charmed baryonium with the isospin $I=\frac{1}{2}$ and the spin-parity $J^P=1^-$ ($M=3293\, MeV$). The known way with which to calculate the low-energy properties of hadronic systems rigorously is Lattice QCD (LQCD) [@5; @6]. 
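The bound-state condition stated above — poles of the reduced amplitudes $\alpha_l$, i.e. zeros of $\det[1-B(s)]$ for a coupled linear system of the form $\alpha = \lambda + B(s)\,\alpha$ — can be illustrated with a toy numerical root scan. The two-channel kernel below is entirely hypothetical (it is not the paper's $I_1,\dots,I_6$ functions); the sketch only shows the pole-search logic:

```python
# Toy illustration (not the paper's kernel): for alpha = lambda + B(s) alpha,
# bound states appear where det[1 - B(s)] = 0 in the two-body subenergy s.
import numpy as np

def kernel(s, g=0.3):
    # hypothetical 2x2 two-channel kernel, monotone in the subenergy s
    return g * np.array([[1.0, 0.5], [0.5, 1.0]]) / np.sqrt(s)

def det_condition(s):
    return np.linalg.det(np.eye(2) - kernel(s))

# scan for a sign change of det[1 - B(s)]; each change brackets a pole
s_grid = np.linspace(0.1, 2.0, 2000)
vals = np.array([det_condition(s) for s in s_grid])
idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
poles = s_grid[idx]   # left edges of the bracketing intervals
```

For this kernel the determinant factorizes over the matrix eigenvalues, so the single pole in the scanned window sits at $s=(0.3\times1.5)^2$; in the actual model the mass of a baryonium is read off from the analogous pole position.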
In LQCD calculations, the quark and gluon fields are defined on a discretized space-time of finite volume; the resulting discretization and finite-volume deviations can be systematically removed by reducing the lattice spacing, increasing the lattice volume, and extrapolating to the continuum and infinite-volume limits using the known dependences determined with effective field theory (EFT) [@7; @8; @9]. We try to consider tasks which are similar to the lattice calculations. The work was carried out with the support of the Russian Ministry of Education (grant 2.1.1.68.26). [99]{} S.M. Gerasyuta, Sov. J. Nucl. Phys. [**55**]{}, 1693 (1992). S.M. Gerasyuta, Z. Phys. C[**60**]{}, 683 (1993). S.M. Gerasyuta and E.E. Matskevich, Phys. Rev. D[**82**]{}, 056002 (2010). S.M. Gerasyuta and E.E. Matskevich, Int. J. Mod. Phys. E[**20**]{}, 1419 (2011). S.R. Beane et al., Phys. Rev. Lett. [**106**]{}, 162001 (2011). T. Inoue et al., Phys. Rev. Lett. [**106**]{}, 162002 (2011). H. Georgi, Phys. Lett. B[**240**]{}, 447 (1990). A.V. Manohar, arXiv: hep-ph/9606222. J. Polchinski, arXiv: hep-th/9210046.
--- abstract: 'We study the electron dynamics at a monocrystalline Pd(111) surface with stepped vicinal nanostructures modeled in a simple Kronig-Penney scheme. The unoccupied bands of the surface are resonantly excited *via* the resonant charge transfer (RCT) interaction of the surface with a hydrogen anion reflected at grazing angles. The interaction dynamics is simulated numerically in a quantum mechanical wave packet propagation approach. Visualization of the wave packet density shows that, when the electron is transferred to the metal, the surface and image subband states are the most likely locations of the electron as it evolves through the superlattice. The survival probability of the interacting ion exhibits strong modulations as a function of the vicinal-terrace size and shows peaks at those energies that access the image state subband dispersions. A simple square well model producing standing waves between the steps on the surface suggests the application of such ion-scattering at shallow angles to map electronic substructures in vicinal surfaces. The work also serves as the first proof-of-principle in the utility of our computational method to address, via RCT, surfaces with nanometric patterns.' author: - John Shaw - David Monismith - Yixao Zhang - Danielle Doerr - 'Himadri S. Chakraborty' title: 'Ion survival in grazing collisions of H$^-$ with vicinal nanosurfaces probes subband electronic structures' --- Introduction ============ A vicinal surface consists of a thin-film substrate for which the normal to the surface deviates slightly from a major crystallographic axis. Vicinal surfaces of regular arrays of linear steps, prepared by cutting and polishing a single crystal followed by ultra-high vacuum methods like ion sputtering, are the simplest models of lateral nanostructures on the surface. These high Miller index surfaces are thought to mimic more closely rough regions of industrial surfaces. 
Such surfaces can be critical for their catalytic properties, without losing a well-defined lattice periodicity [@pratt]. The electronic motions in these surfaces can be particularly fascinating because of the possibility of electron scattering at step edges to induce one- or two-dimensional confinement. Many metallic vicinal surfaces are of eminent interest due to the presence of a Shockley surface state and image states, respectively, on and above the corresponding flat surface. These states arise from a broken translational symmetry along the surface normal which confines the state in this direction by the crystal band gap. It is therefore expected that vicinal nano-stepping will modify the electronic properties of the surface and image state *via* the superlattice effects. Significant alterations of the electronic properties of surface electrons have been detected for Cu and Au vicinals by angle-resolved photoemissions in momentum space with synchrotron radiation [@mugarza; @ortega] and scanning tunneling microscopy in the real space [@didiot; @suzuki; @bolz]. Additionally, ultraviolet photoelectron spectroscopy has indicated unique two-dimensional Shockley surface states on (332) and (221) vicinals of Cu [@baumberger; @greber]. Furthermore, quantized states are known to form in front of surfaces due to the polarizing image-interaction of an external electron. The investigation of these image states is a powerful tool for probing a variety of physical and chemical phenomena on the nanometer scale. In particular, for metallic vicinals both the confinement and superlattice effects can produce significant image-band splittings and anticrossings from lateral back scatterings at the step edges as predicted theoretically within a static impenetrable surface model [@lei]. Therefore, to gain insights in the electronic motions and excitations in such materials, it becomes essential to devise and implement an appropriate theoretical methodology. 
This is necessary to describe and simulate the details of processes in the band structure altered by modifications of the surface. One theoretically amenable way to do this is to simulate the motion of active electrons in the presence of an impinging negative ion. These results can be tested experimentally and can be utilized to guide future theoretical modeling. The charge transfer interaction dynamics of an atomic or molecular ion with a surface is highly sensitive to the surface electronic band structure. From a fundamental science perspective, an understanding of the dynamics is useful for describing phenomena such as scattering, sputtering, adsorption, and molecular dissociation [@rabalais03]. Technologically, this process is a crucial intermediate step in (i) the analysis, characterization, and manipulation of surfaces [@stout00], (ii) micro-fabrication based on reactive ion etching and ion lithography [@campbell01], and (iii) semiconductor miniaturization and the production of self-assembled nanodevices [@korkin07]. In the recent past, electron transfer between various atomic species and surfaces containing nanosystems has become a topic of particular interest. Instances include probing the effects of a nanosystem’s size and shape to determine its electronic structure [@canario03]. The energy-conserving transfer of a single electron, the resonant charge transfer (RCT), occurs when the shift of the ion affinity level enables the transfer to (from) an unoccupied (occupied) resonant state of the substrate. The RCT process in ion-scattering has been the focus of a number of experimental studies on mono- and polycrystalline metal surfaces [@bahrim; @yang; @hecht; @guillemot; @sanchez]. A wave packet propagation method was used to access RCT dynamics between excited states of Na nanoislands on the Cu(111) substrate [@hakala07]. Also, recent theoretical research has studied the RCT interactions of negative ions with nanoisland films [@gainullin15].
In previous years, we undertook a full quantum mechanical wave packet propagation approach to perform detailed theoretical studies of RCT ion-scattering and the associated ion-neutralization processes in the light of altering crystallographic properties of atomically flat (i.e. low Miller index) surfaces [@chak70; @chak69; @chak241; @schmitz], in good agreement with measurements [@chak69]. For surfaces with periodic arrays of terraces, the confinement-driven reflections of electrons from the steps can further structure the free electron dispersion into subband dispersions, enriching the RCT process. Therefore, in the present study, the same techniques are applied to vicinally stepped Pd(111) surfaces to calculate the hydrogen negative-ion ($H^-$) survival probability. Even though we use a rather simple model for the vicinal corrugation, the previous utilization of this model in interpreting angle-resolved photoemission measurements on such surfaces to access surface electronic states [@mugarza] provides some confidence in probing, at least qualitatively, the ion-vicinal RCT process in a fully quantum mechanical time propagation treatment. The calculations are done for different $H^-$ collision velocities parallel to the surface at shallow incident angles, as well as for different distances between the steps on the surface. The electron wave packet probability density was calculated at all points in space at each time interval. Visualization of these data showed that, when the electron transfers from the ion to the metal, it most likely transfers to both the surface and image superlattice states when the ion’s approach velocity perpendicular to the surface is slow enough. For a given distance between the steps on the surface, the ion survival probability shows peaks at certain velocities. It is shown that these peaks appear when the kinetic energy of the electron transferred to an image state matches the subband dispersions supported by the periodic vicinal steps.
The results suggest the possibility of accessing superlattice band structures *via* anion-scattering in experiments. Atomic units (a.u.) are used throughout, unless mentioned otherwise.

Essentials of the Method
========================

Surface Model
-------------

![(Color online) (a) The one-dimensional Kronig-Penney potential [@mugarza] and the vicinal surface it models with terrace width ($L$) and step height ($h$). (b) The solid line is the one-dimensional pseudopotential of flat Pd(111) [@chulkov], while the dashed line shows the addition of the potential, as in (a), of the vicinal surface at one of its peaks, scaled by a factor of 2.8 and duly attenuated (see text). This curve is a $z$-section of Fig. 2[]{data-label="fig:fig1"}](Fig1a.eps){width="8.3cm"} ![(Color online) (a) The one-dimensional Kronig-Penney potential [@mugarza] and the vicinal surface it models with terrace width ($L$) and step height ($h$). (b) The solid line is the one-dimensional pseudopotential of flat Pd(111) [@chulkov], while the dashed line shows the addition of the potential, as in (a), of the vicinal surface at one of its peaks, scaled by a factor of 2.8 and duly attenuated (see text). This curve is a $z$-section of Fig. 2[]{data-label="fig:fig1"}](Fig1b.eps){width="8.3cm"} ![image](Fig2.eps){width="17cm"}

Due to the repulsive step-step interactions, vicinal corrugations generally appear regularly spaced [@swamy99]. The size of the terrace is determined by the terrace width (vicinal miscut) $L$ and the height $h$ of the step. We made use of a two-dimensional model of the metal surface which includes the ion approach direction ($z$), normal to the primordial flat Pd(111) surface, and only one direction ($x$) on the flat surface along the vicinal steps. We used the Kronig-Penney (KP) potential, shown in Fig. 1(a), to mimic the periodic potential array formed by the step superlattice in the $x$-coordinate.
The steps in the metal surface are mimicked by the peaks in the KP potential with height $U_0$ and width $w$, and the step separation $d$ (the distance between adjacent steps). This model is used because it successfully described the experimental photoemission spectra for stepped vicinal metal surfaces [@mugarza]. Experiments used vicinal surfaces with a step height of a single atomic-layer cut. When fitting the data to the KP model, the largest value of the product $U_0w$ was found to be 0.054 a.u. in Ref. \[\]. Therefore, we chose $U_0$ = 0.054 a.u. with $w$ = 1 a.u. for the results presented in this article. We note that there is no exact correspondence between $U_0$ and $h$. This is because our model does not represent vicinal steps at the atomistic level, which would need a full 3D structure calculation. Rather, in the spirit of Ref. \[\], the model mimics the effect of vicinal steps using a flat surface with a potential array for which the strength $U_0w$ correlates with the electrostatic strength of a step. We should also note, as discussed in Ref. \[\], that approximating the periodic potential as a Dirac $\delta$-array $\sum _n U_0w \delta(x-nd)$, or choosing other combinations of $U_0$ and $w$ producing the same barrier strength $U_0w$, will not change the model. Indeed, for all our calculations a different combination, namely $U_0$ = 0.027 a.u. and $w$ = 2 a.u., fully reproduced the results. We emphasize that, in the absence of any parametric form of the vicinal surface potential derived from ab initio calculations, we use a scheme that combines this one-dimensional array potential with a known parametric representation based on an ab initio flat-surface potential, as described below.

Wave packet propagation
-----------------------

The details of the propagation methodology are given in Ref. [@chak70].
The time-dependent electronic wave packet $\Phi\left(\vec{r},t;D\right)$ for the combined ion-surface system is a solution of the time-dependent Schrödinger equation $$\label{tdse} i{\hbar}\frac{\partial}{\partial t}\Phi\left(\vec{r},t;D\right) = H\Phi\left(\vec{r},t;D\right),$$ with the general form of the Hamiltonian $$\label{hamil} H = -\frac{1}{2}\frac{d^{2}}{dz^{2}}-\frac{1}{2}\frac{d^{2}}{dx^{2}}+V_{\mbox{\scriptsize vi-surf}} (x,z) + V_{\mbox{\scriptsize ion}} (x,z)$$ where $D(t)$ is the dynamically changing perpendicular distance between the $z=0$ line (see below) of the fixed-in-space metal surface and the ion moving along a trajectory. The potential of the vicinal surface, $V_{\mbox{\scriptsize vi-surf}}$, has two components: (i) a one-dimensional potential model in $z$ obtained from pseudopotential local-density-approximation calculations for simple and noble metal surfaces [@chulkov] that represents the flat precursor substrate. This is the model, with free electron motion in the $x$ direction, that was employed in our previous work with flat surfaces [@chak70; @chak69; @chak241; @schmitz]. A graph of this potential appears in Fig. 1(b). The lattice points of the topmost layer of the substrate are taken at $z=0$, so the peaks in the potential for $z<0$ are the centers of the atomic layers going into the bulk, separated by the interatomic lattice spacing. (ii) Superimposed on (i) is the KP potential model described above that mimics the regular vicinal array of terraces of width $L$ on the surface along $x$. To limit the vicinal effect far from the surface, the KP potential is attenuated exponentially in both the positive (vacuum side) and negative (bulk side) $z$-directions. A curve representing a section of $V_{\mbox{\scriptsize vi-surf}}$ along the $z$-direction through a vicinal peak is also included in Fig. 1(b). Fig. 2, on the other hand, provides an illustration of the full 2D $V_{\mbox{\scriptsize vi-surf}}$ used in our simulation.
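A minimal sketch of how such a 2D vicinal potential can be assembled: a flat-surface potential in $z$ plus a Kronig-Penney barrier array in $x$, attenuated exponentially away from $z=0$. The flat-surface form and the attenuation length below are placeholders (not the Chulkov pseudopotential or the paper's actual parameters); only the construction scheme mirrors the text:

```python
import numpy as np

# Sketch of assembling V_vi-surf(x, z): V_flat(z) + KP array along x,
# attenuated away from z = 0 so the barriers peak unattenuated at z = 0.
# All lengths/energies in a.u.; V_flat and `decay` are placeholders.

U0, w, d = 0.054, 1.0, 5 * 4.25   # KP barrier height/width; step separation d = 5 a_s
decay = 2.0                        # hypothetical attenuation length in z

def V_flat(z):
    # placeholder flat-surface potential: constant inside, image-like tail outside
    return np.where(z < 0.0, -0.5, -1.0 / (4.0 * (z + 0.5)))

def kp_array(x):
    # periodic rectangular barriers of height U0 and width w, period d
    return U0 * ((x % d) < w)

def V_vicinal(x, z):
    return V_flat(z) + kp_array(x) * np.exp(-np.abs(z) / decay)

x = np.linspace(0.0, 4 * d, 400)
z = np.linspace(-10.0, 10.0, 200)
X, Z = np.meshgrid(x, z)
V = V_vicinal(X, Z)   # 2D potential grid, analogous to Fig. 2
```

At $z=0$ the barrier contributes its full $U_0$, while a few attenuation lengths into the bulk or vacuum only a small residue remains — the origin of the small peaks visible inside the surface in Fig. 2.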
The peaks of the KP potential, as seen in both Fig. 1(b) and Fig. 2, are set at their maximum (unattenuated) value at $z=0$. This must be the case if the atomic layer positions of the precursor flat surface are not to be altered by the vicinal potential. Therefore, the top atomic layer used in the flat Pd(111) potential in Fig. 1(b) defines $z=0$. Smaller peaks are barely visible in Fig. 2 on top of the potential for the next atomic layer inside the surface, a consequence of the exponential attenuation. Small peaks can also be seen at the bottom of the dips. The same attenuation is present for $z>0$ but is not as clearly visible in Fig. 2. The $H^-$ ion is described by a single-electron model potential, $V_{\mbox{\scriptsize ion}}$, which includes the interaction of a polarizable hydrogen core with the active electron [@ermoshin]. However, we employ an appropriately re-parametrized version of this potential [@chak70], as was also applied in our other publications [@chak69; @chak241; @schmitz]. This form is commensurate with our two-dimensional propagation scheme and produces the correct ion affinity-level energy of 0.0275 a.u. (0.75 eV). The propagation by one time step $\Delta t$ yields $$\label{propa} \Phi(\vec{r},t+\Delta t;D) = \exp[-iH(D)\Delta t]\Phi(\vec{r},t;D)$$ where the asymptotic initial packet $\Phi_{\mbox{\scriptsize ion}}(\vec{r},t=0,D=\infty)$ is the unperturbed $H^-$ wave function $\Phi_{\mbox{\scriptsize ion}}(\vec{r},D)$. The ion-survival amplitude, or autocorrelation, is then calculated from the overlap $$\label{auto} A(t) = \left\langle \Phi(\vec{r},t)|\Phi_{\mbox{\scriptsize ion}}(\vec{r})\right\rangle.$$ We employ the split-operator Crank-Nicolson propagation method in conjunction with the unitary and unconditionally stable Cayley scheme to evaluate $\Phi(\vec{r},t;D)$ in successive time steps [@chak70; @press]. Obviously, the propagation limits the motion of the active electron to the scattering plane of the ion.
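The Cayley/Crank-Nicolson propagation and the autocorrelation of Eqs. (\[propa\])-(\[auto\]) can be sketched in one dimension. The grid, time step, and toy potential below are illustrative only (the paper's 2D propagation and surface/ion potentials are much larger); the sketch shows the unitary one-step propagator and the survival amplitude:

```python
import numpy as np

# 1D Crank-Nicolson sketch: propagate a Gaussian packet in a toy well and
# record the autocorrelation A(t) = <Phi(t)|Phi(0)>. Parameters are
# illustrative only (a.u., m = 1).

N, L, dt = 400, 40.0, 0.05
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = -0.3 * np.exp(-x**2)               # toy attractive potential

# Hamiltonian on the grid (second-order finite differences)
H = np.zeros((N, N))
np.fill_diagonal(H, 1.0 / dx**2 + V)
idx = np.arange(N - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -0.5 / dx**2

# Cayley form: (1 + i H dt/2) psi_new = (1 - i H dt/2) psi_old  -> unitary
A_mat = np.eye(N) + 0.5j * dt * H
B_mat = np.eye(N) - 0.5j * dt * H
step = np.linalg.solve(A_mat, B_mat)   # one-step propagator

psi0 = np.exp(-(x - 2.0)**2)           # initial packet, offset from the well
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

psi, survival = psi0.copy(), []
for _ in range(200):
    psi = step @ psi
    survival.append(abs(np.sum(np.conj(psi0) * psi) * dx)**2)
```

The Cayley scheme conserves the norm to machine precision, which is why $|A(t)|^2$ can be read directly as a survival probability; the 2D production code additionally shifts the ion potential along the trajectory at each step.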
We assume that when the ion reflects from the surface, the angle of reflection equals the angle of incidence measured from the flat substrate plane. The hydrogen ion impinges on the surface at shallow angles with respect to $x$. Inputs to the calculation are the component of the ion velocity normal ($z$) to the surface, ${\mbox{$v_{\mbox{\scriptsize nor}}$}}$, and the component of the velocity parallel ($x$) to the surface, ${\mbox{$v_{\mbox{\scriptsize par}}$}}$. The computer program aims the ion at a point halfway between two steps as well as directly at a step. The incident ion decelerates along the $z$ direction close to the surface, due to the net repulsive interaction between the ion core and the surface atoms, while ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ stays constant. For a given initial velocity, we simulate a classical trajectory based on Biersack-Ziegler (BZ) interatomic potentials to model the repulsion [@chak70; @biersack]. The slowdown of ${\mbox{$v_{\mbox{\scriptsize nor}}$}}$ to zero at the point of closest approach (the turning point), and the subsequent gradual regaining of the initial speed at the initial distance while reflecting symmetrically at constant ${\mbox{$v_{\mbox{\scriptsize par}}$}}$, makes the resulting trajectory somewhat parabolic. This ionic motion is then incorporated in the propagation by adding the translational phase $({\mbox{$v_{\mbox{\scriptsize nor}}$}}z + {\mbox{$v_{\mbox{\scriptsize par}}$}}x + v^2t/2)$ [@chak70] as well as by shifting the center of the ion potential in [Eq. (\[hamil\])]{} to follow the BZ trajectory corresponding to the evolving $D(t)$. We note that to formulate the trajectory we calculated the BZ potential as if the surface were flat at $z =0$. But this limitation of our BZ trajectory, blind to vicinal shapes, is not expected to qualitatively affect the main results, which should predominantly depend on the RCT process.
We remark that in grazing scattering, H$^-$ neutralization on a palladium surface (which is different from cation neutralization, which has a strong capture rate from the metal's Fermi sea) can avoid direct consequences of the so-called parallel velocity effect (which causes a shift of the Fermi sphere [@borisov96]) over a good range of ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ for the following reason. The Pd Fermi energy measured from the bottom of the valence band is about 0.26 a.u. [@mueller70], which corresponds to a Fermi wave vector magnitude of about $k_f=$ 0.85 a.u. after accounting for a 41% rise in the electron effective mass [@mueller70]. This implies that the observed Fermi energy $(k_f-{\mbox{$v_{\mbox{\scriptsize par}}$}})^2/2$ from the ion’s moving frame [@winter91] may not influence the energy range of the current RCT process \[which is very close to the Pd(111) image state energies, as discussed in the following section\] at least in the range up to about ${\mbox{$v_{\mbox{\scriptsize par}}$}}= 0.5$ a.u., within which the strong structures in the ion survival occur (see Fig. 3). In our simulations, the distance of closest approach to the substrate is kept fixed. This is reasonable as the initial value of ${\mbox{$v_{\mbox{\scriptsize nor}}$}}$ at $z=20$ a.u. is held constant at a small value, 0.03 a.u., which largely eliminates the variation of the dynamics as a function of ${\mbox{$v_{\mbox{\scriptsize nor}}$}}$. We are interested in changes of the survival probability due to the surface vicinal structure as ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ is varied. Long after the ion’s reflection, the final ion-survival probability is obtained as $$\label{surv} P=\lim_{t\rightarrow\infty}\left|A\left(t\right)\right|^{2},$$ which corresponds to the fraction of surviving incoming ions that an experiment can measure [@guillemot].
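The Fermi-sea numbers quoted earlier in this section can be checked directly: the Fermi wave vector follows from the valence Fermi energy and the 41% effective-mass enhancement, and the frame-shifted Fermi energy uses the $(k_f-v_{\rm par})^2/2$ expression as written in the text:

```python
import math

# Numbers quoted in the text (a.u.): Pd valence Fermi energy 0.26 and a 41%
# effective-mass rise give k_f; the ion's moving frame sees a shifted Fermi
# energy (k_f - v_par)^2 / 2 at parallel velocity v_par.
E_F, m_eff = 0.26, 1.41
k_f = math.sqrt(2.0 * m_eff * E_F)      # ~0.85 a.u., as quoted

def shifted_fermi_energy(v_par):
    return (k_f - v_par)**2 / 2.0

shifts = {v: shifted_fermi_energy(v) for v in (0.0, 0.25, 0.5)}
```

The shifted Fermi energy decreases monotonically over the quoted range $v_{\rm par} \lesssim 0.5$ a.u., consistent with the argument that the RCT energy window stays clear of the Fermi edge there.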
To calculate the final ion-survival probability, the computer program calculates the electron wave packet density at all points in space at each time interval. These data were used to produce detailed animations of the changing electron wave packet density with time. Though the initial and final values of $z$ were fixed at 20 a.u., the initial and final values of $x$ varied with ${\mbox{$v_{\mbox{\scriptsize par}}$}}$. Due to the small ${\mbox{$v_{\mbox{\scriptsize nor}}$}}$ value and the many relatively fast values of ${\mbox{$v_{\mbox{\scriptsize par}}$}}$, the ion’s flyby nearly grazed the surface. Consequently, the size of the surface, $|x_{\mbox{\scriptsize final}} - x_{\mbox{\scriptsize initial}}|$, entered into the propagation was large, making the execution time of the computer program very long. As a result, thread-based parallel computing using OpenMP was employed. To accumulate the amount of data we used, parameter sweeps were done with the program on the Stampede supercomputer at the University of Texas at Austin. More than 500 survival probabilities were calculated in this way for the stepped Pd(111) surfaces considered in this paper. It should be noted that all calculations were done in the 2D model described above, and therefore all figures present 2D model results.

Results and Discussions
=======================

Ion survival
------------

Hydrogen ion survival probabilities on vicinally stepped Pd(111), after the ion returns to the initial $z$ = 20 a.u. $(t\rightarrow\infty)$, were calculated for parallel velocities, ${\mbox{$v_{\mbox{\scriptsize par}}$}}$, ranging from 0.2 to 1.0 a.u. in steps of 0.05 a.u. This is equivalent to a range of ion scattering angles from 8.53 to 1.72 degrees with respect to the (111) surface plane. The survival probabilities were calculated for distances between steps ($d$) of 5, 7, 9, 11 and 13 lattice spacings ($a_s$), where $a_s$ = 4.25 a.u. for Pd(111).
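The quoted angular range follows directly from the two velocity components; a small check, assuming (as the reflection condition above states) that the angles are measured from the surface plane:

```python
import math

# Grazing angle from the fixed normal velocity v_nor = 0.03 a.u. and the
# swept parallel velocities: angle = atan(v_nor / v_par), in degrees.
v_nor = 0.03

def grazing_angle_deg(v_par):
    return math.degrees(math.atan2(v_nor, v_par))

angles = [grazing_angle_deg(v) for v in (0.2, 1.0)]  # ~8.53 and ~1.72 degrees
```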
The hydrogen ion survival probabilities are shown in Fig. 3, both for the ion heading toward the center of a terrace (solid line) and directly toward a step (dashed line). The curves were fit to the calculated data points using a cubic spline. The ion survival result in front of flat Pd(111), labeled “F” in the legend, under the same kinematic conditions is also shown for comparison. As seen, the vicinal steps cause significant modulations in the ion survival probability, whereas the flat Pd(111) result is smooth. Note that for $d= 5a_s$ in Fig. 3(a) the step-induced modulations are the same whether the ion approaches the center of a terrace or a step, except for extremely low values of ${\mbox{$v_{\mbox{\scriptsize par}}$}}$. As the distance between steps increases, the lowest value of ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ for which the center-approaching and step-approaching results merge becomes gradually larger. For instance, as seen in Fig. 3(b), for $d = 13a_s$, the widest terrace considered, the curves merge at ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ = 0.63 a.u. We also note that for each choice of $d$ both curves still produce qualitatively similar structures even over the lower ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ values where their magnitudes do not agree. This trend in the results has an important implication. Since the scattering angles are small, the ion in general interacts with a large number of steps during its grazing flyby, populating the vicinal surface superlattice levels. For the shortest $d$ considered, the number of steps interacting with the ion becomes so large that the level population and subsequent electron recapture by the ion become practically insensitive to the ion’s approach site, center or step. As a result, the corresponding ion-survival results are identical.
For gradually larger step separations, however, this condition of approach-site independence is only met at progressively higher ${\mbox{$v_{\mbox{\scriptsize par}}$}}$, corresponding to lower (more grazing) scattering angles, which bring just enough steps into the interaction. Thus, as $d$ increases, a corresponding decrease in the grazing angle is needed for a “perfect” excitation of the vicinal subbands, at which point the surface site the ion is heading toward no longer matters. Meeting this condition may benefit experiments, since one then need not worry about the vicinal surface location at which the ion beam aims. In principle, the ion survival on a flat surface should be smooth as a function of ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ for a fixed ${\mbox{$v_{\mbox{\scriptsize nor}}$}}$, owing to the parabolic free electron dispersion of a flat surface. Moreover, for a flat surface with a projected band gap in the direction normal to the surface, as in Pd(111) [@chulkov], which resists decay in the $z$-direction, the survival should be almost steady as a function of ${\mbox{$v_{\mbox{\scriptsize par}}$}}$. However, our flat Pd(111) result in Fig. 3 shows a slow monotonic decrease, most likely a numerical artifact of the slightly imperfect boundary absorbers used in the simulation to mimic an infinite surface. This overall background artifact is then expected to be present in the various vicinal surface results as well, but it does not alter the main result of modulations in the ion survival arising from the RCT electrons’ access to the subband structure. The reason is that the error introduced by the boundary absorbers is systematic: it affects all results to the same degree, and therefore does not change the positions of the peaks in the survival probability, only their relative amplitudes.
We also found (not shown here) that if the product $U_0w$ was halved, to 0.027 a.u., the survival peaks remained at the same locations but were half as high. This is because the electron transmits more easily into the next well for smaller $U_0w$: with $U_0w$ halved, the transmission probability doubles, lowering the survival peaks accordingly. The fact that the peaks stayed at the same locations indicates that the distance between steps is the only surface-structure property that affects the peak positions. It is true that the results in Fig. 3 show variations of only about $\pm$0.2% around a small average H$^-$ survival probability (ion fraction) of roughly 2%. However, an ion fraction as small as 0.1% has been measured experimentally for flat surfaces within an error range of about $\pm$0.1% [@guillemot] over ion velocities similar to those used in this work. We therefore believe that the structures in our current predictions should be accessible in experiment. A final note concerns the use of a 2D model. Our purpose is to understand qualitatively the main physics at work during the ion-vicinal-surface interaction. A 3D model would change the magnitudes of the survival probabilities but would not change the physics we propose to account for the structures observed in these survival probabilities.
![(Color online) Hydrogen ion survival probability for one atomic-layer high vicinally stepped Pd(111) as a function of ion parallel velocity (${\mbox{$v_{\mbox{\scriptsize par}}$}}$) as $H^-$ approaches the center of a terrace (solid line) and a step (dashed line) for terrace widths ($d$) of 5, 7, and 9 lattice spacings (a) and for terrace widths of 11 and 13 lattice spacings (b).[]{data-label="fig:fig3"}](Fig3a.eps){width="8.3cm"} ![(Color online) Hydrogen ion survival probability for one atomic-layer high vicinally stepped Pd(111) as a function of ion parallel velocity (${\mbox{$v_{\mbox{\scriptsize par}}$}}$) as $H^-$ approaches the center of a terrace (solid line) and a step (dashed line) for terrace widths ($d$) of 5, 7, and 9 lattice spacings (a) and for terrace widths of 11 and 13 lattice spacings (b).[]{data-label="fig:fig3"}](Fig3b.eps){width="8.3cm"} ![image](Fig4a.eps){width="8cm"} ![image](Fig4c.eps){width="8cm"} ![image](Fig4b.eps){width="8cm"} ![image](Fig4d.eps){width="8cm"}

Wave packet density dynamics
----------------------------

The projected band gap of Pd(111) in the $z$ direction spans the vacuum level energy $E_v$ = 0, extending from its lower edge at -0.163 a.u. to its upper edge at 0.0794 a.u. [@schmitz; @chulkov]. There also exists a Shockley surface state localized inside the gap at ${\mbox{$E_{\mbox{\scriptsize ss}}$}}$ = -0.152 a.u. and a Rydberg series of image states converging to the vacuum level, with the energies of the first two image states being ${\mbox{$E_{\mbox{\scriptsize im}}^{(1)}$}}$ = -0.0202 a.u. and ${\mbox{$E_{\mbox{\scriptsize im}}^{(2)}$}}$ = -0.00625 a.u. It is therefore expected that the incoming ion at its closest approach to the surface will resonantly populate both the surface and image state bands [@schmitz]. The ion survival amplitude, [Eq. (\[auto\])]{}, dynamically evolves as the ion approaches the surface. When the ion is close to the surface, there are one or more positions at which the amplitude is found to be zero. This means that at close distances the electron transfers back and forth between the ion and the metal.
Therefore, it is important to probe the RCT dynamics involving the metal states that the ion populates. The vicinal texture is expected to superimpose subband modulations that produce the survival peaks in Fig. 3. Therefore, to understand the surface state and image state RCT dynamics in an uncluttered manner, the wave packet densities on flat Pd(111) are analyzed in Fig. 4(a) and the density for vicinal Pd(111) is addressed in Figs. 4(b), 4(c) and 4(d). Electrons transferring from the ion to the much deeper surface state of Pd(111) acquire large kinetic energy, resulting in a maximum speed $\sqrt{2({\mbox{$E_{\mbox{\scriptsize ion}}$}}-{\mbox{$E_{\mbox{\scriptsize ss}}$}})} \approx 0.5$ a.u. along the $x$-axis on the surface in either direction from the closest approach; we neglect the energy from the very slow ${\mbox{$v_{\mbox{\scriptsize nor}}$}}$ = 0.03 a.u. throughout this article and assume strong resistance to decay into the bulk due to the band gap. Indeed, animations of the wave packet probability density as a function of time confirm the latter assumption. We present some time snapshots from our animation for ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ = 0.2 a.u. and $d = 5a_s$ in Fig. 4, which show the electron wave packet density as a function of $x$ and $z$. The $x$-axis is across the page and the $z$-axis is into the page in Fig. 4(a) for flat Pd(111). In this figure, the ion starts at the right side of the graph and heads into the page, toward a terrace center and to the left, reaching its closest approach to the surface at $x$ = 0. Thus, since ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ points to the left, the electrons resonantly transferred to the surface state move at a speed of $0.5+{\mbox{$v_{\mbox{\scriptsize par}}$}}$ along the negative $x$-axis and at a slower speed of $0.5-{\mbox{$v_{\mbox{\scriptsize par}}$}}$ along the positive $x$-axis. In Fig. 4(a), the narrow peak in the middle is the location of the ion.
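As a quick numerical check, the ${\approx}0.5$ a.u. surface-state electron speed quoted above follows directly from the energies given in the text; the sketch below reconstructs the affinity-level energy from the transition energy ${\mbox{$E_{\mbox{\scriptsize ion}}$}}-{\mbox{$E_{\mbox{\scriptsize im}}^{(1)}$}}$ = 0.0073 a.u. quoted later in this section.

```python
import math

# Energies (a.u.) for Pd(111) quoted in the text
E_ss  = -0.152              # Shockley surface state
E_im1 = -0.0202             # first image state
E_ion = E_im1 + 0.0073      # affinity level, from E_ion - E_im1 = 0.0073 a.u.

# Maximum speed of an electron transferred into the much deeper surface state
v_ss = math.sqrt(2 * (E_ion - E_ss))
print(round(v_ss, 2))       # 0.53, i.e. the ~0.5 a.u. used in the discussion
```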
The faster moving surface state density to the left is mostly absorbed by the numerical absorber at the grid edge, built into the code to simulate an infinite surface, while its slower moving counterpart to the right, labeled 0, is still visible. Consequently, the fast moving surface state density, rapidly separating from the ion as seen in Fig. 4(a), makes it impossible for the depleted ion to recapture electrons from this channel. Therefore, while the propagation of the full wave packet uncovers the evolution of the populated surface band, this band, forbidding recapture, plays virtually no role in determining the structures in the final ion survival, except that it drains the ion significantly. On the other hand, the image state energies, being very close to ${\mbox{$E_{\mbox{\scriptsize ion}}$}}$, leave small enough excess energy that the populated image densities stay close to the moving ion. As seen in Fig. 4(a), of the two density components of image electrons closely flanking the ion, the feature on the right is labeled 1. In fact, it is known that a strongly slowed down ion near the surface will mimic adiabatic-type interactions, causing the affinity level and the surface-state level to repel each other [@chak69; @chak70]. This further increases and decreases, respectively, the ion-surface-state and ion-image-state transition energies, favoring the dynamics described above even more. For a surface with vicinal steps, the RCT electrons from the ion to the surface will therefore evolve through the subband structures of these states. Figure 4(b), for vicinally textured Pd(111) at the same instant as Fig. 4(a), confirms such structured density distributions. These structured distributions possess the peaks and valleys one would expect from an interference pattern produced by probability waves reflecting back and forth between the vicinal steps of the surface. Figures 4(c) and 4(d) show the view along the $z$-axis for vicinal Pd(111). The ion and the peaks are labeled 0 and 1 in these figures as well. In Fig.
4(c), the tall peak in front of the surface is the location of the ion. The peak labeled 0, also in front of the surface, is due to the surface state; the probability of finding the electron in this state is greater than in any other state. Note the decay of the surface state density amplitude into the bulk, a direct consequence of the projected band gap dispersion along the $z$ direction, which ensures its localized nature. In Fig. 4(d), at a later time than Fig. 4(c), the surface state feature has moved out along the $x$-axis, whereas a smaller peak from the image state densities, labeled 1, remains close to the ion. These are the same peaks seen in Figs. 4(a) and 4(b) from a different perspective. We further note that Figs. 4(c) and 4(d) appear relatively smoother, since the density structures due to the vicinal patterns, as in Fig. 4(b), are only visible along the $x$ direction. In any case, these visualizations of the evolving wave packet density suggest that while the RCT electrons transferred to the vicinally stepped surface excite both surface and image state subbands, the image densities, following the ion in close proximity and thereby having a large wave-function overlap, mainly influence the recapture by the outgoing ion and hence its final survival probability [@schmitz; @chak241]. Consequently, even though the bulk of the electrons decays *via* the surface state subband, the modulation in the ion’s survival as a function of its speed, seen in Fig. 3, results from the ion’s interaction with the image subband, as we discuss below. ![(Color online) (a) A schematic of Kronig-Penney superlattice subband dispersions with finite potential barriers representing the vicinal steps. The reciprocal superlattice vector $2\pi/d$ of the subbands and the flat energy levels that the subbands become for the infinite square well are shown. The corresponding flat surface dispersion is also sketched.
(b) Comparison of the ion survival peak positions on the parallel velocity scale for various values of the step separation $d$ with an analytic square well potential model.[]{data-label="fig:fig5"}](Fig5a-alt.eps){width="8.3cm"} ![(Color online) (a) A schematic of Kronig-Penney superlattice subband dispersions with finite potential barriers representing the vicinal steps. The reciprocal superlattice vector $2\pi/d$ of the subbands and the flat energy levels that the subbands become for the infinite square well are shown. The corresponding flat surface dispersion is also sketched. (b) Comparison of the ion survival peak positions on the parallel velocity scale for various values of the step separation $d$ with an analytic square well potential model.[]{data-label="fig:fig5"}](Fig5b.eps){width="8.3cm"}

Superlattice states from lateral confinement
--------------------------------------------

The electrons in flat Pd(111) exhibit free electron dispersion. The parallel velocity of the moving ion along grazing trajectories predominantly follows the dispersion energy of the RCT electrons into the image states in the parabolic form $({\mbox{$v_{\mbox{\scriptsize par}}$}})^2/2$, since the transition energy from the affinity level to, for instance, the first image level is a minuscule ${\mbox{$E_{\mbox{\scriptsize ion}}$}}-{\mbox{$E_{\mbox{\scriptsize im}}^{(1)}$}}$ = 0.0073 a.u., and ${\mbox{$v_{\mbox{\scriptsize nor}}$}}$ is chosen to be very small in this work. This ensures a monotonic recapture rate by the ion and therefore a rather structureless ion survival probability as a function of ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ for flat Pd(111), as seen in Fig. 3. However, the periodic array of potential barriers with finite $U_0w$ and period equal to the step separation $d$, representing the vicinal terraces on Pd(111), will induce both reflections from and transmissions through the steps for electrons propagating along the surface.
This gives rise to subbands, and even forbidden gaps, in the $x$-direction with the reciprocal vector $2\pi/d$, which zone-folds the subbands [@mugarza]; a schematic of the subband dispersions is shown in Fig. 5(a). It is therefore expected that, roughly, whenever the ion’s parallel energy, represented by the broad parabolic dispersion in Fig. 5(a), intersects a subband dispersion curve, a resonance-type condition is reached and the recapture rate by the ion increases. This produces peaks in the survival probability, as our main results in Fig. 3 show. One simple way to verify that these subband states from surface-parallel confinement induce the survival peaks is to compare our numerical results with the analytic infinite-barrier square-well model. In the infinite-barrier limit, the subband dispersions turn into the flat quantum levels in Fig. 5(a), which can therefore still qualitatively capture the positions of the survival peaks, as we show below. For the infinite square well potential, the allowed electron de Broglie wavelengths are given by $\lambda_{n}=2d/n$, where the positive non-zero integer $n$ is the quantum number and $d$ is the width of the well. From this, it is straightforward to show that the quantized kinetic energy of an electron in the well, in atomic units, is $\left(n \pi/d \sqrt{2}\right)^{2}$. By RCT energy conservation, setting this kinetic energy equal to the ion’s parallel kinetic energy $({\mbox{$v_{\mbox{\scriptsize par}}$}})^2/2$ [*plus*]{} the transition energy from the ion level to an image level, one solves for ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ as $$\label{sq-well} {\mbox{$v_{\mbox{\scriptsize par}}$}}=\sqrt{2\left((n \pi/d \sqrt{2})^{2}+{\mbox{$E_{\mbox{\scriptsize ion}}$}}-E\right)}$$ which gives a standing electron probability density, where $E$ is an image state energy for the flat surface. All quantities are in atomic units. [Eq.
(\[sq-well\])]{} is plotted for the first (solid lines) and second (dashed lines) image states in Fig. 5(b) for the five values of $d$ considered in this work. The values of ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ at which peaks occur in Fig. 3 are also plotted as symbols on the same graph, for simulations in which the ion aims at the center of a terrace (solid symbols) and at a step (hollow symbols). The vast majority of the symbols fall reasonably close to the corresponding lines obtained from the model, which captures the broad underlying physics, although a few symbols lie farther from a line than the others. The level of agreement is, however, reasonably good, given that (i) the ion interacts with a subband dispersion process, not with an ideal well of infinite height, and (ii) the image population density spreads over the entire image state Rydberg series, inducing some uncertainty in the transition energy. Furthermore, the pattern of the quantum numbers of the peaks is as expected. As $d$ increases, the quantum number of the standing probability wave that fits this distance should increase, while for a given velocity the wavelength does not change. Similarly, as the velocity ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ increases, the wavelength decreases, and so the quantum number of the standing probability wave matching this velocity should increase as well. These are precisely the patterns observed in Fig. 5(b). One discrepancy between the model and the numerical results can be noted: for $d$ = $9a_s$ and $13a_s$ some bands are intermittently missing. This is likely due to forbidden gaps in the subband structure that exist for these vicinal widths within the ${\mbox{$v_{\mbox{\scriptsize par}}$}}$ range considered, which is, however, part of the finer details of the band structure.
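Equation (\[sq-well\]) is simple enough to evaluate directly. As an illustration under the stated assumptions (first image state, transition energy ${\mbox{$E_{\mbox{\scriptsize ion}}$}}-{\mbox{$E_{\mbox{\scriptsize im}}^{(1)}$}}$ = 0.0073 a.u.), the sketch below lists the model peak positions that fall inside the scanned velocity range for $d = 5a_s$:

```python
import math

A_S = 4.25                   # Pd(111) lattice spacing, a.u.
E_TRANS = 0.0073             # E_ion - E_im^(1), a.u., from the text

def v_par_peak(n, d):
    """Peak parallel velocity from Eq. (sq-well) for quantum number n."""
    e_n = (n * math.pi / (d * math.sqrt(2))) ** 2  # square-well kinetic energy
    return math.sqrt(2 * (e_n + E_TRANS))

d = 5 * A_S
peaks = [round(v_par_peak(n, d), 3) for n in range(1, 10)
         if 0.2 <= v_par_peak(n, d) <= 1.0]
print(peaks)                 # [0.319, 0.46, 0.604, 0.749, 0.895] for n = 2..6
```

Only n = 2 through 6 survive the cut: n = 1 lies just below 0.2 a.u. and n = 7 just above 1.0 a.u.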
To this end, our results indicate that an interesting spectroscopic pathway to probe subband dispersions in a vicinal superlattice is RCT studies of negative ions in grazing scattering off stepped metal surfaces. As we show, the simplest approach to that goal is to choose the impact energies and scattering angles in such a way that the ion velocity perpendicular to the flat-surface direction stays constant while the parallel component varies. This keeps the interaction largely free of alterations from the band structure perpendicular to the precursor flat surface.

Conclusion
==========

Using the example of Pd(111) vicinally miscut with a selection of terrace sizes within a few nanometers, we simulated the dynamics of the active electron of a hydrogen-anion projectile impinging at shallow angles to the surface. To do so, we used a fully quantum mechanical wave packet propagation methodology. The electron dynamics is visualized in detail by animating the wave packet probability density in real time. The results show, for the first time, structures in the ion survival probability due to the image state subband dispersion introduced by the vicinal superlattice. In the absence of a fully ab initio potential, we use a simple but successful model of the vicinal stepping. This produces structures in the ion survival as a function of terrace size, demonstrating the ability of our wave packet propagation methodology to study RCT tunneling between anions and nanostructured surfaces. Even though the calculation is based on a one-active-electron model, electron correlation effects due to the occupied valence band can only enhance the recapture rate by impeding decay into the bulk through Pauli blocking. The effects uncovered can and should be observable in grazing ion spectroscopy within current laboratory technology, although accessing effects of the azimuthal orientation of the scattering plane will require a full 3D simulation.
The invariance of the results to the strike location on the surface over the higher parallel-velocity range suggests that, for a sufficiently grazing flyby, the aiming of the ion beam is likely irrelevant, providing some experimental freedom. Comparisons of results among other metal surfaces with vicinal textures, a part of the outlook of our research, will be published elsewhere. The authors would like to acknowledge the support received through an Extreme Science and Engineering Discovery Environment (XSEDE) allocation grant for high performance computation, which is supported by National Science Foundation (NSF) grant number ACI-1053575. The research was also supported in part by NSF Grant Numbers PHY-1413799 and PHY-1806206. [99]{} S. J. Pratt and S. J. Jenkins, *Beyond the surface atlas: A roadmap and gazetteer for surface symmetry and structure*, Surf. Sci. Rep. **62**, 373 (2007). A. Mugarza and J. E. Ortega, *Electronic states at vicinal surfaces*, J. Phys.: Condens. Matter **15**, S3281 (2003). J. E. Ortega, S. Speller, A. R. Bachmann, A. Mascaraque, E. G. Michel, A. Narmann, A. Mugarza, A. Rubio and F. J. Himpsel, *Electronic wave function at vicinal surfaces: Switch from terrace to step modulation*, **84**, 6110 (2000). C. Didiot, S. Pons, B. Kierren, Y. Fagot-Revurat and D. Malterre, *Nanopatterning the electronic properties of gold surfaces with self-organized superlattices of metallic nanostructures*, Nature Nanotech. **2**, 617 (2007). K. Suzuki, K. Kanisawa, C. Janer, S. Perraud, K. Takashina, T. Fujisawa and Y. Hirayama, *Spatial imaging of two-dimensional electronic states in semiconductor quantum wells*, **98**, 136802 (2007). A. Bolz, C. Meyer, C. Heyn, W. Hansen, M. Morgenstern and R. Wiesendanger, *Wave-function mapping of InAs quantum dots by scanning tunneling spectroscopy*, **98**, 196804 (2007). F. Baumberger, T. Greber and J. Osterwalder, *Fermi surfaces of the two-dimensional surface states on vicinal Cu(111)*, **64**, 195411 (2001). F. Baumberger, T.
Greber and J. Osterwalder, *Step-induced one-dimensional surface state on Cu(332)*, **62**, 15431 (2000). J. Lei, H. Sun, K. W. Yu, S.G. Louie and M. L. Cohen, *Image potential states on periodically corrugated metal surfaces*, **63**, 045408 (2001). J.W. Rabalais, [*Principles and Applications of Ion Scattering Spectrometry: Surface Chemical and Structural Analysis*]{}, (Wiley-Interscience, Hoboken, New Jersey, 2003). K.J. Stout and L. Blunt, [*Three-Dimensional Surface Topography*]{}, (Penton Press, London, 2000). S.A. Campbell, [*The Science and Engineering of Microelectronic Fabrication*]{}, (Oxford University Press, New York, 2001). , edited by A. Korkin, J. Labanowski, E. Gusev, and S. Luryi, (Springer, New York, 2007). A. R. Canario, E. A. Sanchez, Yu. Bandurin, and V. A. Esaulov, *Growth of Ag nanostructures on TiO$_2$(110)*, Surf. Sci. Lett. **547**, L887 (2003). B. Bahrim, B. Makarenko and J. W. Rabalais, *Band gap effect on $H^-$ ion survival near Cu surfaces*, Surf. Sci. **594**, 62 (2005). Y. Yang and J. A. Yarmoff, *Charge exchange in Li scattering in Si surfaces*, **89**, 196102 (2002). T. Hecht, H. Winter, A. G. Borisov, J. P. Gauyacq and A. K. Kazansky, *Role of the 2D surface state continuum and projected band gap in charge transfer in front of Cu(111) surface*, **84**, 2517 (2000). L. Guillemot and V. A. Esaulov, *Interaction time dependence of electron tunneling processes between an atom and a surface*, **82**, 4552 (1999). E. Sanchez, L. Guillemot and V. A. Esaulov, *Electron transfer in the interaction of Fluorine and Hydrogen with Pd(100)*, **83**, 428 (1999). T. Hakala, M. J. Puska, A. G. Borisov, V. M. Silkin, N. Zabala, and E. V. Chulkov, *Excited states of Na nanoislands on the Cu(111) surface*, **75**, 165419 (2007). I. K. Gainullin and M. A. Sonkin, *Three-dimensional effects in resonant charge transfer between atomic particles and nanosystems*, **92**, 022710 (2015). H. S. Chakraborty, T. Niederhausen and U.
Thumm, *Resonant neutralization of $H^-$ near Cu surfaces: Effects of the surface symmetry and ion trajectory*, **70**, 052903 (2004). H. S. Chakraborty, T. Niederhausen and U. Thumm, *Effects of the surface Miller index on the resonant neutralization of hydrogen anions near Ag surfaces*, **69**, 052901 (2004). H. S. Chakraborty, T. Niederhausen and U. Thumm, *On the effect of image states on resonant neutralization of hydrogen anions near metal surfaces*, Nucl. Instrum. Methods B **241**, 43 (2005). A. Schmitz, J. Shaw, H. S. Chakraborty and U. Thumm, *Band-gap-confinement and image-state-recapture effects in the survival of anions scattered from metal surfaces*, **81**, 042901 (2010). K. Swamy, E. Bertel, and I. Vilfan, *Step interaction and relaxation at steps: Pt(110)*, Surf. Sci. **425**, 369 (1999). E. V. Chulkov, V. M. Silkin and P. M. Echenique, *Image potential states of metal surfaces: Binding energies and wave functions*, Surf. Sci. **437**, 330 (1999). V. A. Ermoshin and A. K. Kazansky, *Wave packet study of $H^-$ decay in front of a metal surface*, Phys. Lett. A **218**, 99 (1996). W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, *Numerical Recipes: The Art of Scientific Computing* (Cambridge University Press, Cambridge, 2007). J. P. Biersack and J. F. Ziegler, *Refined universal potentials in atomic collisions*, Nucl. Instrum. Methods **194**, 93 (1982). A. G. Borisov, D. Teillet-Billy, J. P. Gauyacq, H. Winter, and G. Dierkes, *Resonant charge transfer in grazing scattering of alkali-metal ions from an Al(111) surface*, **54**, 17166 (1996). F. M. Mueller, A. J. Freeman, J. O. Dimmock, and A. M. Furdyna, *Electronic structure of palladium*, **1**, 4617 (1970). H. Winter, *Charge transfer in grazing ion-surface scattering*, Comments At. Mol. Phys. **26**, 287 (1970).
--- author: - 'B. Akinsanmi , S. C. C. Barros, N. C. Santos, A. C. M. Correia, P. F. L. Maxted, G. Boué and J. Laskar' bibliography: - 'references.bib' date: 'Received September 10, 2018; accepted December 06, 2018' title: 'Detectability of shape deformation in short-period exoplanets' --- [Short-period planets suffer from extreme tidal forces from their parent stars. These forces deform the planets causing them to attain non-spherical shapes. The non-spherical shapes, modeled here as triaxial ellipsoids, can have an impact on the observed transit light-curves and the parameters derived for these planets. ]{} [We investigate the detectability of tidal deformation in short-period planets from their transit light curves and the instrumental precision needed. We also aim to show how detecting planet deformation allows us to obtain an observational estimate of the second fluid Love number from the light curve which gives valuable information about the planet’s internal structure.]{} [We adopted a model to calculate the shape of a planet due to the external potentials acting on it and used this model to modify the *ellc* transit tool. We used the modified *ellc* to generate the transit light curve for a deformed planet. Our model is parameterized by the Love number, hence for a given light curve we can derive the value of the Love number that best matches the observations.]{} [ We simulated the known cases of WASP-103b and WASP-121b which are expected to be highly deformed. Our analyses showed that instrumental precision $\leq$50ppm/min is required to reliably estimate the Love number and detect tidal deformation. This precision can be achieved for WASP-103b in $\sim$40 transits using the Hubble Space Telescope and in $\sim$300 transits using the forthcoming CHEOPS instrument. However, fewer transits will be required for short-period planets that may be found around bright stars in the TESS and PLATO survey missions. 
The unprecedented precisions expected from PLATO and JWST can permit the detection of shape deformation with a single transit observation. However, the effects of instrumental and astrophysical noise must be carefully considered, as they can increase the number of transits required to reach the 50ppm/min detection limit. We also show that improper modeling of limb darkening can bury signals related to the shape of the planet, thereby leading us to infer sphericity for a deformed planet. Therefore, accurate determination of the limb darkening coefficients is required to confirm planet deformation.]{}

Introduction
============

The existence of planets with short-period orbits around their stars came as a surprise at the inception of exoplanet discoveries, especially because the first case was a gas giant [@mayor] bearing no resemblance to the planet configuration in our Solar System. Several such planets have since been found, as they are among the most easily detected planets using both the transit and radial velocity methods. Planets reach their final shapes by attaining hydrostatic equilibrium, balancing the gravitational, pressure and other external forces acting on them. Planet shapes are often assumed to be spherical for simplicity, but in reality they are triaxial. For very short-period planets (P < 1-2 days), the close proximity to their stars exposes them to strong tidal forces which deform them and increase the triaxiality of their equilibrium shapes. A contribution to the deformation can also come from the planet’s rotation, which makes the planet oblate [@barnes03]. Planet shape can have noticeable effects on the light curve obtained from transit observations [@seagerhui; @carterwinnb; @carterwinna]. Analyzing the transit light curve of a planet under the assumption of sphericity yields a spherical radius $R_{spr}$.
However, @leconte showed that planet deformation due to tidal and rotational forces lowers the observed transit depth in comparison to a spherical planet. This causes an underestimation of the planet’s radius when sphericity is assumed in the transit light curve analysis of a deformed planet. Since the planet’s density is calculated from the assumed spherical radius, the obtained density will consequently be overestimated. @burton thus provided density corrections for some short-period planets expected to be tidally deformed, based on the Roche approximation [@chand]. Tidal deformation is particularly significant for planets orbiting close to their stellar Roche limits, and a number of planets have been discovered to orbit so close to this limit that they are at the edge of tidal disruption (e.g. @gillon14 [@delrez]). For some of these planets, theoretical calculations have been done using the Roche model by @budaj to estimate the planet shape and correct the derived spherical radii and densities for the expected planet deformation (e.g. @southworth15 [@delrez]). @correia14 formulated an analytical model for computing the shape of a deformed planet based on the fluid second Love number and also showed the difference between the light curves of deformed and spherical planets. Despite these efforts, there has been no observational detection of tidal deformation in short-period planets, which would provide better estimates of their parameters. We therefore investigate the possibility of detecting deformation in the transit light curves of short-period planets with some current and near-future observational instruments. We modify the *ellc* transit tool by @maxted [^1] to incorporate the planet shape model by @correia14. The modified *ellc* is used to generate the light curve of a deformed planet based on its fluid second Love number.
This allows us to obtain an estimate of the planet’s Love number that best matches the transit observations, which provides insight into the internal structure differentiation of the planet. In Sect. 2, we summarize the model used to compute the shape of the planet and the modification of the transit tool used to generate the light curves. In Sect. 3 we apply the modified tool to investigate the detectability of planet deformation, taking a known short-period planet as a case study. In Sect. 4, we discuss the results and some useful considerations for detecting planet deformation. We present our conclusions in the last section. ![Schematic of triaxial ellipsoid centered on the origin of the Cartesian coordinate system ($X, Y, Z$) with positive X-axis pointing towards the star.[]{data-label="coord"}](coord.pdf){width="1\linewidth"}

Modeling transit of deformed planets
====================================

Planet shape
------------

Modeling the shape of a deformed planet follows the analytical formulation by @correia14, in which the planet is described by a triaxial ellipsoid centered at the origin of a Cartesian coordinate system. As shown in Fig. \[coord\], the semi-principal axes ($a, b, c$) of the ellipsoid are aligned with the $X, Y, Z$ axes of the coordinate system, respectively. The equilibrium shape and mass distribution of a planet depend on the forces acting on it, namely the planet’s self-gravity and other perturbing potentials. The planet can deform under the influence of centrifugal and tidal potentials. For a tidally locked close-in planet with a circularized orbit of radius $r_{0}$, @corr13 give the non-spherical contribution of the perturbing potential on the planet’s surface as $$V_{p}=\frac{1}{2}\Omega^{2}Z^2-\frac{3GM_{\ast}}{2r_{0}^3}X^2,$$ where G is the gravitational constant.
The first term on the right-hand side is the deformation contribution from the centrifugal potential resulting from the planet’s coplanar and synchronous rotation rate $\Omega$ about the Z-axis. The second term refers to the tidal contribution to the deformation along the X-axis by a star of mass $M_{\ast}$. Following @love, @correia14 describes this deformation using a Love number approach, such that the fluid second Love number for radial displacement $h_{f}$ is related to the radial deformation of the planet $\Delta R$. The equilibrium surface deformation is thus given by

$$\Delta R= -h_{f}V_{p}/g,$$

where $g$ is the average surface gravity of the planet. $h_{f}$ is a dimensionless quantity that quantifies a planet’s response (deformation) to a perturbing potential[^2]. The magnitude of $h_{f}$ depends on the mass distribution of the planet. More homogeneous planets have higher $h_{f}$, whereas planets that are more centrally condensed have lower $h_{f}$ [@kramm11; @kramm12]. For an incompressible homogeneous planet, $h_{f}$ = 2.5, which is the theoretical maximum value [@leconte; @correia14]. The physical values of $h_{f}$ range from 1 to 2.5, where $h_{f}=1$ would represent highly differentiated bodies with high core mass, like FGK stars, and $h_{f}=2.5$ is only possible for significantly homogeneous bodies like asteroids. In comparison, Jupiter has $h_{f}\approx1.5$ and Earth has $h_{f}\approx2$ [@yoder]. The first observational measurement of Saturn’s Love number was recently obtained by @LAINEY2017, leading to a value of $h_{f}=1.39$ (from $k_{f}=0.39$). Due to the synchronous rotation, the semi-principal axis $a$ of the planet always points in the direction of the star, leading to a tidal deformation along $a$. The shape of the planet is such that $a>b>c$, and the deformation is kept constant along the circularized orbit. For the ellipsoid, we can also define the radius of a sphere that encloses the same volume as the ellipsoid, so that $R_{v}=(abc)^{1/3}$.
According to the formulation by @correia14, the semi-principal axes are related as $a=b\,(1+3q)$ and $c=b\,(1-q)$. We can then write $b$ as a function of $R_{v}$ to first order in the parameter $q$ as

$$\label{b} b\simeq R_{v}\,\left( 1-\frac{2}{3}q\right)\,,$$

so that

$$\label{a} a=b\,(1+3q)\simeq R_{v}\,\left( 1+\frac{7}{3}q\right)$$

and

$$\label{c} c=b\,(1-q) \simeq R_{v}\,\left( 1-\frac{5}{3}q\right),$$

where $q$ is an asymmetry parameter that relates to $h_f$ according to

$$\label{qq} q=\frac{h_{f}}{2}\frac{M_{\ast}}{m_{p}}\left(\frac{R_{v}}{r_{0}}\right)^3.$$

The asymmetry parameter $q$ quantifies the deformation of a planet, i.e., the difference between the ellipsoid’s semi-principal axes. Maximum deformation (hence maximum $q$) is attained for a given planet when it orbits at the Roche radius ($r_{0}=r_{R}=2.46\,R_{v}[M_{\ast}/m_{p}]^{1/3}$). Therefore, for maximum $h_{f}$ = 2.5, we have $q_{max} \simeq 0.083$. The equilibrium shape of a planet thus depends on its radius, the planet’s fluid second Love number $h_{f}$, the mass ratio between star and planet $M_{\ast}/m_{p}$, and also the planet’s distance $r_{0}$ from the star. Figure \[q\_comp2\] shows how tidal deformation becomes negligible with increasing semi-major axis (in units of the Roche radius) for a given body with $h_{f} = 2.5$ and again with Jupiter’s $h_{f} = 1.5$. We see that far away from the star, irrespective of the value of $h_{f}$, the planet does not deform ($q\simeq0$) and so its shape remains largely spherical ($a\simeq b\simeq c$ from Eqs. \[b\]-\[c\]). In general, Eq. \[qq\] shows that tidal deformation is more relevant for large planets orbiting very close to their Roche radii. Planets with the highest absolute deformation (highest product $q\times R_{v}$) present the best chances to detect deformation.
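As a concrete numerical sketch of Eqs. \[b\]-\[qq\] (the helper names below are ours, not part of *ellc*), the asymmetry parameter and semi-principal axes can be evaluated for the WASP-103b values listed later in Table \[parameters\], assuming a Jupiter-like $h_{f}=1.5$:

```python
# Minimal sketch of Eqs. (b), (a), (c) and (qq); function names are ours.
# Inputs are in solar units (WASP-103b values from Table 1).

def asymmetry_q(h_f, M_star, m_p, R_v, r0):
    """Asymmetry parameter q = (h_f / 2) (M_star / m_p) (R_v / r0)^3."""
    return 0.5 * h_f * (M_star / m_p) * (R_v / r0) ** 3

def semi_axes(R_v, q):
    """First-order semi-principal axes of the tidally deformed ellipsoid."""
    a = R_v * (1.0 + 7.0 * q / 3.0)   # points towards the star
    b = R_v * (1.0 - 2.0 * q / 3.0)
    c = R_v * (1.0 - 5.0 * q / 3.0)   # rotation axis
    return a, b, c

# WASP-103b with a Jupiter-like h_f = 1.5:
q = asymmetry_q(h_f=1.5, M_star=1.2050, m_p=0.0014, R_v=0.1604, r0=4.2555)
a, b, c = semi_axes(0.1604, q)

# At the Roche radius r0 = 2.46 R_v (M_star/m_p)^(1/3) with h_f = 2.5 the
# mass ratio cancels, leaving q_max = 2.5 / (2 * 2.46**3) ~ 0.084.
q_max = 2.5 / (2.0 * 2.46 ** 3)
```

With these inputs $q\simeq0.035$, so $a$ and $c$ differ by $4q\simeq14\%$, while $(abc)^{1/3}$ stays within 0.2% of $R_{v}$; the computed $q_{max}\simeq0.084$ matches the quoted 0.083 up to rounding of the Roche-radius coefficient.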
![Quantification of tidal deformation as a function of distance to the star for two different $h_{f}$ values.[]{data-label="q_comp2"}](q_comp2n.pdf){width="1\linewidth"}

Transit model
-------------

Planetary features such as oblateness or rings have the effect of modifying the transit light curves (e.g. @barnes03 [@akin18]). In the same vein, tidal deformation of a planet can modify the observed transit light curve. To model the transit of a deformed planet, the above ellipsoidal shape model by @correia14 was incorporated as a subroutine into a new version of the *ellc* transit tool by @maxted. The *ellc* light curve model allows the projection of the ellipsoid and the generation of the corresponding transit light curve. The projected shape of the ellipsoid on the stellar disk is an ellipse whose dimensions depend on the phase of the planet, due to the rotation of the ellipsoid with phase (*see* Fig. A.1 in @correia14). The rotation of the ellipsoid causes the cross-section of the planet to vary during transit. It should be noted that the shape correction model by @budaj does not account for the varying ellipsoidal cross-section during transit, thereby making *ellc* a more complete model that includes this observational effect. Detailed descriptions of the *ellc* tool and the input parameters can be found in @maxted. The modified transit model, in addition to the usual transit parameters, takes the value of $h_{f}$ and the ellipsoid’s volumetric radius $R_{v}$ as inputs. Therefore, by fitting the ellipsoidal model to the transit observation, all the parameters of the transit, including the shape of the planet, can be obtained, and $h_{f}$ is estimated from the best fit of the model. So rather than obtaining the usual transit radius $R_{spr}$ from spherical planet models, we obtain the best-match dimensions $a, b, c$ of the ellipsoidal planet and calculate $R_{v}$.
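Before turning to full light-curve fits, a rough back-of-the-envelope check (ours; it matches only the mid-transit cross-section rather than the whole light curve) shows the size of the bias incurred when a sphere is fit to a deformed planet:

```python
import math

# Illustrative only: approximate the spherical-fit radius by matching the
# mid-transit cross-section pi*b*c of the ellipsoid with a disc pi*R_spr^2.
# q ~ 0.035 corresponds to WASP-103b with h_f = 1.5 (see Sect. 2.1).
R_v = 0.1604                             # volumetric radius, solar radii
q = 0.0346                               # asymmetry parameter

b = R_v * (1.0 - 2.0 * q / 3.0)
c = R_v * (1.0 - 5.0 * q / 3.0)
R_spr = math.sqrt(b * c)                 # sphere with the same transit depth

radius_bias = 1.0 - R_spr / R_v          # ~7q/6: radius underestimated by ~4%
density_bias = (R_v / R_spr) ** 3 - 1.0  # ~7q/2: density overestimated by ~13%
```

The actual spherical fit also re-adjusts the other transit parameters (Sect. 2.4), so these numbers should be read as the pure geometric bias at mid transit, not as the outcome of a full fit.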
![Comparison of ellipsoidal model light curves of different $h_{f}$ values with spherical model light curve for WASP-103b.[]{data-label="hfcomp"}](hfcomp_new.pdf){width="1\linewidth"}

  --------------------------------------- ------------ --------
  Planet radius \[$R_{\odot}$\]           $R_{spr}$    0.1604
  Planet mass \[$M_{\odot}$\]             $m_{p}$      0.0014
  Stellar radius \[$R_{\odot}$\]          $R_{\ast}$   1.4130
  Stellar mass \[$M_{\odot}$\]            $M_{\ast}$   1.2050
  Semi-major axis \[$R_{\odot}$\]         $r_{0}$      4.2555
  Roche radius \[$R_{\odot}$\]            $r_{R}$      3.7534
  --------------------------------------- ------------ --------

  : System parameters for WASP-103b.[]{data-label="parameters"}

The case of WASP-103b
---------------------

To illustrate the output of *ellc* for an ellipsoidal planet, we take the case of WASP-103b, an ultra-short-period planet (P = 22.2 hr) reported to be on the edge of tidal disruption [@gillon14], making it an ideal candidate for detecting deformation. Based on revised parameters by @southworth16, it has an average radius of 1.596$R_{Jup}$ and a mass of 1.47$M_{Jup}$ (Table \[parameters\]). It orbits its star at a semi-major axis ($r_{0}$) of 0.01979AU and an inclination ($inc$) of $88.2^{\mathrm{o}}$. It is considered to be on the edge of tidal disruption because its semi-major axis is only 1.13 times its Roche radius. Taking the quoted radius as the volumetric radius of the ellipsoid, Fig. \[hfcomp\] compares the spherical planet light curve for WASP-103b to its ellipsoidal counterparts with different $h_f$ values. The light curve of the ellipsoidal model changes noticeably for different values of $h_{f}$, and also compared to the spherical case. This is because the ellipsoidal planet projects only a small cross-section of its shape during the transit, thereby leading to a lower transit depth when compared to the spherical planet.
The mid-transit phase has the smallest ellipsoidal cross-section, $bc\simeq R_{v}^{2}(1-7q/3)$, which is less than the cross-section $R_{spr}^2$ if the planet were spherical. Therefore, if a spherical model is used to fit the observation of an ellipsoidal planet, the derived spherical radius $R_{spr}$ will be smaller than the actual volumetric radius $R_{v}=(abc)^{1/3}$ of the ellipsoid (see Fig. \[sphrfit\]). This is in agreement with the result from @leconte. The differences in transit depth as $h_{f}$ varies in Fig. \[hfcomp\] arise because a higher $h_{f}$ for the same planet causes more deformation, which leads to an even smaller projected cross-sectional area. In our code, we allow the case $h_{f}=0$ (although not physical) to represent an undeformed planet, so that the ellipsoidal planet model is equivalent to that of a spherical planet and both produce the same light curve with $R_{v}=R_{spr}$. This is important for the analysis we perform in the next section and allows us to use the same model to describe both a deformed and a spherical planet. @maxted already showed that the spherical light curve of *ellc* is in agreement with other transit tools like *BATMAN* [@kreid].

![Spherical fit to simulated deformed WASP-103b light curve. The bottom plot shows the residual representing the signature of deformation, with amplitude quoted as the maximum absolute residual (max\_abs\_res). All length measurements are given in units of Solar radii.[]{data-label="sphrfit"}](sphrfit_new2v_sphr.pdf){width="1\linewidth"}

Signature of deformation in transit light curves {#2.4}
------------------------------------------------

Figure 1 in @correia14 showed difference plots between ellipsoidal and spherical light curves assuming both planets cover the same stellar area at the start of transit (full ingress).
This perfectly captures the flux variation induced by deformation as both planets transit, but it is not the signature one will obtain from real observations, since the transit parameters will be initially unknown and must be determined from a fitting process. The observable signature of planet deformation is the residual between the deformed planet’s light curve and the best-fit spherical model. In Fig. \[sphrfit\], we simulated the light curve of deformed WASP-103b using our ellipsoidal model with the parameters given in Table \[parameters\] and performed least-squares fitting using a spherical planet model. The residual from the fit is shown in the bottom panel and represents the signature of deformation for the simulated planet. The parameters derived from the fitting process are systematically wrong, as they adjust to mimic the signature of deformation. This also shows that the assumption of sphericity for a planet affects not only the derived radius but also the other transit parameters, and that models which adjust only this radius are incomplete. We see in the residuals that the signature of deformation manifests in two regions. The first is at the ingress and egress phases, owing to the oblateness ($b > c$) of the planet, as identified in previous studies (e.g. @seagerhui [@barnes03]). A second prominent feature is seen as a bump centered on the mid-transit phase, due to the varying eclipsed stellar area caused by the rotation of the ellipsoid as it transits. This second feature is a result of tidal deformation, which was not accounted for in the previous studies mentioned but manifests in our model due to the full projection of the ellipsoidal shape as it rotates with phase [@correia14]. To compare the deformation signal obtained from the fitting process with the flux difference plot in @correia14, we perform spherical fits to the ellipsoidal simulations of other short-period giant planets WASP-19b, WASP-12b, WASP-4b and WASP-121b that were presented in that study and are expected to be deformed.
The residuals are shown in Fig. \[diff\_fit\]. We see from Figures \[sphrfit\] and \[diff\_fit\] that the amplitude of the deformation signature is only about 40ppm for the most deformed planets (WASP-103b and WASP-121b), while the amplitudes of the difference curves in @correia14 are up to 100ppm. We reiterate that the latter should not be taken to imply high signal detectability. WASP-103b, WASP-121b and WASP-12b have the highest residual amplitudes and therefore present the best possibility of detecting deformation. Other planets likely to be deformed are HATS-18b, WASP-76b and WASP-33b, but they have lower residual amplitudes of 20, 14 and 12ppm, respectively.

![Residuals from spherical fit to ellipsoidal simulations of different short-period planets in comparison to WASP-19b, 12b and 4b from @correia14[]{data-label="diff_fit"}](others2.pdf){width="1\linewidth"}

Detectability of planet deformation and measurement of planet Love number
=========================================================================

The residuals of the spherical fit to a deformed planet’s light curve are informative in detecting deformation, as they show that the spherical model does not fully explain the observation. However, some of the signature of the deformation is masked in the errors of the parameters obtained. To correctly estimate the planet transit parameters, our ellipsoidal model can be used to fit the transit observation. In doing so, we also obtain a value for the Love number that best fits the observation if there is enough precision in the data. The benefit of this approach is that we can fit the ellipsoidal model to any transit observation and, from the value of $h_{f}$ recovered, ascertain whether planet deformation is detectable or not. If we cannot detect the deformation, we get $h_{f} \approx 0$ which, as shown in Fig. \[hfcomp\], is equivalent to the fit of a spherical planet model.

![Detectability of deformation in WASP-103b considering different noise levels.
The black dashed line is the simulated $h_{f}$ value. The points are the medians of the $h_{f}$ samples at each noise level. The red error bars show the 68% credible interval ($\simeq \pm1\sigma$) while the blue error bars show the 99.7% credible interval ($\simeq \pm3\sigma$). []{data-label="detect"}](detect2_new2.pdf){width="1\linewidth"}

Therefore, the detectability of tidal deformation using the ellipsoidal model relies on the ability to recover a non-zero value of $h_{f}$ with statistical significance from a fitting process. Although deformation could in principle be inferred from any detection of $h_{f} \gg 0$, we will need to have $h_{f} \geq 1$ with some significance, since only these values have an actual physical interpretation for astronomical bodies. To illustrate the detectability, we created simulated observations of deformed WASP-103b with 1min cadence using its parameters as stated above with $h_{f}$=1.5. We used the Limb darkening toolkit (*ldtk*) by @pav15 to compute quadratic limb darkening coefficients of \[0.5343, 0.1299\] and their uncertainties \[0.0012, 0.0027\] in the CHEOPS bandpass for the star, with stellar parameters given in @gillon14. We added random Gaussian noise of different levels to the simulated data in each test run. We then investigated how well we can recover the value of $h_{f}$ and at what noise level it would become impossible to distinguish between the light curve of a spherical planet and that of a deformed planet. This is important for determining the instrumental precision necessary to detect deformation in close-in planets. We performed Markov Chain Monte Carlo (MCMC) analysis to estimate the transit parameters and their uncertainties using the *emcee* package [@foreman] with uniform priors on $h_{f}$ in the range \[0, 2.5\]. As shown in Appendix \[corn\], when a noise level of 30ppm is added to the simulated observation, $h_{f}$ is reliably recovered with 99.7% of its samples (within $\simeq \pm3\sigma$) greater than 1.
This proves that the result is statistically significant and implies that the planet is indeed deformed. Moreover, the residual from the fit does not show any structure related to the deformation signal. However, when a noise level of 100ppm is added to the observation, the median of the distribution suggests a deformed planet, but because its width encompasses $h_{f} = 0$ (spherical model), planet deformation cannot be asserted (Appendix \[corn100\]). Figure \[detect\] shows the detectability plot summarizing the results for the different noise levels added to the observation. We see that the significance of $h_{f}$ detection above 1 reduces as the noise level of the observation increases. For instance, at the 50ppm noise level, the $h_{f}$ samples are well above 0, implying that the ellipsoidal model provides a better fit than the spherical model. However, the samples with $h_{f}$ < 1 do not represent physical values for a planet, but the detection still gives $\sim 95\%$ of the samples above 1. Beyond 50ppm, fitting the observation with a spherical model becomes increasingly more probable. With noise levels as high as 100ppm, the spherical and ellipsoidal models produce comparable fits.

  ------- --------------- ---------- ------- --------------- ----------
  $m_V$   CHEOPS noise    Transits   $m_V$   PLATO noise     Transits
  6.5     150ppm          9          8       62ppm           2
  8       186ppm          14         10      209ppm          17
  10      319ppm          40         11      263ppm          28
  12      855ppm          293        13      619ppm          153
  ------- --------------- ---------- ------- --------------- ----------

  : Number of transits required to reach 50ppm/min noise level with CHEOPS and PLATO for different stellar magnitudes.[]{data-label="noise"}

Discussion
==========

The results show that noise levels below 30ppm offer the best chance at detecting deformation for our test case of WASP-103b, since we retrieve $h_{f}$ with $\geq3\sigma$ significance above 1. However, we could define a lower limit on our detection confirmation such that we require ($h_{f}-1\sigma) \geq 1$, which puts 84% of the recovered $h_f$ samples in the range of physical values expected for planets.
This is satisfied for noise levels of 50ppm and below. A photometric precision of 50ppm/min is not yet attainable using current observational instruments. For our case study, WASP-103 is a $12^{th}$ magnitude star and the photometric precision to be attained by the near-future instrument CHEOPS for this star is 855ppm per minute. Attaining a reduced photon noise level of 50ppm/min for this star using CHEOPS requires $\sim$293 transit observations of WASP-103b. For the interesting candidate WASP-121b, which orbits its star of magnitude $m_{V}$=10 [@delrez], our analysis also showed detectability of deformation at the 50ppm/min noise level. The CHEOPS precision for a $10^{th}$ magnitude star is 319ppm/min, thereby requiring only 40 transit observations to detect deformation in this planet. Although information from the CHEOPS consortium indicates that WASP-121 might not be in the visibility region, new interesting planet candidates with short-period orbits may appear from future surveys targeting bright stars, such as PLATO [@rauer] and TESS [@ricker]. For planets around stars brighter than $m_{V} = 9$, we expect photon noise levels as low as 150ppm/min with CHEOPS [@broeg] and <62ppm/min with PLATO [@rauer], and thus fewer transits are required to reach the 50ppm limit needed to detect planet deformation, as reported in Table \[noise\]. For these stars, TESS will have a relatively higher noise level of 464ppm/min [@sullivan], which is not desirable for detecting deformation. Observations with the forthcoming JWST will also be immensely beneficial, as it is expected to attain a photon-noise floor of $\sim$40ppm (65s) with its NIRCam instrument, amongst others [@beichman]. Attaining this noise level implies that only one transit observation will be required in order to detect tidal deformation in a suitable short-period planet.
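The transit counts quoted above and in Table \[noise\] follow from simple white-noise averaging, $\sigma_N=\sigma_1/\sqrt{N}$; a minimal sketch (ours):

```python
import math

def transits_needed(noise_ppm_per_min, target_ppm=50.0):
    """White-noise scaling: stacking N transits reduces the per-minute
    scatter by sqrt(N), so N = ceil((sigma_1 / target)^2)."""
    return math.ceil((noise_ppm_per_min / target_ppm) ** 2)

# Per-transit noise levels quoted in the text (ppm/min):
n_wasp103 = transits_needed(855)  # CHEOPS, m_V = 12 -> 293 transits
n_wasp121 = transits_needed(319)  # CHEOPS, m_V = 10 -> 41 (Table 2 rounds to 40)
n_plato8 = transits_needed(62)    # PLATO,  m_V = 8  -> 2 transits
```

The handful of off-by-one differences with Table \[noise\] (e.g. 40 vs. 41) come down to the rounding convention chosen for the final count.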
Unfortunately, interesting short-period planets expected to be significantly deformed were not found within the original *Kepler* survey field, which would have allowed several transit observations of any found target. The WFC3 instrument on the Hubble Space Telescope (HST) achieved a noise level of 172ppm (103s) for 2 full-orbit observations of WASP-103 [@kreid18]. Therefore, with $\sim$40 transits of WASP-103b using HST, we can attain the required precision of 50ppm/min. However, some factors can still affect the detectability of deformation, some of which are mentioned below.\ *Temporal resolution*: The temporal resolution of the observation can affect the detectability. We have used 1min cadence in our simulations to enable good resolution of the ingress and egress phases, which have short durations, especially for these short-period planets. A lower cadence reduces the precision with which $h_{f}$ and the other parameters are recovered. At the 30ppm noise level, changing the cadence from 1min to 4min and 8min increases the error on $h_{f}$ from $\pm0.12$ to $\pm0.23$ and $\pm0.38$, respectively.\ *Orbital inclination*: The inclination of the orbit plays a role in the signature of deformation. Lower inclinations imply a shorter transit duration, so the features seen in the residuals of Fig. \[sphrfit\] and Sect. \[2.4\] last for a shorter time, making them more difficult to resolve temporally, especially at the ingress and egress phases. In addition, a longer transit duration allows the projected ellipse area to vary more (longer phase rotation of the ellipsoid), making the light curve more markedly different from that of the spherical planet and thereby leading to a higher-amplitude bump around mid transit (*see also* Fig. A.1 in @correia14). The effects of deformation on the light curve are largest at an inclination of $90^{\mathrm{o}}$, where $h_{f}$ is recovered with the best precision.\ *Limb darkening coefficients (LDCs)*: As shown in Fig.
\[sphrfit\], the signature of deformation is prominent at the ingress and egress phases, with a bump centered around the mid-transit phase. The stellar limb darkening affects light curves similarly in these regions (see the effects of LDC modeling in @neilson), so we tested the impact of inaccurate estimation of the limb darkening coefficients on the recovery of $h_f$ from the light curve. This was attempted on the 30ppm noise level simulation in two ways, and the results are summarized in Table \[LDCtests\]. First, we fixed the limb darkening coefficients to wrong values that are slightly different from the true values used to generate the simulated observation. We found that for wrongly fixed LDC values which are smaller than the true values, the signature of deformation gets damped, as we recover lower $h_{f}$ values than simulated. When the values are fixed up to 0.01 smaller than the truth, the entire $h_{f}$ distribution falls around 0 and we infer a spherical planet (see left plot in Fig. \[LDCs\]). On the other hand, $h_{f}$ values are amplified when LDCs are fixed at values higher than the truth. For LDC values fixed at 0.015 higher than the truth, the recovered $h_{f}$ distribution is pushed towards the maximum of 2.5. In the latter case, we can infer that the planet is deformed but cannot ascertain the extent of the deformation due to the inaccurate estimation of $h_{f}$, which is evident from the obtained marginalized distribution (see right plot in Fig. \[LDCs\]). The other approach was to fit the LDCs by including them as free parameters. We used a Gaussian prior with the true LDC values as mean and $\sigma=0.01$. The MCMC sampling produced a wide $h_f$ distribution centered close to the true value but with errors as large as $\pm 0.4$ (left plot in Fig. \[LDC\_priors\]), making it difficult to ascertain the planet shape. However, when tighter priors (e.g.
using errors obtained from deriving LDCs with *ldtk*) are imposed on the LDCs, $h_{f}$ is well recovered, with errors of just $\pm0.18$, allowing deformation to be inferred (right plot in Fig. \[LDC\_priors\]). It should be noted that the LDC error estimates from *ldtk* are very small and have often had to be inflated in the literature during fitting to account for systematic errors in the atmospheric models (e.g. @raynard [@maxted18b]). Alternatively, the power-2 limb darkening law has been recommended for the analysis of transit light curves, as it has been shown to provide remarkable agreement between stellar atmospheric models and observations, particularly for cool stars [@morello; @maxted18]. The transformation of the two parameters of the power-2 law in [@maxted18] reduces the correlation between them during fitting, and small errors of \[0.011, 0.045\] can be obtained on them. The fitting process can attempt different LDC laws, so that the law with the best match to the observation, and that produces the smallest errors on the derived parameters, will be preferred.\

  ----------------------- ------------------------------------------------------- ------------------------
  LDC test                Values                                                   $h_{f}$ recovered
  Fixed at 0.01 below     \[0.5243, 0.1199\]                                       $0.12^{+0.11}_{-0.08}$
  Fixed at 0.015 higher   \[0.5493, 0.1449\]                                       $2.44^{+0.04}_{-0.06}$
  Gaussian priors         Mean=\[0.5343, 0.1299\], $\sigma$=\[0.01, 0.01\]         $1.56^{+0.31}_{-0.53}$
  Gaussian priors         Mean=\[0.5343, 0.1299\], $\sigma$=\[0.0012, 0.0027\]     $1.59^{+0.18}_{-0.17}$
  ----------------------- ------------------------------------------------------- ------------------------

  : Results of LDC tests and $h_{f}$ values recovered. The plots are shown in Figures \[LDCs\] and \[LDC\_priors\].[]{data-label="LDCtests"}

*Other noise sources*: Our simulations have considered the ideal situation where only photon (white) noise is present, thereby allowing easy scaling of the noise with the number of observations/transits. However, in practice, other sources of noise [@pont] will impact the estimates we have given above and act to increase the number of transits required to detect deformation. These other noise sources can be instrumental (e.g. satellite jitter and thermal instability) or astrophysical, such as stellar activity (occulted or unocculted active regions, @oshagh2013), stellar oscillations and granulation [@chiavassa]. These effects always have to be mitigated in transit analysis [@oshagh18; @barros14] but will still impact the detectability of shape deformation. Recent developments in Gaussian process analysis also provide methods for tackling astrophysical noise (e.g. @dfm17 [@serrano]).

Conclusion
==========

Short-period planets, especially those within 2 Roche radii of the host star, suffer from extreme tidal forces causing their shapes to depart from sphericity in a way that is difficult to detect in transit observations. With the increasing observational precision of near-future instruments, detecting deformation becomes more feasible, as the planet shape will have a higher impact on the observed transit light curves. We have demonstrated the detectability of deformation for WASP-103b and WASP-121b (which have the highest deformation signatures, as seen in Sect. \[2.4\], and are regarded as some of the most deformed planets [@delrez]) by employing a formulation from the literature in a way that allows an observational estimate of the planet’s fluid Love number to be obtained. Because the Love number tells us how a planet deforms in response to perturbing potentials, we used it as a measure of deformation in the planet.
Detecting and measuring planet deformation provides more accurate estimates of the radius and density of these planets, as opposed to the estimates derived from spherical models or corrections based only on the expected deformation. Additionally, measuring the Love number gives us information about the interior structure of the planet. We showed that the instrumental precision needed to detect tidal deformation is $\leq$ 50ppm, which can be attained by CHEOPS with about 300 transits for WASP-103b and 40 transits for WASP-121b. HST can also attain this precision for WASP-103b in $\sim$40 transit observations. Fewer transit observations will be required if such short-period planets are found transiting very bright stars. Additionally, the precision expected from JWST will present the best opportunity to detect tidal deformation, since only one transit of a suitable planet will be required.\ The chances of detecting deformation are increased for planets with inclinations of $90^{\mathrm{o}}$ and also when the observations are taken with a temporal resolution of $\sim$1min. However, detection can be severely hampered by improper modeling of the limb darkening which, in some cases, can cause the signature of deformation to be subdued, leading us to infer sphericity from the observations. Using the quadratic limb darkening law, LDC errors smaller than 0.01 are required in order to confirm planet deformation. Proper treatment of noise sources will also be pertinent in order to identify the signature of shape deformation.\ This work was supported by Fundação para a Ciência e a Tecnologia (FCT, Portugal) through national funds and by FEDER through COMPETE2020 by these grants UID/FIS/04434/2013 & UID/MAT/04106/2013 and POCI-01-0145-FEDER-007672 & PTDC/FIS-AST/1526/2014 & POCI-01-0145-FEDER-016886 & POCI-01-0145-FEDER-022217 & POCI-01-0145-FEDER-029932 & POCI-01-0145-FEDER-028953 & POCI-01-0145-FEDER-032113.
We also acknowledge support from FCT (Portugal) and POPH/FSE (EC) through fellowship PD/BD/128119/2016. NCS and SCCB also acknowledge support from FCT through Investigador FCT contracts IF/00169/2012/CP0150/CT0002 and IF/01312/2014/CP1215/CT0004 respectively. BA acknowledges support from FCT through the FCT PhD programme PD/BD/135226/2017. BA also thanks the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF grant \#1829740, and also the Brinson and the Moore Foundations, his time as a fellow has benefitted this work.\ Additional figures ================== [.53]{} ![image](30ppm_bv.pdf){width=".99\linewidth"} [.43]{} ![image](30ppm_b_fitv.pdf){width=".99\linewidth"} [.53]{} ![image](100ppm_bv.pdf){width=".99\linewidth"} [.43]{} ![image](100ppm_b_fitv.pdf){width=".99\linewidth"} [.50]{} ![image](less0_01_newv.pdf){width="1\linewidth"} [.50]{} ![image](more0_015_newv.pdf){width="1\linewidth"} [.51]{} ![image](prior0_01_newv.pdf){width=".99\linewidth"} [.51]{} ![image](priorldtk_newv.pdf){width=".99\linewidth"} [^1]: Available at [ https://pypi.org/project/ellc/](https://pypi.org/project/ellc/) [^2]: $h_{f}$=1+$k_{f}$ where $k_{f}$ is the fluid second Love number for potential [@corr_boue]. Calculation of the different Love numbers can be found in @sabadini.
--- author: - | Yuan Zhang, Jason Riesa, Daniel Gillick, Anton Bakalov, Jason Baldridge, David Weiss\ Google AI Language\ [{zhangyua, riesa, dgillick, abakalov, jridge, djweiss}@google.com]{}\ bibliography: - 'paper.bib' title: 'A Fast, Compact, Accurate Model for Language Identification of Codemixed Text' --- Acknowledgments {#acknowledgments .unnumbered} =============== We thank Emily Pitler, Slav Petrov, John Alex, Daniel Andor, Kellie Webster, Vera Axelrod, Kuzman Ganchev, Jan Botha, and Manaal Faruqui for helpful discussions during this work, and our anonymous reviewers for their thoughtful comments and suggestions. We also thank Elixabete Gomez, Héctor Alcalde, Knot Pipatsrisawat, and their stellar team of linguists who helped us to annotate and curate much of our data.
---
abstract: 'Cosmological models that are locally consistent with general relativity and the standard model, in which an object transported around the universe undergoes $P, C$ and $CP$ transformations, are constructed. This leads to generalization of the gauge fields that describe electro-weak and strong interactions by enlarging the gauge groups to include anti-unitary transformations. Gedanken experiments show that if all interactions obey Einstein causality then P, C and CP cannot be violated in these models. But another model, which would violate the charge superselection rule even for an isolated system, is allowed. It is suggested that the [*fundamental*]{} physical laws must have these discrete symmetries, which are broken spontaneously, or they must be non-causal.'
address:
- |
    Raman Research Institute\
    C.V. Raman Avenue, Bangalore 560 080, India
- and
- |
    Department of Physics and Astronomy, University of South Carolina,\
    Columbia, SC 29208, USA\
    E-mail: jeeva@sc.edu
author:
- 'J. Anandan'
date: 'March 21, 97, Revised July 15, 97'
title: Global Topology and Local Violation of Discrete Symmetries
---

The great success of the standard model has provided hardly any experimental motivation to modify it at present. I consider here some interesting physical consequences of generalizing the gravitational, electromagnetic, weak and strong fields, by modifying the global topology of an appropriate Kaluza-Klein space-time. These generalizations are locally consistent with the [*causal*]{} dynamics of the standard model and general relativity. Such topologies also need to be studied if, in quantum gravity, the Feynman amplitudes for all possible topologies are summed. And yet, it will be shown by gedanken experiments around global circuits in space-time that they are incompatible with the observed violations of parity ($P$), charge conjugation ($C$), and $CP$ symmetries.
This suggests that the standard model should be modified as will be discussed at the end. Consider first a non orientable space. An example is obtained by identifying a pair of opposite faces of a rectangular box (fig. 1) continuously so that $A,B,C,D$ become identified with $A',B',C',D'$, respectively. All sections parallel to $ABA'B'$ have the topology of the Möbius strip $M^2$. Now let $AB, BC$ become infinite in length while keeping $L=AB'$ large but finite. The Cartesian product of this space with the real line $R$ is a non orientable manifold $M^4 = M^2 \times R^2$. This amounts to the identification $(0,y,z,t)\leftrightarrow (L,y,-z,t)$. We may take this manifold endowed with a flat Minkowskian metric to be our space-time, denoted $S_1$. It trivially satisfies Einstein’s field equations in the absence of matter. In the presence of matter, we may consider the Einstein–de Sitter or the Friedmann-Robertson-Walker cosmological model with zero spatial curvature, with metric [@mtw] $$ds^2\equiv g_{\mu\nu}dx^\mu dx^\nu = -c^2 dt^2 + a^2(t) (dx^2 + dy^2 + dz^2),$$ and energy-momentum tensor $$T^{\mu\nu} = (\rho +P)u^\mu u^\nu + Pg^{\mu\nu},$$ where the density $\rho$ and the pressure $P$ are constant in each hypersurface orthogonal to $u^\mu$. Then (1) and (2) satisfy Einstein’s field equations for appropriate choices of $a(t),\rho(t)$ and $P(t)$ with $u^\mu=\delta^\mu_0$. For example, for a pressure free universe (galaxies idealized as grains of dust with their random velocities neglected) $P=0$ and then $a(t)=At^{2/3}$, whereas for radiation $P={1\over 3}\rho$ and then $a(t)=Bt^{1/2}$, where $A$ and $B$ are constants [@mtw], assuming zero cosmological constant. All astrophysical evidence we have at present is consistent with (1). Again, this cosmology may be made non orientable by the identification described above (fig. 1) to obtain a space-time, denoted $S_2$ [@ei1997]. 
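The quoted power laws can be checked in one line from the Friedmann equation for the flat metric (1) (a sketch, assuming zero cosmological constant as stated above; $G$ is Newton's constant and the overall constants $A$, $B$ remain arbitrary):

```latex
\left(\frac{\dot a}{a}\right)^2=\frac{8\pi G}{3}\,\rho\,,\qquad
\begin{cases}
P=0: & \rho\propto a^{-3}\ \Rightarrow\ a^{1/2}\,da\propto dt
      \ \Rightarrow\ a(t)=A\,t^{2/3}\,,\\[4pt]
P=\tfrac13\rho: & \rho\propto a^{-4}\ \Rightarrow\ a\,da\propto dt
      \ \Rightarrow\ a(t)=B\,t^{1/2}\,.
\end{cases}
```

The two dilution laws $\rho\propto a^{-3}$ and $\rho\propto a^{-4}$ follow from energy conservation for dust and for radiation, respectively.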
Note that when the triad $OXYZ$ is taken to $O'$, identified with $O$, its $z$-axis has reversed direction compared to an identical triad which was left at $O$, while the $X$ and $Y$ axes remain the same (fig. 1). So, the triad has changed handedness around this closed curve. A left handed glove taken around any such closed curve, denoted $\Gamma$, will return as a right handed glove. Another interesting aspect of this and other space-times discussed here is that they allow for some locally conserved quantities to be globally non conserved. For example, the momentum $\bf p$ of a free particle moving in the space $M^3=M^2 \times R$ described in fig. 1 is [*locally*]{} conserved, meaning that in any orientable neighborhood containing the particle $\bf p$ is conserved. But if it goes around $\Gamma$ then its momentum component $p_z$ would have reversed. Similarly, the angular momentum $\bf J$ of a torque free gyroscope would be locally conserved. Yet if it goes around $\Gamma$ then the $z$-component $J_z$ of $\bf J$ would have reversed. Therefore, $p_z$ and $J_z$ are not globally conserved. To see how this is possible, note first that for every Killing field $\xi^\mu_a$, the law $\nabla_\nu T^{\mu\nu} =0$ implies, via Killing’s equation, the local conservation law $\nabla_\mu j_a^\mu =0$ where $j_a^\mu =T^{\mu\nu}\xi_{a\nu}$. The locally conserved momentum and angular momentum components correspond to the independent translational and rotational Killing fields of $M^3$. On defining the ‘charges’ inside an oriented manifold $V$ with boundary $\partial V$ by $Q_a = \int_V \sqrt{-g}j_a^0 d^3x$, from Gauss’ theorem, $${dQ_a\over dt} = \int_{\partial V} \sqrt{-g}j_a^i dS_i .$$ So, $Q_a$ is conserved iff the flux that is the RHS of (3) vanishes. If $V$ is taken to be the interior of the box in fig. 1, then as a particle goes out of $V$ through the end $A'B'C'D'$, because of the identification, it simultaneously comes into $V$ through the opposite end $ABCD$ in $M^3$. 
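Before continuing, it may help to spell out the step, used above, from $\nabla_\nu T^{\mu\nu}=0$ to the local conservation law $\nabla_\mu j_a^\mu=0$; it uses only the symmetry of $T^{\mu\nu}$ and Killing's equation $\nabla_\mu\xi_{a\nu}+\nabla_\nu\xi_{a\mu}=0$:

```latex
\nabla_\mu j_a^\mu
 \;=\; (\nabla_\mu T^{\mu\nu})\,\xi_{a\nu}
      \;+\; T^{\mu\nu}\,\nabla_\mu\xi_{a\nu}
 \;=\; 0 \;+\; \tfrac12\,T^{\mu\nu}
      \left(\nabla_\mu\xi_{a\nu}+\nabla_\nu\xi_{a\mu}\right)
 \;=\; 0\,,
```

where the symmetrization in the last step is permitted because $T^{\mu\nu}=T^{\nu\mu}$.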
Since the identification reverses the $z$-direction, it is clear that the contribution that this simultaneous exit and entry of the particle makes to the RHS of (3) vanishes for $Q_a = p_x, p_y, J_x, J_y$ but not for $Q_a = p_z, J_z$. Even the definitions of $p_z, J_z$ depend on the chosen neighborhoods; the above argument shows that for the chosen maximal neighborhood they are not conserved, unlike $p_x, p_y, J_x$ and $J_y$. The local $U(1)$ gauge symmetry of electromagnetism implies that the electromagnetic $U(1)$ group acts locally at each point in space-time. This naturally leads to the 5 dimensional Kaluza-Klein (KK) geometry that is obtained from space-time by replacing each point by a circle on which the $U(1)$ group acts locally, and conversely. The electromagnetic field provides a $(1-1)$ correspondence between neighboring circles, called a connection. As we go around a closed space-time curve, denoted $\gamma$, beginning and ending at a point $o$, this correspondence leads to the rotation of the circle at $o$, called the holonomy transformation of the electromagnetic connection. When a wave function is taken around this curve it is acted upon by this rotation and acquires the phase factor $\exp (-ie\oint_\gamma A_\mu dx^\mu )$, which can be experimentally observed, where $A_\mu$ is the electromagnetic potential. If $e$ is the smallest unit of charge, then these phase factors for different closed curves $\gamma$ define the electromagnetic field [@wu1975]. Then, for a given $\gamma$ and electromagnetic field, these phase factors may be used to define the various charges that replace $e$. The electromagnetic field may now be generalized by allowing the identification of the $U(1)$ circles at $o$ to be in the opposite sense so that $\gamma$ is the projection of a Klein bottle in the KK space-time. 
In particular, if $(x,y,z,t,\phi)$ are the coordinates of the KK space-time, where $\phi$ is the angular variable in the fifth dimension, consider the slab of space-time $0\le x\le L$ with the identification of its ends by the homeomorphism $(0,y,z,t,\phi)\leftrightarrow (L,y,z,t,-\phi)$. Its projection on the usual space-time may be endowed with the metric (1). In this new KK space-time, denoted $S_3$, each two dimensional surface of constant $y,z,t$ is a Klein bottle $K^2$. So, $S_3$ is topologically $K^2\times R^3$. Such a generalization amounts to enlarging the electromagnetic gauge group $U(1)$ to $O(2)$, which is generated by $SO(2) = U(1)$ and the reflection $E$ in two dimensional real Euclidean space. Although $O(2)$ is non abelian, because it is one dimensional, the gauge field is still abelian. Suppose two observers start from the same point on $\gamma$ and go around $\gamma$ and meet. Each would then claim, with equal justification, that the charges of all the particles in the other observer have changed sign. So, it is not possible to determine unambiguously whether the signs of two charges at distinct points are the same. Because it is necessary to bring these charges to the same space-time point in order to compare them, and the result would depend on the paths they take. Only the absolute value of the ratio of the charges would be meaningful. Also, charge is locally conserved because of the $U(1)$ symmetry, but is not globally conserved. Because if two charges at the same space-time point are taken along different paths and brought together again, their sum may change. This is similar to the global non conservation of momentum and angular momentum mentioned above, and can be understood in the same way by means of Gauss’ theorem. 
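At the level of the group, the charge reversal produced by an improper holonomy can be seen as follows (a sketch in the notation used here, writing a fiber rotation as $R(\alpha):\phi\mapsto\phi+\alpha$, generated by the charge operator $Q$, and the reflection as $E:\phi\mapsto-\phi$):

```latex
E\,R(\alpha)\,E^{-1}=R(-\alpha)
\quad\Longrightarrow\quad
E\,e^{\,i\alpha Q}\,E^{-1}=e^{-i\alpha Q}
\quad\Longrightarrow\quad
E\,Q\,E^{-1}=-Q\,.
```

Transport around a curve whose holonomy contains $E$ therefore conjugates the charge operator into minus itself, in agreement with the two-observer argument above.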
Also, suppose a charged particle wave function is split into two wave functions that are made to interfere around a closed curve having the property that the generalized electromagnetic holonomy transformation associated with it is an improper $O(2)$ transformation. The superposed wave function then has the form $$\psi (x^\mu,\phi) = \exp(ie\phi) \psi_1 (x^\mu) + \exp(-ie\phi) \psi_2 (x^\mu),$$ in a local gauge. Now, $\psi^*\psi$ has a non trivial $\phi$ dependence that makes it spontaneously break the $O(2)$ symmetry down to the discrete group consisting of $E$ and the identity. The charge operator is $Q=i{\partial\over \partial \phi}$. Since $\psi$ is a superposition of opposite charges, it violates the ‘charge superselection rule’. This shows that the often made claim that the $U(1)$ gauge symmetry implies the charge superselection rule is incorrect, because the $O(2)$ gauge symmetry here contains $U(1)$. When Aharonov and Susskind [@ah1967] refuted this claim, they showed how a subsystem may be in a superposition of charge eigenstates, while the entire system does not violate the charge superselection rule. An example is the BCS ground state of a superconductor in which the Cooper pairs are in a superposition of different charge eigenstates, thereby breaking the electromagnetic $U(1)$ gauge symmetry spontaneously, while the entire superconductor may have a well defined charge and thus obey the charge superselection rule. But in the present case the entire system may be in a superposition of charge eigenstates, and this is therefore a stronger violation of the rule. In the usual electromagnetic theory, $Q$ commutes with the [*interactions*]{} so that the eigenstates of $Q$ form a ‘preferred basis’ in which the density matrix is diagonal. This gives an effective charge superselection rule. 
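With the convention $Q=i\,\partial/\partial\phi$ quoted above, one checks directly that the two branches of (4) carry opposite charges and that the $\phi$ dependence of $\psi^*\psi$ is exactly the one that breaks $O(2)$:

```latex
Q\left(e^{ie\phi}\psi_1\right)=-e\,e^{ie\phi}\psi_1\,,\qquad
Q\left(e^{-ie\phi}\psi_2\right)=+e\,e^{-ie\phi}\psi_2\,,
\\[6pt]
\psi^*\psi=|\psi_1|^2+|\psi_2|^2
 +2\,\mathrm{Re}\!\left[\psi_1^*\psi_2\,e^{-2ie\phi}\right].
```

The cross term is invariant under $E:\phi\mapsto-\phi$ combined with $\psi_1\leftrightarrow\psi_2$, but not under a generic $U(1)$ rotation, which is the residual discrete symmetry noted above.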
In the present more general electromagnetic theory, because the time evolution may contain $E$, which does not commute with $Q$ that generates the electromagnetic $U(1)$, it is ‘easy’ to produce a superposition of opposite charges, as in the above example. When such a superposition interacts with an apparatus, the apparatus wave function intensity also gets modulated correspondingly in the fifth dimension. [*This would make the fifth dimension observable*]{}, like the other four dimensions. This construction may be extended to the standard model for which the gauge group is $G= U(1)\times SU(2)\times SU(3)$. The $C$ transform of a spinor $\psi$ is $\psi^C = i\gamma^2 \psi^*$, where the $*$ denotes complex conjugation or Hermitian conjugation in quantum field theory. Therefore, as $\psi \rightarrow g\psi$ under $g\in G$, $\psi^C \rightarrow g^* \psi^C$. In the above construction each Klein bottle may be replaced by a generalized Klein bottle that is closed by means of the automorphism $\alpha$ of $G$ that is the complex conjugation $\alpha (g) = g^*$ for every $g\in G$, and the automorphism $\beta$ of the spinor Lorentz group $\Lambda$ defined by $\beta (S) = i\gamma^2 S^* ( i\gamma^2)^{-1}= -\gamma^2 S^* \gamma^2$ for every $S\in \Lambda$. If each fiber is a homogeneous space $G/H$, where $H$ is a subgroup of $G$ such that $H^* = H$, then the new KK space-time, denoted $S_4$, is obtained by the identification $(0,y,z,t,Hg)\leftrightarrow (L,y,z,t,Hg^*)$. This performs a $C$ transformation on all the quantum numbers coupled to the gauge fields of $G$. Since the operational meaning of a particle is contained in all its interactions, a particle taken around $\gamma$ in $S_4$ would become its anti-particle, i.e. it would undergo a $C$ transformation. For example, when taken around $\gamma$ a neutrino will return as an anti-neutrino. 
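The covariance property $\psi^C\rightarrow g^*\psi^C$ invoked above follows in one line, since the internal transformation $g$ acts on gauge indices only and hence commutes with $\gamma^2$:

```latex
\psi\rightarrow g\,\psi
\;\Longrightarrow\;
\psi^C=i\gamma^2\psi^*\rightarrow i\gamma^2\,(g\psi)^*
 = i\gamma^2\,g^*\,\psi^* = g^*\left(i\gamma^2\psi^*\right)=g^*\,\psi^C\,.
```

This is what makes the complex conjugation automorphism $\alpha(g)=g^*$ the natural twist for implementing $C$ in the generalized Klein bottle construction.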
This leads to a generalization of the standard model in which the gauge group $G$ is enlarged to a group $\tilde G$ that is generated by $G$ acting on itself on the right and the automorphism $\alpha$. Then $\tilde G = O(2)\times \tilde{SU}(2)\times \tilde{SU}(3)$, where $\tilde{SU}(n)$ denotes the group of unitary and anti-unitary transformations on an $n$ dimensional complex vector space that have determinant $1$. Even when $H$ is trivial so that $G/H = G$, $S_4$ is not a principal fiber bundle. Because the ‘twist’ in the generalized Klein bottle prevents the definition of the right action of $G$ everywhere. But $S_4$ is an associated bundle of a principal fiber bundle with structure group $\tilde G$ over the usual space-time as the common base manifold. Again by superposing $C$ eigenstates with distinct eigenvalues the extra dimensions become observable, as in the case of $S_3$ above. A CP transformation may also be implemented physically by identifying the opposite faces of the slab according to $(0,y,z,t,Hg)\leftrightarrow (L,y,-z,t,Hg^*)$. This new KK space-time will be denoted by $S_5$. Time reversal may be implemented by the identification $(0,y,z,t)\leftrightarrow (L,y,z,-t)$ so that space-time is time non orientable. But this would violate causality. It is not necessary to go all the way around the universe to obtain the above discrete transformations. Cosmic strings, which are predicted to occur in the early universe, have been characterized by proper orthochronous Poincaré transformations of the affine holonomy group around it [@to1994; @an1996]. These solutions may be generalized to include also discrete transformations of the entire Poincaré group as holonomy, e.g. reversal of the direction along the axis of the cosmic string. The discrete holonomy transformations would require taking out the axis of the cosmic string from space-time or turning it into a singularity. 
This would constitute a generalization of the gravitational field according to an earlier definition of the gravitational field [@an1996]. A generalized gauge field ‘flux’ may also be introduced into the string by letting the gauge field holonomy around the string include the new anti-unitary transformation $\alpha$ introduced above. Except for $S_3$ (and its cosmic string analog) all the space-times discussed above are disallowed by the violation of discrete symmetries in weak interaction. In $S_1$ or $S_2$ consider two small capsules $U$ and $U'$ at the same location, each containing the apparatus for the $P$ violating experiment proposed by Lee and Yang [@le1956], and performed by Wu et al. [@wu1957]. The magnetic coil, which orients the Co nuclei placed at the center of the coil, is in the $x$-$y$ plane. When the nuclei undergo $\beta$ decay, let the intensity distribution of electrons be $f(\theta)$, where $\theta$ is the angle between the velocity of the emitted electron and the $z$-axis. Then, $f(\theta)\ne f(180^\circ -\theta)$, which violates $P$. Suppose now that the two capsules are taken along curves that form a circuit $\gamma$ such that the handedness changes during continuous transport around $\gamma$. Let there be two twins in the capsules performing the two respective experiments. When they meet again and compare their experiments they would find that the currents in the two coils in the $x$-$y$ plane are flowing in the same direction. However, the distribution of the outgoing electrons in $U'$ is $f(\theta ')=f(180^\circ -\theta)$, which would be in conflict with the distribution $f(\theta)$ obtained in the identical experiment performed in the capsule $U$. Unlike the “twin paradox” in special relativity (which is not a paradox), here there is perfect symmetry between the two twins: each twin would be justified in saying that it is the other who has undergone a $P$ transformation. But the above contradiction disallows $S_1$ and $S_2$. 
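For orientation, the angular distribution measured in the Wu experiment is not written out in the text; its standard form (quoted here as an aside, with $P_z$ the nuclear polarization along the $z$-axis and an asymmetry parameter $A\approx-1$ for $^{60}$Co) is

```latex
f(\theta)\;\propto\;1+A\,P_z\,\frac{v}{c}\,\cos\theta\,.
```

Under $P$ the electron momentum reverses while the axial polarization does not, so $\cos\theta\rightarrow-\cos\theta$; hence $f(\theta)\neq f(180^\circ-\theta)$ whenever $A\neq0$, which is precisely the asymmetry exploited in the gedanken experiment.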
Since $\beta$-decay also violates $C$, $S_4$ is also disallowed by the above type of gedanken experiment. Similarly, $S_5$ is disallowed by doing identical experiments involving Kaon decay, which violates CP, in the two capsules. But $S_3$, with the generalized $O(2)$ electromagnetic gauge field introduced above, is consistent with all known phenomena. Because the charge reversal symmetry (or C restricted to purely electromagnetic phenomena) is an exact symmetry in all known phenomena. In an expanding universe, there may not be enough time for the capsules to go all the way around the universe and meet. But in principle, we can set up a ring of a large number of capsules $\{U_1, U_2,...\}$ around the universe. Two identical capsules $V_n$, $V_{n+1}$ meet midway between two neighboring capsules $U_n$, $U_{n+1}$ at time $t=-T$; then $V_n$ meets $U_n$ and $V_{n+1}$ meets $U_{n+1}$ at $t=0>-T$. Finally, $V_n$ and $V_{n+1}$ meet again at $t=T$ to verify if the relevant experiments in $U_n$ and $U_{n+1}$ gave the same result at $t=0$. But in each of the above space-times, except $S_3$, there would then be some $n$ for which the experiments disagree, disallowing this space-time. The restriction due to the standard model on the global topology of the universe, that it should not allow $P, C$ or $CP$ relative transformations around closed circuits, is puzzling in view of the fact that the dynamics of the standard model is local and causal. I am requiring here that [*any restriction on the boundary conditions must come from the laws of physics themselves*]{}. This raises the question of how the electroweak interaction, which appears to be local and causal, could influence the global topology of space-time or vice versa. As for the possibility of the former influence, as shown by the above examples of space-times, even Einstein’s field equations do not in general determine the global topology of space-time. 
As for the possibility of the reverse influence, how could a neutron ‘know’ the global topology of space-time so that it can safely decay in a $P$ violating way without leading to the above contradiction if it were taken around the universe? It appears that the simplest way of obtaining this connection is to suppose that $P, C$ or $CP$ is not violated by the laws of physics at the most fundamental level, but that these symmetries are broken spontaneously. In the case of $P$ violation, there are then two sets of possible degenerate vacua that are associated with the two possible equivalent orientations. However, if the boundary conditions in the early universe are such that space is non orientable then neither vacuum can be chosen all the way around the universe because this would result in a mismatch. So, the reflection symmetry is either not spontaneously broken, or broken in orientable domains in which the $P$ violation may be different corresponding to the two different possible orientations. But if the boundary conditions are such that space is orientable then a vacuum with the same orientation may be chosen everywhere so that $P$ is violated in the same manner. It is emphasized that the spontaneous breaking of symmetry may occur due to local, causal physics. Another possibility is to give up Einstein causality in the fundamental physical laws, which was assumed to obtain the above contradictions. Since this principle is very well confirmed by all our experience, it appears that we could give it up only [@tachyon] in the early universe when quantum gravitational effects were important. This appears reasonable also because quantum gravity requires the quantization of space-time geometry including its causal structure, and is therefore inherently non causal and perhaps also non local. 
So, if quantum gravity violates $P, C$ and $CP$ intrinsically, then its laws could determine, in a non causal way, the global topology of the KK space-time to be compatible with these violations. Then the $P, C$ and $CP$ violating structure of the standard model would be obtained in the low energy limit of such a non causal quantum gravity. As for the first of the above two possibilities, purely left-right symmetric models with spontaneous violation of P have been proposed [@se1975]. But the above argument implies that, in this approach, $C$ and $CP$ should also be violated spontaneously. The second possibility, above, requires quantum gravity to be non causal and $P,C,CP$ violating, and would therefore heuristically guide us in the construction of a quantum theory of gravity. It has the advantage over the first possibility that the non causal nature could resolve another puzzle. This is the horizon problem [@mi1968], namely the fact that regions in the early universe which are causally unrelated nevertheless have similar properties, such as temperature and density, which appears to violate Einstein causality. If quantum gravity, which is expected to unify all the interactions, were to violate $P, T, C$ and $CP$, then it is not surprising that the electro-weak theory obtained as a low energy limit of quantum gravity should also violate these symmetries. It would equally not be surprising if the classical gravitational field, which is also a low energy limit of quantum gravity, should contain a residual violation of $P, C$ and $CP$ ( $C$ and $CP$ are the same as $PT$ and $T$, respectively, if we assume $CPT$ symmetry). It is therefore worthwhile to look for experimental evidence of violation of these discrete symmetries in the gravitational interaction [@da1976; @an1982]. I thank Yakir Aharonov, Ralph Howard, Simon Donaldson and Paule Saksidas for useful discussions. This work was supported by NSF grant PHY-9601280. [99]{} C. W. Misner, K. S. Thorne and J. A. 
Wheeler, [*Gravitation*]{} (W.H. Freeman, New York 1973). We may, alternatively, identify the three pairs of opposite faces in fig. 1 in the usual way to obtain a spatially closed orientable space [@mtw]. These slab-like spaces seem to be favored by recent evidence that the galaxy superclusters are arranged in the form of a three dimensional lattice with periodicity of approximately $120$ Mpc. J. Einasto et al., Nature [**385,**]{} 139 (9 Jan. 1997). T. T. Wu and C. N. Yang, Phys. Rev. D [**12,**]{} 3843 (1975). Y. Aharonov and L. Susskind, Phys. Rev. [**155,**]{} 1428 (1967). See also E. Lubkin, Annals of Phys. [**56,**]{} 69 (1970). K. P. Tod, Class. Quantum Grav. [**11,**]{} 1331 (1994). J. Anandan, Phys. Rev. D [**53,**]{} 779 (1996). T.D. Lee and C.N. Yang, Phys. Rev. [**104,**]{} 254 (1956). C.S. Wu, E. Ambler, R. W. Hayward, D.D. Hoppes and R.P. Hudson, Phys. Rev. [**105,**]{} 1413 (1957). An alternative way of having a non causal, although local, interaction is by means of tachyons. However, tachyons would make the vacuum unstable and therefore do not appear to be a realistic alternative. G. Senjanovic and R.N. Mohapatra, Phys. Rev. D [**12,**]{} 1502 (1975); R.N. Mohapatra and G. Senjanovic, Phys. Rev. D [**23,**]{} 165 (1981); R. Mohapatra and Andrija Rasin, Phys. Rev. D [**54,**]{} 5835 (1996). These models are based on the suggestion of left-right symmetry by J.C. Pati and A. Salam, Phys. Rev. D [**10,**]{} 275 (1974); R.N. Mohapatra and J.C. Pati, Phys. Rev. D [**11,**]{} 566, 2558 (1975), which however had a slight difference in the masses of left and right Higgs multiplets. This would result in a contradiction if space were non orientable in an appropriate gedanken experiment involving continuous transport around $\gamma$, as discussed in the present letter. W. Rindler, Mon. Not. R. Astron. Soc. [**116,**]{} 663 (1956); Charles W. Misner, Astrophys. J. [**151,**]{} 431 (1968); Charles W. Misner, Phys. Rev. Lett. [**22,**]{} 1071 (1969). N. 
D. Hari Dass, Phys. Rev. Lett. [**36,**]{} 393 (1976); Ann. Phys. [**107,**]{} 337 (1977). J. Anandan, Phys. Rev. Lett. [**48,**]{} 1660 (1982).
--- abstract: 'An algebraic cohomological characterization of a class of linearly broken Ward identities is provided. The examples of the topological vector supersymmetry and of the Landau ghost equation are discussed in detail. The existence of such linearly broken Ward identities turns out to be related to BRST exact antifield dependent cocycles with negative ghost number.' author: - | **L.C.Q.Vilar, O. S. Ventura, C.A.G. Sasaki**\ CBPF, Centro Brasileiro de Pesquisas Físicas\ Rua Xavier Sigaud 150, 22290-180 Urca\ Rio de Janeiro, Brazil\ and - | **S.P. Sorella**\ UERJ, Universidade do Estado do Rio de Janeiro\ Departamento de Física Teórica\ Instituto de Física\ Rua São Francisco Xavier, 524\ 20550-013, Maracanã, Rio de Janeiro, Brazil - '**CBPF-NF-007/97**' - '**PACS: 11.10.Gh**' title: '**Algebraic Characterization of Vector Supersymmetry in Topological Field Theories** ' --- Introduction ============= The topological theories [@bbrt] are known to be characterized, besides their BRST symmetry, by the so called *topological vector supersymmetry* [@brt; @dgs; @gms; @br]; an additional invariance possessing rather interesting properties. The first property is that the generators of the topological susy carry a Lorentz index and, together with the BRST operator, give rise to an algebra of the Wess-Zumino type which, closing on-shell on the space-time translations, allows for a supersymmetric interpretation [@brt; @dgs; @gms; @br]. The second feature of the topological susy is that it is present only after the introduction of all the ghost fields needed in order to quantize the model, *i.e.* it is an invariance of the fully quantized action and not only of its classical part, as one can check, for instance, in the case of the Schwartz type topological $BF$ models [@gms]. It should also be remarked that this last feature is unavoidably related to the choice of the gauge fixing term. 
In other words, the vector susy can exist only for certain values of the gauge parameters present in the gauge fixing condition. As an example of this feature let us mention the three dimensional Chern-Simons model for which, among the class of the linear covariant gauge fixings, the vector susy turns out to select the Landau gauge [@brt; @dgs]. The third interesting aspect of the vector susy is that, after the introduction of the antifields (or BRST external sources), the algebra between the vector susy Ward operator and the BRST operator closes off-shell on the space-time translations, without making use of the equations of motion. Moreover, the vector susy now loses the property of being an exact invariance of the fully quantized action. Rather, it yields a broken Ward identity [@dgs; @gms; @br]. It is a remarkable feature, however, that the corresponding breaking term is in fact a classical breaking, *i.e.* a breaking which is purely linear in the quantum fields. It is known that such a kind of breaking does not get renormalized by the quantum corrections and does not spoil the usefulness of the corresponding Ward identity [@book]. The latter turns out not only to be free from anomalies at the quantum level, but it plays a crucial role in order to establish the ultraviolet finiteness of the topological models [@dgs; @gms; @br]. Let us also mention that the vector susy Ward operator can be introduced in a more abstract geometrical way for a large class of gauge models [@sslt; @wos; @womss; @bss; @cqs], independently of whether or not it is related to a (linearly broken) symmetry of the action. In this case the vector Ward operator plays the role of an algebraic operator which, thanks to the fact that it decomposes the space-time derivative as a BRST anticommutator, turns out to be very useful in order to solve the descent equations associated to the BRST cohomology classes for the anomalies and the invariant counterterms. 
In addition, it allows one to encode all the relevant information (BRST transformations of the fields, BRST cohomology classes, solutions of the descent equations,...) into a unique equation which takes the suggestive form of a generalized zero curvature condition [@zcurv; @sdeseq]. All these properties, if on the one hand they make the vector susy quite interesting, on the other hand motivate further investigations about its origin. For instance, in the case of the Witten four dimensional topological Yang-Mills theory one may wonder about the possibility of performing a twist [@tym] of the generators of the corresponding N=2 supersymmetric Yang-Mills theory in order to obtain the vector susy. The situation is less clear for other topological models, especially for those belonging to the Schwartz class which are not manifestly related to an extended supersymmetric algebra. For these models the vector susy has been introduced essentially by hand [@brt; @dgs; @gms] and later on has been related to the existence of a conserved current, stemming from the fact that the energy-momentum tensor of a topological theory can be expressed as a pure BRST variation [@ms; @book]. Moreover, a general set up accounting for all the features displayed by the vector susy has not yet been completely worked out. This is the aim of this paper, *i.e.* to provide a purely algebraic cohomological characterization of the existence of the vector susy and of the related linear classical breaking term. We shall also see that it is precisely the requirement of linearity in the quantum fields of the breaking term which selects a particular set of gauge parameters, clarifying thus the relationship between the vector susy and the gauge fixing condition. In particular, we shall be able to prove that the existence of the topological vector susy turns out to be deeply related to a vector BRST invariant antifield dependent cocycle with ghost number -1. 
The existence or not of a vector supersymmetry depends purely on whether such an antifield cocycle is cohomologically trivial or not. When it is, the vector susy Ward identity is present and turns out to be necessarily accompanied by a breaking term linear in the quantum fields. On the other hand, when such an antifield cocycle is not BRST exact, the vector susy cannot be established and we are left with an example of a nontrivial antifield dependent BRST cohomology class. This algebraic framework is closely related to the cohomological reformulation of the Noether theorem given by M. Henneaux et al. [@mh]. We shall see in fact that the aforementioned vector cocycle can be related to a set of currents among which one can identify the BRST invariant energy-momentum tensor, showing then that this antifield cocycle is related to the Poincaré transformations, according to the analysis of Henneaux et al. [@mh]. Let us also mention that, recently, this vector antifield cocycle has been considered in connection with the problem of including into a unique extended Slavnov-Taylor identity additional global invariances of the action [@glob]. The paper is organized as follows. In Sect. 2 we introduce the general algebraic set up, we present the relevant properties of the vector antifield cocycle and we discuss its relation with the BRST invariant energy-momentum tensor. In Sect. 3 the pure Yang-Mills model is considered. We shall prove that in this case the vector cocycle is not trivial, identifying then a BRST cohomology class. Sect. 4 is devoted to a detailed analysis of several examples of topological models. It will be proven that in these cases the BRST triviality of the vector cocycle is at the origin of the existence of the linearly broken topological vector susy Ward identities. 
Finally, in Sect. 5 we will present another example of a linearly broken Ward identity whose contact terms are generated, in complete analogy with the case of the topological models, by the BRST exactness of an antifield dependent cocycle possessing a group index. Such a Ward identity, known as the Landau ghost equation [@bps], turns out to be related to the rigid gauge invariance. General Notations and the Vector Cocycle ======================================== In order to present the general algebraic set up let us begin by fixing the notations. We shall work in a flat $D$-dimensional space-time equipped with a set of fields generically denoted by $\left\{ \varphi ^i\right\} $, $i$ labelling the different kinds of fields needed in order to properly quantize the model, *i.e.* gauge fields, ghosts, ghosts for ghosts, etc... Following the standard quantization procedure we introduce for each field $\varphi ^i$ of ghost number $\mathcal{N}_{\varphi ^i\text{ }}$ and dimension $d_{\varphi ^i}$, the corresponding antifield $\varphi ^{i*}$ with ghost number $-\left( 1+\mathcal{N}_{\varphi ^i\text{ }}\right) $ and dimension $\left( D-d_{\varphi ^i}\right) $. We shall also assume that the set of fields $\left\{ \varphi ^i\right\} $ does not explicitly contain the antighosts and their corresponding Lagrange multipliers which, being grouped in BRST doublets, do not contribute to the BRST cohomology [@ymcoh]. Accordingly, some of the antifields $\varphi ^{i*}$ have to be understood as *shifted* antifields [@book] which already take into account the antighosts. In order to define the model it remains now to introduce the classical gauge fixed *reduced* [^1] action [@book] $\Sigma \left( \varphi ,\varphi ^{*}\right) $. 
This is done by requiring that $\Sigma $ is power-counting renormalizable and that it is the solution of the classical homogeneous Slavnov-Taylor (or master equation) identity [@bv] $$\int d^Dx\frac{\delta \Sigma }{\delta \varphi ^i}\frac{\delta \Sigma }{\delta \varphi ^{i*}}\;=\;\frac 12\mathcal{B}_\Sigma \Sigma \;=\;0\;\;, \label{slav-tayl}$$ where $\mathcal{B}_\Sigma \;$denotes the nilpotent linearized Slavnov-Taylor operator $$\mathcal{B}_\Sigma \;=\int d^Dx\,\left( \frac{\delta \Sigma }{\delta \varphi ^i}\frac \delta {\delta \varphi ^{i*}}\;+\;\frac{\delta \Sigma }{\delta \varphi ^{i*}}\frac \delta {\delta \varphi ^i}\right) \;\;, \label{lin-slav-tayl}$$ $$\mathcal{B}_\Sigma \;\mathcal{B}_\Sigma \;=\;0\;\;. \label{nilp}$$ As usual, the classical action $\Sigma \left( \varphi ,\varphi ^{*}\right) $ will be assumed to be invariant under the space-time translations, *i.e.* $$\mathcal{P}_\mu \Sigma \;=\;\int d^Dx\;\left( \partial _\mu \varphi ^i\frac{\delta \Sigma }{\delta \varphi ^i}\;+\;\partial _\mu \varphi ^{i*}\frac{\delta \Sigma }{\delta \varphi ^{i*}}\right) \;=\;0\;\;. \label{translations}$$ The classical Slavnov-Taylor identity $\left( \text{\ref{slav-tayl}}\right) $ and the translation invariance $\left( \text{\ref{translations}}\right) $ will therefore be taken as the basic starting points for the characterization of the classical action $\Sigma \left( \varphi ,\varphi ^{*}\right) $ and for the algebraic analysis which will be carried out in the next sections. Let us also remark that the requirement of the translation invariance, as expressed by the equation $\left( \text{\ref {translations}}\right) $, does not imply any further restriction on $\Sigma $ than those that are tacitly assumed in any local field theory.
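Let us note that the nilpotency $\left( \text{\ref{nilp}}\right) $ is not an independent requirement: it follows directly from the Slavnov-Taylor identity $\left( \text{\ref{slav-tayl}}\right) $. Indeed, acting twice with $\mathcal{B}_\Sigma $ on an arbitrary functional $X$ and suppressing the sign factors coming from the statistics of the fields, the terms containing second derivatives of $X$ cancel pairwise, leaving the schematic expression $$\mathcal{B}_\Sigma \mathcal{B}_\Sigma \,X\;=\;\int d^Dx\,\left( \frac \delta {\delta \varphi ^i}\left( \frac 12\mathcal{B}_\Sigma \Sigma \right) \frac{\delta X}{\delta \varphi ^{i*}}\;+\;\frac \delta {\delta \varphi ^{i*}}\left( \frac 12\mathcal{B}_\Sigma \Sigma \right) \frac{\delta X}{\delta \varphi ^i}\right) \;\;,$$ which vanishes precisely because $\Sigma $ solves the master equation $\left( \text{\ref{slav-tayl}}\right) $.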
Let us introduce now the following integrated local polynomial, linear in the antifields $\varphi ^{i*}$, of ghost number $-1$, dimension $\left( D+1\right) $, and of the vector type $$\Omega _\nu ^{-1}=\;\int d^Dx\;\left[ \omega _\nu ^{-1}\right] _{D+1}\;\equiv \int d^Dx\;\left( -\right) ^{\left( 1+\mathcal{N}_{\varphi ^i\text{ }}\right) }\varphi ^i\;\partial _\nu \varphi ^{i*}\;\;. \label{vec-cocycle}$$ It is easily proven that the above expression is $\mathcal{B}_\Sigma -$invariant. In fact, one has $$\mathcal{B}_\Sigma \;\Omega _\nu ^{-1}\;=\int d^Dx\;\left( \partial _\nu \varphi ^i\frac{\delta \Sigma }{\delta \varphi ^i}\;+\;\partial _\nu \varphi ^{i*}\frac{\delta \Sigma }{\delta \varphi ^{i*}}\right) =\mathcal{P}_\nu \Sigma \;=0\;\;, \label{Om-inv}$$ due to the translation invariance of the classical action $\Sigma \left( \varphi ,\varphi ^{*}\right) $. Having proven the invariance of $\Omega _\nu ^{-1}$ we have now to establish whether it actually identifies a cohomology class of the operator $\mathcal{B}_\Sigma $. We have in fact the following two possibilities, namely $1)$ $\Omega _\nu ^{-1}$ is $\mathcal{B}_\Sigma -$exact, *i.e.* $$\Omega _\nu ^{-1}=\mathcal{B}_\Sigma \Xi _\nu ^{-2}\;\;,\;\; \label{triviality}$$ $\Xi _\nu ^{-2}\;$being an integrated local polynomial of ghost number $-2$ and dimension $\left( D+1\right) $. $2)$ $\Omega _\nu ^{-1}$ belongs to the integrated cohomology of $\mathcal{B}_\Sigma $, $$\Omega _\nu ^{-1}\neq \mathcal{B}_\Sigma \Xi _\nu ^{-2}\;. \label{nontriv}$$ The detailed analysis of these two possibilities will be the main subject of the next Sections.
Let us limit ourselves here to underlining that while the first possibility $\left( \ref{triviality}\right) $ turns out to be a feature of the topological theories, the second one $\left( \ref{nontriv}\right) $ is typical of a Yang-Mills theory and, more generally, of any model possessing an invariant energy-momentum tensor $T_{\mu \nu }$ which cannot be written as a pure $\mathcal{B}_\Sigma -$variation, *i.e.* $$\mathcal{B}_\Sigma T_{\mu \nu }=0\;\;,\;\;T_{\mu \nu }\neq \mathcal{B}_\Sigma \Lambda _{\mu \nu }\;\;. \label{nontriv-en}$$ In this case $\Omega _\nu ^{-1}$ provides an example of a nontrivial antifield dependent cocycle of the operator $\mathcal{B}_\Sigma $, giving thus an explicit realization of the general results of [@mh]. Concerning now the first possibility$\;\left( \ref{triviality}\right) $, it is worthwhile to recall that one of the peculiar properties of the topological models is precisely that of having a nonphysical energy-momentum tensor [@tym; @bbrt], *i.e.* the energy-momentum tensor of a topological field theory $\;T_{\mu \nu }^{top}$ can be written as a pure $\mathcal{B}_\Sigma -$variation $$T_{\mu \nu }^{top}=\mathcal{B}_\Sigma \Lambda _{\mu \nu }\;\;, \label{triv-en}$$ for some local $\Lambda _{\mu \nu }$, yielding thus a more direct indication of the $\mathcal{B}_\Sigma -$exactness of the vector cocycle $\Omega _\nu ^{-1}$.
For a better understanding of the relation between $\Omega _\nu ^{-1}\;$and the energy-momentum tensor, let us write down the descent equations [@ymcoh; @mh; @book] corresponding to the integrated invariance condition $\left( \ref{Om-inv}\right) $, *i.e.* $$\begin{aligned} \mathcal{B}_{\Sigma \;}\left[ \omega _\nu ^{-1}\right] _{D+1}\; &=&\;\partial ^{\mu _1}\left[ \omega _{\mu _1\nu }^0\right] _D\;\;, \label{desc-equ} \\ \mathcal{B}_{\Sigma \;}\left[ \omega _{\mu _1\nu }^0\right] _D\; &=&\;\partial ^{\mu _2}\left[ \omega _{\left[ \mu _1\mu _2\right] \nu }^1\right] _{D-1}\;\;, \nonumber \\ \mathcal{B}_{\Sigma \;}\left[ \omega _{\left[ \mu _1\mu _2\right] \nu }^1\right] _{D-1}\; &=&\;\partial ^{\mu _3}\left[ \omega _{\left[ \mu _1\mu _2\mu _3\right] \nu }^2\right] _{D-2}\;\;, \nonumber \\ &&..........\;\;, \nonumber \\ &&..........\;\;, \nonumber \\ \mathcal{B}_{\Sigma \;}\left[ \omega _{\left[ \mu _1\mu _2...\mu _{D-1}\right] \nu }^{D-2}\right] _2\; &=&\;\partial ^{\mu _D}\left[ \omega _{\left[ \mu _1\mu _{2...}\mu _{D-1}\mu _D\right] \nu }^{D-1}\right] _1\;, \nonumber \\ \mathcal{B}_{\Sigma \;}\left[ \omega _{\left[ \mu _1\mu _{2...}\mu _{D-1}\mu _D\right] \nu }^{D-1}\right] _1\; &=&0\;\;, \nonumber\end{aligned}$$ where the $\left[ \omega _{\left[ \mu _1\mu _{2...}\mu _j\right] \nu }^{j-1}\right] _{D-j+1}$ with $\left( j=0,...,D\right) $ are local currents of ghost number $\left( j-1\right) $, antisymmetric in the lower indices$\;\left( \mu _{1,}\mu _{2,...,}\mu _j\right) $,$\;$and of dimension $\left( D-j+1\right) $. The usefulness of working with the system $\left( \ref {desc-equ}\right) \;$is due to the fact that these equations relate the local cocycle $\left[ \omega _\nu ^{-1}\right] _{D+1}$ with currents of lower dimension which can provide an easier and more transparent interpretation of the physical meaning of $\Omega _\nu ^{-1}$.
To this purpose it should be remarked that the cocycle $\left[ \omega _{\mu _1\nu }^0\right] _D\;$entering the second equation of the system $\left( \ref{desc-equ}\right) \;$has the same quantum numbers as the energy-momentum tensor, being of dimension $D$, of ghost number $0$, and possessing two Lorentz indices. In particular, from the general results on the BRST cohomology [@ymcoh] it follows that, independently of whether the cohomology of the operator $\mathcal{B}_{\Sigma \;}$is empty or not in the sectors with dimension lower than $D$, the existence of a nontrivial local cohomology in the sector of dimension $D$ and with two free Lorentz indices necessarily implies the nontriviality of the upper level of dimension $D+1$. It thus becomes apparent that the nontriviality of the BRST invariant energy-momentum tensor is deeply related to the existence of a nontrivial integrated cohomology in the sector of dimension $D+1\;$and negative ghost-number. In fact, as we shall see explicitly in the case of the Yang-Mills theory, the current$\;\left[ \omega _{\mu _1\nu }^0\right] _D$ turns out to be precisely the improved BRST invariant energy-momentum tensor. On the other hand, the use of the descent equations $\left( \ref{desc-equ}\right) $ provides a simple demonstration of the triviality of the vector cocycle $\Omega _\nu ^{-1}$ in the case of the topological theories. This is actually due to the fact that the field content of the topological models gives rise to BRST cohomology classes which are empty in the various sectors appearing in the descent equations $\left( \ref{desc-equ}\right) $, implying in particular that the corresponding energy-momentum tensor is BRST trivial, according to eq.$\left( \ref{triv-en}\right) $.
Moreover, as we shall see in detail later on, the right hand side of the equation $\left( \ref{triviality}\right) $ expressing the BRST triviality of $\Omega _\nu ^{-1}$ will provide the contact terms of the vector susy Ward identity of the topological theories. In this latter case we shall also check that the left hand side of eq.$\left( \ref{triviality}\right) $ reduces to a classical breaking, *i.e.* to a breaking purely linear in the quantum fields, showing then that the topological vector susy Ward identity is always linearly broken.

Pure Yang-Mills Theory
======================

As a first application of the general algebraic setup discussed in the previous Section, let us now prove that in the case of the pure Yang-Mills action the vector cocycle $\Omega _\nu ^{-1}$ cannot be written as an exact $\mathcal{B}_\Sigma -$term. Let us begin by considering the complete fully quantized gauge fixed Yang-Mills action which, choosing a Feynman gauge, reads $$\begin{aligned} \mathcal{S} &=&\int d^4x\;\left( -\frac 1{4g^2}F_{\mu \nu }^aF^{a\mu \nu }\;+\text{ }b^a\partial A^a+\frac \alpha 2b^ab^a\;+\;\partial ^\mu \overline{c}^a(D_\mu c)^a\right) \label{y-m-action} \\ &&+\int d^4x\;\hat{A}_\mu ^{a*}(D^\mu c)^a-\frac 12\;f_{abc}C^{*a}c^bc^c\;, \nonumber\end{aligned}$$ where $\left( c,\overline{c},b\right) \;$denote respectively the ghost, the antighost and the Lagrange multiplier fields and $$(D_\mu c)^a=\;\partial _\mu c^a\;+f_{\;bc}^a\;A_\mu ^bc^c\;\;, \label{cov-der}$$ is the covariant derivative with $f_{abc}$ the totally antisymmetric structure constants of a compact semisimple Lie group $G$. The two antifields $\left( \hat{A}^{*},C^{*}\right) \;$are introduced in order to properly define the nonlinear BRST transformations of the gauge field $A$ and of the Faddeev-Popov ghost $c$.
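Reading off the antifield couplings in $\left( \ref{y-m-action}\right) $, the BRST transformations they encode take the familiar form (a standard sketch, with signs fixed by the conventions adopted above) $$sA_\mu ^a\;=\;(D_\mu c)^a\;\;,\;\;\;\;sc^a\;=\;-\frac 12\;f_{abc}\,c^bc^c\;\;,\;\;\;\;s\overline{c}^a\;=\;b^a\;\;,\;\;\;\;sb^a\;=\;0\;\;,$$ the antighost and the Lagrange multiplier forming, as usual, a BRST doublet.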
The quantum numbers, *i.e.* the dimensions and the ghost numbers, of all the fields and antifields are assigned as in the following table

              $A$   ${\hat{A}}^{*}$   $c$   $C^{*}$   $\bar{c}$   $b$
  ----------- ----- ----------------- ----- --------- ----------- -----
  $dim$       $1$   $3$               $0$   $4$       $2$         $2$
  $gh-numb$   $0$   $-1$              $1$   $-2$      $-1$        $0$

  : dimension and ghost number[]{data-label="ym-table"}

As is well known [@book], the complete action $\mathcal{S}$ is characterized by the classical Slavnov-Taylor identity, expressing the invariance of $\left( \ref{y-m-action}\right) $ under the BRST transformations, *i.e.* $$\int d^4x\left( \frac{\delta \mathcal{S}}{\delta A_\mu ^a}\frac{\delta \mathcal{S}}{\delta \hat{A}^{*a\mu }}+\frac{\delta \mathcal{S}}{\delta c^a}\frac{\delta \mathcal{S}}{\delta C^{*a}}+b^a\frac{\delta \mathcal{S}}{\delta \overline{c}^a}\right) \;=\;0\;\;, \label{y-m-slav-tayl}$$ and by the linear gauge fixing condition [@book] $$\frac{\delta \mathcal{S}}{\delta b^a}\;=\;\partial A^a+\alpha b^a\;\;. \label{feynm-gauge-fix}$$ A further identity, the antighost equation [@book] $$\frac{\delta \mathcal{S}}{\delta \overline{c}^a}\;+\;\partial ^\mu \frac{\delta \mathcal{S}}{\delta \hat{A}^{*a\mu }}\;=\;0\;\; \label{ant-ghost-eq}$$ follows from the gauge fixing condition $\left( \ref{feynm-gauge-fix}\right) $ and from the Slavnov-Taylor identity $\left( \ref{y-m-slav-tayl}\right) $, implying that the antighost $\overline{c}$ and the antifield $\hat{A}^{*}$ can enter only through the *shifted* antifield $A^{*a\mu }\;$of ghost number $-1$ and dimension $3$ $$A^{*a\mu }\;=\;\hat{A}^{*a\mu }\;+\;\partial ^\mu \overline{c}^a\;\;.
\label{shifted-antif}$$ Introducing now the reduced Yang-Mills action $\Sigma \left( A,A^{*},c,C^{*}\right) $ defined by the identities $\left( \ref {feynm-gauge-fix}\right) $ and $\left( \ref{ant-ghost-eq}\right) $ $$\mathcal{S}\;=\;\Sigma \;+\int d^4x\;\left( \text{ }b^a\partial A^a+\frac \alpha 2b^ab^a\right) \;\;\;, \label{def-red-action}$$ *i.e.* $$\Sigma \;=\int d^4x\;\left( -\frac 1{4g^2}F_{\mu \nu }^aF^{a\mu \nu }\;+\;A^{*a\mu }(D_\mu c)^a-\frac 12\;f_{abc}C^{*a}c^bc^c\;\right) \;\;, \label{red-ym-action}$$ it is easily verified that $\Sigma $ obeys the homogeneous Slavnov-Taylor identity $$\int d^4x\left( \frac{\delta \Sigma }{\delta A_\mu ^a}\frac{\delta \Sigma }{\delta A^{*a\mu }}\;+\frac{\delta \Sigma }{\delta c^a}\frac{\delta \Sigma }{\delta C^{*a}}\;\right) =\;\frac 12\mathcal{B}_\Sigma \Sigma \;=\;0\;\;, \label{y-m-hom-st}$$ with $$\mathcal{B}_\Sigma \;=\;\int d^4x\left( \frac{\delta \Sigma }{\delta A_\mu ^a}\frac \delta {\delta A^{*a\mu }}+\frac{\delta \Sigma }{\delta A^{*a\mu }}\frac \delta {\delta A^{a\mu }}+\frac{\delta \Sigma }{\delta c^a}\frac \delta {\delta C^{*a}}\;+\frac{\delta \Sigma }{\delta C^{*a}}\frac \delta {\delta c^a}\right) \;\;, \label{y-m-linear}$$ and $$\mathcal{B}_\Sigma \mathcal{B}_\Sigma \;=\;0\;\;. \label{y-m-linear-nilp}$$ For the vector cocycle $\Omega _\nu ^{-1}$ of eq. $\left( \ref{vec-cocycle}\right) $ we have now $$\Omega _\nu ^{-1}=\;\int d^4x\;\;\left[ \omega _\nu ^{-1}\right] _5\;\equiv \;\int d^4x\;\left( c^a\partial _\nu C^{*a}-A_\mu ^a\partial _\nu A^{*a\mu }\;\right) \;, \label{ym-vect-cocycle}$$ which, according to the equation $\left( \ref{Om-inv}\right) $, turns out to be $\mathcal{B}_\Sigma -$invariant $$\mathcal{B}_\Sigma \;\Omega _\nu ^{-1}=\;\mathcal{P}_\nu \Sigma \;=0\;\;. \label{ym-Om-inv}$$ We are now ready to prove that in the case of Yang-Mills theory the vector cocycle $\Omega _\nu ^{-1}$ of eq.$\left( \ref{ym-vect-cocycle}\right) $ is nontrivial. 
Let us proceed by assuming the converse, *i.e.* let us suppose that $\Omega _\nu ^{-1}$ can be written as an exact $\mathcal{B}_\Sigma -$term $$\Omega _\nu ^{-1}=\mathcal{B}_\Sigma \Xi _\nu ^{-2}\;\;,\;\; \label{ym-triviality}$$ for some integrated local polynomial $\Xi _\nu ^{-2}\;$of ghost number $-2$ and dimension $5$. From Table 1 it follows that the most general form for $\Xi _\nu ^{-2}$ is given by $$\Xi _\nu ^{-2}\;=\beta \int d^4x\;C^{*a}A_\nu ^a\;\;, \label{ym-exactness}$$ where $\beta $ is an arbitrary free parameter which has to be fixed by the exactness condition $\left( \ref{ym-triviality}\right) $. Thus, from eq.$\left( \ref{ym-triviality}\right) $, we should have the algebraic equality $$\int d^4x\left( c^a\partial _\nu C^{*a}-A_\mu ^a\partial _\nu A^{*a\mu }\right) =\beta \int d^4x\left( C^{*a}\partial _\nu c^a+A_\nu ^a\left( D_\mu A^{*\mu }\right) ^a\;\right) \;\;. \label{ym-eq}$$ However, it is almost immediate to check that the above equation has no solution for $\beta $, showing then that $\Omega _\nu ^{-1}$ cannot be written as a pure $\mathcal{B}_\Sigma -$variation, $$\Omega _\nu ^{-1}\neq \mathcal{B}_\Sigma \Xi _\nu ^{-2}\;\;. \label{ym-nontriviality}$$ Therefore, in the case of Yang-Mills theory the vector cocycle $\Omega _\nu ^{-1}$ identifies a cohomology class of the operator $\mathcal{B}_\Sigma \;$in the sector of the integrated local polynomials of ghost number $-1$ and dimension $5$, providing thus an explicit example of an antifield dependent cohomology class of $\mathcal{B}_\Sigma $. For a better understanding of the nontriviality of the vector cocycle $\Omega _\nu ^{-1}$ it remains now to discuss its relation with the energy-momentum tensor of the model.
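To make the absence of a solution for $\beta $ in eq.$\left( \ref{ym-eq}\right) $ fully explicit, one can compare the two sides structure by structure (a schematic check, up to the sign factors coming from the statistics of the fields). An integration by parts brings the ghost-antifield terms on both sides to the common form $C^{*a}\partial _\nu c^a$, which fixes $\beta $ to a unique value. The remaining antifield terms would then have to satisfy $$\int d^4x\;\left( \partial _\nu A_\mu ^a\right) A^{*a\mu }\;\propto \;\int d^4x\;A_\nu ^a\left( D_\mu A^{*\mu }\right) ^a\;=\;\int d^4x\;\left[ -\left( \partial _\mu A_\nu ^a\right) A^{*a\mu }\;+\;f_{abc}\,A_\nu ^aA_\mu ^bA^{*c\mu }\right] \;\;,$$ which cannot hold identically, since the two sides involve the independent tensor structures $\left( \partial _\nu A_\mu \right) A^{*\mu }$ and $\left( \partial _\mu A_\nu \right) A^{*\mu }$, besides the trilinear term. No choice of $\beta $ can therefore fulfill eq.$\left( \ref{ym-eq}\right) $.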
To this purpose we analyse the descent equations corresponding to the local polynomial $\left[ \omega _\nu ^{-1}\right] _5$, *i.e.* $$\begin{aligned} \mathcal{B}_{\Sigma \;}\left[ \omega _\nu ^{-1}\right] _5\; &=&\;\partial ^{\mu _1}\left[ \omega _{\mu _1\nu }^0\right] _4\;\;, \label{ym-desc-equ} \\ \mathcal{B}_{\Sigma \;}\left[ \omega _{\mu _1\nu }^0\right] _4\; &=&\;\partial ^{\mu _2}\left[ \omega _{\left[ \mu _1\mu _2\right] \nu }^1\right] _3\;\;, \nonumber \\ \mathcal{B}_{\Sigma \;}\left[ \omega _{\left[ \mu _1\mu _2\right] \nu }^1\right] _3\; &=&\;\partial ^{\mu _3}\left[ \omega _{\left[ \mu _1\mu _2\mu _3\right] \nu }^2\right] _2\;\;, \nonumber \\ \mathcal{B}_{\Sigma \;}\left[ \omega _{\left[ \mu _1\mu _2\mu _3\right] \nu }^2\right] _2\; &=&\;\partial ^{\mu _4}\left[ \omega _{\left[ \mu _1\mu _2\mu _3\mu _4\right] \nu }^3\right] _1\;, \nonumber \\ \mathcal{B}_{\Sigma \;}\left[ \omega _{\left[ \mu _1\mu _2\mu _3\mu _4\right] \nu }^3\right] _1\; &=&0\;\;. \nonumber\end{aligned}$$  After some straightforward algebraic manipulations, for the local currents $\left( \omega _{\left[ \mu _1..\mu _j\right] \nu }^{j-1},\;j=1,..4\right) $ one finds $$\left[ \omega _{\mu _1\nu }^0\right] _4\;=\frac 1{g^2}\;\left( F_{\mu _1\sigma }^aF_\nu ^{a\;\sigma }\;-\;\frac 14g_{\mu _1\nu }F_{\rho \sigma }^aF^{a\rho \sigma }\right) +\mathcal{B}_\Sigma \left( A_{\mu _1}^{*a}A_\nu ^a\right) \;\;,\; \label{ym-en-mom}$$ $$\left[ \omega _{\left[ \mu _1\mu _2\right] \nu }^1\right] _3\;=\varepsilon _{\mu _1\mu _2}^{\;\;\;\;\;\;\;\mu _3\mu _4}\;\partial _{\mu _3}\left[ \tilde{\omega}_{\mu _4\nu }^1\right] _2\;\;, \label{ot-1}$$ $$\left[ \omega _{\left[ \mu _1\mu _2\mu _3\right] \nu }^2\right] _2\;=\;\varepsilon _{\mu _1\mu _2\mu _3}^{\;\;\;\;\;\;\;\;\;\;\;\mu _4}\mathcal{B}_\Sigma \left[ \tilde{\omega}_{\mu _4\nu }^1\right] _2\;\;+\varepsilon _{\mu _1\mu _2\mu _3}^{\;\;\;\;\;\;\;\;\;\;\;\mu _4}\partial _{\mu _4}\left[ \tilde{\omega}_\nu ^2\right] _1\;\;, \label{ot-2}$$ $$\left[ \omega 
_{\left[ \mu _1\mu _2\mu _3\mu _4\right] \nu }^3\right] _1\;=\varepsilon _{\mu _1\mu _2\mu _3\mu _4}^{\;\;\;\;\;\;\;}\mathcal{B}_\Sigma \left[ \tilde{\omega}_\nu ^2\right] _1\;\;, \label{ot-3}$$ with $\left[ \tilde{\omega}_{\mu _4\nu }^1\right] _2\;$and $\left[ \tilde{\omega}_\nu ^2\right] _1$ arbitrary local polynomials. From the above expressions we observe that while the last three currents are trivial solutions of the descent equations $\left( \ref{ym-desc-equ}\right) $, the first one, *i.e.* $\left[ \omega _{\mu _1\nu }^0\right] _4$, yields the familiar expression of the BRST invariant improved energy-momentum tensor which, as is well known, belongs to the cohomology of $\mathcal{B}_\Sigma $ [@ymcoh]. One thus sees that, as already remarked in Sect. 1, the existence of a nontrivial invariant energy-momentum tensor is at the origin of the nontriviality of the vector cocycle $\Omega _\nu ^{-1}$. Let us conclude this Section by remarking that the exact term $\mathcal{B}_\Sigma \left( A_{\mu _1}^{*a}A_\nu ^a\right) $ which naturally appears in the right hand side of the equation $\left( \ref {ym-en-mom}\right) $ is needed in order to ensure the off-shell conservation of the improved Yang-Mills energy-momentum tensor $T_{\mu \nu }^{YM}=F_{\mu \sigma }^aF_\nu ^{a\;\sigma }-\frac 14g_{\mu \nu }F_{\rho \sigma }^aF^{a\rho \sigma }$. Of course, the same conclusions hold if the pure Yang-Mills action is supplemented with the introduction of matter fields.

The Case of the Topological Field Theories
==========================================

Having proven the nontriviality of $\Omega _\nu ^{-1}$ for Yang-Mills type theories, let us now turn to analyse the vector cocycle $\left( \ref {vec-cocycle}\right) $ in the context of the topological models.
In this case, as already mentioned in Sect. 2, it turns out that $\Omega _\nu ^{-1}$ is always BRST exact, *i.e.* $$\Omega _\nu ^{-1}=\mathcal{B}_\Sigma \Xi _\nu ^{-2}\;\;,\;\; \label{top-triviality}$$ for some integrated local field polynomial $\Xi _\nu ^{-2}$ with ghost number $-2$. The reason for the exactness of $\Omega _\nu ^{-1}$ for the topological theories lies in their field content which, as proven by several authors [@osv; @kk; @bci; @dmpw; @gms; @br; @book], does not allow for nontrivial BRST cohomology classes with free Lorentz indices, as the ones needed in order to have nontrivial solutions of the descent equations $\left( \ref{desc-equ}\right) $. As is well known, the topological models can be basically divided in two classes [@bbrt], yielding respectively the so called cohomological and Schwartz type theories, both having a BRST trivial energy-momentum tensor. The models belonging to the first class are identified by the fact that the gauge fixed classical action can be expressed as a pure BRST variation and that it is invariant under the so called topological shift symmetry [@bbrt; @osv; @kk; @bci; @dmpw; @bs; @lp]. In the second case the invariant action is not a pure BRST variation, although it depends on the metric tensor only through the unphysical gauge fixing term. Examples of theories belonging to the first class are given by Witten’s topological Yang-Mills theory [@tym] and by the topological sigma model [@tsm]. The three-dimensional Chern-Simons model [@bbrt; @brt; @dgs] and the $BF$ systems [@bbrt; @gms] provide examples of Schwartz type topological theories. Without entering into the details, let us limit ourselves here to mentioning that in the case of the cohomological models the topological shift symmetry implies the emptiness of the BRST cohomology[^2] [@bbrt; @br; @osv; @kk; @bci] and therefore the triviality of the vector cocycle $\Omega _\nu ^{-1}$.
Concerning now the Schwartz type models, it can be proven that the field content of these theories allows for nontrivial BRST cohomology classes [@dgs; @gms; @book]. Moreover, as observed in [@zcurv], these models can be formulated in a purely geometrical way due to the fact that all the fields (gauge fields, ghosts, ghosts for ghosts, etc...) can be viewed as being the components of a generalized gauge connection obeying a zero curvature condition. This structure, also called complete *ladder* structure, implies that all nontrivial BRST cohomology classes can be identified with invariant polynomials built up with the undifferentiated dimensionless scalar ghosts present in the model [@dgs; @gms; @book]. In particular, from this result it follows that the BRST cohomology classes entering the descent equations $\left( \ref{desc-equ}\right) $ are empty, meaning thus that also in the case of the Schwartz type topological theories the vector cocycle $\Omega _\nu ^{-1}$ is BRST exact. Although the equation $\left( \ref{top-triviality}\right) $ might seem devoid of any interesting information because of its BRST exactness, it turns out however that its content is far from being irrelevant. In fact, as we shall see explicitly in the models considered below, the right hand side of the eq.$\left( \ref{top-triviality}\right) $, due to the form $\left( \ref {lin-slav-tayl}\right) $ of the operator $\mathcal{B}_\Sigma $, is easily seen to provide the contact terms of a Ward identity. This identity, due to the presence of the vector cocycle $\Omega _\nu ^{-1}$ in the left hand side of $\left( \ref{top-triviality}\right) $, cannot express an exact invariance of the theory. Instead, eq.$\left( \ref{top-triviality}\right) \;$will have the meaning of a broken Ward identity. Moreover, it is not difficult to convince oneself that such a breaking term is in fact a classical breaking, *i.e.* a breaking which is linear in the quantum fields.
Therefore it does not get renormalized by the radiative corrections and does not spoil the usefulness of the corresponding broken Ward identity. As one can easily understand, the unavoidable existence of this classical breaking term is a consequence of the fact that the general expression of the vector cocycle $\Omega _\nu ^{-1}$ given in eq.$\left( \ref{vec-cocycle}\right) $ is linear in the quantum fields, apart from possible quadratic terms coming from the fact that some of the antifields $\varphi ^{i*}$ have been shifted in order to take into account the antighosts. However, we shall check that these quadratic terms can be handled in a very simple way, being reinterpreted as pure contact terms by means of an appropriate choice of the nonphysical gauge parameters present in the gauge condition. This means that the requirement of having breaking terms at most linear in the quantum fields will fix the gauge parameters, implying therefore that the corresponding Ward identity is consistent with the Quantum Action Principles [@qap] only for certain classes of gauge fixings. We shall give explicit examples of this feature later on in the cases of the three dimensional Chern-Simons model and of the Landau ghost equation derived in Sect. 5. The equation $\left( \ref{top-triviality}\right) $ has thus the meaning of a purely algebraic cohomological characterization of a Ward identity. In fact, once one is able to prove that the vector cocycle $\Omega _\nu ^{-1}$ is trivial, one knows automatically that the equation $\left( \ref {top-triviality}\right) $ has to be necessarily satisfied for some local polynomial $\Xi _\nu ^{-2}$. The strategy to be followed now is almost trivial. One writes down the most general expression for $\Xi _\nu ^{-2}$ compatible with the dimension of the space-time and with the field content of the model under consideration. Such an expression will, in general, depend on a set of free global coefficients.
These coefficients are fixed by demanding that eq.$\left( \ref{top-triviality}\right) $ is valid. Moreover, recalling that the functional form of the operator $\mathcal{B}_\Sigma $ (see eq. $\left( \ref{lin-slav-tayl}\right) $) depends on the reduced action $\Sigma $, it is apparent that the coefficients of $\Xi _\nu ^{-2}$, as well as the contact terms of the corresponding Ward identity, are uniquely determined by the form of the classical action $\Sigma $. This means that, whenever the cocycle $\Omega _\nu ^{-1}$ is trivial, the equation $\left( \ref{top-triviality}\right) $ tells us that the classical action $\Sigma $, in addition to the Slavnov-Taylor identity, obeys a further Ward identity whose contact terms are precisely given by the equation $\left( \ref{top-triviality}\right) $, providing thus a simple mechanism for an algebraic characterization of a new linearly broken symmetry of the action. Of course such an additional Ward identity will have the same quantum numbers as the vector cocycle $\Omega _\nu ^{-1}$, *i.e.* it will carry a Lorentz index and will have a negative ghost number. This vector identity, always present in the topological field theories, is known in the literature as the topological vector supersymmetry [@brt; @dgs; @gms; @br]. It is important at this point to spend a few words on the possibility of generalizing this purely algebraic mechanism in order to find other kinds of unknown symmetries eventually present in the model. What seems to emerge from the previous analysis is that the antifield dependent cocycles which are relevant for the existence of a *class of linearly broken* new Ward identities are those which are linear in the fields $\left\{ \varphi ^i\right\} $ and in the antifields $\left\{ \varphi ^{i*}\right\} $ and that are BRST exact.
It is in fact this last property which, when cast in the form of the equation $\left( \ref{top-triviality}\right) $, allows us to identify the contact terms of the corresponding Ward identity thanks to the form of the linearized Slavnov-Taylor operator. It is worthwhile to remark that an equation of the kind of $\left( \ref{top-triviality}\right) $ implies that the associated Ward identity is always broken, due to the existence of a nonvanishing left hand side term. Moreover, since this term is linear in the quantum fields, the resulting breaking is classical. In the next Section we will discuss another interesting example of a classically linearly broken Ward identity, associated to the rigid gauge invariance, whose contact terms can be characterized in a purely algebraic way by means of an equation of the type of $\left( \ref{top-triviality}\right) $. Let us focus, for the time being, on the discussion of the vector susy Ward identity in the case of the topological field theories. In order to present in an explicit way the previous algebraic mechanism, we shall analyse in detail the examples of the three dimensional Chern-Simons gauge model, of the four dimensional $BF$ model and of the so called two dimensional $b$-$c$ ghost system. The argument can be easily repeated and adapted to other kinds of topological models, such as Witten’s cohomological theories and the higher dimensional $BF$ models.
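Before turning to the examples, let us also recall a well known feature of the topological vector supersymmetry [@brt; @dgs]: the Ward operator associated with it closes, in the anticommutator with the linearized Slavnov-Taylor operator, on the space-time translations, *i.e.* schematically $$\left\{ \mathcal{W}_\nu ,\mathcal{B}_\Sigma \right\} \;\sim \;\mathcal{P}_\nu \;\;,$$ up to terms proportional to the equations of motion and to the gauge condition. This algebra of the Wess-Zumino type is at the origin of the remarkable perturbative finiteness properties of several topological models.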
The three dimensional Chern-Simons model
----------------------------------------

Using the same notations as in Sect. 3, for the complete three dimensional Chern-Simons action [@bbrt] quantized in the Landau gauge we have $$\mathcal{S}=\mathcal{S}_{cs}+\int d^3x\;\left( b^a\partial A^a+\partial ^\mu \overline{c}^a(D_\mu c)^a+\;\hat{A}_\mu ^{*a}(D^\mu c)^a-\frac 12\;f_{abc}C^{*a}c^bc^c\right) \;, \label{c-s-quant}$$ where $\mathcal{S}_{cs}\;$is given by $$\mathcal{S}_{cs}=-\frac k{4\pi }\int d^3x\;\varepsilon ^{\mu \nu \rho }\left( A_\mu ^a\partial _\nu A_\rho ^a+\frac 13\;f_{abc}\text{ }A_\mu ^aA_\nu ^bA_\rho ^c\right) \;, \label{c-s-action}$$ $k$ identifying the inverse of the coupling constant. The fields and the antifields have now the following quantum numbers

              $A$   ${\hat{A}}^{*}$   $c$   $C^{*}$   $\bar{c}$   $b$
  ----------- ----- ----------------- ----- --------- ----------- -----
  $dim$       $1$   $2$               $0$   $3$       $1$         $1$
  $gh-numb$   $0$   $-1$              $1$   $-2$      $-1$        $0$

  : dimension and ghost number[]{data-label="cstable"}

For the classical Slavnov-Taylor identity one has $$\int d^3x\left( \frac{\delta \mathcal{S}}{\delta A_\mu ^a}\frac{\delta \mathcal{S}}{\delta \hat{A}^{*a\mu }}+\frac{\delta \mathcal{S}}{\delta c^a}\frac{\delta \mathcal{S}}{\delta C^{*a}}+b^a\frac{\delta \mathcal{S}}{\delta \overline{c}^a}\right) \;=\;0\;\;.
\label{c-s-slav-tayl}$$ As before, making use of the Landau gauge condition $$\frac{\delta \mathcal{S}}{\delta b^a}\;=\;\partial A^a\;\;, \label{landau-gauge-fix}$$ and of the antighost equation $$\frac{\delta \mathcal{S}}{\delta \overline{c}^a}\;+\;\partial ^\mu \frac{\delta \mathcal{S}}{\delta \hat{A}^{*a\mu }}\;=\;0\;\;, \label{c-s-ant-ghost-eq}$$ for the reduced Chern-Simons action $\Sigma \left( A,A^{*},c,C^{*}\right) $ $$\mathcal{S}\;=\;\Sigma \;+\int d^3x\;\text{ }b^a\partial A^a\;\;, \label{c-s-def-red-action}$$ we obtain $$\Sigma \;=\mathcal{S}_{cs}+\int d^3x\;\left( A_\mu ^{a*}(D^\mu c)^a-\frac 12\;f_{abc}C^{*a}c^bc^c\;\right) \;, \label{red-cs-action}$$ where, as usual, $A_\mu ^{a*}\;$is the shifted antifield $$A^{*a\mu }\;=\;\hat{A}^{*a\mu }\;+\;\partial ^\mu \overline{c}^a\;\;. \label{shifted-cs-antif}$$ Finally, taking into account the eqs. $\left( \ref{landau-gauge-fix}\right) $, $\left( \ref{c-s-ant-ghost-eq}\right) $ and the expression $\left( \ref {red-cs-action}\right) $, the Slavnov-Taylor identity $\left( \ref {c-s-slav-tayl}\right) $ becomes $$\int d^3x\left( \frac{\delta \Sigma }{\delta A_\mu ^a}\frac{\delta \Sigma }{\delta A^{*a\mu }}\;+\frac{\delta \Sigma }{\delta c^a}\frac{\delta \Sigma }{\delta C^{*a}}\right) =\;\frac 12\mathcal{B}_\Sigma \Sigma \;=\;0\;\;, \label{c-s-hom-st}$$ with $$\mathcal{B}_\Sigma \;=\;\int d^3x\left( \frac{\delta \Sigma }{\delta A_\mu ^a}\frac \delta {\delta A^{*a\mu }}+\frac{\delta \Sigma }{\delta A^{*a\mu }}\frac \delta {\delta A^{a\mu }}+\frac{\delta \Sigma }{\delta c^a}\frac \delta {\delta C^{*a}}\;+\frac{\delta \Sigma }{\delta C^{*a}}\frac \delta {\delta c^a}\right) \;\;, \label{c-s-linear}$$ and $$\mathcal{B}_\Sigma \mathcal{B}_\Sigma \;=\;0\;\;. 
\label{c-s-linear-nilp}$$ Repeating the same procedure as in the case of Yang-Mills theory, for the vector cocycle $\Omega _\nu ^{-1}$ we can write $$\Omega _\nu ^{-1}=\;\int d^3x\;\;\left[ \omega _\nu ^{-1}\right] _4\;\equiv \;\int d^3x\;\left( c^a\partial _\nu C^{*a}-A_\mu ^a\partial _\nu A^{*a\mu }\;\right) \;, \label{cs-vect-cocycle}$$ and $$\mathcal{B}_\Sigma \;\Omega _\nu ^{-1}=\mathcal{P}_\nu \Sigma \;=0\;\;. \label{Om-inv1}$$ In this case, as already mentioned, $\Omega _\nu ^{-1}$ turns out to be exact $$\Omega _\nu ^{-1}=\mathcal{B}_\Sigma \Xi _\nu ^{-2}\;, \label{cs-exact}$$ where $\Xi _\nu ^{-2}$ is an integrated local polynomial of dimension $4$ and ghost number $-2$. From Table $\ref{cstable}$ it follows that the most general expression for $\Xi _\nu ^{-2}$ is given by $$\Xi _\nu ^{-2}=\int d^3x\;\left( \gamma C^{*a}A_\nu ^a+\;\frac \beta 2\varepsilon _{\nu \sigma \tau }A^{*a\sigma }A^{*a\tau }\right) \;, \label{cs-xi-cocycle}$$ where $\gamma $ and $\beta $ are arbitrary free parameters. Using now the expressions of the linearized Slavnov-Taylor operator $\left( \ref {c-s-linear}\right) $ and of the reduced Chern-Simons action $\left( \ref {red-cs-action}\right) $, for the right hand side of eq. $\left( \ref {cs-exact}\right) $ we have $$\mathcal{B}_\Sigma \Xi _\nu ^{-2}\;=\int d^3x\;\left( \gamma A_\nu ^a\frac{\delta \Sigma }{\delta c^a}+\gamma C^{*a}\frac{\delta \Sigma }{\delta A^{*a\nu }}-\beta \varepsilon _{\nu \sigma \tau }A^{*a\sigma }\frac{\delta \Sigma }{\delta A_\tau ^a}\right) \;.
\label{cs-xi-express}$$ Comparing then both sides of eq.$\left( \ref{cs-exact}\right) $, for the coefficients $\gamma $ and $\beta $ we get $$\gamma =-1\;,\;\;\;\;\;\beta =\frac{2\pi }k\;, \label{cs-coeff}$$ so that eq.$\left( \ref{cs-exact}\right) $ can be rewritten as $$\int d^3x\;\left( A_\nu ^a\frac{\delta \Sigma }{\delta c^a}+C^{*a}\frac{\delta \Sigma }{\delta A^{*a\nu }}+\frac{2\pi }k\varepsilon _{\nu \sigma \tau }A^{*a\sigma }\frac{\delta \Sigma }{\delta A_\tau ^a}+c^a\partial _\nu C^{*a}-A_\mu ^a\partial _\nu A^{*a\mu }\right) =0\;. \label{cs-red-ward-id}$$ Finally, moving from the reduced action $\Sigma \;$to the complete action $\mathcal{S}$ of eq. $\left( \ref{c-s-quant}\right) $ and making use of the gauge condition $\left( \ref{landau-gauge-fix}\right) $, the identity $\left( \ref{cs-red-ward-id}\right) $ can be cast in the form of a linearly broken Ward identity, namely $$\mathcal{W}_\nu \mathcal{S=}\Delta _\nu ^{cl}\;, \label{cs-susy}$$ with $$\mathcal{W}_\nu =\int d^3x\left( A_\nu ^a\frac \delta {\delta c^a}+C^{*a}\frac \delta {\delta \hat{A}^{*a\nu }}+\frac{2\pi }k\varepsilon _{\nu \sigma \tau }\left( \hat{A}^{*a\sigma }+\partial ^\sigma \overline{c}^a\right) \frac \delta {\delta A_\tau ^a}+\partial _\nu \overline{c}^a\frac \delta {\delta b^a}\right) \;, \label{cs-W-op}$$ and $$\Delta _\nu ^{cl}=\int d^3x\;\left( C^{*a}\partial _\nu c^a-\hat{A}^{*a\mu }\partial _\nu A_\mu ^a\;-\frac{2\pi }k\varepsilon _{\nu \sigma \tau }\hat{A}^{*a\sigma }\partial ^\tau b^a\right) \;. \label{cs-breaking}$$ The equation $\left( \ref{cs-susy}\right) $ is easily recognized to be the well known topological vector susy Ward identity of the three dimensional Chern-Simons theory [@brt; @dgs]. Let us conclude this section by observing that, had we adopted a Feynman gauge instead of the Landau gauge fixing condition (see eq.$\left( \ref {feynm-gauge-fix}\right) $), the right hand side of the eq.
$\left( \ref {cs-susy}\right) $ would have been modified by the additional term $$\alpha \int d^3x\;b^a\partial _\nu \overline{c}^a\;, \label{cs-quadr-break}$$ which, being quadratic in the quantum fields, would have spoiled the usefulness of the identity $\left( \ref{cs-susy}\right) $. We see therefore that, as already remarked, the requirement that the breaking term $\Delta _\nu ^{cl}$ is a classical breaking, *i.e.* at most linear in the quantum fields, forces the gauge parameter $\alpha $ to vanish, *i.e.* $\alpha =0$, thus picking out the Landau gauge. The four dimensional BF model ----------------------------- As the second example, we shall present the case of the four dimensional $BF$ model [@bbrt], whose classical invariant action is given by $$\mathcal{S}_{BF}=-\frac 14\int d^4x\;\varepsilon ^{\mu \nu \rho \sigma }F_{\mu \nu }^aB_{\rho \sigma }^a\;. \label{bf-action}$$ The quantization of this model requires the Batalin-Vilkovisky procedure [@bv] due to the presence of ghosts for ghosts. We shall limit ourselves here to reporting the final result, referring the reader to the numerous references [@bbrt] for the technical details. In particular, using the same notation as ref. [@gms], in order to gauge fix the invariant action $\left( \ref{bf-action}\right) $ we introduce a set of Lagrangian multipliers $\left( b^a,h_\mu ^a,\omega ^a,\lambda ^a\right) $, a set of antighosts $\left( \overline{c}^a,\overline{\xi }_\mu ^a,\overline{\phi }^a,e^a\right) $, and a triple of ghosts $\left( c^a,\xi _\mu ^a,\phi ^a\right) $. For the gauge fixing action $\mathcal{S}_{gf}$ we have $$\begin{array}{ll} \mathcal{S}_{gf}\;=\displaystyle\int d^4x & \left( b^a\partial A^a-\partial ^\mu \overline{c}^a(D_\mu c)^a+h_\nu ^a\partial _\mu B^{a\mu \nu }+\omega ^a\partial \xi ^a+h^{a\mu }\partial _\mu e^a+\omega ^a\lambda ^a\right.
\\ & -\partial ^\mu \overline{\phi }^a\left( (D_\mu \phi )^a+f_{abc}c^b\xi _\mu ^c\right) +\displaystyle\frac 12f_{abc}\varepsilon ^{\mu \nu \rho \sigma }(\partial _\mu \overline{\xi }_\nu ^a)\ (\partial _\rho \overline{\xi }_\sigma ^b)\phi ^c \\ & \left. -\partial ^\mu \overline{\xi }^{a\nu }\left( (D_\mu \xi _\nu )^a-(D_\nu \xi _\mu )^a+f_{abc}B_{\mu \nu }^bc^c\right) -\lambda ^a\partial \overline{\xi }^a\;\right) \;\;. \end{array} \label{bf-gauge-fix}$$ The ghost numbers and the dimensions of all the fields and ghosts are assigned as follows

                $A$   ${B}$   $c$   $\xi $   $\phi $   $\bar{c}$   $\overline{\xi }$   $\overline{\phi }$   $e$   $b$   $h$    $\omega $   $\lambda $
  ----------- ----- ------- ----- -------- --------- ----------- ------------------- -------------------- ----- ----- ------ ----------- ------------
  $dim$         $1$   $2$     $0$   $1$      $0$       $2$         $1$                 $2$                  $2$   $2$   $1$    $2$         $2$
  $gh-numb$     $0$   $0$     $1$   $1$      $2$       $-1$        $-1$                $-2$                 $0$   $0$   $0 $   $-1$        $1$

  : dimension and ghost number[]{data-label="bftable"}

Introducing now a set of antifields $\left( \;\hat{A}^{*a\mu },\hat{B}^{*a\mu \nu },C^{*a},\phi ^{*a},\hat{\xi}^{*a\mu }\right) $ associated respectively with the fields $\left( A^{a\mu },B^{a\mu \nu },c^a,\phi ^a,\xi ^{a\mu }\right) $, with couplings given by $$\begin{array}{ll} \mathcal{S}_{ext}=\displaystyle\int d^4x & \left( \displaystyle\frac 12\hat{B}^{*a\mu \nu }\left( (D_\nu \xi _\mu )^a-(D_\mu \xi _\nu )^a-f_{abc}B_{\mu \nu }^bc^c+f_{abc}\varepsilon _{\mu \nu \rho \sigma }\partial ^\rho \overline{\xi }^{b\sigma }\phi ^c\right) \right. \\ & +\hat{\xi}^{*a\mu }\left( (D_\mu \phi )^a+f_{abc}c^b\xi _\mu ^c\right) +\displaystyle\frac 18f_{abc}\varepsilon _{\mu \nu \rho \sigma }\hat{B}^{*a\mu \nu }\hat{B}^{*b\rho \sigma }\phi ^c \\ & \left.
-\hat{A}^{*a\mu }(D_\mu c)^a+\displaystyle\frac 12f_{abc}C^{*a}c^bc^c+f_{abc}\phi ^{*a}c^b\phi ^c\right) \;\;\;, \end{array} \label{bf-ext-fields}$$

              $\hat{A}^{*}$   $\hat{B}^{*}$   $C^{*}$   $\phi ^{*}$   $\hat{\xi}^{*}$
  ----------- --------------- --------------- --------- ------------- -----------------
  $dim$       $3$             $2$             $4$       $4$           $3$
  $gh-numb$   $-1$            $-1$            $-2$      $-3$          $-2$

  : dimension and ghost number[]{data-label="bf-ext-table"}

we have that the complete action $\mathcal{S}$ $$\mathcal{S=S}_{BF}+\mathcal{S}_{gf}+\mathcal{S}_{ext}\;\;, \label{bf-comp-act}$$ obeys the following Slavnov-Taylor identity $$\begin{array}{ll} \displaystyle\int d^4x & \left( \displaystyle\frac{\delta \mathcal{S}}{\delta A_\mu ^a}\displaystyle\frac{\delta \mathcal{S}}{\delta \hat{A}^{*a\mu }}+\displaystyle\frac{\delta \mathcal{S}}{\delta c^a}\displaystyle\frac{\delta \mathcal{S}}{\delta C^{*a}}+\displaystyle\frac 12\displaystyle\frac{\delta \mathcal{S}}{\delta B_{\mu \nu }^a}\displaystyle\frac{\delta \mathcal{S}}{\delta \hat{B}^{*a\mu \nu }}+\displaystyle\frac{\delta \mathcal{S}}{\delta \phi ^a}\displaystyle\frac{\delta \mathcal{S}}{\delta \phi ^{*a}}\right. \\ & \left. +\displaystyle\frac{\delta \mathcal{S}}{\delta \xi _\mu ^a}\displaystyle\frac{\delta \mathcal{S}}{\delta \hat{\xi}^{*a\mu }}+h_\mu ^a\displaystyle\frac{\delta \mathcal{S}}{\delta \overline{\xi }_\mu ^a}+b^a\displaystyle\frac{\delta \mathcal{S}}{\delta \overline{c}^a}+\omega ^a\displaystyle\frac{\delta \mathcal{S}}{\delta \overline{\phi }^a}+\lambda ^a\displaystyle\frac{\delta \mathcal{S}}{\delta e^a}\right) \;\;=\;0\;.
\end{array} \label{bf-slav-tayl}$$ In order to define the reduced action for the $BF$ model let us write down the gauge-fixing conditions $$\begin{array}{ll} \displaystyle\frac{\delta \mathcal{S}}{\delta b^a}\;=\;\partial A^a\;\;, & \displaystyle\frac{\delta \mathcal{S}}{\delta h^{a\mu }}=\partial _\mu e^a\;+\;\partial ^\nu B_{\nu \mu \;}^a,\;\;\; \\ & \\ \displaystyle\frac{\delta \mathcal{S}}{\delta \omega ^a}=\;\lambda ^a\;+\;\partial \xi ^a\;, & \displaystyle\frac{\delta \mathcal{S}}{\delta \lambda ^a}\;=\;-\partial \overline{\xi }^a\;-\;\omega ^a\;. \end{array} \label{bf-gauge-cond}$$ Commuting now the above conditions with the Slavnov-Taylor identity $\left( \ref{bf-slav-tayl}\right) $ we get the antighost equations $$\begin{array}{ll} \displaystyle\frac{\delta \mathcal{S}}{\delta \overline{c}^a}\;+\;\partial ^\mu \displaystyle\frac{\delta \mathcal{S}}{\delta \hat{A}^{*a\mu }}\;=\;0\;, & \displaystyle\frac{\delta \mathcal{S}}{\delta \overline{\phi }^a}\;-\;\partial ^\mu \displaystyle\frac{\delta \mathcal{S}}{\delta \hat{\xi}^{*a\mu }}\;=\;0\;\;, \\ & \\ \displaystyle\frac{\delta \mathcal{S}}{\delta e^a}\;=\;-\partial h^a\;\;, & \displaystyle\frac{\delta \mathcal{S}}{\delta \overline{\xi }^{a\nu }}\;+\;\partial ^\mu \displaystyle\frac{\delta \mathcal{S}}{\delta \hat{B}^{*a\mu \nu }}\;=\;-\partial _\nu \lambda ^a\;\;, \end{array} \label{bf-antgh-eq}$$ so that, introducing the following shifted antifields $$\begin{aligned} A^{*a\mu } &=&\;\;\hat{A}^{*a\mu }+\partial ^\mu \overline{c}^a\;\;,\;\;\xi ^{*a\mu }=\hat{\xi}^{*a\mu }-\partial ^\mu \overline{\phi }^a\;, \label{bf-shif-antf} \\ B^{*a\mu \nu } &=&\hat{B}^{*a\mu \nu }+\left( \partial ^\mu \overline{\xi }^{a\nu }-\partial ^\nu \overline{\xi }^{a\mu }\right) \;, \nonumber\end{aligned}$$ for the reduced $BF\;$action $\Sigma $ we get $$\mathcal{S}=\Sigma +\displaystyle\int d^4x\left( b^a\partial A^a+h_\nu ^a\partial _\mu B^{a\mu \nu }+\omega ^a\partial \xi ^a+h^{a\mu }\partial _\mu e^a+\omega ^a\lambda
^a-\lambda ^a\partial \overline{\xi }^a\right) \;, \label{bf-act-red}$$ and $$\begin{array}{ll} \Sigma =\displaystyle\int d^4x & \left( -\displaystyle\frac 14\;\varepsilon ^{\mu \nu \rho \sigma }F_{\mu \nu }^aB_{\rho \sigma }^a\right. \; \\ & +\displaystyle\frac 12B^{*a\mu \nu }\left( (D_\nu \xi _\mu )^a-(D_\mu \xi _\nu )^a-f_{abc}B_{\mu \nu }^bc^c+f_{abc}\varepsilon _{\mu \nu \rho \sigma }\partial ^\rho \overline{\xi }^{b\sigma }\phi ^c\right) \\ & +\xi ^{*a\mu }\left( (D_\mu \phi )^a+f_{abc}c^b\xi _\mu ^c\right) +\displaystyle\frac 18f_{abc}\varepsilon _{\mu \nu \rho \sigma }B^{*a\mu \nu }B^{*b\rho \sigma }\phi ^c \\ & \left. -A^{*a\mu }(D_\mu c)^a+\displaystyle\frac 12f_{abc}C^{*a}c^bc^c+f_{abc}\phi ^{*a}c^b\phi ^c\right) \;. \end{array} \label{bf-red-action}$$ As usual, the reduced action $\Sigma $ obeys the homogeneous Slavnov-Taylor identity $$\begin{array}{ll} \frac 12\mathcal{B}_\Sigma \Sigma =0=\displaystyle\int d^4x & \left( \displaystyle\frac{\delta \Sigma }{\delta A_\mu ^a}\displaystyle\frac{\delta \Sigma }{\delta A^{*a\mu }}+\displaystyle\frac{\delta \Sigma }{\delta c^a}\displaystyle\frac{\delta \Sigma }{\delta C^{*a}}+\displaystyle\frac 12\displaystyle\frac{\delta \Sigma }{\delta B_{\mu \nu }^a}\displaystyle\frac{\delta \Sigma }{\delta B^{*a\mu \nu }}\right. \\ & \\ & \left.
+\displaystyle\frac{\delta \Sigma }{\delta \phi ^a}\displaystyle\frac{\delta \Sigma }{\delta \phi ^{*a}}+\displaystyle\frac{\delta \Sigma }{\delta \xi _\mu ^a}\displaystyle\frac{\delta \Sigma }{\delta \xi ^{*a\mu }}\right) \;, \end{array} \label{bf-hom-slv-tayl}$$ with $$\begin{array}{ll} \mathcal{B}_\Sigma =\displaystyle\int d^4x & \left( \displaystyle\frac{\delta \Sigma }{\delta A_\mu ^a}\displaystyle\frac \delta {\delta A^{*a\mu }}+\displaystyle\frac{\delta \Sigma }{\delta A^{*a\mu }}\displaystyle\frac \delta {\delta A_\mu ^a}+\displaystyle\frac{\delta \Sigma }{\delta c^a}\displaystyle\frac \delta {\delta C^{*a}}+\displaystyle\frac{\delta \Sigma }{\delta C^{*a}}\displaystyle\frac \delta {\delta c^a}\right. \\ & \\ & +\displaystyle\frac 12\displaystyle\frac{\delta \Sigma }{\delta B_{\mu \nu }^a}\displaystyle\frac \delta {\delta B^{*a\mu \nu }}+\displaystyle\frac 12\displaystyle\frac{\delta \Sigma }{\delta B^{*a\mu \nu }}\displaystyle\frac \delta {\delta B_{\mu \nu }^a}+\displaystyle\frac{\delta \Sigma }{\delta \phi ^a}\displaystyle\frac \delta {\delta \phi ^{*a}} \\ & \\ & \left. +\displaystyle\frac{\delta \Sigma }{\delta \phi ^{*a}}\displaystyle\frac \delta {\delta \phi ^a}+\displaystyle\frac{\delta \Sigma }{\delta \xi _\mu ^a}\displaystyle\frac \delta {\delta \xi ^{*a\mu }}+\displaystyle\frac{\delta \Sigma }{\delta \xi ^{*a\mu }}\displaystyle\frac \delta {\delta \xi _\mu ^a}\right) \;, \end{array} \label{bf-lin}$$ and $$\mathcal{B}_\Sigma \mathcal{B}_\Sigma =0\;. \label{bf-nilp}$$ Let us turn now to the invariant vector cocycle $\left( \ref{vec-cocycle}\right) $, which in the present case takes the form $$\Omega _\nu ^{-1}\equiv \int d^4x\left( c^a\partial _\nu C^{*a}-A_\mu ^a\partial _\nu A^{*a\mu }+\xi _\mu ^a\partial _\nu \xi ^{*a\mu }-\phi ^a\partial _\nu \phi ^{*a}-\frac 12\;B_{\mu \tau }^a\partial _\nu B^{*a\mu \tau }\right) \;, \label{bf-vect-cocycle}$$ and $$\mathcal{B}_\Sigma \Omega _\nu ^{-1}=P_\nu \Sigma =0\;. 
\label{bf-vec-in}$$ Again, the vanishing of the cohomology of the operator $\mathcal{B}_\Sigma $ [@gms; @book] in the sector of the integrated local polynomials with ghost number -1 and with a free Lorentz index implies that, as in the case of the Chern-Simons model, $\Omega _\nu ^{-1}$ is an exact $\mathcal{B}_\Sigma $-cocycle $$\Omega _\nu ^{-1}=\mathcal{B}_\Sigma \Xi _\nu ^{-2}\;, \label{bf-triv}$$ for some local integrated polynomial $\Xi _\nu ^{-2}$ of dimension 5 and ghost number -2. In fact, repeating the same procedure as in the previous example, $\Xi _\nu ^{-2}$ is easily found to be $$\Xi _\nu ^{-2}=\int d^4x\;\left( C^{*a}A_\nu ^a+\;\frac 12\varepsilon _{\sigma \tau \mu \nu }A^{*a\sigma }B^{*a\tau \mu }-\phi ^{*a}\xi _\nu ^a-\xi ^{*a\mu }B_{\mu \nu }^a\right) \;. \label{bf-xi-cocycle}$$ Finally, converting the equation $\left( \ref{bf-triv}\right) $ into contact terms by means of the expression $\left( \ref{bf-lin}\right) $ and moving from the reduced action $\left( \ref{bf-red-action}\right) $ to the complete one $\left( \ref{bf-comp-act}\right) $, we get the linearly broken vector Ward identity $$\mathcal{W}_\nu \mathcal{S=}\Delta _\nu ^{cl}\;, \label{bf-susy}$$ with $$\begin{array}{ll} \mathcal{W}_\nu =\displaystyle\int d^4x & \left( \displaystyle\frac 12\varepsilon _{\sigma \tau \mu \nu }\left( \hat{B}^{*a\tau \mu }+\partial ^\tau \overline{\xi }^{a\mu }-\partial ^\mu \overline{\xi }^{a\tau }\right) \displaystyle\frac \delta {\delta A_\sigma ^a}+A_\nu ^a\displaystyle\frac \delta {\delta c^a}-\partial _\nu \overline{c}^a\displaystyle\frac \delta {\delta b^a}\right.
\\ & -\displaystyle\frac 12\varepsilon _{\sigma \tau \mu \nu }\left( \hat{A}^{*a\sigma }+\partial ^\sigma \overline{c}^a\right) \displaystyle\frac \delta {\delta B_{\tau \mu }^a}-B_{\mu \nu }^a\displaystyle\frac \delta {\delta \xi _\mu ^a}-\xi _\nu ^a\displaystyle\frac \delta {\delta \phi ^a}+\overline{\phi }^a\displaystyle\frac \delta {\delta \overline{\xi }^{a\mu }} \\ & -\partial _\nu \overline{\phi }^a\displaystyle\frac \delta {\delta \omega ^a}+\partial _\nu e^a\displaystyle\frac \delta {\delta \lambda ^a}-\left( \omega ^a\delta _\nu ^\tau +\partial _\nu \overline{\xi }^{a\tau }\right) \displaystyle\frac \delta {\delta h^{a\tau }}+C^{*a}\displaystyle\frac \delta {\delta \hat{A}^{*a\nu }} \\ & \left. +\phi ^{*a}\displaystyle\frac \delta {\delta \hat{\xi}^{*a\nu }}-\hat{\xi}^{*a\mu }\displaystyle\frac \delta {\delta \hat{B}^{*a\mu \nu }}\right) \;, \end{array} \; \label{bf-W-op}$$ and the classical breaking $\Delta _\nu ^{cl}$ given by $$\begin{array}{ll} \Delta _\nu ^{cl}=\displaystyle\int d^4x\; & \left( \hat{A}^{*a\mu }\partial _\nu A_\mu ^a-C^{*a}\partial _\nu c^a-\hat{\xi}^{*a\mu }\partial _\nu \xi _\mu ^a+\phi ^{*a}\partial _\nu \phi ^a+\displaystyle\frac 12\hat{B}^{*a\tau \mu }\partial _\nu B_{\tau \mu }^a\right. \\ & \left. -\displaystyle\frac 12\varepsilon _{\sigma \tau \mu \nu }\hat{B}^{*a\tau \mu }\partial ^\sigma b^a+\displaystyle\frac 12\varepsilon _{\sigma \tau \mu \nu }\hat{A}^{*a\sigma }\partial ^\tau h^{a\mu }\right) \;. \end{array} \label{bf-breaking}$$ The equation $\left( \ref{bf-susy}\right) $ is recognized to be the well known topological vector susy Ward identity of the four dimensional $BF$ systems [@gms]. Let us conclude by remarking that the above construction can be easily generalized to the $BF$ systems in higher space-time dimensions, reproducing then the results of [@gms; @book]. 
In particular it is not difficult to check that, as it happens in the case of the three dimensional Chern-Simons model, the requirement that the breaking term $\left( \ref{bf-breaking}\right) $ is at most linear in the quantum fields completely fixes the gauge parameters that one could introduce in the gauge fixing term $\left( \ref{bf-gauge-fix}\right) $ and which would be left free by the Slavnov-Taylor identity [@gms]. In other words, the relative coefficients of the Lagrangian multiplier part of the gauge fixing are uniquely determined by the vector susy Ward identity $\left( \ref{bf-susy}\right) $. The b-c ghost system -------------------- We present here, as the last example of a topological model, the two dimensional $b$-$c$ ghost system whose action reads $$\mathcal{S}_{bc}=\displaystyle\int dzd\overline{z}\text{ }b\overline{\partial }c\;, \label{bc-action}$$ where the fields $b=b_{zz\text{ }}$ and $c=c^z$ are anticommuting and carry respectively ghost number -1 and +1. The action $\left( \ref{bc-action}\right) $ is the ghost part of the quantized bosonic string action [@bbrt; @gsw] and, as it is well known, is invariant under the following nonlinear nilpotent BRST transformations $$\begin{array}{c} sc=c\partial c\;, \\ sb=-(\partial b)c-2b\partial c\;. \end{array} \label{s-bc-transf}$$ In particular, the right-hand side of the BRST transformation of the field $b $ is easily identified with the component $T_{zz}$ of the energy-momentum tensor corresponding to the action $\left( \ref{bc-action}\right) $. This property allows for the topological interpretation of the $b$-$c$ ghost system. Transformations $\left( \ref{s-bc-transf}\right) $ being nonlinear, one needs to introduce two antifields $\left( b^{*}=b_{\;\overline{z}}^{z*},\;c^{*}=c_{zz\overline{z}}^{*}\right) $ of ghost number 0 and -2, with couplings $$\mathcal{S}_{ext}=\displaystyle\int dzd\overline{z}\text{ }\left( c^{*}c\partial c+b^{*}(c\partial b-2b\partial c)\right) \;.
\label{bc-ext-act}$$ The complete action $$\mathcal{S=S}_{bc}+\mathcal{S}_{ext}\;, \label{bc-comp-act}$$ obeys thus the Slavnov-Taylor identity $$\int dzd\overline{z}\left( \frac{\delta \mathcal{S}}{\delta b}\frac{\delta \mathcal{S}}{\delta b^{*}}\;+\frac{\delta \mathcal{S}}{\delta c}\frac{\delta \mathcal{S}}{\delta c^{*}}\;\right) =\;\frac 12\mathcal{B}_{\mathcal{S}}\mathcal{S}\;=\;0\;\;, \label{bc-Slv-Tayl}$$ $\mathcal{B}_{\mathcal{S}}$ denoting the linearized operator $$\mathcal{B}_{\mathcal{S}}=\int dzd\overline{z}\left( \frac{\delta \mathcal{S}}{\delta b}\frac \delta {\delta b^{*}}\;+\frac{\delta \mathcal{S}}{\delta b^{*}}\frac \delta {\delta b}+\frac{\delta \mathcal{S}}{\delta c}\frac \delta {\delta c^{*}}\;+\frac{\delta \mathcal{S}}{\delta c^{*}}\frac \delta {\delta c}\right) \;, \label{bc-lin}$$ and $$\mathcal{B}_{\mathcal{S}}\mathcal{B}_{\mathcal{S}}=0\;. \label{bc-nilp-lin}$$

              $c$   $b$    $c^{*}$   $b^{*}$
  ----------- ----- ------ --------- ---------
  $dim$       $0$   $1$    $1$       $0$
  $gh-numb$   $1$   $-1$   $-2$      $0$

  : dimension and ghost number[]{data-label="bc-table"}

Concerning now the vector cocycle $\left( \ref{vec-cocycle}\right) $, here written in components, we have $$\begin{array}{c} \Omega _z^{-1}=\displaystyle\int dzd\overline{z\text{ }}\left( b\partial b^{*}+c\partial c^{*}\right) \;, \\ \Omega _{\overline{z}}^{-1}=\displaystyle\int dzd\overline{z\text{ }}\left( b\overline{\partial }b^{*}+c\overline{\partial }c^{*}\right) \;, \end{array} \label{bc-vec-cocyc}$$ and $$\begin{array}{c} \mathcal{B}_{\mathcal{S}}\Omega _z^{-1}=P_z\mathcal{S}=0\;, \\ \mathcal{B}_{\mathcal{S}}\Omega _{\overline{z}}^{-1}=P_{\overline{z}}\mathcal{S}=0\;.
\end{array} \label{bc-cocy-inv}$$ As done before, we look then at the solution of the BRST exact conditions $$\begin{array}{c} \Omega _z^{-1}=\mathcal{B}_{\mathcal{S}}\Xi _z^{-2}\;, \\ \Omega _{\overline{z}}^{-1}=\mathcal{B}_{\mathcal{S}}\Xi _{\overline{z}}^{-2}\;, \end{array} \label{bc-exact-coc}$$ for some local integrated polynomials $\left( \Xi _z^{-2},\Xi _{\overline{z}}^{-2}\right) $ of ghost number -2 and dimension 1. After some almost trivial algebraic manipulations one easily finds $$\begin{array}{l} \Xi _z^{-2}=-\displaystyle\int dzd\overline{z\text{ }}c^{*}\;, \\ \Xi _{\overline{z}}^{-2}=-\displaystyle\int dzd\overline{z\text{ }}c^{*}b^{*}\;. \end{array} \label{bc-x-cocycles}$$ Converting then the equations $\left( \ref{bc-exact-coc}\right) $ into contact terms, we get the two linearly broken Ward identities $$\begin{array}{l} \displaystyle\int dzd\overline{z}\displaystyle\frac{\delta \mathcal{S}}{\delta c}=-\displaystyle\int dzd\overline{z\text{ }}\left( b\partial b^{*}+c\partial c^{*}\right) \;, \\ \displaystyle\int dzd\overline{z}\left( b^{*}\displaystyle\frac{\delta \mathcal{S}}{\delta c}+c^{*}\displaystyle\frac{\delta \mathcal{S}}{\delta b}\right) =-\displaystyle\int dzd\overline{z\text{ }}\left( b\overline{\partial }b^{*}+c\overline{\partial }c^{*}\right) \;, \end{array} \label{bc-vec-susy}$$ which are nothing but the topological vector susy Ward identities of the $b$-$c$ ghost system [@womss; @zcurv; @book]. The case of the rigid gauge invariance: the Landau Ghost Equation ================================================================= In this section we shall analyse another interesting example of a classically linearly broken Ward identity whose contact terms can be characterized in the same purely algebraic cohomological way as the topological vector susy. This Ward identity is not related to a specific gauge model, being present in all the cases in which the rigid gauge invariance is an exact symmetry of the action.
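As an aside on the $b$-$c$ system of the previous subsection: the transformations $\left( \ref{s-bc-transf}\right) $ are nonlinear, so their nilpotency $s^2=0$ is not manifest, but it can be checked mechanically. The following routine is an illustrative sketch of ours (not part of the original construction; all function names are ours): it implements the Grassmann algebra generated by $b$, $c$ and their derivatives, with $s$ acting as a graded derivation, and verifies $s^2b=s^2c=0$.

```python
from itertools import product as iproduct

# Generators are odd (anticommuting): ('c', n) is the n-th derivative of c,
# ('b', n) the n-th derivative of b.  An expression is a dict
# {monomial (tuple of generators, canonically ordered): integer coefficient}.

def normalize(mono):
    """Sort a monomial into canonical order, tracking the sign produced by
    transposing odd generators; return (0, ()) if a generator repeats,
    since the square of an odd generator vanishes."""
    m, sign = list(mono), 1
    for i in range(len(m)):                 # bubble sort: each swap of two
        for j in range(len(m) - 1 - i):     # odd generators flips the sign
            if m[j] > m[j + 1]:
                m[j], m[j + 1] = m[j + 1], m[j]
                sign = -sign
    for j in range(len(m) - 1):
        if m[j] == m[j + 1]:
            return 0, ()
    return sign, tuple(m)

def add(e1, e2):
    out = dict(e1)
    for k, v in e2.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def scal(a, e):
    return {k: a * v for k, v in e.items()}

def mul(e1, e2):
    out = {}
    for (m1, c1), (m2, c2) in iproduct(e1.items(), e2.items()):
        sgn, m = normalize(m1 + m2)
        if sgn:
            out[m] = out.get(m, 0) + sgn * c1 * c2
    return {k: v for k, v in out.items() if v != 0}

def d(e):
    """Derivative \\partial via the Leibniz rule (raise one order at a time)."""
    out = {}
    for mono, coef in e.items():
        for i, (f, n) in enumerate(mono):
            sgn, m = normalize(mono[:i] + ((f, n + 1),) + mono[i + 1:])
            if sgn:
                out[m] = out.get(m, 0) + sgn * coef
    return {k: v for k, v in out.items() if v != 0}

def gen(f, n=0):
    return {((f, n),): 1}

def s_gen(f, n):
    """BRST variation of a single generator:
    s(d^n c) = d^n(c dc),  s(d^n b) = d^n(-(db)c - 2 b dc)."""
    if f == 'c':
        e = mul(gen('c'), gen('c', 1))
    else:
        e = add(scal(-1, mul(gen('b', 1), gen('c'))),
                scal(-2, mul(gen('b'), gen('c', 1))))
    for _ in range(n):
        e = d(e)
    return e

def s(e):
    """Graded Leibniz rule: all generators are odd, so
    s(x1...xk) = sum_i (-1)^i x1...s(x_i)...xk (i counted from 0)."""
    out = {}
    for mono, coef in e.items():
        for i, (f, n) in enumerate(mono):
            term = mul(mul({mono[:i]: 1}, s_gen(f, n)), {mono[i + 1:]: 1})
            out = add(out, scal((-1) ** i * coef, term))
    return out

print(s(s(gen('c'))))   # {}  -- s^2 c = 0
print(s(s(gen('b'))))   # {}  -- s^2 b = 0
```

The cancellation in $s^2b$ involves the cross terms between $-(\partial b)c$ and $-2b\partial c$, which is exactly what the sign bookkeeping above tracks.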
However, in order to present its derivation in a detailed way we shall consider, as explicit example, the four dimensional pure Yang-Mills theory of Sect. 3, the generalization to other models being straightforward. As it is well known, the rigid gauge invariance is an exact symmetry of the Yang-Mills action $\left( \ref{red-ym-action}\right) $, expressing the simple fact that all the fields and antifields belong to the adjoint representation of the gauge group, *i.e.* $$\mathcal{R}_a^{rig}\Sigma =\int d^4x\text{ }f_{abc}\left( A_\mu ^b\frac{\delta \Sigma }{\delta A_\mu ^c}+A_\mu ^{*b}\frac{\delta \Sigma }{\delta A_\mu ^{*c}}\;+c^b\frac{\delta \Sigma }{\delta c^c}+C^{*b}\frac{\delta \Sigma }{\delta C^{*c}}\;\right) =\;\;0\;\;. \label{ym-rig-inv}$$ Let us consider now the following integrated local polynomial, linear in the (shifted) antifields $\left( A_\mu ^{*},\;C^{*}\right) $, of ghost number -1, dimension four, and possessing a group index belonging to the adjoint representation $$\Omega _a^{-1}=\int d^4x\text{ }f_{abc}\left( A^{b\mu }A_\mu ^{*c}-c^bC^{*c}\right) \;. \label{rig-cocycle}$$ As in the case of the vector cocycle of the eq.$\left( \ref{vec-cocycle}\right) $, the above cocycle turns out to be BRST invariant. In fact we have $$\mathcal{B}_\Sigma \Omega _a^{-1}=\mathcal{R}_a^{rig}\Sigma =0\;, \label{rig-cocyc-inv}$$ due to the rigid invariance $\left( \ref{ym-rig-inv}\right) $. The operator $\mathcal{B}_\Sigma $ appearing in the above equation is the usual linearized Slavnov-Taylor operator defined in eq.$\left( \ref{y-m-linear}\right) $. However, unlike the vector cocycle $\Omega _\nu ^{-1}$, the coloured cocycle $\Omega _a^{-1}$ turns out to be always $\mathcal{B}_\Sigma $-exact, due to a very well known result of the BRST cohomology stating that there are no nontrivial cohomology classes with one free group index [@ymcoh].
Therefore we can write $$\Omega _a^{-1}=\mathcal{B}_\Sigma \Xi _a^{-2}\;, \label{rig-exact}$$ for some local integrated polynomial $\Xi _a^{-2}$ of ghost number -2 and dimension four. From the Table 1 it follows that the most general form for $\Xi _a^{-2}$ can be written as $$\Xi _a^{-2}=\beta \int d^4x\text{ }C_a^{*}\;, \label{xi-express}$$ $\beta $ being an arbitrary free parameter. Using the expression of the operator $\mathcal{B}_\Sigma $ of eq.$\left( \ref{y-m-linear}\right) $ and of the reduced Yang-Mills action $\Sigma $ given in eq.$\left( \ref {red-ym-action}\right) $, the condition $\left( \ref{rig-exact}\right) $ becomes $$\int d^4x\text{ }f_{abc}\left( A^{b\mu }A_\mu ^{*c}-c^bC^{*c}\right) =\beta \int d^4x\frac{\delta \Sigma }{\delta c^a}=\beta \int d^4x\;f_{abc}\left( A^{b\mu }A_\mu ^{*c}-c^bC^{*c}\right) \;, \label{beta-value}$$ which gives $\beta =1$. Moving now from the reduced action $\Sigma $ to the complete Yang-Mills action $\mathcal{S}$ $\left( \ref{red-ym-action}\right) $ and making use of the gauge condition $\left( \ref{feynm-gauge-fix}\right) $ and of the definition $\left( \ref{shifted-antif}\right) $, the equation $\left( \ref{beta-value}\right) $ takes the form $$\int d^4x\left( \frac{\delta \mathcal{S}}{\delta c^a}-f_{abc}\overline{c}^b\frac{\delta \mathcal{S}}{\delta b^c}\right) \;=\;\int d^4x\;f_{abc}\left( A^{b\mu }\hat{A}_\mu ^{*c}-c^bC^{*c}\right) +\alpha \int d^4x\;f_{abc}b^b\overline{c}^c\;.\;\;\; \label{quadr-break}$$ It should be remarked that the right hand side of this equation, besides a pure linear breaking, contains a term which is quadratic in the quantum fields, *i.e.* $\alpha f_{abc}b^b\overline{c}^c$. This term, being subject to renormalization, would have to be defined as an insertion, spoiling then the usefulness of the eq.$\left( \ref{quadr-break}\right) $.
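Note that, assuming for eq.$\left( \ref{feynm-gauge-fix}\right) $ the Feynman-type form $\delta \mathcal{S}/\delta b^a=\partial A^a+\alpha b^a$ (the explicit condition is given in the earlier sections), the quadratic term can be traced directly to the gauge condition: $$-f_{abc}\overline{c}^b\frac{\delta \mathcal{S}}{\delta b^c}\;=\;-f_{abc}\overline{c}^b\partial A^c\;+\;\alpha f_{abc}b^b\overline{c}^c\;,$$ where, in the second term, the antisymmetry of $f_{abc}$ and the commuting nature of the multiplier $b^a$ have been used. The $\alpha $-dependent piece is precisely the quadratic breaking, which is present for any $\alpha \neq 0$.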
Therefore we see that the requirement that the breaking term is at most linear in the quantum fields implies the vanishing of the gauge parameter, *i.e.* $\alpha =0$, thus selecting the Landau gauge as the gauge fixing condition. Finally, setting $\alpha =0$ in the gauge condition $\left( \ref{feynm-gauge-fix}\right) $, we obtain the linearly broken identity $$\int d^4x\left( \frac{\delta \mathcal{S}}{\delta c^a}-f_{abc}\overline{c}^b\frac{\delta \mathcal{S}}{\delta b^c}\right) \;=\;\int d^4x\;f_{abc}\left( A^{b\mu }\hat{A}_\mu ^{*c}-c^bC^{*c}\right) \;, \label{Landau-W-id}$$ which is recognized to be the so-called *ghost equation* Ward identity [@bps], always present in the Landau gauge. Let us conclude by recalling that the ghost equation $\left( \ref{Landau-W-id}\right) $, although valid only in the Landau gauge, turns out to be a very powerful tool in order to study the ultraviolet finiteness properties of a large class of gauge invariant local field polynomials belonging to the BRST cohomology [@c5]. These gauge invariant polynomials can be promoted at the quantum level to local insertions whose anomalous dimensions are independent of the gauge fixing parameter $\alpha $. Therefore they can be studied without loss of generality in the Landau gauge. In particular, the ghost equation $\left( \ref{Landau-W-id}\right) $ allows one to prove the vanishing of the anomalous dimensions of the invariant ghost monomials[^3] $tr(c^{2n+1})$, which are deeply related to the gauge anomalies and to the generalized Chern-Simons terms [@book]. This result is of great importance in order to prove the Adler-Bardeen nonrenormalization theorem for the gauge and the $U(1)$ axial anomalies [@c5; @adb; @book]. Let us mention, finally, that the ghost equation $\left( \ref{Landau-W-id}\right) $ has been proven to be renormalizable [@bps; @book] and that it has recently been extended to the $N=1$ supersymmetric gauge theories in superspace [@sl].
Conclusion ========== A purely algebraic characterization of the topological vector susy and of the Landau ghost equation Ward identities has been given. These Ward identities, always linearly broken, are obtained by exploiting the BRST exactness condition of antifield dependent cocycles with ghost number -1. Applications to other kinds of linearly broken Ward identities as well as to other topological theories and to superspace supersymmetric models are under investigation. Acknowledgements ================ The Conselho Nacional de Desenvolvimento Científico e Tecnológico CNPq-Brazil is gratefully acknowledged for the financial support. [10]{} D. Birmingham, M. Blau, M. Rakowski and G. Thompson, **Phys. Rep. 209 (1991) 129**; D. Birmingham, M. Rakowski and G. Thompson, **Nucl. Phys. B329 (1990) 83**;\ D. Birmingham and M. Rakowski, **Mod. Phys. Lett. A4 (1989) 1753**; F. Delduc, F. Gieres and S.P. Sorella, **Phys. Lett. B225 (1989) 367**;\ F. Delduc, C. Lucchesi, O. Piguet and S.P. Sorella, **Nucl. Phys. B346 (1990) 313**; E. Guadagnini, N. Maggiore and S.P. Sorella, **Phys. Lett. B255 (1991) 65**;\ N. Maggiore and S.P. Sorella, **Int. Journ. Mod. Phys. A8 (1993) 929**;\ C. Lucchesi, O. Piguet and S.P. Sorella, **Nucl. Phys. B395 (1993) 325**; D. Birmingham and M. Rakowski, **Phys. Lett. B269 (1991) 103**;\ D. Birmingham and M. Rakowski, **Phys. Lett. B275 (1992) 289**;\ D. Birmingham and M. Rakowski, **Phys. Lett. B289 (1992) 271**;\ A. Brandhuber, O. Moritsch, M.W. de Oliveira, O. Piguet and M. Schweda, **Nucl. Phys. B341 (1994) 173**; O. Piguet and S.P. Sorella, *”Algebraic Renormalization ”*, **Monographs Series, Vol. m28, Springer-Verlag, Berlin 1995**; S.P. Sorella, **Comm. Math. Phys. 157 (1993) 231**;\ S.P. Sorella and L. Tataru, **Phys. Lett. B324 (1994) 351**; M. Werneck de Oliveira and S.P. Sorella, **Int. Journ. Mod. Phys. A9 (1994) 2979**;\ O. Moritsch, M. Schweda and S.P. Sorella, **Class. Quantum Grav. 11 (1994) 1225**;\ O. Moritsch and M.
Schweda, **Helv. Phys. Acta 67 (1994) 289**;\ P.A. Blaga, O. Moritsch, M. Schweda, T. Sommer, L. Tataru, H. Zerrouki, **Phys. Rev. D 51 (1995) 2792;\ **O. Moritsch, M. Schweda, T. Sommer, L. Tataru, H. Zerrouki, *”BRST Cohomology of Yang-Mills Gauge Fields in the Presence of Gravity in Ashtekar Variables ”,* **hep-th/9409081;\ **S. Emery, O. Moritsch, M. Schweda, T. Sommer, H. Zerrouki**, Helv. Phys. Acta 68 (1995) 167**;\ O. Piguet, *”On the Role of Vector Supersymmetry in Topological Field Theory ”,* **UGVA-DPT 1995/02-880, hep-th/9502033**; M. Werneck de Oliveira, M. Schweda and S.P. Sorella, **Phys. Lett. B315 (1993) 93**;\ L. Tataru and I.V. Vancea, **Int. Journ. Mod. Phys. 11 (1996) 375**; A. Boresch, M. Schweda and S.P. Sorella, **Phys. Lett. B328 (1994) 36**; M. Carvalho, L.C. Queiroz Vilar and S.P. Sorella, **Int. Journ. Mod. Phys. A, Vol. 10, 27 (1995) 3877**; M. Carvalho, L.C. Queiroz Vilar, C.A.G. Sasaki and S.P. Sorella,** Journ. of Math. Phys. 37 (1996) 5310**;\ M. Carvalho, L.C. Queiroz Vilar, C.A.G. Sasaki and S.P. Sorella, **Journ. of Math. Phys. 37 (1996) 5325**; L.C. Queiroz Vilar, C.A.G. Sasaki and S.P. Sorella,* ”Superspace Descent Equations and Zero Curvature Formalism of the Four Dimensional N=1 Supersymmetric Yang-Mills theories ”,* **CBPF-NF-049/96, hep-th 9610220**, sub. to** Int. J. Mod. Phys. A**; E. Witten, **Comm. Math. Phys. 117 (1988) 353**; N. Maggiore and S.P. Sorella, **Nucl. Phys. B377 (1992) 236**; M. Henneaux and C. Teitelboim, ”*Quantization of Gauge Systems ”*, Princeton NJ; **Princeton University Press** **1992**;\ G. Barnich, F. Brandt and M. Henneaux, **Comm. Math. Phys. 174 (1995) 57**;\ G. Barnich, F. Brandt and M. Henneaux, **Comm. Math. Phys. 174 (1995) 93**;\ M. Henneaux, **Journ. of Pure and Applied Algebra 100 (1995) 3**; F. Brandt, M. Henneaux and A. Wilch, **Phys. Lett. B387 (1996) 320**;\ F. Brandt, M. Henneaux and A. Wilch, ”*Ward Identities for Rigid symmetries of higher order ”,*** hep-th 9611056**; A.
Blasi, O. Piguet and S.P. Sorella, **Nucl. Phys. B356 (1991) 154**; F. Brandt, N. Dragon and M. Kreuzer, **Phys. Lett. B231 (1989) 263**;\ F. Brandt, N. Dragon and M. Kreuzer,** Nucl. Phys. B332 (1990) 224**;\ F. Brandt, N. Dragon and M. Kreuzer, **Nucl. Phys. B332 (1990) 250**;\ M. Dubois-Violette, M. Henneaux, M. Talon and C.M. Viallet, **Phys. Lett. B289 (1992) 361**; I.A. Batalin and G.A. Vilkovisky, **Phys. Lett. B102 (1981) 27**;\ I.A. Batalin and G.A. Vilkovisky,** Phys. Rev. D28 (1983) 2567**; S. Ouvry, R. Stora and P. Van Baal, **Phys. Lett. B220 (1989) 159**;\ A. Blasi and R. Collina, **Phys. Lett. B222 (1989) 159**;\ S.P. Sorella, **Phys. Lett. B228 (1989) 159**; J. Kalkman, **Comm. Math. Phys. 153 (1993) 447**;\ R. Stora, *”Equivariant Cohomology and Topological Theories ”,* in BRS Symmetry, M. Abe, N. Nakanishi, Iojima eds., **Universal Academy Press, Tokyo, Japan, 1996**;\ R. Stora, F. Thuillier and J.C. Wallet, ”*Algebraic Structure of Cohomological Field Theory Models and Equivariant Cohomology ”,* lectures given at the First Caribbean Spring School of Mathematics and Theoretical Physics, R. Coquereaux, M. Dubois-Violette and P. Flad Eds., **World Scientific Publ., 1995**;\ R. Stora,* ”Exercises in Equivariant Cohomology ”,***ENSLAPP-A-619/96, hep-th/9611114**;\ R. Stora, ”*De la fixation de jauge consideree comme un des beaux arts et de la symetrie de Slavnov qui s ’ensuit ”,* **ENSLAPP-A-620/96, hep-th/9611115**;\ R. Stora,*”Exercises in Equivariant Cohomology and Topological Theories ”,***hep-th/9611116**; C. Becchi, R. Collina and C. Imbimbo, **Phys. Lett. B 322 (1994) 79**;\ C. Becchi and C. Imbimbo, **Nucl. Phys. B462 (1996) 571**;\ C. Becchi, C. Imbimbo and S. Giusto, ”*Gauge dependence in topological gauge theories ”,*** GEF-Th/96-20, hep-th/9611113***;* F. Delduc, N. Maggiore, O. Piguet and S. Wolf, *”Note on Constrained Cohomology”,* **UGVA-DPT 1996/05-925, hep-th/9605158,** to appear in **Phys. Lett. B**; L. Baulieu and I.M. 
Singer, **Nucl. Phys. B15 (1988) 12**;\ L. Baulieu and I.M. Singer, **Comm. Math. Phys. 135 (1991) 253**; J.M.F. Labastida and M. Pernici, **Phys. Lett. B212 (1988) 56**; E. Witten, **Comm. Math. Phys. 118 (1988) 411**; J.H. Lowenstein, **Phys. Rev. D4 (1971) 2281**;\ J.H. Lowenstein, **Comm. Math. Phys. 24 (1971) 1**;\ Y.M.P. Lam, **Phys. Rev. D6 (1972) 2145**;\ Y.M.P. Lam, **Phys. Rev. D7 (1973) 2943**;\ T.E. Clark and J.H. Lowenstein, **Nucl. Phys. B113 (1976) 109**; M. Green, J. Schwarz and E. Witten, *"Superstring Theory"*, Vol. I, II, **Cambridge University Press, 1987**; O. Piguet and S.P. Sorella, **Nucl. Phys. B381 (1992) 373**; O. Piguet and S.P. Sorella, **Nucl. Phys. B395 (1993) 661**; O. Piguet and S.P. Sorella, **Phys. Lett. B371 (1996) 238**. [^1]: The name *reduced* action denotes the part of the complete gauge-fixed action which does not depend on the Lagrange multipliers and which depends on the antighosts only through the shifted antifields. As will be discussed in the next sections, the dependence of the complete action on the Lagrange multipliers as well as on the gauge parameters present in the gauge-fixing condition will be determined by requiring that the breaking term associated with the topological vector susy Ward identity is at most linear in the quantum fields. [^2]: Let us remark here that, as discussed by R. Stora et al. [@osv; @kk; @bci; @dmpw], the relevant cohomology for topological theories of the Witten type is the so-called equivariant cohomology. The latter is the restriction of the BRST cohomology to gauge-invariant local field polynomials which do not depend on the Faddeev-Popov ghost field $c$. Contrary to the BRST cohomology, the equivariant cohomology is not empty and turns out to provide a consistent definition of Witten's observables. [^3]: We have used here the matrix notation $c=c^aT_a$, $T_a$ being the generators of the gauge group, i.e. $[T_a,T_b]=if_{ab}^{\;\;\;c}T_c$.
--- abstract: 'Rather than simply recognizing the action of each person individually, collective activity recognition aims to find out what a group of people is doing in a collective scene. Previous state-of-the-art methods used hand-crafted potentials in conventional graphical models, which can only define a limited range of relations. Thus, the complex structural dependencies among individuals involved in a collective scenario cannot be fully modeled. In this paper, we overcome these limitations by embedding latent variables into feature space and learning the feature mapping functions in a deep learning framework. The embeddings of the latent variables build a global relation containing person-group interactions and richer contextual information by jointly modeling a broader range of individuals. Besides, we incorporate an attention mechanism into the embedding to obtain more compact representations. We evaluate our method on three collective activity datasets, one of which is a much larger dataset contributed in this work. The proposed model achieves clearly better performance than the state-of-the-art methods in our experiments.' author: - | Yongyi Tang$^1$, Peizhen Zhang$^2$, Jian-Fang Hu$^2$ and Wei-Shi Zheng$^2$$^3$[^1]\ $^1$School of Electronics and Information Technology, Sun Yat-Sen University, China\ $^2$School of Data and Computer Science, Sun Yat-Sen University, China\ $^3$The Key Laboratory of Machine Intelligence and Advanced Computing\ (Sun Yat-sen University), Ministry of Education, China\ [{tangyy8,zhangpzh5}@mail2.sysu.edu.cn, hujf5@mail.sysu.edu.cn, wszheng@ieee.org]{} bibliography: - 'egbib.bib' title: Latent Embeddings for Collective Activity Recognition --- Introduction ============ Recognizing what a group of people is doing, which is named collective activity, is critical and useful in real-world applications including visual surveillance.
A critical point for collective activity analysis is to model the interactions between persons in the collective scenario, yet inferring relations among individuals in images/videos remains challenging. Existing approaches for collective activity recognition typically modeled the collective interactions in terms of person-person interactions [@choi2014understanding; @lan2012discriminative; @amer2014hirf; @hajimirsadeghi2015learning]. For instance, Lan *et al.* [@lan2010beyond] explicitly modeled pairwise potentials between individuals based on atomic action labels. Choi *et al.* [@choi2014understanding] explored several hand-crafted interaction features for constructing pairwise potentials. Chang *et al.* [@chang2015learning] chose to model person-person interactions in a collective scene by an interaction metric matrix. In addition, deep learning models such as recurrent neural networks have also been proposed for modeling pairwise person interactions [@deng2015deep; @ibrahim2015hierarchical; @deng2015structure]. The person-person interaction based models describe activities from a local perspective, which causes ambiguities. Besides, those models are intrinsically limited in capturing high-level collective activities due to the inherent visual ambiguity caused by activity invaders[^2] and local pattern uncertainty. In this work, we aim to describe collective interactions from a more global perspective, where the interactions between each anchor individual and the remaining individuals are explicitly modeled. We call this interaction the *person-group interaction*. For effectively capturing person-group interactions, we introduce a set of latent variables that are modeled by jointly considering all the related persons in a collective scenario. We infer those latent variables with complicated dependencies by embedding them into feature space using a deep neural network instead of defining hand-crafted potentials in a conventional graphical model. The benefits are twofold.
First, by utilizing the embedding-based method, our model is able to model more complex collective structures beyond pairwise person-person structures. Second, the non-linear dependencies between person and group can be inferred by a discriminative learning procedure in a deep learning framework. To obtain a more concise collective activity representation, an attention mechanism is employed to modify the contextual structure by setting a different impact factor for each individual during the embedding procedure. In summary, the contributions of our paper are threefold. Firstly, a latent variable model capable of capturing complex connections among individuals is developed for collective activity recognition. Secondly, the complicated dependencies between person and group are represented by latent variable embeddings, and an attention mechanism is integrated for obtaining a compact embedding representation. Thirdly, a new dataset with more activity samples is collected for the benchmarking of collective activity recognition. ![Illustration of constructing latent variables to model person-group interaction. The latent variable $h_i$ captures the local person-group interaction of person $i$ while $h_{scene}$ mines the global interaction by aggregating all the local interaction information. To effectively model complex dependencies, we learn the representations of latent variables in an embedding feature space.[]{data-label="fig:fig2"}](embedding_fig2.pdf){width="0.8\linewidth"} ![image](pipeline.pdf){width="0.9\linewidth"} Our Approach ============ Instead of capturing collective structures using person-person interactions only, we consider mining more globally structured interactions between each individual and the rest of the group (the other individuals). Here, we utilize latent variables to capture the complicated dependencies between person and group in a collective activity scenario.
Rather than directly inferring the latent states, we exploit the embeddings of latent variables parametrized by a deep neural network to represent structural information from a global view, and then explicitly model person-group interactions for collective activity recognition. Modeling Collective Activity with Latent Variables -------------------------------------------------- In this section, we aim to construct a mid-level feature representation indicating the collective interactions among individuals via latent variables encompassed by a graphical model. For simplicity, we denote by $x_i$ the visible variable of person $i$, where $i\in\mathcal{V}_p$ and $\mathcal{V}_p$ is the set of people involved in a collective scene, and by $x_{scene}$ the visible variable of the scene. In addition, we use $h_i$ and $h_{scene}$ to indicate the hidden variables of the $i$-th individual and the scene, respectively. Based on the isolated representations of the entities, the interactions among the actors, the related group and the context can then be captured by the corresponding latent variables. Figure \[fig:fig2\] provides a graphical representation of our model. Thus, the posterior probability of each latent variable can be expressed as $p(h_i|x_i,{\{x_j\}}_{j\in\mathcal{V}_p\backslash i},h_{scene})$ and $p(h_{scene}|x_{scene},{\{x_i\}}_{i\in\mathcal{V}_p},{\{h_i\}}_{i\in\mathcal{V}_p})$. This means that the hidden variable $h_i$ captures the person-group interaction information for anchor person $i$, and $h_{scene}$ captures all the interactions in the collective scenario from a global view. Building upon the latent variables, the collective activities can be recognized by jointly considering local person-group interactions and global context as $p(y|{\{h_i\}}_{i\in\mathcal{V}_p},h_{scene})$.
Latent Variable Embedding ------------------------- However, even though we can define the posterior probabilities of these latent variables, exact inference is difficult and sometimes even intractable in a conventional graphical model based on hand-crafted potentials. Inspired by [@dai2016discriminative], where latent variables are embedded into feature space for structural modeling, we utilize a deep neural network to capture the non-linear dependencies among person-group interactions and represent them as the embeddings of latent variables. The embeddings can be viewed as an indication of the posterior probabilities. As shown in [Figure \[fig:fig3\]]{}, we develop a mean-field like procedure to approximate the inference and capture the person-group interactions during embedding. Thus, the embeddings of the latent variables can be learned in the iterative manner introduced below. We first denote by $\mathbf{u}_i^{(t)}$ the embedding of the latent variable $h_i$ and formulate it by jointly considering the unary image feature $\mathbf{x}_i$ of person $i$, the averaged appearance feature $(\sum_{j\in\mathcal{N}(i)}\mathbf{x}_j)/|\mathcal{N}(i)|$ of all the neighbours of person $i$, and the embedding of the global scene from the last step, $\mathbf{u}_{scene}^{(t-1)}$. We denote the neighbouring persons of $i$ by $\mathcal{N}(i)$, $\mathcal{N}(i)\subseteq\mathcal{V}_p$. Then, the update of $\mathbf{u}_i^{(t)}$ is given below: $\forall i\in\mathcal{V}_p$, $$\label{eq:u_i} \begin{split} \mathbf{u}_i^{(t)} &=(1-\lambda)\cdot\mathbf{u}_i^{(t-1)} \\ &+\lambda\cdot\sigma(\mathbf{W}_u [\mathbf{x}_i; \frac{\sum_{j\in\mathcal{N}(i)}\mathbf{x}_j}{|\mathcal{N}(i)|}; \mathbf{u}_{scene}^{(t-1)}]), \end{split}$$ where “$;$” indicates vertical vector concatenation, $\sigma(\cdot)$ is a rectified linear unit and $\lambda$ is the update step size. Here, we omit the bias terms for simplicity.
Intuitively, the aggregated neighbour feature $(\sum_{j\in\mathcal{N}(i)}\mathbf{x}_j)/|\mathcal{N}(i)|$ is employed to represent group appearance information, while $\mathbf{u}_{scene}^{(t-1)}$ indicates the global context information. Thus, the local person-group interaction can be represented by the embedding $\mathbf{u}_i^{(t)}$. Likewise, $\mathbf{u}_{scene}^{(t)}$ is the embedding of $h_{scene}$, and it aims to capture collective interactions from a global view. To this end, we formulate it using the global image feature $\mathbf{x}_{scene}$, the pooled low-level representation of persons, $\sum_{i\in\mathcal{V}_p}\mathbf{x}_i/|\mathcal{V}_p|$, and the aggregated embeddings of individuals, $\sum_{i\in\mathcal{V}_p}\mathbf{u}_i^{(t)}/|\mathcal{V}_p|$. Specifically, it can be formulated as: $$\label{eq:u_s} \begin{split} \mathbf{u}_{scene}^{(t)} &= (1-\lambda)\cdot\mathbf{u}_{scene}^{(t-1)} \\ &+\lambda\cdot\sigma(\mathbf{W}_{s} [\mathbf{x}_{scene}; \frac{\sum_{i\in\mathcal{V}_p}\mathbf{x}_i}{|\mathcal{V}_p|}; \frac{\sum_{i\in\mathcal{V}_p}\mathbf{u}_i^{(t)}}{|\mathcal{V}_p|}]). \end{split}$$ Thus, $\mathbf{u}_{scene}^{(t)}$ can be considered as a global relation representation since it models the non-linear dependencies of individuals and their local relations. Based on the embeddings of the latent variables, we can define the posterior probability of assigning activity label $\mathbf{y}$ to a given sample by non-linearly combining all the embeddings together: $$\label{eq:p_y} \begin{split} p(\mathbf{y}&|{\{\mathbf{u}_i\}}_{i\in\mathcal{V}_p},\mathbf{u}_{scene}) = \\ &\phi(\mathbf{W}_{out}\sigma(\mathbf{W}_{y}[\frac{\sum_{i\in\mathcal{V}_p}\mathbf{u}_i^{(T)}}{|\mathcal{V}_p|};\mathbf{u}_{scene}^{(T)}])). \end{split}$$ Here, $\phi$ is an activation function used for scaling the network outputs, and we set it to softmax.
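The two coupled update rules above can be sketched in a few lines of numpy. This is only a minimal illustration, not the paper's implementation: the dimensions, the random weights, and the choice $\mathcal{N}(i)=\mathcal{V}_p\backslash i$ are assumptions made for the sketch, whereas in the paper the weights are learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)   # sigma(.) in the update rules

P, dx, dh, T, lam = 5, 8, 6, 3, 0.3   # persons, feature dims, iterations, step size (illustrative)

x = rng.normal(size=(P, dx))          # per-person features x_i
x_scene = rng.normal(size=dx)         # scene feature x_scene

# Hypothetical weight matrices; in the paper they are learned by BPTT.
W_u = 0.1 * rng.normal(size=(dh, 2 * dx + dh))
W_s = 0.1 * rng.normal(size=(dh, 2 * dx + dh))

u = np.zeros((P, dh))                 # u_i^{(0)}
u_scene = np.zeros(dh)                # u_scene^{(0)}

for t in range(T):
    # averaged neighbour feature, assuming N(i) = all other persons
    nbr = (x.sum(axis=0) - x) / (P - 1)
    inp = np.concatenate([x, nbr, np.tile(u_scene, (P, 1))], axis=1)
    u = (1 - lam) * u + lam * relu(inp @ W_u.T)                           # person update
    u_scene = (1 - lam) * u_scene + lam * relu(
        W_s @ np.concatenate([x_scene, x.mean(axis=0), u.mean(axis=0)]))  # scene update
```

After $T$ iterations, the concatenation of the averaged person embeddings with $\mathbf{u}_{scene}^{(T)}$ would be fed to the softmax readout described above.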
Finally, we use the following cross-entropy loss function to measure the consistency between the model outputs and the manual annotations: $$\label{eq:loss} \begin{split} L(\theta)&=-\sum_{k=1}^{\mathrm{K}}y_k\log(p(y_k|{\{\mathbf{u}_i^{(T)}\}}_{i\in\mathcal{V}_p},\mathbf{u}_{scene}^{(T)})), \end{split}$$ where $\theta$ denotes the model parameters to be learned, $K$ is the number of collective activity labels and $y_k$ is $1$ if the frame belongs to class $k$ and $0$ otherwise. The model parameters are optimized using the back propagation through time (BPTT) algorithm. Embedding with Attention ------------------------ \[sec:Attention\] Note that the update of $\mathbf{u}_{scene}^{(t)}$ in Eq.(\[eq:u\_s\]) involves the summation of the individual embeddings ${\{\mathbf{u}_i^{(t)}\}}_{i\in\mathcal{V}_p}$, which means that all the person-group interactions are equally connected to the group activity. However, to correctly discover the collective structure information, one should pay more attention to the relevant person-group interactions. For example, in the waiting scenario presented in [Figure \[fig:fig3\]]{}, the individuals who are waiting in line should receive more attention, since their person-group interactions strongly relate to the activity, while the subjects walking behind are less valuable for the recognition and sometimes even cause ambiguity. Thus, the influence of the interactions between the walking subjects and the waiting group should be suppressed in this case. Inspired by the recent success of attention models for sequential modeling [@chorowski2015attention; @ramanathan2015detecting], we use an attention mechanism to encode the relevance of each individual embedding and the scene embedding as: $$\label{eq:alpha} \alpha_i^{(t)}=\tanh(\mathbf{w}_g^\mathrm{T}\mathbf{u}_i^{(t)} + \mathbf{w}_{gs}^\mathrm{T}\mathbf{u}_{scene}^{(t-1)}),$$ where $\mathbf{w}_g,\mathbf{w}_{gs}\in\mathbb{R}^d$.
Given the relevance of the individuals in the collective scenario, we can measure the importance of the person-group interactions derived from individual $i$ as: $$\label{eq:gi} g_i^{(t)}=\frac{e^{\alpha_i^{(t)}/\tau}}{\sum_{j\in\mathcal{V}_p}e^{\alpha_j^{(t)}/\tau}},$$ where $\tau$ is the softmax temperature parameter. By considering all the individuals in the given collective scenario together, we can reformulate the embedding of the scene as follows: $$\label{eq:newus} \begin{split} \mathbf{u}_{scene}^{(t)} &= (1-\lambda)\cdot\mathbf{u}_{scene}^{(t-1)} \\ &+\lambda\cdot\sigma(\mathbf{W}_{s} [\mathbf{x}_{scene}; \frac{\sum_{i\in\mathcal{V}_p}\mathbf{x}_i}{|\mathcal{V}_p|}; \sum_{i\in\mathcal{V}_p}g_i^{(t)}\mathbf{u}_i^{(t)}]). \end{split}$$ Experimental Results ==================== For evaluation, we tested our model on three collective activity datasets: the collective activity dataset [@Choi_VSWS_2009], the collective activity extended dataset and a newly proposed dataset of our own, denoted as the CA Dataset, CAE Dataset and SYSU-CA Dataset, respectively. We have compared our model with the state-of-the-art collective activity recognition methods [@antic2014learning; @lan2012discriminative; @choi_eccv12; @hajimirsadeghi2015learning; @hajimirsadeghi2015visual; @deng2015deep; @deng2015structure; @ibrahim2015hierarchical]. In the following, we first provide some implementation details and then report our results on these three benchmarks. For feature representation, we used the feature maps obtained from the “pool5” layer of a two-stream ResNet-50 (pretrained on the UCF101 action set [@twostream01]) as our two-stream feature. For each person in the collective scenario, we extracted its two-stream feature as the individual feature. We also extracted the two-stream feature from the entire collective image as the feature representation of the scene. Our algorithm was implemented using the Tensorflow package [@tensorflow2015-whitepaper].
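The attention weighting of the previous subsection is just a tanh relevance score followed by a softmax with temperature. A minimal sketch, with random vectors standing in for the learned parameters $\mathbf{w}_g$ and $\mathbf{w}_{gs}$:

```python
import numpy as np

def attention_weights(U, u_scene, w_g, w_gs, tau=0.25):
    """Return the weights g_i computed from the relevance scores alpha_i.

    U: (P, d) individual embeddings, u_scene: (d,) scene embedding,
    w_g, w_gs: (d,) projection vectors (learned in the paper, random here).
    """
    alpha = np.tanh(U @ w_g + u_scene @ w_gs)  # relevance alpha_i, bounded in [-1, 1]
    e = np.exp(alpha / tau)                    # softmax with temperature tau
    return e / e.sum()

rng = np.random.default_rng(1)
P, d = 4, 6
g = attention_weights(rng.normal(size=(P, d)), rng.normal(size=d),
                      rng.normal(size=d), rng.normal(size=d))
```

The weighted sum $\sum_i g_i^{(t)}\mathbf{u}_i^{(t)}$ then replaces the uniform average of the individual embeddings in the scene update.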
We empirically set the softmax temperature parameter $\tau$ and the update step size to 0.25 and 0.3, respectively. The number of hidden units of the latent embedding was set to 256. The dropout weight of the dropout layer employed in Eq.(\[eq:p\_y\]) was set to 0.5 during the training phase. We deployed the Xavier initializer suggested in [@glorot2010understanding] and optimized the parameters with the Adam optimization strategy [@kingma2014adam]. We also conducted two baselines, Image Classification and Person Classification, for comparison. In Image Classification, we built a softmax classifier on top of the two-stream feature of each single frame, while in Person Classification, we constructed a feature representation by averaging the features over all people instead. Collective Activity (CA) Dataset -------------------------------- The collective activity dataset contains 44 video clips of 5 collective activities: crossing, waiting, queueing, walking and talking. Each participant appearing in the videos was annotated every 10 frames with a bounding box, and the collective activity labels were provided for evaluation. We followed exactly the testing protocol used in [@deng2015structure] and compared our method with the state-of-the-art methods in Table \[Tab:CompResCAD\]. We set the model parameter $T$ to 3. Its effect will be further discussed in section \[sec:moreEvaluation\]. As shown, our model outperformed both the deep learning based and non-deep learning based competitors, and obtained the state-of-the-art results on the Collective Activity Dataset. Specifically, our method achieved an accuracy of 85.4%, which is about 2% higher than that of the Cardinality Kernel model. Our model outperformed the Deep Structure Model by a margin of 4%. The results demonstrate that the proposed person-group interaction based modeling performs better than the existing person-person interaction based modelings.
The relevant confusion matrix obtained by our method is presented in Figure \[Fig:confusion\] (a). It reveals that our method can achieve good results for the recognition of activities like talking, waiting, and queueing. We also observe that our method often misclassified the activity Walking as Crossing. This is because subjects in both walking and crossing activities performed a similar atomic action (walking), and the person-group interactions in these activities were not as distinguishable as those in the other activities such as talking and queueing. This result is consistent with the claim made in [@Choi_CVPR_2011] that the walking activity in this set could be biased. Collective Activity Extended (CAE) Dataset ------------------------------------------ By replacing the walking activity with two activities, dancing and jogging, the Collective Activity Extended Dataset with 6 collective activities was proposed in [@Choi_CVPR_2011]. For evaluation, we also set $T$ to 3 and followed the evaluation protocol in [@deng2015structure]. Table \[Tab:CompResCADE\] presents the detailed comparison results. As shown, our model obtains an accuracy of 97.94%, which is 7% higher than the best result obtained by the Structure Inference Machines model [@deng2015structure]. This again demonstrates the effectiveness of the proposed person-group interaction modeling for collective activity recognition. By closely examining the confusion table obtained by our method in Figure \[Fig:confusion\] (b), we see that our model obtains good recognition results for most of the activities. We also observe that about 12% of the Waiting samples were misclassified as Crossing, since in most of the misclassified scenarios the waiting activity was followed by crossing, and the transition frames between the two activities, which are usually labelled as crossing, are hard to distinguish.
SYSU Collective Activity (SYSU-CA) Dataset ------------------------------------------ For a more in-depth evaluation, we also collected a new multi-view collective activity dataset. This dataset includes 7 different collective activities (*Talking*, *Fighting*, *Following*, *Waiting*, *Entering*, *Gathering* and *Dismissing*) distributed over 285 video clips in total, which were captured from 3 different views. Compared with the other existing datasets, this set is unique in the following aspects: 1) each activity was captured from three different views; 2) the set contains more activity samples for collective activity analysis; 3) the dynamic motions in the collective activities are more complex. We report the results obtained by different methods on this dataset under four different settings: view1, view2, view3 and an integrated version. For the single view evaluation, we employed a three-fold cross-validation protocol, where two-thirds of the videos from the corresponding view were used for training and the rest for testing. In the integrated evaluation setting, we report the accumulated accuracies obtained on the separate views. We set the parameter $T$ to 4. The experimental results are presented in Table \[Tab:CompResNewCAD\] and Figure \[Fig:confusion\] (c). Compared with the baselines Image Classification and Person Classification, our method achieved the best recognition result on most of the view settings and obtained an accuracy of 85.85% in the total setting. We also observe that the Image Classification baseline obtains a reliable performance on this set, while Person Classification performs unsatisfactorily. We further conclude that our model is robust to view variation, since the results on the three different views were consistent and satisfactory. Moreover, since the person-group interactions are explicitly modeled, the performance is further improved over these baselines.
The confusion table in Figure \[Fig:confusion\] (c) indicates that our method often confuses the activities Gathering and Dismissing with each other. This can be attributed to the fact that the individuals in both activities have highly similar spatial and temporal distributions. More Discussions ---------------- \[sec:moreEvaluation\] **Efficiency of our model.** Compared with the baseline models, the key difference of our model is that it explicitly models person-group interactions in the latent space, so that the individual information is complemented with group context; as a result, our model outperforms most of the baseline models with the same features on all datasets, as shown in Table \[Tab:CompResCAD\], Table \[Tab:CompResCADE\] and Table \[Tab:CompResNewCAD\]. **Effect of iteration step $T$.** Table \[Tab:EvalT\] provides the results of varying the iteration step number $T$ in our embedding procedure. In this experiment, the attention mechanism was employed and the number of hidden neurons was set to 256. We observe that a better recognition result can be obtained by setting $T$ to 3 or 4 in most of the cases, which means that the collective interactions can be effectively discovered by our embedding model with a quite small $T$. **With vs. without attention.** Here, we investigated the effect of the employed attention embedding mechanism. For comparison, we set the parameter $T$ and the number of hidden units to 3 and 256, respectively. The detailed comparison results are presented in Table \[Tab:EvalAtt\]. As shown, using the attention mechanism always benefits the recognition. Especially on the collective activity dataset, the introduced attention mechanism improves the accuracy by a margin of 2%, which demonstrates that the attention mechanism can help to suppress the influence of activity invaders and thus obtain a better activity representation. Conclusion ========== In this paper, we developed a latent embedding model for collective activity recognition.
By embedding the latent variables of the collective graphical model and combining them with an attention mechanism, our method can effectively capture the complex collective structures depicted in collective activity videos (images) and obtains state-of-the-art results on two benchmark datasets and a new collective activity set. Acknowledgment {#acknowledgment .unnumbered} ============== This work was supported partially by the National Key Research and Development Program of China (2016YFB1001002, 2016YFB1001003), NSFC (No.61522115, 61472456, 61628212), Guangdong Natural Science Funds for Distinguished Young Scholar under Grant S2013050014265, the Guangdong Program (No.2015B010105005), the Guangdong Science and Technology Planning Project (No.2016A010102012, 2014B010118003), and Guangdong Program for Support of Top-notch Young Professionals (No.2014TQ01X779). [^1]: Corresponding author. [^2]: We use activity invaders to indicate the individuals irrelevant to the activity.
--- abstract: 'We study Voiculescu’s microstate free entropy for a single non–selfadjoint random variable. The main result is that certain additional constraints on eigenvalues of microstates do not change the free entropy. Our tool is the method of random regularization of Brown measure which was studied recently by Haagerup and the author. As a simple application we present an upper bound for the free entropy of a single non–selfadjoint operator in terms of its Brown measure and the second moment. We furthermore show that this inequality becomes an equality for a class of $DT$–operators which was introduced recently by Dykema and Haagerup.' address: 'Institute of Mathematics, University of Wroclaw, pl. Grunwaldzki 2/4, 50-384 Wroclaw, Poland' author: - Piotr Śniady bibliography: - 'biblio.bib' title: 'Inequality for Voiculescu’s free entropy in terms of Brown measure' --- Introduction ============ The microstate free entropy $\chi$ was introduced by Voiculescu [@VoiculescuPart2] as a tool for the study of some non–commutative systems. Roughly speaking, it answers the question of how many finite matrices have nearly the same moments as a given non–commutative random variable. It has turned out to have very powerful applications (cf. [@Ge1997; @VoiculescuPart3]); however, it is not an easy object to deal with. One of the reasons for these difficulties is that currently there are no general methods for the computation of the free entropy in concrete cases. Exact formulas have been found for the free entropy of a single selfadjoint random variable, for tuples of free random variables [@VoiculescuPart2] and for $R$–diagonal elements [@NicaShlyakhtenkoSpeicher1999]. In this article we present a method which we hope will be useful for calculating and estimating the free entropy in many concrete cases.
The main idea is to change the definition of the microstates $\Gamma$ which approximate a single non–selfadjoint random variable $x$ in such a way that the value of the free entropy $\chi(x)$ does not change. The original sets $\Gamma$ consisted of all matrices which—informally speaking—have almost the same moments as the given random variable $x$, while our new sets $\tilde{\Gamma}$ consist of those matrices in $\Gamma$ whose eigenvalue distributions are additionally close to the Brown measure of $x$. In order to show that $\Gamma$ and $\tilde{\Gamma}$ give rise to the same free entropy we use the method of random regularization of the Brown spectral measure, which was introduced by Haagerup [@Haagerup2001] and further developed by the author [@Sniady2001]. As an application we present a new upper bound for the free entropy of a single random variable $x$ in terms of its Brown measure and second moment. We also show that for a class of $DT$–operators, which was introduced recently by Dykema and Haagerup [@DykemaHaagerup2001], this inequality becomes an equality. Preliminaries {#sec:preliminaries} ============= Non–commutative probability spaces ---------------------------------- A non–commutative probability space is a pair $({{\mathcal{A}}},\phi)$, where ${{\mathcal{A}}}$ is a $\star$–algebra and $\phi$ is a normal, faithful, tracial state on ${{\mathcal{A}}}$. Elements of ${{\mathcal{A}}}$ will be referred to as non–commutative random variables and the state $\phi$ as the expectation value. One of the simplest examples is the set ${{\mathcal{M}}}_N$ of all complex–valued $N\times N$ matrices equipped with the normalized trace $\operatorname{tr}$ given by $\operatorname{tr}m=\frac{1}{N} \operatorname{Tr}m$, where $m\in{{\mathcal{M}}}_N$ and $\operatorname{Tr}$ is the usual trace. Microstate free entropy.
------------------------ The original definition of Voiculescu’s free entropy $\chi^{{\operatorname{sa}}}(x_1,\dots,x_n)$ allows one to compute the free entropy of a tuple of non–commutative self–adjoint random variables. The free entropy $\chi(x)$ of a non–selfadjoint random variable considered in this article is connected with the original definition by $$\chi(x)=\chi^{{\operatorname{sa}}}(\Re x,\Im x).$$ Let $x$ be a non–commutative random variable, let $\epsilon>0$, $R>0$ be real numbers and let $k>0$ be an integer. We define the sets [@VoiculescuPart2; @NicaShlyakhtenkoSpeicher1999] $$\begin{gathered} \Gamma_R(x;k,N,\epsilon)=\Big\{m\in {{\mathcal{M}}}_N: \|m\|\leq R \text{ and} \\ |\operatorname{tr}(m^{s_1}\cdots m^{s_p})- \tau(x^{s_1} \cdots x^{s_p})|<\epsilon\\ \text{for all } p\leq k \text{ and } s_1,s_2,\dots, s_p\in \{1,\star\} \Big\}. \label{eq:defgamma}\end{gathered}$$ Define next $$\chi_R(x;k,\epsilon)=\limsup_{N\to\infty} \left[\frac{1}{N^2}\log\operatorname{vol}\Gamma_R(x;k,N,\epsilon) + \log N\right], \label{eq:defchir}$$ where $\operatorname{vol}$ is the Lebesgue measure on ${{\mathcal{M}}}_N$ as described in (\[eq:defvol\]). Lastly, the free entropy is defined by $$\chi(x)=\sup_R \inf_{k,\epsilon} \chi_R(x;k,\epsilon). \label{eq:defchi}$$ Since $\chi_R(x;k,\epsilon)$ is a decreasing function of $k$ and an increasing function of $\epsilon$, we have the following simple lemma. \[lem:pierwszy\] Let a non–commutative random variable $x$ and a number $R>0$ be given. Then there exist a sequence $(\epsilon_N)$ of non–negative numbers and a sequence $(k_N)$ of natural numbers such that $\lim_{N\rightarrow\infty} \epsilon_N =0$, $\lim_{N\rightarrow\infty} k_N=\infty$ and $$\chi_R(x) \leq\limsup_{N\rightarrow\infty} \chi_R(x;k_N,N,\epsilon_N). \label{eq:nawygnaniu}$$ Fuglede–Kadison determinant and Brown measure --------------------------------------------- Let a non–commutative probability space $({{\mathcal{A}}},\phi)$ be given.
For $x\in{{\mathcal{A}}}$ we define its Fuglede–Kadison determinant $\Delta(x)$ by [@FugledeKadison] $$\Delta(x)=\exp\left[ \phi( \ln |x| )\right]$$ and its Brown measure [@Brown] to be the Schwartz distribution on ${{\mathbb{C}}}$ given by $$\mu_x= \frac{1}{2\pi} \left( \frac{\partial^2}{\partial (\Re\lambda)^2} +\frac{\partial^2}{\partial (\Im\lambda)^2} \right) \ln \Delta (x-\lambda).$$ One can show that in fact $\mu_x$ is a positive probability measure on ${{\mathbb{C}}}$. The Brown measure of a matrix $m\in{{\mathcal{M}}}_N$ with respect to the state $\operatorname{tr}$ is the probability counting measure on the set of eigenvalues of $m$: $$\mu_m=\frac{1}{N} \sum_{i=1}^N \delta_{\lambda_i},$$ where $\lambda_1,\dots,\lambda_N$ are the eigenvalues of $m$ counted with multiplicities. In the following we will be interested in studying the random measure $\omega\mapsto \mu_{A(\omega)}$ for a random matrix $A\in{{{\mathcal{M}}}_N\big({{\mathcal{L}}}^{\infty-}(\Omega) \big)}$. This random measure is called the empirical eigenvalue distribution. Convergence of $\star$–moments ------------------------------ Let a sequence $(A_N)$ of random matrices (where $A_N\in{{{\mathcal{M}}}_N\big({{\mathcal{L}}}^{\infty-}(\Omega) \big)}$), a non–commutative probability space $({{\mathcal{A}}},\phi)$ and $x\in{{\mathcal{A}}}$ be given. We say that the sequence $A_N$ converges to $x$ in $\star$–moments almost surely if for every $n\in{{\mathbb{N}}}$ and $s_1,\dots, s_n\in\{1,\star\}$ we have that $$\lim_{N\rightarrow\infty} \operatorname{tr}_N[ A_N^{s_1} \cdots A_N^{s_n} ] = \phi( x^{s_1} \cdots x^{s_n} )$$ holds almost surely.
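For matrices these notions are completely concrete, which makes a quick numerical sanity check possible. The following sketch (an illustration only, not part of the paper) verifies that for $m\in{{\mathcal{M}}}_N$ the Fuglede–Kadison determinant with respect to $\operatorname{tr}$ equals $|\det m|^{1/N}$, and computes the eigenvalues carrying the counting measure $\mu_m$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
m = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# Delta(m) = exp(tr ln|m|), where |m| = (m* m)^{1/2}; the eigenvalues of |m|
# are the singular values of m, and tr is the normalized trace.
s = np.linalg.svd(m, compute_uv=False)
delta = np.exp(np.mean(np.log(s)))

# For matrices this reduces to |det m|^{1/N}, since prod(s) = |det m|.
assert np.isclose(delta, abs(np.linalg.det(m)) ** (1 / N))

# The Brown measure of m w.r.t. tr is (1/N) sum_i delta_{lambda_i}:
eigs = np.linalg.eigvals(m)
```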
Random regularization of Brown measure -------------------------------------- We say that a random matrix $$G_N=(G_{N,ij})_{1\leq i,j\leq N}\in {{{\mathcal{M}}}_N\big({{\mathcal{L}}}^{\infty-}(\Omega) \big)}$$ is a standard Gaussian random matrix if $$\big(\Re G_{N,ij}\big)_{1\leq i,j\leq N}, \big(\Im G_{N,ij}\big)_{1\leq i,j \leq N}$$ are independent Gaussian variables with mean zero and variance $\frac{1}{2 N}$. \[theo:regularyzacja\] Let $(A_N)$ be a sequence of random matrices, $A_N\in{{{\mathcal{M}}}_N\big({{\mathcal{L}}}^{\infty-}(\Omega) \big)}$, which converges in $\star$–moments to $x$ almost surely. Let furthermore $(G_N)$ be a sequence of independent standard Gaussian matrices which is independent of $(A_N)$. There exists a sequence $(t_N)$ of real numbers such that $\lim_{N\rightarrow\infty} t_N=0$ and such that the sequence of empirical eigenvalue distributions $\mu_{A_N+t_N G_N}$ converges weakly to $\mu_x$ almost surely. There also exists a sequence $(B_N)$ of non–random matrices $B_N\in{{\mathcal{M}}}_N$ such that $\lim_{N\rightarrow\infty} \|B_N\|=0$ and such that the sequence of empirical eigenvalue distributions $\mu_{A_N+B_N}$ converges weakly to $\mu_x$ almost surely. The first part was proved in [@Sniady2001]. For the second part of the theorem let us define $B_N=t_N G_N(\omega)$ for a fixed $\omega\in\Omega$. Since $\limsup_{N\rightarrow\infty} \|G_N\|<\infty$ holds almost surely [@Geman], the sequence $(B_N)$ defined in this way fulfills the hypothesis of the theorem for almost every choice of $\omega$. The main result: improved microstates $\tilde{\Gamma}$ {#sec:themain} ====================================================== Let $x$ be a non–commutative random variable, let $\epsilon>0$, $\theta>0$, $R>0$ be real numbers and let $k>0$, $l>0$ be integers.
In full analogy with (\[eq:defgamma\])–(\[eq:defchi\]) we define improved microstates $\tilde{\Gamma}$ and improved free entropy $\tilde{\chi}$: $$\begin{gathered} \tilde{\Gamma}_R(x;k,N,\epsilon,l,\theta)=\bigg\{m\in \Gamma_R(x;k,N,\epsilon):\\ \left| \int_{{\mathbb{C}}}z^i \bar{z}^j {\ {\mathrm{d}}}\mu_m - \int_{{\mathbb{C}}}z^i \bar{z}^j {\ {\mathrm{d}}}\mu_x\right| <\theta \quad\text{for } i,j\leq l \bigg\}, \label{eq:defgammatilde}\end{gathered}$$ $$\tilde{\chi}_R(x;k,\epsilon,l,\theta)=\limsup_{N\to\infty} \left[\frac{1}{N^2}\log\operatorname{vol}\tilde{\Gamma}_R(x;k,N,\epsilon,l,\theta) + \log N\right], \label{eq:defchirtilde}$$ $$\tilde{\chi}(x)=\sup_R \inf_{k,\epsilon,l,\theta} \tilde{\chi}_R(x;k,\epsilon,l,\theta). \label{eq:defchitilde}$$ \[theo:glowne\] For every non–commutative random variable $x$ we have $$\chi(x)=\tilde{\chi}(x).$$ Since $\tilde{\Gamma}_R(x; k,N,\epsilon,l,\theta)\subseteq \Gamma_R(x; k,N,\epsilon)$, the inequality $\tilde{\chi}(x)\leq \chi(x)$ follows easily. Let $(\epsilon_N)$ and $(k_N)$ be the sequences given by Lemma \[lem:pierwszy\]. Let $(A_N)$ be a sequence of independent random matrices such that the distribution of $A_N$ is the uniform distribution on the set $\Gamma_R(x; k_N,N,\epsilon_N)$ and let $(B_N)$ be the sequence given by Theorem \[theo:regularyzacja\]. Since $\|B_N\|$ converges to zero, there exist $R'>0$, a sequence of positive numbers $(\epsilon'_N)$ which converges to zero and a sequence of natural numbers $(k'_N)$ which diverges to infinity such that $$\Gamma_R(x;k_N,N,\epsilon_N)+B_N\subseteq\Gamma_{R'}(x;k'_N,N,\epsilon_N')$$ holds for every $N\in{{\mathbb{N}}}$, where $\Gamma_R(x;k_N,N,\epsilon_N)+B_N$ denotes the translation of the set $\Gamma_R(x;k_N,N,\epsilon_N)$ by the vector $B_N$.
Since the random measures $\omega\mapsto\mu_{A_N(\omega)+B_N}$ converge weakly to $\mu_x$ in probability, for any $\theta>0$ and any integer $l>0$ we have that $$\lim_{N\rightarrow\infty} P\big(\omega:A_N(\omega)+B_N\not\in\tilde{\Gamma}_{R'}(x; k'_N,N,\epsilon_N',l,\theta) \big) =0.$$ Since the Lebesgue measure is translation–invariant, for any $\theta>0$ and any integer $l>0$ we have $$\limsup_{N\rightarrow\infty} \frac{\operatorname{vol}\Gamma_R(x; k_N,N,\epsilon_N)}{\operatorname{vol}\tilde{\Gamma}_{R'}(x; k'_N,N,\epsilon'_N,l,\theta)} \leq 1,$$ or equivalently $$\label{eq:christmasa} \limsup_{N\rightarrow\infty} \big( \chi_R(x; k_N,N,\epsilon_N) - \tilde{\chi}_{R'}(x; k'_N,N,\epsilon'_N,l,\theta) \big)\leq 0.$$ For any $\epsilon>0$ and any integer $k>0$ there exists $N_0$ such that for any $N>N_0$ we have $\epsilon'_N<\epsilon$ and $k'_N>k$; hence for $N>N_0$ $$\label{eq:christmasb} \tilde{\chi}_{R'}(x; k'_N,N,\epsilon'_N,l,\theta) \leq \tilde{\chi}_{R'}(x; k,N,\epsilon,l,\theta).$$ Inequalities (\[eq:christmasa\]) and (\[eq:christmasb\]) combine to give $$\chi(x)\leq \tilde{\chi}(x).$$ Application: upper bound for free entropy ========================================= In this section we present a new inequality for the free entropy of a single non–selfadjoint random variable. The main idea is to write matrices from the microstates $\tilde{\Gamma}$ in upper triangular form and then to find constraints on the diagonal and offdiagonal entries. Pull–back of the Lebesgue measure on ${{\mathcal{M}}}_N$ -------------------------------------------------------- We denote by ${{\mathcal{M}}}_N^{{\operatorname{d}}}=\{m\in{{\mathcal{M}}}_N: m_{ij}=0 \mbox{ if } i\neq j\}$ the set of diagonal matrices and by ${{\mathcal{M}}}_N^{{\operatorname{sut}}}=\{m\in{{\mathcal{M}}}_N: m_{ij}=0 \mbox{ if } i\geq j\}$ the set of all strictly upper triangular matrices.
We can regard ${{\mathcal{M}}}_N$ and ${{\mathcal{M}}}_N^{{\operatorname{sut}}}$ as real Euclidean spaces with the scalar product $\langle x,y\rangle=\Re \operatorname{Tr}x y{^{\star}}$ and thus equip them with the Lebesgue measures $$\operatorname{vol}=\prod_{1\leq i,j\leq N} {\ {\mathrm{d}}}\Re m_{ij} {\ {\mathrm{d}}}\Im m_{ij} \label{eq:defvol}$$ and $$\operatorname{vol}^{{\operatorname{sut}}}= \prod_{1\leq i<j\leq N} {\ {\mathrm{d}}}\Re m_{ij} {\ {\mathrm{d}}}\Im m_{ij}$$ respectively. We have an obvious isomorphism ${{\mathcal{M}}}_N^{{\operatorname{d}}}=\{ (\lambda_1,\dots,\lambda_N)\in {{\mathbb{C}}}^N \}$ and we equip it with the measure $$\operatorname{vol}^{{\operatorname{d}}}=\frac{ \pi^{\frac{N^2-N}{2}} }{\prod_{1\leq i\leq N} i!} \prod_{1\leq i<j\leq N} |\lambda_i-\lambda_j|^2 \prod_{1\leq i\leq N} {\ {\mathrm{d}}}\Re \lambda_i {\ {\mathrm{d}}}\Im \lambda_i.$$ We also denote by $U_N$ the set of unitary $N\times N$ matrices equipped with the Haar measure $\operatorname{vol}^{{\operatorname{U}}}$ normalized in such a way that it is a probability measure. \[prop:pullback\] For every $N$ the measure $\operatorname{vol}^{{\operatorname{d}}}\times \operatorname{vol}^{{\operatorname{sut}}}\times \operatorname{vol}^{{\operatorname{U}}}$ is a pull–back of the measure $\operatorname{vol}$ with respect to the map $${{\mathcal{M}}}_N^{{\operatorname{d}}}\times {{\mathcal{M}}}_N^{{\operatorname{sut}}}\times U_N \ni (d, m, u) \mapsto u(d+m)u^{-1} \in {{\mathcal{M}}}_N.$$ This result is due to Dyson and can be extracted from Appendix A.35 of [@Mehta]. Diagonal entropy $\hat{\chi}^{{\operatorname{d}}}$ -------------------------------------------------- Let $x$ be a non–commutative random variable. In the following we define an auxiliary quantity $\hat{\chi}^{{\operatorname{d}}}(x)$ which answers the question of how many diagonal matrices (with respect to the measure $\operatorname{vol}^{{\operatorname{d}}}$) have almost the same Brown measure as $x$.
$$\begin{gathered} \hat{\Gamma}^{{\operatorname{d}}}_R (x;N,l,\theta)= \bigg\{m \in {{\mathcal{M}}}_N^{{\operatorname{d}}}: \|m\|\leq R \text{ and} \\ \bigg| \int_{{\mathbb{C}}}z^i \bar{z}^j{\ {\mathrm{d}}}\mu_m - \int_{{\mathbb{C}}}z^i \bar{z}^j{\ {\mathrm{d}}}\mu_x \bigg|< \theta \text{ for all } i,j\leq l \bigg\}; \end{gathered}$$ $$\hat{\chi}_R^{{\operatorname{d}}}(x;l,\theta)=\lim_{N\to\infty} \left[\frac{1}{N^2}\log \operatorname{vol}^{{\operatorname{d}}}\hat{\Gamma}_R^{{\operatorname{d}}}(x;N,l,\theta) + \frac{\log N}{2} \right],$$ $$\hat{\chi}^{{\operatorname{d}}}(x)=\sup_R \inf_{l,\theta} \hat{\chi}_R^{{\operatorname{d}}}(x; l,\theta).$$ \[theo:diagonalne\] For any non–commutative random variable $x$ we have $$\hat{\chi}^{{\operatorname{d}}}(x)=\int_{{{\mathbb{C}}}} \int_{{{\mathbb{C}}}} \log |z_1-z_2| {\ {\mathrm{d}}}\mu_x(z_1) {\ {\mathrm{d}}}\mu_x(z_2)+\frac{3}{4}+\frac{\ln \pi}{2}.$$ The proof follows exactly that of Proposition 4.5 in [@VoiculescuPart2], but since we are dealing with measures on ${{\mathbb{C}}}$ and the original proof concerns measures on ${{\mathbb{R}}}$, we have to replace Lemma 4.3 in [@VoiculescuPart2] by Theorem 2.1 of [@Hadwin]. Offdiagonality -------------- If $x$ is a non–commutative random variable we define its offdiagonality $\operatorname{od}_x$ by $$\operatorname{od}_x=\tau(xx{^{\star}})-\int_{{\mathbb{C}}}|z|^2 d\mu_x(z).$$ This quantity can be regarded as a kind of non–commutative variance. Since $\operatorname{od}_x=0$ if and only if $x$ is normal, the offdiagonality of $x$ can also be regarded as a kind of distance from $x$ to the normal operators.
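As a quick numerical sanity check of this definition (an added illustration, not part of the paper; the size $N=6$ and the seed are arbitrary): for an upper triangular matrix the eigenvalues are the diagonal entries, so $\mu_m$ is the counting measure of the diagonal, and $\operatorname{od}_m$ can be computed both from the definition and directly from the strictly upper entries.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
# A random upper-triangular matrix; its eigenvalues are its diagonal entries,
# so the Brown measure mu_m is the counting measure of the diagonal.
m = np.triu(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

tau = lambda a: np.trace(a).real / N              # normalized trace
second_moment = np.mean(np.abs(np.diag(m)) ** 2)  # integral of |z|^2 d(mu_m)
od = tau(m @ m.conj().T) - second_moment          # od_m from the definition

# od_m equals the normalized sum of squared strictly-upper entries.
od_direct = np.sum(np.abs(np.triu(m, k=1)) ** 2) / N
```
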
For an upper–triangular matrix $m\in{{\mathcal{M}}}_N({{\mathbb{C}}})$ ($m_{ij}=0$ if $i>j$) its offdiagonality is equal to the (normalized) sum of squares of the offdiagonal entries: $$\operatorname{od}_m=\frac{1}{N} \sum_{1\leq i<j\leq N} |m_{ij}|^2.$$ \[prop:pozadiagonalne\] For any $o>0$ we have that $$\lim_{N\rightarrow\infty} \left[ \frac{1}{N^2} \log \operatorname{vol}^{{\operatorname{sut}}}\{m\in{{\mathcal{M}}}_N^{{\operatorname{sut}}}: \operatorname{od}_m\leq o\}+\frac{\log N}{2} \right]=\frac{1}{2}+\frac{\log 2\pi o}{2}.$$ It is enough to notice that $\{m\in{{\mathcal{M}}}_N^{{\operatorname{sut}}}: \operatorname{tr}mm{^{\star}}\leq o \}$ is an $N(N-1)$–dimensional ball with radius $\sqrt{oN}$; hence its volume is equal to $$\pi^{\frac{N(N-1)}{2}} \left[ \Gamma\left( \frac{N (N-1)}{2}+1 \right)\right]^{-1} (oN)^{\frac{N (N-1)}{2}}.$$ The main inequality ------------------- Let $x$ be a non–commutative random variable. Then $$\label{eq:glowne} \chi(x)\leq \int_{{{\mathbb{C}}}} \int_{{{\mathbb{C}}}} \log |z_1-z_2| {\ {\mathrm{d}}}\mu_x(z_1) {\ {\mathrm{d}}}\mu_x(z_2)+\frac{5}{4}+\log\left( \pi\sqrt{2 \operatorname{od}_x} \right).$$ The proof is a direct consequence of Theorem \[theo:glowne\], Proposition \[prop:pullback\], Theorem \[theo:diagonalne\] and Proposition \[prop:pozadiagonalne\]. Free entropy of $DT$–operators ------------------------------ For any compactly supported probability measure $\nu$ on ${{\mathbb{C}}}$ and $o\geq 0$ Dykema and Haagerup consider an operator $x$ which is said to be $DT(\nu,o)$ [@DykemaHaagerup2001]. This operator is implicitly defined to be the expected $\star$–moment limit of random matrices $$A_N=D_N+\sqrt{o} T_N, \label{eq:definicjaan}$$ where $D_N$ is a diagonal random matrix with eigenvalues $\lambda_1,\dots, \lambda_N$ which are i.i.d.
random variables with distribution given by $\nu$ and $$T_N=\left[ \begin{array}{ccccc} 0 & g_{1,2} & \cdots & g_{1,N-1} & g_{1,N} \\ 0 & 0 & \cdots & g_{2,N-1} & g_{2,N} \\ \vdots& & \ddots & \vdots & \vdots \\ & & & 0 & g_{N-1,N} \\ 0& & \cdots & 0 & 0 \end{array} \right], \label{eq:utm}$$ is an upper–triangular random matrix where $(\Re g_{i,j},\Im g_{i,j})_{1\leq i<j \leq N}$ are i.i.d. $N\left(0,\frac{1}{N}\right)$ random variables. We recall that the Brown measure and offdiagonality of $x$ are given by $\mu_x=\nu$ and $\operatorname{od}_x=o$. For any compactly supported probability measure $\nu$ on ${{\mathbb{C}}}$ and any number $o>0$, if $x$ is a $DT(\nu,o)$ operator then the inequality (\[eq:glowne\]) becomes an equality. Let us fix $R>0$. As in Lemma \[lem:pierwszy\], let $(\theta_N)$ be a sequence of non–negative numbers and $(l_N)$ be a sequence of natural numbers such that $$\lim_{N\rightarrow\infty} \theta_N =0, \qquad \lim_{N\rightarrow\infty} l_N=\infty,$$ $$\hat{\chi}^{{\operatorname{d}}}_R(x) \leq\limsup_{N\rightarrow\infty} \hat{\chi}^{{\operatorname{d}}}_R(x;N,l_N,\theta_N).$$ In definition (\[eq:definicjaan\]) of $A_N$ let us replace $D_N$ by an arbitrary (non–random) element of the set $\hat{\Gamma}^{{\operatorname{d}}}_R(x; N, l_N,\theta_N)$. From results of Dykema and Haagerup [@DykemaHaagerup2000] it follows that despite this change the sequence $A_N$ still converges in expected $\star$–moments to $x$: $$\lim_{N\rightarrow\infty} {{\mathbb{E}}}\operatorname{tr}(A_N^{s_1} \cdots A_N^{s_k})= \phi(x^{s_1} \cdots x^{s_k}) \label{eq:zbieznosca}$$ for any $k\in{{\mathbb{N}}}$ and $s_1,\dots,s_k\in\{1,\star\}$ and by using similar combinatorial arguments as in [@Thorbjornsen2000] one can show that $$\lim_{N\rightarrow\infty} \operatorname{Var}\operatorname{tr}(A_N^{s_1} \cdots A_N^{s_k})=0.
\label{eq:zbieznoscb}$$ Since $\limsup_{N\rightarrow\infty} \|T_N\|<\infty$ almost surely [@Geman], there exists $R'>0$ such that for any integer $k$ and $\epsilon>0$ $$\lim_{N\rightarrow\infty} P\big(\omega\in\Omega: D_N+\sqrt{o} T_N(\omega)\in \Gamma_{R'}(x; k,N,\epsilon) \big) =1 . \label{eq:prawieok}$$ Furthermore, since the convergence in (\[eq:zbieznosca\]) and (\[eq:zbieznoscb\]) is uniform with respect to the choice of the sequence $(D_N)$, it is possible to find a universal $R'$ for all choices of $(D_N)$. By comparing the densities of two measures on ${{\mathcal{M}}}_N^{{\operatorname{sut}}}$: the Lebesgue measure $\operatorname{vol}^{{\operatorname{sut}}}$ and the distribution of the Gaussian random matrix $\sqrt{o} T_N$, we see that (\[eq:prawieok\]) implies that for every $0<\delta<1$, every integer $k$ and $\epsilon>0$ there exists $N_0$ such that for $N>N_0$ the volume of the set $\{m\in{{\mathcal{M}}}_N^{{\operatorname{sut}}}: D_N+m \in \Gamma_{R'}(x;k,N,\epsilon) \}$ is greater than or equal to the volume of an $N(N-1)$–dimensional ball with radius $\sqrt{(1-\delta)oN}$: $$\begin{gathered} \operatorname{vol}^{{\operatorname{sut}}}\{m\in{{\mathcal{M}}}_N^{{\operatorname{sut}}}: D_N+m \in \Gamma_{R'}(x;k,N,\epsilon) \}\geq \\ \pi^{\frac{N(N-1)}{2}} \left[ \Gamma\left( \frac{N (N-1)}{2}+1 \right)\right]^{-1} [(1-\delta)oN]^{\frac{N (N-1)}{2}}. \label{eq:idenaobiad}\end{gathered}$$ Since the sequence $(D_N)$ was chosen arbitrarily, it follows that for every $0<\delta<1$, every integer $k$ and $\epsilon>0$ there exists $N_0$ such that for any $N>N_0$ and any $D_N\in\hat{\Gamma}^{{\operatorname{d}}}_R(x; N, l_N,\theta_N)$ inequality (\[eq:idenaobiad\]) holds. Now it is enough to apply Proposition \[prop:pullback\] to show that for every $0<\delta<1$, every integer $k$ and $\epsilon>0$ we have $$\chi_{R'}(x; k,\epsilon) \geq \hat{\chi}^{{\operatorname{d}}}_{R}(x)+\frac{1}{2}+\frac{\log 2\pi (1-\delta) o}{2},$$ which finishes the proof.
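The random matrix model (\[eq:definicjaan\]) behind this proof is easy to simulate. The sketch below is an added illustration with arbitrary parameters ($N=400$, $o=1/4$, $\nu$ uniform on $[0,1]$, NumPy, a fixed seed); it follows the convention stated above that the real and imaginary parts of the $g_{i,j}$ are i.i.d. $N(0,\frac{1}{N})$. Since $A_N$ is upper triangular, its Brown measure is carried by the sampled diagonal, and its offdiagonality concentrates around $o$.

```python
import numpy as np

rng = np.random.default_rng(3)
N, o = 400, 0.25
# D_N: diagonal matrix with i.i.d. eigenvalues sampled from nu (uniform on [0,1]).
lam = rng.uniform(0.0, 1.0, size=N)
# T_N: strictly upper triangular, real and imaginary parts i.i.d. N(0, 1/N).
T = np.triu(rng.normal(0.0, np.sqrt(1.0 / N), (N, N))
            + 1j * rng.normal(0.0, np.sqrt(1.0 / N), (N, N)), k=1)
A = np.diag(lam) + np.sqrt(o) * T

# A is upper triangular, so mu_A is the empirical measure of its diagonal;
# the offdiagonality od_A = tau(A A*) - integral |z|^2 d(mu_A) should be close to o.
tau = lambda m: np.trace(m).real / N
od_A = tau(A @ A.conj().T) - np.mean(np.abs(lam) ** 2)
```
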
Comparison of inequalities for free entropy ------------------------------------------- The inequality (\[eq:glowne\]) contains a double integral, which is often called the logarithmic energy of a measure. A similar term appears in the formula for the free entropy of a single selfadjoint operator, which is due to Voiculescu [@VoiculescuPart2]: $$\chi^{{\operatorname{sa}}}(x)=\int_{{{\mathbb{R}}}} \int_{{{\mathbb{R}}}} \log |z_1-z_2| {\ {\mathrm{d}}}\mu_x(z_1) {\ {\mathrm{d}}}\mu_x(z_2)+\frac{3}{4}+\frac{\log 2\pi}{2}. \label{eq:entropiasamosprzezonego}$$ It should be stressed that, despite the formal resemblance, the free entropies in (\[eq:glowne\]) and (\[eq:entropiasamosprzezonego\]) are different objects. Namely, in (\[eq:glowne\]) we consider the free entropy $\chi(x)$ of a non–selfadjoint random variable, while in (\[eq:entropiasamosprzezonego\]) we consider the free entropy $\chi^{{\operatorname{sa}}}(x)$ of a selfadjoint random variable, which is defined by hermitian matrix approximations. On the other hand, inequality (\[eq:glowne\]) contains a term equal to the logarithm of the offdiagonality, which can be regarded as a non–commutative variance. A similar expression appears in Voiculescu’s inequality for the free entropy of a non–selfadjoint variable [@VoiculescuPart2]: $$\chi(x)\leq \log \left[ \pi e \big( \phi(|x|^2)-|\phi(x)|^2 \big)^2 \right].$$ Acknowledgements ================ The research was conducted at Texas A&M University on a scholarship funded by the Polish–US Fulbright Commission. I acknowledge the support of Polish Research Committee grant No. P03A05415.
--- abstract: 'Let $\mathcal L=-\Delta_{\mathbb H^n}+V$ be a Schrödinger operator on the Heisenberg group $\mathbb H^n$, where $\Delta_{\mathbb H^n}$ is the sub-Laplacian on $\mathbb H^n$ and the nonnegative potential $V$ belongs to the reverse Hölder class $RH_s$ with $s\geq Q/2$. Here $Q=2n+2$ is the homogeneous dimension of $\mathbb H^n$. For given $\alpha\in(0,Q)$, the fractional integral associated to the Schrödinger operator $\mathcal L$ is defined by $\mathcal I_{\alpha}={\mathcal L}^{-{\alpha}/2}$. In this article, we first introduce the Morrey space $L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ and weak Morrey space $WL^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ related to the nonnegative potential $V$. Then we establish the boundedness of the fractional integral ${\mathcal L}^{-{\alpha}/2}$ on these new spaces. Furthermore, in order to deal with certain extreme cases, we also introduce the spaces $\mathrm{BMO}_{\rho,\infty}(\mathbb H^n)$ and $\mathcal{C}^{\beta}_{\rho,\infty}(\mathbb H^n)$ with exponent $\beta\in(0,1]$.' address: | College of Mathematics and Econometrics, Hunan University, Changsha, 410082, P. R. China\ & Department of Mathematics and Statistics, Memorial University, St. John’s, NL A1C 5S7, Canada author: - Hua Wang title: Morrey spaces related to certain nonnegative potentials and fractional integrals on the Heisenberg groups --- Introduction ============ Heisenberg group $\mathbb H^n$ ------------------------------ The *Heisenberg group* $\mathbb H^n$ is a nilpotent Lie group with underlying manifold $\mathbb C^n\times\mathbb R$. The group structure (the multiplication law) is given by $$(z,t)\cdot(z',t'):=\Big(z+z',t+t'+2\mathrm{Im}(z\cdot\overline{z'})\Big),$$ where $z=(z_1,z_2,\dots,z_n)$, $z'=(z_1',z_2',\dots,z_n')\in\mathbb C^n$, and $$z\cdot\overline{z'}:=\sum_{j=1}^nz_j\overline{z_j'}.$$ It can be easily seen that the inverse element of $u=(z,t)$ is $u^{-1}=(-z,-t)$, and the identity is the origin $(0,0)$.
The Lie algebra of left-invariant vector fields on $\mathbb H^n$ is spanned by $$\begin{cases} X_j=\displaystyle\frac{\partial}{\partial x_j}+2y_j\frac{\partial}{\partial t},\quad j=1,2,\dots,n,&\\ Y_j=\displaystyle\frac{\partial}{\partial y_j}-2x_j\frac{\partial}{\partial t},\quad j=1,2,\dots,n,&\\ T=\displaystyle\frac{\partial}{\partial t}.& \end{cases}$$ All non-trivial commutation relations are given by $$[X_j,Y_j]=-4T,\quad j=1,2,\dots,n.$$ The sub-Laplacian $\Delta_{\mathbb H^n}$ is defined by $$\Delta_{\mathbb H^n}:=\sum_{j=1}^n\big(X_j^2+Y_j^2\big).$$ The dilations on $\mathbb H^n$ have the following form $$\delta_a(z,t):=(az,a^2t),\quad a>0.$$ For given $(z,t)\in\mathbb H^n$, the *homogeneous norm* of $(z,t)$ is given by $$|(z,t)|=\big(|z|^4+t^2\big)^{1/4}.$$ Observe that $|(z,t)^{-1}|=|(z,t)|$ and $$\big|\delta_a(z,t)\big|=\big(|az|^4+(a^2t)^2\big)^{1/4}=a|(z,t)|.$$ In addition, this norm $|\cdot|$ satisfies the triangle inequality and leads to a left-invariant distance $d(u,v)=\big|u^{-1}\cdot v\big|$ for $u=(z,t)$, $v=(z',t')\in\mathbb H^n$. The ball of radius $r$ centered at $u$ is denoted by $$B(u,r)=\big\{v\in\mathbb H^n:d(u,v)<r\big\}.$$ The Haar measure on $\mathbb H^n$ coincides with the Lebesgue measure on $\mathbb R^{2n}\times\mathbb R$. The measure of any measurable set $E\subset\mathbb H^n$ is denoted by $|E|$. For $(u,r)\in\mathbb H^n\times(0,\infty)$, it can be shown that the volume of $B(u,r)$ is $$|B(u,r)|=r^{Q}\cdot|B(0,1)|,$$ where $Q:=2n+2$ is the *homogeneous dimension* of $\mathbb H^n$ and $|B(0,1)|$ is the volume of the unit ball in $\mathbb H^n$. A direct calculation shows that $$|B(0,1)|=\frac{2\pi^{n+\frac{\,1\,}{2}}\Gamma(\frac{\,n\,}{2})}{(n+1)\Gamma(n)\Gamma(\frac{n+1}{2})}.$$ Given a ball $B=B(u,r)$ in $\mathbb H^n$ and $\lambda>0$, we shall use the notation $\lambda B$ to denote $B(u,\lambda r)$.
Clearly, we have $$\label{homonorm} |B(u,\lambda r)|=\lambda^{Q}\cdot|B(u,r)|.$$ For more information about the harmonic analysis on the Heisenberg groups, we refer the reader to [@stein2 Chapter XII] and [@thangavelu]. Let $V:\mathbb H^n\rightarrow\mathbb R$ be a nonnegative locally integrable function that belongs to the *reverse Hölder class* $RH_s$ for some exponent $1<s<\infty$; i.e., there exists a positive constant $C>0$ such that the following reverse Hölder inequality $$\left(\frac{1}{|B|}\int_B V(w)^s\,dw\right)^{1/s}\leq C\left(\frac{1}{|B|}\int_B V(w)\,dw\right)$$ holds for every ball $B$ in $\mathbb H^n$. For given $V\in RH_s$ with $s\geq Q/2$, we introduce the *critical radius function* $\rho(u)=\rho(u;V)$ which is given by $$\label{rho} \rho(u):=\sup\bigg\{r>0:\frac{1}{r^{Q-2}}\int_{B(u,r)}V(w)\,dw\leq1\bigg\},\quad u\in\mathbb H^n,$$ where $B(u,r)$ denotes the ball in $\mathbb H^n$ centered at $u$ and with radius $r$. It is well known that this auxiliary function satisfies $0<\rho(u)<\infty$ for any $u\in\mathbb H^n$ under the above assumption on $V$ (see [@lu]). We need the following known result concerning the critical radius function (\[rho\]). \[N0\] If $V\in RH_s$ with $s\geq Q/2$, then there exist constants $C_0\geq 1$ and $N_0>0$ such that for all $u$ and $v$ in $\mathbb H^n$, $$\label{com} \frac{\,1\,}{C_0}\left(1+\frac{|v^{-1}u|}{\rho(u)}\right)^{-N_0}\leq\frac{\rho(v)}{\rho(u)}\leq C_0\left(1+\frac{|v^{-1}u|}{\rho(u)}\right)^{\frac{N_0}{N_0+1}}.$$ Lemma \[N0\] is due to Lu [@lu]. In the setting of $\mathbb R^n$, this result was given by Shen in [@shen]. As a straightforward consequence of (\[com\]), we can see that for each integer $k\geq1$, the following estimate $$\label{com2} 1+\frac{2^kr}{\rho(v)}\geq \frac{1}{C_0}\left(1+\frac{r}{\rho(u)}\right)^{-\frac{N_0}{N_0+1}}\left(1+\frac{2^kr}{\rho(u)}\right)$$ holds for any $v\in B(u,r)$ with $u\in\mathbb H^n$ and $r>0$, where $C_0$ is the same as in (\[com\]).
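To make the definition of $\rho$ concrete, consider the constant potential $V\equiv V_0$ (an added sketch, not from the paper; the constant-potential choice is ours). Then $\int_{B(u,r)}V_0\,dw=V_0\,|B(0,1)|\,r^Q$, so the condition $r^{2-Q}\int_{B(u,r)}V\leq1$ reads $V_0\,|B(0,1)|\,r^2\leq1$ and hence $\rho(u)=(V_0\,|B(0,1)|)^{-1/2}$ for every $u$. A bisection search on the defining supremum recovers this closed form, using the volume formula for $|B(0,1)|$ stated above.

```python
import math

def unit_ball_volume(n):
    """|B(0,1)| in H^n (homogeneous dimension Q = 2n + 2), per the formula above."""
    return (2 * math.pi ** (n + 0.5) * math.gamma(n / 2)
            / ((n + 1) * math.gamma(n) * math.gamma((n + 1) / 2)))

def critical_radius_const(V0, n, r_max=1e6):
    """rho(u) for the constant potential V = V0: the largest r with
    V0 * |B(0,1)| * r**2 <= 1, located by bisection on [0, r_max]."""
    c = unit_ball_volume(n)
    lo, hi = 0.0, r_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if V0 * c * mid ** 2 <= 1.0:
            lo = mid
        else:
            hi = mid
    return lo

n, V0 = 1, 3.0
rho = critical_radius_const(V0, n)
exact = (V0 * unit_ball_volume(n)) ** -0.5
```

For $n=1$ the volume formula simplifies to $|B(0,1)|=\pi^2$, which the test below also checks.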
Fractional integrals -------------------- First we recall the fractional power of the Laplacian operator on $\mathbb R^n$. For given $\alpha\in(0,n)$, the classical fractional integral operator $I^{\Delta}_{\alpha}$ (also referred to as the Riesz potential) is defined by $$I^{\Delta}_{\alpha}(f):=(-\Delta)^{-\alpha/2}(f),$$ where $\Delta$ is the Laplacian operator on $\mathbb R^n$. If $f\in\mathcal S(\mathbb R^n)$, then by virtue of the Fourier transform, we have $$\widehat{I^{\Delta}_{\alpha}f}(\xi)=(2\pi|\xi|)^{-\alpha}\widehat{f}(\xi),\quad \forall\,\xi\in\mathbb R^n.$$ Comparing this to the Fourier transform of $|x|^{-\alpha}$, $0<\alpha<n$, we are led to redefine the fractional integral operator $I^{\Delta}_{\alpha}$ by $$\label{frac} I^{\Delta}_{\alpha}f(x):=\frac{1}{\gamma(\alpha)}\int_{\mathbb R^n}\frac{f(y)}{|x-y|^{n-\alpha}}\,dy,$$ where $$\gamma(\alpha)=\frac{\pi^{\frac{n}{\,2\,}}2^\alpha\Gamma(\frac{\alpha}{\,2\,})}{\Gamma(\frac{n-\alpha}{2})}$$ with $\Gamma(\cdot)$ being the usual gamma function. It is well-known that the Hardy-Littlewood-Sobolev theorem states that the fractional integral operator $I^{\Delta}_{\alpha}$ is bounded from $L^p(\mathbb R^n)$ to $L^q(\mathbb R^n)$ for $0<\alpha<n$, $1<p<n/{\alpha}$ and $1/q=1/p-{\alpha}/n$. Also we know that $I^{\Delta}_{\alpha}$ is bounded from $L^1(\mathbb R^n)$ to $WL^q(\mathbb R^n)$ for $0<\alpha<n$ and $q=n/{(n-\alpha)}$ (see [@stein]). Next we are going to discuss the fractional integrals on the Heisenberg group. For given $\alpha\in(0,Q)$ with $Q=2n+2$, the fractional integral operator $I_{\alpha}$ (also referred to as the Riesz potential) is defined by (see [@xiao]) $$\label{frac2} I_{\alpha}(f):=(-\Delta_{\mathbb H^n})^{-\alpha/2}(f),$$ where $\Delta_{\mathbb H^n}$ is the sub-Laplacian on $\mathbb H^n$ defined above. Let $f$ and $g$ be integrable functions defined on $\mathbb H^n$. 
Define the *convolution* $f*g$ by $$(f*g)(u):=\int_{\mathbb H^n}f(v)g(v^{-1}u)\,dv.$$ We denote by $H_s(u)$ the convolution kernel of the heat semigroup $\big\{T_s=e^{s\Delta_{\mathbb H^n}}:s>0\big\}$. Namely, $$e^{s\Delta_{\mathbb H^n}}f(u)=\int_{\mathbb H^n}H_s(v^{-1}u)f(v)\,dv.$$ For any $u=(z,t)\in\mathbb H^n$, it was proved in [@xiao Theorem 4.2] that $I_{\alpha}$ can be expressed by the following formula: $$\label{frac3} \begin{split} I_{\alpha}f(u)&=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}e^{s\Delta_{\mathbb H^n}}f(u)\,s^{\alpha/2-1}ds\\ &=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\big(H_s*f\big)(u)\,s^{\alpha/2-1}ds. \end{split}$$ Let $V\in RH_s$ for $s\geq Q/2$. For such a potential $V$, we consider the time-independent *Schrödinger operator* on $\mathbb H^n$ (see [@lin]), $$\mathcal L:=-\Delta_{\mathbb H^n}+V,$$ and its associated semigroup $$\mathcal T^{\mathcal L}_sf(u):=e^{-s\mathcal L}f(u)=\int_{\mathbb H^n}P_s(u,v)f(v)\,dv,\quad f\in L^2(\mathbb H^n),~s>0,$$ where $P_s(u,v)$ denotes the kernel of the operator $e^{-s\mathcal L},s>0$. For any $u=(z,t)\in\mathbb H^n$, it is well-known that the heat kernel $H_s(u)$ has the explicit expression: $$H_s(z,t)=(2\pi)^{-1}(4\pi)^{-n}\int_{\mathbb R}\bigg(\frac{|\lambda|}{\sinh|\lambda|s}\bigg)^n\exp\left\{-\frac{|\lambda||z|^2}{4}\coth|\lambda|s-i\lambda t\right\}d\lambda,$$ and hence it satisfies the following estimate (see [@jerison] for instance) $$\label{heatkernel} 0\leq H_s(u)\leq C\cdot s^{-Q/2}\exp\bigg(-\frac{|u|^2}{As}\bigg),$$ where the constants $C,A>0$ are independent of $s$ and $u\in\mathbb H^n$. Since $V\geq0$, by the *Trotter product formula* and (\[heatkernel\]), one has $$\label{heat} 0\leq P_s(u,v)\leq H_s(v^{-1}u)\leq C\cdot s^{-Q/2}\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg),\quad s>0.$$ Moreover, this estimate can be improved when $V$ belongs to the reverse Hölder class $RH_s$ for some $s\geq Q/2$. The auxiliary function $\rho(u)$ arises naturally in this context.
\[ker1\] Let $V\in RH_s$ with $s\geq Q/2$, and let $\rho(u)$ be the auxiliary function determined by $V$. For every positive integer $N\geq1$, there exists a positive constant $C_N>0$ such that for all $u$ and $v$ in $\mathbb H^n$, $$0\leq P_s(u,v)\leq C_N\cdot s^{-Q/2}\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg)\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}+\frac{\sqrt{s\,}}{\rho(v)}\bigg)^{-N},\quad s>0.$$ This estimate of $P_s(u,v)$ is better than (\[heat\]); it was given by Lin and Liu in [@lin Lemma 7]. Inspired by (\[frac2\]) and (\[frac3\]), for given $\alpha\in(0,Q)$, the *$\mathcal L$-fractional integral operator* or *$\mathcal L$-Riesz potential* on the Heisenberg group is defined by (see [@jiang] and [@jiang2]) $$\begin{split} \mathcal I_{\alpha}(f)(u)&:={\mathcal L}^{-{\alpha}/2}f(u)\\ &=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}e^{-s\mathcal L}f(u)\,s^{\alpha/2-1}ds. \end{split}$$ Recall that in the setting of $\mathbb R^n$, this integral operator was first introduced by Dziubański et al. [@dziu]. In this article we shall be interested in the behavior of the fractional integral operator $\mathcal I_{\alpha}$ associated to the Schrödinger operator on $\mathbb H^n$. For $1\leq p<\infty$, the Lebesgue space $L^p(\mathbb H^n)$ is defined to be the set of all measurable functions $f$ on $\mathbb H^n$ such that $$\big\|f\big\|_{L^p(\mathbb H^n)}:=\bigg(\int_{\mathbb H^n}\big|f(u)\big|^p\,du\bigg)^{1/p}<\infty.$$ The weak Lebesgue space $WL^p(\mathbb H^n)$ consists of all measurable functions $f$ on $\mathbb H^n$ such that $$\big\|f\big\|_{WL^p(\mathbb H^n)}:= \sup_{\lambda>0}\lambda\cdot\big|\big\{u\in\mathbb H^n:|f(u)|>\lambda\big\}\big|^{1/p}<\infty.$$ Now we are going to establish strong-type and weak-type estimates of the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ on the Lebesgue spaces.
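The kernel bound proved next rests on the elementary identity obtained from the substitution $t=c/s$: $\int_0^{\infty}e^{-c/s}\,s^{\alpha/2-Q/2-1}\,ds=\Gamma\big(\tfrac{Q-\alpha}{2}\big)\,c^{-(Q-\alpha)/2}$ with $c=|v^{-1}u|^2/A$. A quick numerical check of this identity (an added sketch; the test values $Q=8$, $\alpha=2$, $c=1.7$ are arbitrary):

```python
import math
import numpy as np

Q, alpha, c = 8.0, 2.0, 1.7          # arbitrary test values (Q = 2n+2 with n = 3)
b = (Q - alpha) / 2.0                # closed form: Gamma(b) * c**(-b)

s = np.logspace(-4.0, 4.0, 200001)   # grid covering (0, infinity) well enough
f = np.exp(-c / s) * s ** (alpha / 2 - Q / 2 - 1)
numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)))  # trapezoid rule
closed = math.gamma(b) * c ** (-b)
```
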
We first claim that the following estimate $$\label{claim} |\mathcal I_{\alpha}f(u)|\leq C\int_{\mathbb H^n}|f(v)|\frac{1}{|v^{-1}u|^{Q-\alpha}}\,dv=C\big(|f|*|\cdot|^{\alpha-Q}\big)(u)$$ holds for all $u\in\mathbb H^n$. Let us verify (\[claim\]). To do so, denote by $\mathcal K_{\alpha}(u,v)$ the kernel of the fractional integral operator $\mathcal I_{\alpha}$. Then we have $$\begin{split} \int_{\mathbb H^n}\mathcal K_{\alpha}(u,v)f(v)\,dv&=\mathcal I_{\alpha}f(u)={\mathcal L}^{-{\alpha}/2}f(u)\\ &=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}e^{-s\mathcal L}f(u)\,s^{\alpha/2-1}ds\\ &=\int_0^{\infty}\bigg[\frac{1}{\Gamma(\alpha/2)}\int_{\mathbb H^n}P_s(u,v)f(v)\,dv\bigg]s^{\alpha/2-1}ds\\ &=\int_{\mathbb H^n}\bigg[\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}P_s(u,v)\,s^{\alpha/2-1}ds\bigg]f(v)\,dv. \end{split}$$ Hence, $$\mathcal K_{\alpha}(u,v)=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}P_s(u,v)\,s^{\alpha/2-1}ds.$$ Moreover, by using (\[heat\]), we can deduce that $$\begin{split} \big|\mathcal K_{\alpha}(u,v)\big|&\leq\frac{C}{\Gamma(\alpha/2)}\int_0^{\infty}\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg)s^{\alpha/2-Q/2-1}ds\\ &\leq\frac{C}{\Gamma(\alpha/2)}\cdot\frac{1}{|v^{-1}u|^{Q-\alpha}}\int_0^{\infty}e^{-t}\,t^{(Q/2-\alpha/2)-1}dt\\ &=C\cdot\frac{\Gamma(Q/2-\alpha/2)}{\Gamma(\alpha/2)}\cdot\frac{1}{|v^{-1}u|^{Q-\alpha}}, \end{split}$$ where in the second step we have used a change of variables. Thus (\[claim\]) holds. According to Theorems 4.4 and 4.5 in [@xiao], we get the Hardy-Littlewood-Sobolev theorem on the Heisenberg group. \[strong\] Let $0<\alpha<Q$ and $1\leq p<Q/{\alpha}$. Define $1<q<\infty$ by the relation $1/q=1/p-{\alpha}/Q$. Then the following statements are valid: 1. if $p>1$, then $\mathcal I_{\alpha}$ is bounded from $L^p(\mathbb H^n)$ to $L^q(\mathbb H^n)$; 2. if $p=1$, then $\mathcal I_{\alpha}$ is bounded from $L^1(\mathbb H^n)$ to $WL^q(\mathbb H^n)$. The organization of this paper is as follows.
In Section 2, we will give the definitions of Morrey space and weak Morrey space and state our main results: Theorems \[mainthm:1\], \[mainthm:2\] and \[mainthm:3\]. Section 3 is devoted to proving the boundedness of the fractional integral operator in the context of Morrey spaces. We will study certain extreme cases in Section 4. Throughout this paper, $C$ represents a positive constant that is independent of the main parameters, but may be different from line to line, and a subscript is added when we wish to make clear its dependence on the parameter in the subscript. We also use $a\approx b$ to denote the equivalence of $a$ and $b$; that is, there exist two positive constants $C_1$, $C_2$ independent of $a,b$ such that $C_1a\leq b\leq C_2a$. Main results ============ In this section, we introduce some types of Morrey spaces related to the nonnegative potential $V$ on $\mathbb H^n$, and then give our main results. Let $\rho$ be the auxiliary function determined by $V\in RH_s$ with $s\geq Q/2$. Let $1\leq p<\infty$ and $0\leq\kappa<1$. For given $0<\theta<\infty$, the Morrey space $L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ is defined to be the set of all $p$-locally integrable functions $f$ on $\mathbb H^n$ such that $$\label{morrey1} \bigg(\frac{1}{|B|^{\kappa}}\int_B\big|f(u)\big|^p\,du\bigg)^{1/p} \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}$$ for every ball $B=B(u_0,r)$ in $\mathbb H^n$. A norm for $f\in L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$, denoted by $\|f\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}$, is given by the infimum of the constants in (\[morrey1\]), or equivalently, $$\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}:=\sup_{B(u_0,r)}\left(1+\frac{r}{\rho(u_0)}\right)^{-\theta} \bigg(\frac{1}{|B|^{\kappa}}\int_B\big|f(u)\big|^p\,du\bigg)^{1/p} <\infty,$$ where the supremum is taken over all balls $B=B(u_0,r)$ in $\mathbb H^n$, $u_0$ and $r$ denote the center and radius of $B$ respectively.
Define $$L^{p,\kappa}_{\rho,\infty}(\mathbb H^n):=\bigcup_{\theta>0}L^{p,\kappa}_{\rho,\theta}(\mathbb H^n).$$ Let $\rho$ be the auxiliary function determined by $V\in RH_s$ with $s\geq Q/2$. Let $1\leq p<\infty$ and $0\leq\kappa<1$. For given $0<\theta<\infty$, the weak Morrey space $WL^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ is defined to be the set of all measurable functions $f$ on $\mathbb H^n$ such that $$\frac{1}{|B|^{\kappa/p}}\sup_{\lambda>0}\lambda\cdot\big|\big\{u\in B:|f(u)|>\lambda\big\}\big|^{1/p} \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}$$ for every ball $B=B(u_0,r)$ in $\mathbb H^n$, or equivalently, $$\big\|f\big\|_{WL^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}:=\sup_{B(u_0,r)}\left(1+\frac{r}{\rho(u_0)}\right)^{-\theta}\frac{1}{|B|^{\kappa/p}} \sup_{\lambda>0}\lambda\cdot\big|\big\{u\in B:|f(u)|>\lambda\big\}\big|^{1/p}<\infty.$$ Correspondingly, we define $$WL^{p,\kappa}_{\rho,\infty}(\mathbb H^n):=\bigcup_{\theta>0}WL^{p,\kappa}_{\rho,\theta}(\mathbb H^n).$$ Obviously, if we take $\theta=0$ or $V\equiv0$, then this Morrey space (or weak Morrey space) is just the Morrey space $L^{p,\kappa}(\mathbb H^n)$ (or $WL^{p,\kappa}(\mathbb H^n)$), which was defined by Guliyev et al.[@guliyev]. Moreover, according to the above definitions, one has $$\begin{cases} L^{p,\kappa}(\mathbb H^n)\subset L^{p,\kappa}_{\rho,\theta_1}(\mathbb H^n)\subset L^{p,\kappa}_{\rho,\theta_2}(\mathbb H^n);&\\ WL^{p,\kappa}(\mathbb H^n)\subset WL^{p,\kappa}_{\rho,\theta_1}(\mathbb H^n)\subset WL^{p,\kappa}_{\rho,\theta_2}(\mathbb H^n),& \end{cases}$$ for $0<\theta_1<\theta_2<\infty$. Hence $L^{p,\kappa}(\mathbb H^n)\subset L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ and $WL^{p,\kappa}(\mathbb H^n)\subset WL^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ for $(p,\kappa)\in[1,\infty)\times[0,1)$. 
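The exponent bookkeeping in the Hardy–Littlewood–Sobolev-type results below for these spaces can be packaged in a few lines (an added sketch; the helper name and the example values are ours): given $p$, $\kappa$, $\alpha$ and $n$, the target space has exponents $q$ with $1/q=1/p-\alpha/Q$ and $\kappa q/p$, subject to the admissibility conditions $0<\alpha<Q$, $1<p<Q/\alpha$ and $0<\kappa<p/q$.

```python
from fractions import Fraction

def morrey_target(p, kappa, alpha, n):
    """Given 1 < p < Q/alpha and 0 < kappa < p/q on H^n (Q = 2n + 2),
    return the exponents (q, kappa*q/p) of the target Morrey space
    in the Hardy-Littlewood-Sobolev-type theorem (exact arithmetic)."""
    Q = 2 * n + 2
    p, kappa, alpha = Fraction(p), Fraction(kappa), Fraction(alpha)
    assert 0 < alpha < Q and 1 < p < Q / alpha
    q = 1 / (1 / p - alpha / Q)          # 1/q = 1/p - alpha/Q
    assert 0 < kappa < p / q
    return q, kappa * q / p

# Example on H^1 (Q = 4): alpha = 1, p = 2, kappa = 1/4 gives q = 4, kappa*q/p = 1/2.
q, k2 = morrey_target(2, Fraction(1, 4), 1, 1)
```
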
The space $L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ (or $WL^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$) could be viewed as an extension of the Lebesgue (or weak Lebesgue) space on $\mathbb H^n$ (when $\kappa=\theta=0$). In this article we will extend the Hardy-Littlewood-Sobolev theorem on $\mathbb H^n$ to the Morrey spaces. We now present our main results as follows. \[mainthm:1\] Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $V\in RH_s$ with $s\geq Q/2$ and $0<\kappa<p/q$, then the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ is bounded from $L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ into $L^{q,{(\kappa q)}/p}_{\rho,\infty}(\mathbb H^n)$. \[mainthm:2\] Let $0<\alpha<Q$, $p=1$ and $q=Q/{(Q-\alpha)}$. If $V\in RH_s$ with $s\geq Q/2$ and $0<\kappa<1/q$, then the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ is bounded from $L^{1,\kappa}_{\rho,\infty}(\mathbb H^n)$ into $WL^{q,(\kappa q)}_{\rho,\infty}(\mathbb H^n)$. Before stating our next theorem, we need to introduce a new space $\mathrm{BMO}_{\rho,\infty}(\mathbb H^n)$ defined by $$\mathrm{BMO}_{\rho,\infty}(\mathbb H^n):=\bigcup_{\theta>0}\mathrm{BMO}_{\rho,\theta}(\mathbb H^n),$$ where for $0<\theta<\infty$ the space $\mathrm{BMO}_{\rho,\theta}(\mathbb H^n)$ is defined to be the set of all locally integrable functions $f$ satisfying $$\label{BM} \frac{1}{|B(u_0,r)|}\int_{B(u_0,r)}\big|f(u)-f_{B(u_0,r)}\big|\,du \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta},$$ for all $u_0\in\mathbb H^n$ and $r>0$, where $f_{B(u_0,r)}$ denotes the mean value of $f$ on $B(u_0,r)$, that is, $$f_{B(u_0,r)}:=\frac{1}{|B(u_0,r)|}\int_{B(u_0,r)}f(v)\,dv.$$ A norm for $f\in\mathrm{BMO}_{\rho,\theta}(\mathbb H^n)$, denoted by $\|f\|_{\mathrm{BMO}_{\rho,\theta}}$, is given by the infimum of the constants satisfying \eqref{BM}, or equivalently, $$\|f\|_{\mathrm{BMO}_{\rho,\theta}}
:=\sup_{B(u_0,r)}\left(1+\frac{r}{\rho(u_0)}\right)^{-\theta}\bigg(\frac{1}{|B(u_0,r)|}\int_{B(u_0,r)}\big|f(u)-f_{B(u_0,r)}\big|\,du\bigg),$$ where the supremum is taken over all balls $B(u_0,r)$ with $u_0\in\mathbb H^n$ and $r>0$. Recall that in the setting of $\mathbb R^n$, the space $\mathrm{BMO}_{\rho,\theta}(\mathbb R^n)$ was first introduced by Bongioanni et al.[@bong2] (see also [@bong3]). Moreover, given any $\beta\in[0,1]$, we introduce the space of Hölder continuous functions on $\mathbb H^n$ with exponent $\beta$: $$\mathcal{C}^{\beta}_{\rho,\infty}(\mathbb H^n):=\bigcup_{\theta>0}\mathcal{C}^{\beta}_{\rho,\theta}(\mathbb H^n),$$ where for $0<\theta<\infty$ the space $\mathcal{C}^{\beta}_{\rho,\theta}(\mathbb H^n)$ is defined to be the set of all locally integrable functions $f$ satisfying $$\label{hconti} \frac{1}{|B(u_0,r)|^{1+\beta/Q}}\int_{B(u_0,r)}\big|f(u)-f_{B(u_0,r)}\big|\,du \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta},$$ for all $u_0\in\mathbb H^n$ and $r\in(0,\infty)$. The smallest bound $C$ for which \eqref{hconti} is satisfied is then taken to be the norm of $f$ in this space and is denoted by $\|f\|_{\mathcal{C}^{\beta}_{\rho,\theta}}$. When $\theta=0$ or $V\equiv0$, $\mathrm{BMO}_{\rho,\theta}(\mathbb H^n)$ and $\mathcal{C}^{\beta}_{\rho,\theta}(\mathbb H^n)$ will be simply written as $\mathrm{BMO}(\mathbb H^n)$ and $\mathcal{C}^{\beta}(\mathbb H^n)$, respectively. Note that when $\beta=0$ this space $\mathcal{C}^{\beta}_{\rho,\theta}(\mathbb H^n)$ reduces to the space $\mathrm{BMO}_{\rho,\theta}(\mathbb H^n)$ mentioned above. For the case $\kappa\geq p/q$ of Theorem \[mainthm:1\], we will prove the following result. \[mainthm:3\] Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$.
If $V\in RH_s$ with $s\geq Q/2$ and $p/q\leq\kappa<1$, then the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ is bounded from $L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ into $\mathcal{C}^{\beta}_{\rho,\infty}(\mathbb H^n)$ with $\beta/Q=\kappa/p-1/q$ and $\beta$ sufficiently small. To be more precise, $\beta<\delta\leq1$ and $\delta$ is given as in Lemma $\ref{kernel2}$. In particular, for the limiting case $\kappa=p/q$ (or $\beta=0$), we obtain the following result on a BMO-type estimate of $\mathcal I_{\alpha}$. \[mainthm:4\] Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $V\in RH_s$ with $s\geq Q/2$ and $\kappa=p/q$, then the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ is bounded from $L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ into $\mathrm{BMO}_{\rho,\infty}(\mathbb H^n)$. Proofs of Theorems $\ref{mainthm:1}$ and $\ref{mainthm:2}$ ========================================================== In this section, we will prove the conclusions of Theorems \[mainthm:1\] and \[mainthm:2\]. Recall that the $\mathcal L$-fractional integral operator of order $\alpha\in(0,Q)$ can be written as $$\mathcal I_{\alpha}f(u)={\mathcal L}^{-{\alpha}/2}f(u)=\int_{\mathbb H^n}\mathcal K_{\alpha}(u,v)f(v)\,dv,$$ where $$\label{kauv} \mathcal K_{\alpha}(u,v)=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}P_s(u,v)\,s^{\alpha/2-1}ds.$$ The following lemma gives the estimate of the kernel $\mathcal K_{\alpha}(u,v)$ related to the Schrödinger operator $\mathcal L$, which plays a key role in the proof of our main theorems. \[kernel\] Let $V\in RH_s$ with $s\geq Q/2$ and $0<\alpha<Q$.
For every positive integer $N\geq1$, there exists a positive constant $C_{N,\alpha}>0$ such that for all $u$ and $v$ in $\mathbb H^n$, $$\label{WH1} \big|\mathcal K_{\alpha}(u,v)\big|\leq C_{N,\alpha}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u|^{Q-\alpha}}.$$ By Lemma \[ker1\] and \eqref{kauv}, we have $$\begin{split} \big|\mathcal K_{\alpha}(u,v)\big|&\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\big|P_s(u,v)\big|\,s^{\alpha/2-1}ds\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}+\frac{\sqrt{s\,}}{\rho(v)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds. \end{split}$$ We consider the two cases $s>|v^{-1}u|^2$ and $0\leq s\leq|v^{-1}u|^2$ separately. Thus, $|\mathcal K_{\alpha}(u,v)|\leq I+II$, where $$I=\frac{1}{\Gamma(\alpha/2)}\int_{|v^{-1}u|^2}^{\infty}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds$$ and $$II=\frac{1}{\Gamma(\alpha/2)}\int_0^{|v^{-1}u|^2}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds.$$ When $s>|v^{-1}u|^2$, we have $\sqrt{s\,}>|v^{-1}u|$, and hence $$\begin{split} I&\leq\frac{1}{\Gamma(\alpha/2)}\int_{|v^{-1}u|^2}^{\infty}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &\leq C_{N,\alpha}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\int_{|v^{-1}u|^2}^{\infty}s^{\alpha/2-Q/2-1}ds\\ &\leq C_{N,\alpha}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u|^{Q-\alpha}}, \end{split}$$ where the last integral converges because $0<\alpha<Q$.
On the other hand, $$\begin{split} II&\leq C_{N,\alpha}\int_0^{|v^{-1}u|^2}\frac{1}{s^{Q/2}}\cdot\bigg(\frac{|v^{-1}u|^2}{s}\bigg)^{-(Q/2+N/2)} \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\int_0^{|v^{-1}u|^2}\frac{1}{|v^{-1}u|^Q}\cdot\bigg(\frac{\sqrt{s\,}}{|v^{-1}u|}\bigg)^{N} \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds. \end{split}$$ It is easy to see that when $0\leq s\leq|v^{-1}u|^2$, $$\frac{\sqrt{s\,}}{|v^{-1}u|}\leq\frac{\sqrt{s\,}+\rho(u)}{|v^{-1}u|+\rho(u)}.$$ Hence, $$\begin{split} II&\leq C_{N,\alpha}\int_0^{|v^{-1}u|^2}\frac{1}{|v^{-1}u|^Q}\cdot\bigg(\frac{\sqrt{s\,}+\rho(u)}{|v^{-1}u|+\rho(u)}\bigg)^{N} \bigg(\frac{\sqrt{s\,}+\rho(u)}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &=\frac{C_{N,\alpha}}{|v^{-1}u|^Q}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\int_0^{|v^{-1}u|^2}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u|^{Q-\alpha}}. \end{split}$$ Combining the estimates of $I$ and $II$ yields the desired estimate for $\alpha\in(0,Q)$. This concludes the proof of the lemma. We are now ready to show our main theorems. By definition, we only need to show that for any given ball $B=B(u_0,r)$ of $\mathbb H^n$, there is some $\vartheta>0$ such that $$\label{Main1} \bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f(u)\big|^q\,du\bigg)^{1/q}\leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\vartheta}$$ holds for given $f\in L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ with $(p,\kappa)\in(1,Q/{\alpha})\times(0,p/q)$. Suppose that $f\in L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ for some $\theta>0$. We decompose the function $f$ as $$\begin{cases} f=f_1+f_2\in L^{p,\kappa}_{\rho,\theta}(\mathbb H^n);\ &\\ f_1=f\cdot\chi_{2B};\ &\\ f_2=f\cdot\chi_{(2B)^c}, \end{cases}$$ where $2B$ is the ball centered at $u_0$ of radius $2r>0$, $\chi_{2B}$ is the characteristic function of $2B$ and $(2B)^c=\mathbb H^n\backslash(2B)$. 
Then by the linearity of $\mathcal I_{\alpha}$, we write $$\begin{split} \bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f(u)\big|^q\,du\bigg)^{1/q} &\leq\bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f_1(u)\big|^q\,du\bigg)^{1/q}\\ &+\bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f_2(u)\big|^q\,du\bigg)^{1/q}\\ &:=I_1+I_2. \end{split}$$ In what follows, we consider each part separately. By Theorem \[strong\] (1), we have $$\begin{split} I_1&=\bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f_1(u)\big|^q\,du\bigg)^{1/q}\\ &\leq C\cdot\frac{1}{|B|^{\kappa/p}}\bigg(\int_{\mathbb H^n}\big|f_1(u)\big|^p\,du\bigg)^{1/p}\\ &=C\cdot\frac{1}{|B|^{\kappa/p}}\bigg(\int_{2B}\big|f(u)\big|^p\,du\bigg)^{1/p}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot \frac{|2B|^{\kappa/p}}{|B|^{\kappa/p}}\cdot\left(1+\frac{2r}{\rho(u_0)}\right)^{\theta}. \end{split}$$ Also observe that for any fixed $\theta>0$, $$\label{2rx} 1\leq\left(1+\frac{2r}{\rho(u_0)}\right)^{\theta}\leq 2^{\theta}\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}.$$ This in turn implies that $$\begin{split} I_1&\leq C_{\theta,n}\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}. \end{split}$$ Next we estimate the other term $I_2$. Notice that for any $u\in B(u_0,r)$ and $v\in (2B)^c$, one has $$\big|v^{-1}u\big|=\big|(v^{-1}u_0)\cdot(u_0^{-1}u)\big|\leq\big|v^{-1}u_0\big|+\big|u_0^{-1}u\big|$$ and $$\big|v^{-1}u\big|=\big|(v^{-1}u_0)\cdot(u_0^{-1}u)\big|\geq\big|v^{-1}u_0\big|-\big|u_0^{-1}u\big|.$$ Thus, $$\frac{1}{\,2\,}\big|v^{-1}u_0\big|\leq\big|v^{-1}u\big|\leq\frac{3}{\,2\,}\big|v^{-1}u_0\big|,$$ i.e., $|v^{-1}u|\approx|v^{-1}u_0|$. 
It then follows from Lemma \[kernel\] that for any $u\in B(u_0,r)$ and any positive integer $N$, $$\label{Talpha} \begin{split} \big|\mathcal I_{\alpha}f_2(u)\big|&\leq\int_{(2B)^c}|\mathcal K_{\alpha}(u,v)|\cdot|f(v)|\,dv\\ &\leq C_{N,\alpha}\int_{(2B)^c}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u|^{Q-\alpha}}\cdot|f(v)|\,dv\\ &\leq C_{N,\alpha,n}\int_{(2B)^c}\bigg(1+\frac{|v^{-1}u_0|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u_0|^{Q-\alpha}}\cdot|f(v)|\,dv\\ &=C_{N,\alpha,n}\sum_{k=1}^\infty\int_{2^kr\leq|v^{-1}u_0|<2^{k+1}r}\bigg(1+\frac{|v^{-1}u_0|}{\rho(u)}\bigg)^{-N} \frac{1}{|v^{-1}u_0|^{Q-\alpha}}\cdot|f(v)|\,dv\\ &\leq C_{N,\alpha,n}\sum_{k=1}^\infty\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}} \int_{|v^{-1}u_0|<2^{k+1}r}\bigg(1+\frac{2^kr}{\rho(u)}\bigg)^{-N}|f(v)|\,dv. \end{split}$$ In view of the standard comparison estimates for the auxiliary function $\rho$, we can further obtain $$\begin{aligned} \label{Tf2} \big|\mathcal I_{\alpha}f_2(u)\big| &\leq C\sum_{k=1}^\infty\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\notag\\ &\times\int_{|v^{-1}u_0|<2^{k+1}r}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^kr}{\rho(u_0)}\right)^{-N}|f(v)|\,dv\notag\\ &\leq C\sum_{k=1}^\infty\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\notag\\ &\times\int_{B(u_0,2^{k+1}r)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N}|f(v)|\,dv.\end{aligned}$$ We consider each term in the sum of \eqref{Tf2} separately. By using Hölder’s inequality, we obtain that for each integer $k\geq1$, $$\begin{split} &\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\int_{B(u_0,2^{k+1}r)}\big|f(v)\big|\,dv\\ &\leq\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\bigg(\int_{B(u_0,2^{k+1}r)}\big|f(v)\big|^p\,dv\bigg)^{1/p} \bigg(\int_{B(u_0,2^{k+1}r)}1\,dv\bigg)^{1/{p'}}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot\frac{|B(u_0,2^{k+1}r)|^{{\kappa}/p}}{|B(u_0,2^{k+1}r)|^{1/q}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{\theta}.
\end{split}$$ This allows us to obtain $$\begin{split} I_2&\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot\frac{|B(u_0,r)|^{1/q}}{|B(u_0,r)|^{{\kappa}/p}} \sum_{k=1}^\infty\frac{|B(u_0,2^{k+1}r)|^{{\kappa}/p}}{|B(u_0,2^{k+1}r)|^{1/q}} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}\left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}\\ &=C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \sum_{k=1}^\infty\frac{|B(u_0,r)|^{1/q-\kappa/p}}{|B(u_0,2^{k+1}r)|^{1/q-\kappa/p}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}. \end{split}$$ Thus, choosing $N$ large enough so that $N>\theta$, the last series converges, and we have $$\begin{split} I_2&\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}\sum_{k=1}^\infty\left(\frac{|B(u_0,r)|}{|B(u_0,2^{k+1}r)|}\right)^{{(1/q-\kappa/p)}}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}, \end{split}$$ where the last inequality follows from the fact that $1/q-\kappa/p>0$. Summing up the above estimates for $I_1$ and $I_2$ and letting $\vartheta=\max\big\{\theta,N\cdot\frac{N_0}{N_0+1}\big\}$, we obtain the desired inequality \eqref{Main1}. This completes the proof of Theorem \[mainthm:1\]. To prove Theorem \[mainthm:2\], by definition, it suffices to prove that for each given ball $B=B(u_0,r)$ of $\mathbb H^n$, there is some $\vartheta>0$ such that $$\label{Main2} \frac{1}{|B|^{\kappa}}\sup_{\lambda>0}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f(u)|>\lambda\big\}\big|^{1/q} \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\vartheta}$$ holds for given $f\in L^{1,\kappa}_{\rho,\infty}(\mathbb H^n)$ with $0<\kappa<1/q$ and $q=Q/{(Q-\alpha)}$. Now suppose that $f\in L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)$ for some $\theta>0$.
We decompose the function $f$ as $$\begin{cases} f=f_1+f_2\in L^{1,\kappa}_{\rho,\theta}(\mathbb H^n);\ &\\ f_1=f\cdot\chi_{2B};\ &\\ f_2=f\cdot\chi_{(2B)^c}. \end{cases}$$ Then for any given $\lambda>0$, by the linearity of $\mathcal I_{\alpha}$, we can write $$\begin{split} &\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f(u)|>\lambda\big\}\big|^{1/q}\\ &\leq\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f_1(u)|>\lambda/2\big\}\big|^{1/q}\\ &+\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f_2(u)|>\lambda/2\big\}\big|^{1/q}\\ &:=J_1+J_2. \end{split}$$ We first give the estimate for the term $J_1$. By Theorem \[strong\] (2), we get $$\begin{split} J_1&=\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha} f_1(u)|>\lambda/2\big\}\big|^{1/q}\\ &\leq C\cdot\frac{1}{|B|^{\kappa}}\bigg(\int_{\mathbb H^n}\big|f_1(u)\big|\,du\bigg)\\ &=C\cdot\frac{1}{|B|^{\kappa}}\bigg(\int_{2B}\big|f(u)\big|\,du\bigg)\\ &\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot\frac{|2B|^{\kappa}}{|B|^{\kappa}}\left(1+\frac{2r}{\rho(u_0)}\right)^{\theta}. \end{split}$$ Therefore, in view of \eqref{2rx}, $$J_1\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}.$$ As for the second term $J_2$, by using the pointwise inequality \eqref{Tf2} and Chebyshev’s inequality, we can deduce that $$\label{Tf2pr} \begin{split} J_2&=\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f_2(u)|>\lambda/2\big\}\big|^{1/q}\\ &\leq\frac{2}{|B|^{\kappa}}\bigg(\int_{B}\big|\mathcal I_{\alpha}f_2(u)\big|^q\,du\bigg)^{1/q}\\ &\leq C\cdot\frac{|B|^{1/q}}{|B|^{\kappa}} \sum_{k=1}^\infty\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\\ &\times\int_{B(u_0,2^{k+1}r)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N}|f(v)|\,dv. \end{split}$$ We consider each term in the sum of \eqref{Tf2pr} separately.
For each integer $k\geq1$, we compute $$\begin{split} &\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\int_{B(u_0,2^{k+1}r)}\big|f(v)\big|\,dv\\ &\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot \frac{|B(u_0,2^{k+1}r)|^{\kappa}}{|B(u_0,2^{k+1}r)|^{1/q}}\left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{\theta}. \end{split}$$ Consequently, $$\begin{split} J_2&\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)} \cdot\frac{|B(u_0,r)|^{1/q}}{|B(u_0,r)|^{\kappa}}\sum_{k=1}^\infty\frac{|B(u_0,2^{k+1}r)|^{\kappa}}{|B(u_0,2^{k+1}r)|^{1/q}} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}\left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}\\ &=C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}\sum_{k=1}^\infty\frac{|B(u_0,r)|^{{1/q-\kappa}}}{|B(u_0,2^{k+1}r)|^{{1/q-\kappa}}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}. \end{split}$$ Therefore, by selecting $N$ large enough so that $N>\theta$, we have $$\begin{split} J_2&\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \sum_{k=1}^\infty\left(\frac{|B(u_0,r)|}{|B(u_0,2^{k+1}r)|}\right)^{{(1/q-\kappa)}}\\ &\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}, \end{split}$$ where the last step is due to the fact that $0<\kappa<1/q$. Let $\vartheta=\max\big\{\theta,N\cdot\frac{N_0}{N_0+1}\big\}$, where $N$ is the large constant fixed above. Summing up the above estimates for $J_1$ and $J_2$, and then taking the supremum over all $\lambda>0$, we obtain the desired inequality \eqref{Main2}. This finishes the proof of Theorem \[mainthm:2\]. Proof of Theorem \[mainthm:3\] ============================== We need the following lemma which establishes the Lipschitz regularity of the kernel $P_s(u,v)$. See Lemma 11 and Remark 4 in [@lin]. \[ker2\] Let $V\in RH_s$ with $s\geq Q/2$.
For every positive integer $N\geq1$, there exists a positive constant $C_N>0$ such that for all $u$ and $v$ in $\mathbb H^n$, and for some fixed $0<\delta\leq 1$, $$\big|P_s(u\cdot h,v)-P_s(u,v)\big|\leq C_N\bigg(\frac{|h|}{\sqrt{s\,}}\bigg)^{\delta} s^{-Q/2}\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg)\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}+\frac{\sqrt{s\,}}{\rho(v)}\bigg)^{-N},$$ whenever $|h|\leq|v^{-1}u|/2$. Based on the above lemma, we are able to prove the following result, which plays a key role in the proof of our main theorem. \[kernel2\] Let $V\in RH_s$ with $s\geq Q/2$ and $0<\alpha<Q$. For every positive integer $N\geq1$, there exists a positive constant $C_{N,\alpha}>0$ such that for all $u,v$ and $w$ in $\mathbb H^n$, and for some fixed $0<\delta\leq 1$, $$\label{WH2} \big|\mathcal K_{\alpha}(u,w)-\mathcal K_{\alpha}(v,w)\big|\leq C_{N,\alpha}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}},$$ whenever $|v^{-1}u|\leq |w^{-1}u|/2$. In view of Lemma \[ker2\] and , we have $$\begin{split} &\big|\mathcal K_{\alpha}(u,w)-\mathcal K_{\alpha}(v,w)\big|\\ &=\frac{1}{\Gamma(\alpha/2)}\bigg|\int_0^{\infty}P_s(u,w)\,s^{\alpha/2-1}ds-\int_0^{\infty}P_s(v,w)\,s^{\alpha/2-1}ds\bigg|\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\big|P_s(u\cdot(u^{-1}v),w)-P_s(u,w)\big|\,s^{\alpha/2-1}ds\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}C_N\cdot\bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta} s^{-Q/2}\exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg)\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}+\frac{\sqrt{s\,}}{\rho(w)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}C_N\cdot\bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta} s^{-Q/2}\exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg)\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds. \end{split}$$ Arguing as in the proof of Lemma \[kernel\], consider two cases as below: $s>|w^{-1}u|^2$ and $0\leq s\leq|w^{-1}u|^2$. 
Then the right-hand side of the above expression can be written as $III+IV$, where $$III=\frac{1}{\Gamma(\alpha/2)}\int_{|w^{-1}u|^2}^{\infty}\frac{C_N}{s^{Q/2}}\cdot \bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta}\exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds,$$ and $$IV=\frac{1}{\Gamma(\alpha/2)}\int_0^{|w^{-1}u|^2}\frac{C_N}{s^{Q/2}}\cdot \bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta}\exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds.$$ When $s>|w^{-1}u|^2$, then $\sqrt{s\,}>|w^{-1}u|$, and hence $$\begin{split} III&\leq\frac{1}{\Gamma(\alpha/2)}\int_{|w^{-1}u|^2}^{\infty}\frac{C_N}{s^{Q/2}}\cdot\bigg(\frac{|u^{-1}v|}{|w^{-1}u|}\bigg)^{\delta} \exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg)\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &\leq C_{N,\alpha}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\bigg(\frac{|u^{-1}v|}{|w^{-1}u|}\bigg)^{\delta} \int_{|w^{-1}u|^2}^{\infty}s^{\alpha/2-Q/2-1}ds\\ &\leq C_{N,\alpha}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}}, \end{split}$$ where the last inequality holds since $|u^{-1}v|=|v^{-1}u|$ and $0<\alpha<Q$. On the other hand, $$\begin{split} IV&\leq C_{N,\alpha}\int_0^{|w^{-1}u|^2}\frac{1}{s^{Q/2}}\cdot\bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta} \bigg(\frac{|w^{-1}u|^2}{s}\bigg)^{-(Q/2+N/2+\delta/2)}\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\int_0^{|w^{-1}u|^2}\frac{|u^{-1}v|^{\delta}}{|w^{-1}u|^{Q+\delta}}\bigg(\frac{\sqrt{s\,}}{|w^{-1}u|}\bigg)^{N} \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds. 
\end{split}$$ It is easy to check that when $0\leq s\leq|w^{-1}u|^2$, $$\frac{\sqrt{s\,}}{|w^{-1}u|}\leq\frac{\sqrt{s\,}+\rho(u)}{|w^{-1}u|+\rho(u)}.$$ This in turn implies that $$\begin{split} IV&\leq C_{N,\alpha}\int_0^{|w^{-1}u|^2}\frac{|u^{-1}v|^{\delta}}{|w^{-1}u|^{Q+\delta}}\bigg(\frac{\sqrt{s\,}+\rho(u)}{|w^{-1}u|+\rho(u)}\bigg)^{N} \bigg(\frac{\sqrt{s\,}+\rho(u)}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\cdot\frac{|u^{-1}v|^{\delta}}{|w^{-1}u|^{Q+\delta}}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\int_0^{|w^{-1}u|^2}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}}, \end{split}$$ where the last step holds because $|u^{-1}v|=|v^{-1}u|$. Combining the estimates of $III$ and $IV$ produces the desired inequality \eqref{WH2} for $\alpha\in(0,Q)$. This concludes the proof of the lemma. We are now in a position to give the proof of Theorem $\ref{mainthm:3}$. Fix a ball $B=B(u_0,r)$ with $u_0\in\mathbb H^n$ and $r\in(0,\infty)$. It suffices to prove that the following inequality $$\label{end1.1} \frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f(u)-(\mathcal I_{\alpha}f)_B\big|\,du\leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\vartheta}$$ holds for given $f\in L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ with $1<p<q<\infty$ and $p/q\leq\kappa<1$, where $0<\alpha<Q$ and $(\mathcal I_{\alpha}f)_B$ denotes the average of $\mathcal I_{\alpha}f$ over $B$. Suppose that $f\in L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ for some $\theta>0$. Decompose the function $f$ as $f=f_1+f_2$, where $f_1=f\cdot\chi_{4B}$, $f_2=f\cdot\chi_{(4B)^c}$, $4B=B(u_0,4r)$ and $(4B)^c=\mathbb H^n\backslash(4B)$.
By the linearity of the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$, the left-hand side of can be written as $$\begin{split} &\frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f(u)-(\mathcal I_{\alpha}f)_B\big|\,du\\ &\leq\frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f_1(u)-(\mathcal I_{\alpha}f_1)_B\big|\,du +\frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big|\,du\\ &:=K_1+K_2. \end{split}$$ First let us consider the term $K_1$. Applying the strong-type $(p,q)$ estimate of $\mathcal I_{\alpha}$ (see Theorem \[strong\]) and Hölder’s inequality, we obtain $$\begin{split} K_1&\leq\frac{2}{|B|^{1+\beta/Q}}\int_B|\mathcal I_{\alpha}f_1(u)|\,du\\ &\leq\frac{2}{|B|^{1+\beta/Q}}\bigg(\int_B|\mathcal I_{\alpha}f_1(u)|^q\,du\bigg)^{1/q}\bigg(\int_B1\,du\bigg)^{1/{q'}}\\ &\leq\frac{C}{|B|^{1+\beta/Q}}\bigg(\int_{4B}|f(u)|^p\,du\bigg)^{1/p}|B|^{1/{q'}}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \cdot\frac{|B(u_0,4r)|^{{\kappa}/p}}{|B(u_0,r)|^{1/q+\beta/Q}}\left(1+\frac{4r}{\rho(u_0)}\right)^{\theta}. \end{split}$$ Using the inequalities and , and noting the fact that $\beta/Q=\kappa/p-1/q$, we derive $$\begin{split} K_1&\leq C_n\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{4r}{\rho(u_0)}\right)^{\theta}\\ &\leq C_{n,\theta}\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}. \end{split}$$ Let us now turn to estimate the term $K_2$. For any $u\in B(u_0,r)$, $$\begin{split} \big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big| &=\bigg|\frac{1}{|B|}\int_B\big[\mathcal I_{\alpha}f_2(u)-\mathcal I_{\alpha}f_2(v)\big]\,dv\bigg|\\ &=\bigg|\frac{1}{|B|}\int_B\bigg\{\int_{(4B)^c}\Big[\mathcal K_{\alpha}(u,w)-\mathcal K_{\alpha}(v,w)\Big]f(w)\,dw\bigg\}dv\bigg|\\ &\leq\frac{1}{|B|}\int_B\bigg\{\int_{(4B)^c}\big|\mathcal K_{\alpha}(u,w)-\mathcal K_{\alpha}(v,w)\big|\cdot|f(w)|\,dw\bigg\}dv. 
\end{split}$$ By using the same arguments as that of Theorem \[mainthm:1\], we find that $$|v^{-1}u|\leq |w^{-1}u|/2 \quad \& \quad |w^{-1}u|\approx |w^{-1}u_0|,$$ whenever $u,v\in B$ and $w\in(4B)^c$. This fact along with Lemma \[kernel2\] yields $$\begin{aligned} \label{average} &\big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big|\notag\\ &\leq\frac{C_{N,\alpha}}{|B|}\int_B\bigg\{\int_{(4B)^c}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N} \frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}}\cdot|f(w)|\,dw\bigg\}dv\notag\\ &\leq C_{N,\alpha,n}\int_{(4B)^c}\bigg(1+\frac{|w^{-1}u_0|}{\rho(u)}\bigg)^{-N}\frac{r^{\delta}}{|w^{-1}u_0|^{Q-\alpha+\delta}}\cdot|f(w)|\,dw\notag\\ &=C_{N,\alpha,n}\sum_{k=2}^\infty\int_{2^kr\leq|w^{-1}u_0|<2^{k+1}r} \bigg(1+\frac{|w^{-1}u_0|}{\rho(u)}\bigg)^{-N}\frac{r^{\delta}}{|w^{-1}u_0|^{Q-\alpha+\delta}}\cdot|f(w)|\,dw\notag\\ &\leq C_{N,\alpha,n}\sum_{k=2}^\infty\frac{1}{2^{k\delta}}\cdot\frac{1}{|B(u_0,2^{k+1}r)|^{1-({\alpha}/Q)}} \int_{B(u_0,2^{k+1}r)}\bigg(1+\frac{2^kr}{\rho(u)}\bigg)^{-N}|f(w)|\,dw.\end{aligned}$$ Furthermore, by using Hölder’s inequality and , we deduce that for any $u\in B(u_0,r)$, $$\begin{aligned} \label{end1.3} &\big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big|\notag\\ &\leq C\sum_{k=2}^\infty\frac{1}{2^{k\delta}}\cdot\frac{1}{|B(u_0,2^{k+1}r)|^{1-({\alpha}/Q)}} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N}\notag\\ &\times\bigg(\int_{B(u_0,2^{k+1}r)}\big|f(w)\big|^p\,dw\bigg)^{1/p} \left(\int_{B(u_0,2^{k+1}r)}1\,dw\right)^{1/{p'}}\notag\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \sum_{k=2}^\infty\frac{1}{2^{k\delta}}\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N}\notag\\ &\times\frac{|B(u_0,2^{k+1}r)|^{{\kappa}/p}}{|B(u_0,2^{k+1}r)|^{1/q}}\left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{\theta}\notag\\ 
&=C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \sum_{k=2}^\infty\frac{|B(u_0,2^{k+1}r)|^{\beta/Q}}{2^{k\delta}}\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta},\end{aligned}$$ where the last equality is due to the assumption $\beta/Q=\kappa/p-1/q$. From the pointwise estimate \eqref{end1.3}, it readily follows that $$\begin{split} K_2&=\frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big|\,du\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \sum_{k=2}^\infty\frac{1}{2^{k\delta}}\cdot\left(\frac{|B(u_0,2^{k+1}r)|}{|B(u_0,r)|}\right)^{\beta/Q} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \sum_{k=2}^\infty\frac{1}{2^{k(\delta-\beta)}}\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}, \end{split}$$ where $N>0$ is a sufficiently large number so that $N>\theta$. Also observe that $\beta<\delta\leq1$, and hence the last series is convergent. Therefore, $$K_2\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}.$$ Fix this $N$ and set $\vartheta=\max\big\{\theta,N\cdot\frac{N_0}{N_0+1}\big\}$. Finally, combining the above estimates for $K_1$ and $K_2$, the inequality \eqref{end1.1} is proved and then the proof of Theorem \[mainthm:3\] is finished. At the end of this article, we discuss the corresponding estimates of the fractional integral operator $I_{\alpha}=(-\Delta_{\mathbb H^n})^{-\alpha/2}$ (with $0<\alpha<Q$). We denote by $K^{*}_{\alpha}(u,v)$ the kernel of $I_{\alpha}=(-\Delta_{\mathbb H^n})^{-\alpha/2}$.
We have already shown that $$\label{WH3} \big|K^{*}_{\alpha}(u,v)\big|\leq C_{\alpha,n}\cdot\frac{1}{|v^{-1}u|^{Q-\alpha}}.$$ Using the same method as in the proof of Lemma \[kernel2\], we can also show that for some fixed $0<\delta\leq 1$ and $0<\alpha<Q$, there exists a positive constant $C_{\alpha,n}>0$ such that for all $u,v$ and $w$ in $\mathbb H^n$, $$\label{WH4} \big|K^{*}_{\alpha}(u,w)-K^{*}_{\alpha}(v,w)\big|\leq C_{\alpha,n}\cdot\frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}},$$ whenever $|v^{-1}u|\leq |w^{-1}u|/2$. Following along the lines of the proof of Theorems \[mainthm:1\]–\[mainthm:3\] and using the inequalities \eqref{WH3} and \eqref{WH4}, we can obtain the following estimates of $I_{\alpha}$ with $\alpha\in(0,Q)$. \[thm:1\] Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $0<\kappa<p/q$, then the fractional integral operator $I_{\alpha}$ is bounded from $L^{p,\kappa}(\mathbb H^n)$ into $L^{q,{(\kappa q)}/p}(\mathbb H^n)$. \[thm:2\] Let $0<\alpha<Q$, $p=1$ and $q=Q/{(Q-\alpha)}$. If $0<\kappa<1/q$, then the fractional integral operator $I_{\alpha}$ is bounded from $L^{1,\kappa}(\mathbb H^n)$ into $WL^{q,(\kappa q)}(\mathbb H^n)$. Here, we remark that Theorems \[thm:1\] and \[thm:2\] have been proved by Guliyev et al.[@guliyev]. \[thm:3\] Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $p/q\leq\kappa<1$, then the fractional integral operator $I_{\alpha}$ is bounded from $L^{p,\kappa}(\mathbb H^n)$ into $\mathcal{C}^{\beta}(\mathbb H^n)$ with $\beta/Q=\kappa/p-1/q$ and $\beta<\delta\leq1$, where $\delta$ is given as in \eqref{WH4}. As an immediate consequence we have the following corollary. Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $\kappa=p/q$, then the fractional integral operator $I_{\alpha}$ is bounded from $L^{p,\kappa}(\mathbb H^n)$ into $\mathrm{BMO}(\mathbb H^n)$. Upon taking $\alpha=1$ in Theorem \[thm:3\], we get the following **Morrey’s lemma** on the Heisenberg group.
Let $\alpha=1$, $1<p<Q$ and $1/q=1/p-1/Q$. If $p/q<\kappa<1$, then the fractional integral operator $I_{1}$ is bounded from $L^{p,\kappa}(\mathbb H^n)$ into $\mathcal{C}^{\beta}(\mathbb H^n)$ with $\beta/Q=\kappa/p-1/q$ and $\beta<\delta\leq1$, where $\delta$ is given as in \eqref{WH4}. Namely, $$\big\|f\big\|_{\mathcal{C}^{\beta}(\mathbb H^n)}\leq C\big\|\nabla_{\mathbb H^n}f\big\|_{L^{p,\kappa}(\mathbb H^n)},$$ where $0<\kappa<1$, $p>(1-\kappa)Q$, $\beta=1-{(1-\kappa)Q}/p$ (indeed, $\beta/Q=\kappa/p-1/q=\kappa/p-1/p+1/Q$ gives $\beta=1-(1-\kappa)Q/p$) and the gradient $\nabla_{\mathbb H^n}$ is defined by $$\nabla_{\mathbb H^n}=\big(X_1,\dots,X_n,Y_1,\dots,Y_n\big).$$ [99]{} B. Bongioanni, E. Harboure, O. Salinas, *Weighted inequalities for commutators of Schrödinger-Riesz transforms*, J. Math. Anal. Appl., **392** (2012), 6–22. B. Bongioanni, E. Harboure, O. Salinas, *Commutators of Riesz transforms related to Schrödinger operators*, J. Fourier Anal. Appl., **17** (2011), 115–134. J. Dziubański, G. Garrigós, T. Martínez, J. L. Torrea and J. Zienkiewicz, *$BMO$ spaces related to Schrödinger operators with potentials satisfying a reverse Hölder inequality*, Math. Z., **249** (2005), 329–356. V. S. Guliyev, A. Eroglu and Y. Y. Mammadov, *Riesz potential in generalized Morrey spaces on the Heisenberg group*, J. Math. Sci. (N.Y.) **189** (2013), 365–382. D. Jerison and A. Sanchez-Calle, *Estimates for the heat kernel for a sum of squares of vector fields*, Indiana Univ. Math. J., **35** (1986) 835–854. Y. S. Jiang, *Some properties of the Riesz potential associated to the Schrödinger operator on the Heisenberg groups*, Acta Math. Sinica (Chin. Ser), **53** (2010), 785–794. Y. S. Jiang, *Endpoint estimates for fractional integral associated to Schrödinger operators on the Heisenberg groups*, Acta Math. Sci. Ser. B, **31** (2011), 993–1000. C. C. Lin and H. P. Liu, *$BMO_{L}(\mathbb H^n)$ spaces and Carleson measures for Schrödinger operators*, Adv. Math., **228** (2011), 1631–1688. G. Z.
Lu, *A Fefferman-Phong type inequality for degenerate vector fields and applications*, Panamer. Math. J., **6** (1996), 37–57. Z. W. Shen, *$L^p$ estimates for Schrödinger operators with certain potentials*, Ann. Inst. Fourier (Grenoble), **45** (1995), 513–546. E. M. Stein, *Singular Integrals and Differentiability Properties of Functions*, Princeton Univ. Press, Princeton, New Jersey, 1970. E. M. Stein, *Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals*, Princeton Univ. Press, Princeton, New Jersey, 1993. S. Thangavelu, *Harmonic Analysis on the Heisenberg Group*, Progress in Mathematics, Vol. 159, Birkhäuser, Boston/Basel/Berlin, 1998. J. S. Xiao and J. X. He, *Riesz potential on the Heisenberg group*, J. Inequal. Appl., 2011, Art. ID 498638, 13 pp.
--- abstract: 'Two different self-adjoint Pauli extensions describing a spin-1/2 two-dimensional quantum system with singular magnetic field are studied. An Aharonov-Casher type formula is proved for the maximal Pauli extension and it is also checked that this extension can be approximated by operators corresponding to more regular magnetic fields.' address: | Department of Mathematics\ Chalmers University of Technology\ and University of Gothenburg\ Eklandagatan 86, S-412 96 Gothenburg\ Sweden author: - Mikael Persson title: 'On the Aharonov-Casher formula for different self-adjoint extensions of the Pauli operator with singular magnetic field' --- Introduction {#sec:intro} ============ Two-dimensional spin-$1/2$ quantum systems involving magnetic fields are described by the self-adjoint Pauli operator. One interesting question about such systems is the appearance of zero modes (eigenfunctions with eigenvalue zero). Aharonov and Casher proved in [@ac] that if the magnetic field is bounded and compactly supported, then zero modes can arise, and the number of zero modes is simply connected to the total flux of the magnetic field. Since then, Aharonov-Casher type formulas have been proved for more and more singular magnetic fields in different settings, see [@cfks; @gegr; @lali; @mi]. Recently they were proved for measure-valued magnetic fields in [@ervo] by [Erdős]{} and Vougalter. We are interested in the Pauli operator when the magnetic field consists of a regular part with compact support and a singular part with a finite number of Aharonov-Bohm (AB) solenoids [@ab]. The Pauli operator for such singular magnetic fields, defined initially on smooth functions with support not touching the singularities, is not essentially self-adjoint. Thus there are several ways of defining the self-adjoint Pauli extension, depending on what boundary conditions one sets at the AB solenoids, see [@adte; @dast; @exstvy; @gest; @gest2]. 
Different extensions describe different physics, and there is a discussion going on about which extensions describe the real physical situation. There are two possible approaches to making the choice of the extension: trying to describe boundary conditions at the singularities by means of modelling the actual interaction of the particle with an AB flux, or considering approximations of singular fields by regular ones, see [@bopu; @ta]. We are going to study the maximal extension introduced in [@gegr], called the Maximal Pauli operator, and compare it with the extension defined in [@ervo], which we will call the EV Pauli operator. These two extensions were recently studied in [@rosh] in the presence of an infinite number of AB solenoids, and it was proved that a magnetic field with infinite flux gives an infinite-dimensional space of zero modes for both extensions. When studying the Pauli operator in the presence of AB solenoids one must always keep in mind the possibility of reducing the intensities of the solenoids by arbitrary integers by means of singular gauge transformations. In Section \[sec:def\] we define both extensions via quadratic forms. The Maximal Pauli operator can be defined directly for arbitrary strength of the AB fluxes, while the EV Pauli operator has to be defined via gauge transformations if the AB intensities do not belong to the interval $(-1,1)$. The EV Pauli operator is not gauge invariant. However, following [@ervo], we always make a reduction of the AB intensities to the interval $[-1/2,1/2)$. Hence the EV Pauli operator is not uniquely defined for AB intensities belonging to $(-1,1)\setminus [-1/2,1/2)$, see Section \[sec:def\]. Moreover, the asymmetry of the interval $[-1/2,1/2)$ means that the spectral properties of the EV Pauli operator are not invariant under a change of sign of the magnetic field, a property one would naturally expect.
For Dirac operators with strongly singular magnetic fields, the question of the number of zero modes was considered in [@hiog]. The definition of the self-adjoint operator considered there is close to the one by Erdős and Vougalter; however, it is not gauge invariant, and therefore the Aharonov-Casher type formula obtained in [@hiog] depends on the intensity of each AB solenoid separately. In Section \[sec:prop\] we establish that the Maximal Pauli operator is gauge invariant and that changing the sign of the magnetic field leads to anti-unitary equivalence. Our main result is the Aharonov-Casher type formula for the Maximal Pauli operator. An interesting fact is that this operator can have both spin-up and spin-down zero modes, in contrast to the EV Pauli operator and the Pauli operator for less singular magnetic fields, which have either spin-up or spin-down zero modes, but not both. In [@gegr] a setting with an infinite lattice of AB solenoids with equal AB flux at each solenoid is studied, having both spin-up and spin-down zero modes, both with infinite multiplicity. In Section \[sec:approx\] we discuss the approximation by more regular fields in the sense of Borg and Pulé, see [@bopu]. It turns out that the Maximal Pauli operator can be approximated in this way, while the EV Pauli operator cannot. However, different ways of approximating the magnetic field may lead to different results, see [@bovo; @ta]. Definition of the Pauli operators {#sec:def} ================================= The Pauli operator is formally defined as $$P=\left(\sigma\cdot\left(-i\nabla+{\ensuremath{\mathbf{A}}}\right)\right)^2=\left(-i\nabla+{\ensuremath{\mathbf{A}}}\right)^2+\sigma_3 B$$ on $L_2(\mathbb{R}^2)\otimes \mathbb{C}^2$.
Here $\sigma=(\sigma_1,\sigma_2)$, where $\sigma_1$, $\sigma_2$ and $\sigma_3$ are the Pauli matrices $$\sigma_1= \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} ,\quad \sigma_2= \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix} ,\ \text{and}\quad \sigma_3= \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix},$$ ${\ensuremath{\mathbf{A}}}$ is the real magnetic vector potential and $B=\operatorname{curl}({\ensuremath{\mathbf{A}}})$ is the magnetic field. This definition does not work if the magnetic field $B$ is too singular, see the discussion in [@ervo; @so]. If ${\ensuremath{\mathbf{A}}}\in L_{2,\text{loc}}(\mathbb{R}^2)$, using the notations $\Pi_k = -i\partial_k+A_k$, for $k=1,2$, $Q_\pm= \Pi_1\pm i\Pi_2$ and $\lambda$ for the Lebesgue measure, the Pauli operator can be defined via the quadratic form $$\label{eq:kvadform} p[\psi]=\|Q_+\psi_+\|^2+\|Q_-\psi_-\|^2=\int|\sigma\cdot(-i\nabla+{\ensuremath{\mathbf{A}}})\psi|^2d\lambda(x),$$ the domain being the closure in the sense of the metric $p[\psi]$ of the core consisting of smooth compactly supported functions. With this notation, we can write the Pauli operator $P$ as $$\label{eq:paulinotion} P=\begin{pmatrix} P_+ & 0\\ 0 & P_-\end{pmatrix}=\begin{pmatrix} Q_+^*Q_+ & 0\\ 0 & Q_-^*Q_-\end{pmatrix}.$$ However, defining the Pauli operator via the quadratic form $p[\psi]$ in  requires that the vector potential ${\ensuremath{\mathbf{A}}}$ belongs to $L_{2,\text{loc}}(\mathbb{R}^2)$, otherwise $p[\psi]$ can be infinite for nice functions $\psi$, see [@so]. If the magnetic field consists of only one AB solenoid located at the origin with intensity (flux divided by $2\pi$) $\alpha$, then the magnetic vector potential ${\ensuremath{\mathbf{A}}}$ is given by ${\ensuremath{\mathbf{A}}}(x_1,x_2)=\frac{\alpha}{x_1^2+x_2^2}(-x_2,x_1)$ which is not in $L_{2,\text{loc}}(\mathbb{R}^2)$. Here and elsewhere we identify a point $(x_1,x_2)$ in the two-dimensional space $\mathbb{R}^2$ with $z=x_1+ix_2$ in the complex plane $\mathbb{C}$.
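The formal identity $P=(\sigma\cdot(-i\nabla+{\ensuremath{\mathbf{A}}}))^2=(-i\nabla+{\ensuremath{\mathbf{A}}})^2+\sigma_3 B$ rests on the commutator $[\Pi_1,\Pi_2]=-iB$ together with the Pauli matrix relations $\sigma_k^2=I$ and $\sigma_1\sigma_2=-\sigma_2\sigma_1=i\sigma_3$. These matrix relations can be spot-checked numerically (an illustration only, not part of the argument):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# sigma_k^2 = I and the anticommutation relation {sigma_1, sigma_2} = 0
assert np.allclose(s1 @ s1, I2) and np.allclose(s2 @ s2, I2)
assert np.allclose(s1 @ s2 + s2 @ s1, np.zeros((2, 2)))

# sigma_1 sigma_2 = i sigma_3; combined with [Pi_1, Pi_2] = -iB this gives
# (sigma . Pi)^2 = Pi^2 + i sigma_3 [Pi_1, Pi_2] = Pi^2 + sigma_3 B
assert np.allclose(s1 @ s2, 1j * s3)
```

The diagonal block form of $P$ in  then follows because $\sigma_3$ is diagonal and $Q_\pm^*Q_\pm=\Pi^2\pm B$.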
Following [@ervo], we will define the Pauli operator via another quadratic form that agrees with $p[\psi]$ for less singular magnetic fields. We start by describing the magnetic field. Even though the Pauli operator can be defined for more general magnetic fields, in order to demonstrate the main features of the study without extra technicalities, we restrict ourselves to a magnetic field consisting of a sum of two parts, the first being a smooth function with compact support, the second consisting of finitely many AB solenoids. Let $\Lambda=\{z_j\}_{j=1}^n$ be a set of distinct points in $\mathbb{C}$ and let $\alpha_j\in\mathbb{R}\setminus\mathbb{Z}$. The magnetic field we will study in this paper has the form $$\label{eq:magnet} B(z)=B_0(z)+\sum_{j=1}^n 2\pi\alpha_j\delta_{z_j},$$ where $B_0\in C_0^1(\mathbb{R}^2)$. In [@ervo] the magnetic field is given by a signed real regular Borel measure $\mu$ on $\mathbb{R}^2$ with locally finite total variation. It is clear that $\mu=B_0(z)d\lambda(z)+\sum_{j=1}^n 2\pi\alpha_j\delta_{z_j}$ is such a measure. The function $h_0$ given by $$h_0(z)=\frac{1}{2\pi}\int \log|z-z'|B_0(z')d\lambda(z')$$ satisfies $\Delta h_0=B_0$ since $B_0\in C_0^1(\mathbb{R}^2)$ and $\Delta \log|z-z_j|=2\pi\delta_{z_j}$ in the sense of distributions. The function $$h(z)=h_0(z)+\sum_{j=1}^n \alpha_j\log|z-z_j|$$ satisfies $\Delta h=B$. It is easily seen that $h_0(z)\sim\Phi_0\log|z|$ as $|z|\to\infty$, and thus the asymptotics of $e^{\pm h(z)}$ are $$e^{\pm h(z)}\sim \begin{cases} |z|^{\pm \Phi}, & |z|\to\infty\\ |z-z_j|^{\pm \alpha_j}, & z\to z_j, \end{cases}$$ where $\Phi_0=\frac{1}{2\pi}\int B_0(z)d\lambda(z)$ and $\Phi=\frac{1}{2\pi}\int B(z)d\lambda(z)=\Phi_0+\sum_{j=1}^n\alpha_j$. We are now ready to define the two self-adjoint Pauli operators. The decisive difference between them is the sense in which we are taking derivatives. This leads to different domains, and, as we will see in later sections, to different properties of the operators.
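The asymptotics $h_0(z)\sim\Phi_0\log|z|$ can be checked numerically for a simple compactly supported field. In the sketch below $B_0$ is taken, for illustration only, as the indicator of the unit disk (not $C_0^1$, but enough for a quadrature check), so that $\Phi_0=1/2$:

```python
import numpy as np

# B0 = indicator of the unit disk: compactly supported, Phi_0 = area/(2*pi) = 1/2
n = 400
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
B0 = ((X**2 + Y**2) <= 1.0).astype(float)
dA = (xs[1] - xs[0])**2

def h0(z):
    # logarithmic potential h0(z) = (1/2pi) * integral of log|z - z'| B0(z') dA(z')
    return np.sum(np.log(np.abs(z - (X + 1j * Y))) * B0) * dA / (2 * np.pi)

Phi0 = np.sum(B0) * dA / (2 * np.pi)            # close to 1/2
for z in (100.0 + 0j, -80.0 + 60.0j):
    # far from the support of B0, h0(z) is close to Phi_0 * log|z|
    assert abs(h0(z) - Phi0 * np.log(abs(z))) < 1e-3
```

For the disk the agreement far away is in fact exact up to quadrature error, by the mean value property of $\log|z-z'|$ in $z'$.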
Let us introduce notations for taking derivatives on the different spaces of distributions. Remember that $\Lambda=\{z_j\}_{j=1}^n$ is a finite set of distinct points in $\mathbb{C}$. We let the derivatives in ${\ensuremath{\mathcal{D}}}'(\mathbb{R}^2)$ be denoted by ${\partial}$ and the derivatives in ${\ensuremath{\mathcal{D}}}'(\mathbb{R}^2\setminus\Lambda)$ be denoted by ${\partial}$ with a tilde over it, that is ${\tilde{\partial}}$. Thus, for example, by ${\ensuremath{{\partial}_z}}$ we mean ${\ensuremath{\frac{\partial}{\partial z}}}$ in the space ${\ensuremath{\mathcal{D}}}'(\mathbb{R}^2)$ and by ${\ensuremath{{\tilde{\partial}}_z}}$ we mean ${\ensuremath{\frac{\partial}{\partial z}}}$ in the space ${\ensuremath{\mathcal{D}}}'(\mathbb{R}^2\setminus\Lambda)$. The EV Pauli operator --------------------- We follow [@ervo] and define the sesquilinear forms ${\ensuremath{\pi}}_+$ and ${\ensuremath{\pi}}_-$ by $$\begin{aligned} {\ensuremath{\pi}}_+^h(\psi_+,\xi_+)&=4\int \overline{{\ensuremath{{\partial}_{\bar{z}}}}\left(e^{-h}\psi_+\right)}{\ensuremath{{\partial}_{\bar{z}}}}\left(e^{-h}\xi_+\right)e^{2h}d\lambda(z),\\ {\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h_+)&=\left\{\psi_+\in L_2(\mathbb{R}^2)\ :\ {\ensuremath{\pi}}^h_+(\psi_+,\psi_+)<\infty\right\},\end{aligned}$$ and $$\begin{aligned} {\ensuremath{\pi}}_-^h(\psi_-,\xi_-)&=4\int \overline{{\ensuremath{{\partial}_z}}\left(e^{h}\psi_-\right)}{\ensuremath{{\partial}_z}}\left(e^{h}\xi_-\right)e^{-2h}d\lambda(z),\\ {\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h_-)&=\left\{\psi_-\in L_2(\mathbb{R}^2)\ :\ {\ensuremath{\pi}}^h_-(\psi_-,\psi_-)<\infty\right\}.\end{aligned}$$ Now set $$\begin{aligned} {\ensuremath{\pi}}^h(\psi,\xi)&={\ensuremath{\pi}}_+^h(\psi_+,\xi_+)+{\ensuremath{\pi}}_-^h(\psi_-,\xi_-),\\ {\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h)&={\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h_+)\oplus{\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h_-)=\big\{\psi=\begin{pmatrix}\psi_+\\ 
\psi_-\end{pmatrix}\in L_2(\mathbb{R}^2)\otimes\mathbb{C}^2\ :\ \pi^h(\psi,\psi)<\infty\big\}.\end{aligned}$$ Let us describe the domains of the forms ${\ensuremath{\pi}}_\pm^h$ and ${\ensuremath{\pi}}^h$ more precisely. For example, what is required of a function $\psi_+$ to be in ${\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h_+)$? It should belong to $L_2(\mathbb{R}^2)$, and the expression $${\ensuremath{\pi}}_+^h(\psi_+,\psi_+)=4\int \left|{\ensuremath{{\partial}_{\bar{z}}}}\left(e^{-h}\psi_+\right)\right|^2e^{2h}d\lambda(z)$$ should have a meaning and be finite. This means that the distribution ${\ensuremath{{\partial}_{\bar{z}}}}\left(e^{-h}\psi_+\right)$ actually must be a function and its modulus multiplied by $e^h$ must belong to $L_2(\mathbb{R}^2)$, that is $|{\ensuremath{{\partial}_{\bar{z}}}}\left(e^{-h}\psi_+\right)|e^h\in L_2(\mathbb{R}^2)$. This forces all the intensities $\alpha_j$ to be in the interval $(-1,1)$, see [@ervo]. Next we define the norm by $${\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\pi}}^h}={\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi_+\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\pi}}_+^h}+{\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi_-\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\pi}}_-^h},$$ where $${\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi_+\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\pi}}_+^h}=\|\psi_+\|^2+\left\|{\ensuremath{{\partial}_{\bar{z}}}}\left(e^{-h}\psi_+\right)e^h\right\|^2$$ and
$${\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi_-\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\pi}}_-^h}=\|\psi_-\|^2+\left\|{\ensuremath{{\partial}_z}}\left(e^{h}\psi_-\right)e^{-h}\right\|^2.$$ This form $\pi^h$ is symmetric, nonnegative and closed with respect to $\|\cdot\|$, again see [@ervo], and hence it defines a unique self-adjoint operator ${\ensuremath{\mathcal{P}}}_h$ via $$\label{eq:pdefenr} {\ensuremath{\mathscr{D}}}({\ensuremath{\mathcal{P}}}_h)=\{\psi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h)\ :\ {\ensuremath{\pi}}^h(\psi,\cdot)\in \left(L_2(\mathbb{R}^2)\otimes\mathbb{C}^2\right)\}$$ and $$\label{eq:pdomenr} ({\ensuremath{\mathcal{P}}}_h\psi,\xi)={\ensuremath{\pi}}^h(\psi,\xi),\quad \psi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathcal{P}}}_h),\xi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h).$$ We call this operator ${\ensuremath{\mathcal{P}}}_h$ the *non-reduced EV Pauli operator*. If some intensities $\alpha_j$ belong to $\mathbb{R}\setminus[-1/2,1/2)$, we let $\alpha_j^*$ be the unique real number in $[-1/2,1/2)$ such that $\alpha_j$ and $\alpha_j^*$ differ only by an integer, that is $\alpha_j^*-\alpha_j=m_j\in\mathbb{Z}$. We define the *reduced EV Pauli operator* (or just the *EV Pauli operator*), ${\ensuremath{P}}_h$, to be $$\label{eq:pdefe} {\ensuremath{P}}_h=\exp(i\phi){\ensuremath{\mathcal{P}}}_h\exp(-i\phi)$$ where $\phi(z)=\sum_{j=1}^n m_j \arg(z-z_j)$. Hence, if there are some $\alpha_j$ outside the interval $(-1,1)$, only the reduced EV Pauli operator is well-defined. If all the intensities $\alpha_j$ belong to the interval $[-1/2,1/2)$ then we do not have to perform the reduction and hence there is only one definition. However, if there are intensities $\alpha_j$ inside the interval $(-1,1)$ but outside the interval $[-1/2,1/2)$ then we have two different definitions of the EV Pauli operator, the direct one and the one obtained by reduction.
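The reduction $\alpha_j\mapsto\alpha_j^*\in[-1/2,1/2)$ with $\alpha_j^*-\alpha_j=m_j\in\mathbb{Z}$ is a simple modular shift; a minimal sketch (the function name is ours):

```python
import math

def reduce_intensity(alpha):
    """Return (alpha_star, m) with alpha_star = alpha + m for an integer m
    and alpha_star in the interval [-1/2, 1/2)."""
    alpha_star = ((alpha + 0.5) % 1.0) - 0.5
    m = round(alpha_star - alpha)
    return alpha_star, m

# e.g. 0.7 reduces to about -0.3 with m = -1
for a in (0.7, -0.5, 1.25, -1.6):
    a_star, m = reduce_intensity(a)
    assert -0.5 <= a_star < 0.5
    assert math.isclose(a_star, a + m)
```

Note that for $\alpha_j\in(1/2,1)$, say, the reduction changes the intensity by $-1$, which is exactly the situation where the direct and the reduced definitions disagree.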
In the next section we will show that these two operators are not the same. The Maximal Pauli operator -------------------------- Now, again, let $\alpha_j\in\mathbb{R}\setminus\mathbb{Z}$. We define the forms $$\begin{aligned} {\ensuremath{\mathfrak{p}}}_+^h(\psi_+,\xi_+)&=4\int \overline{{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_+\right)}{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\xi_+\right)e^{2h}d\lambda(z),\\ {\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h_+)&=\left\{\psi_+\in L_2(\mathbb{R}^2)\ :\ {\ensuremath{\mathfrak{p}}}^h_+(\psi_+,\psi_+)<\infty\right\},\end{aligned}$$ and $$\begin{aligned} {\ensuremath{\mathfrak{p}}}_-^h(\psi_-,\xi_-)&=4\int \overline{{\ensuremath{{\tilde{\partial}}_z}}\left(e^{h}\psi_-\right)}{\ensuremath{{\tilde{\partial}}_z}}\left(e^{h}\xi_-\right)e^{-2h}d\lambda(z),\\ {\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h_-)&=\left\{\psi_-\in L_2(\mathbb{R}^2)\ :\ {\ensuremath{\mathfrak{p}}}^h_-(\psi_-,\psi_-)<\infty\right\}.\end{aligned}$$ Now set $$\begin{aligned} {\ensuremath{\mathfrak{p}}}^h(\psi,\xi)&={\ensuremath{\mathfrak{p}}}_+^h(\psi_+,\xi_+)+{\ensuremath{\mathfrak{p}}}_-^h(\psi_-,\xi_-),\\ {\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h)&={\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h_+)\oplus{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h_-)=\big\{\psi=\begin{pmatrix}\psi_+\\ \psi_-\end{pmatrix}\in L_2(\mathbb{R}^2)\otimes\mathbb{C}^2\ :\ {\ensuremath{\mathfrak{p}}}^h(\psi,\psi)<\infty\big\}.\end{aligned}$$ Again, let us clarify the domains of the forms. For a function $\psi_+$ to be in ${\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}_+^h)$ it is required that $\psi_+\in L_2(\mathbb{R}^2)$ and that ${\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(e^{-h}\psi_+)$ is a function.
After taking the modulus of this derivative and multiplying by $e^h$, the result should belong to $L_2(\mathbb{R}^2\setminus\Lambda)$, that is $|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(e^{-h}\psi_+)|e^h\in L_2(\mathbb{R}^2\setminus\Lambda)$. Note that the form ${\ensuremath{\mathfrak{p}}}^h$ does not feel the AB fluxes at $\Lambda$ since the derivatives are taken in the space ${\ensuremath{\mathcal{D}}}'(\mathbb{R}^2\setminus\Lambda)$, and integration does not feel $\Lambda$ either since $\Lambda$ has Lebesgue measure zero. This enables the AB solenoids to have intensities that lie outside $(-1,1)$. Also, define the norm $${\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\mathfrak{p}}}^h}={\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi_+\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\mathfrak{p}}}_+^h}+{\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi_-\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\mathfrak{p}}}_-^h},$$ where $${\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi_+\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\mathfrak{p}}}_+^h}=\|\psi_+\|^2+\left|\left|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_+\right)e^h\right|\right|^2$$ and $${\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\psi_-\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}^2_{{\ensuremath{\mathfrak{p}}}_-^h}=\|\psi_-\|^2+\left|\left|{\ensuremath{{\tilde{\partial}}_z}}\left(e^{h}\psi_-\right)e^{-h}\right|\right|^2.$$ The form ${\ensuremath{\mathfrak{p}}}^h$ defined above is symmetric, nonnegative and closed with respect to $\|\cdot\|$. It is clear that ${\ensuremath{\mathfrak{p}}}^h$ is symmetric and nonnegative.
Let $\psi_n=(\psi_{n,+},\psi_{n,-})$ be a Cauchy sequence in the norm ${\left\lvert\mspace{-1.7mu}\left\lvert\mspace{-1.7mu}\left\lvert\cdot\right\rvert\mspace{-1.7mu}\right\rvert\mspace{-1.7mu}\right\rvert}_{{\ensuremath{\mathfrak{p}}}^h}$. This implies that $\psi_{n,\pm}\to\psi_{\pm}$ in $L_{2}(d\lambda(z))$, ${\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_{n,+}\right)\to u_+$ in $L_2(e^{2h}d\lambda(z))$ and ${\ensuremath{{\tilde{\partial}}_z}}(e^h\psi_{n,-})\to u_-$ in $L_2(e^{-2h}d\lambda(z))$. We have to show that ${\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_{+}\right)= u_+$ and ${\ensuremath{{\tilde{\partial}}_z}}(e^h\psi_{-})=u_-$. For any test-function $\phi\in C_0^\infty(\mathbb{R}^2\setminus\Lambda)$, $$\begin{aligned} \left|\int\bar{\phi}\left(u_+-{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_{+}\right)\right)d\lambda(z)\right|& \leq \left|\int \bar{\phi}\left(u_+-{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_{n,+}\right)\right)\right|\\ \mbox{}&\mbox{}\quad +\left|\int{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(\bar\phi)e^{-h}\left(\psi_+-\psi_{n,+}\right)\right|\\ \mbox{}& \leq \|\bar\phi e^{-h}\|\cdot \left\|u_+-{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_{n,+}\right)\right\|_{L_2(e^{2h})}\\ \mbox{}&\mbox{}\quad +\left\|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(\bar\phi)e^{-h}\right\|\cdot \|\psi_+-\psi_{n,+}\|.\end{aligned}$$ The last expression tends to zero as $n\to\infty$, since the first factor in each term is bounded (thanks to $\phi$) and the second one tends to zero. The proof is the same for the spin-down component. This shows that ${\ensuremath{\mathfrak{p}}}^h$ is closed.
Hence ${\ensuremath{\mathfrak{p}}}^h$ defines a unique self-adjoint operator ${\ensuremath{\mathfrak{P}}}_h$ via $$\label{eq:paulidef} {\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{P}}}_h)=\{\psi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h)\ :\ {\ensuremath{\mathfrak{p}}}^h(\psi,\cdot)\in \left(L_2(\mathbb{R}^2)\otimes\mathbb{C}^2\right)\}$$ and $$\label{eq:paulidom} ({\ensuremath{\mathfrak{P}}}_h\psi,\xi)={\ensuremath{\mathfrak{p}}}^h(\psi,\xi),\quad \psi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{P}}}_h),\xi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h).$$ We call this operator ${\ensuremath{\mathfrak{P}}}_h$ the *Maximal Pauli operator*. Properties of the Pauli operators {#sec:prop} ================================= In this section we will compare some properties of the two Pauli operators ${\ensuremath{P}}_h$ and ${\ensuremath{\mathfrak{P}}}_h$ defined in the previous section. We start by showing that ${\ensuremath{\mathfrak{P}}}_h$ is gauge invariant while ${\ensuremath{P}}_h$ is not. Gauge transformations --------------------- Let $B(z)=B_0(z)+\sum_{j=1}^n2\pi \alpha_j\delta_{z_j}$ be the same magnetic field as before and let $\hat{B}(z)$ be another magnetic field that differs from $B(z)$ only by some multiples of the delta functions, that is $\hat{B}(z)-B(z)=\sum_{j=1}^n 2\pi m_j\delta_{z_j}$, where $m_j$ are integers, not all zero. Then the corresponding scalar potentials $\hat{h}(z)$ and $h(z)$ differ only by the corresponding logarithms $\hat{h}(z)-h(z)=\sum_{j=1}^nm_j\log|z-z_j|$. Now with $\phi(z)=\sum_{j=1}^nm_j\arg(z-z_j)$ we get $\hat{h}(z)+i\phi(z)=h(z)+\sum_{j=1}^nm_j\log(z-z_j)$. 
This function is multivalued, however, since $m_j$ are integers, we have $$\begin{aligned} \label{eq:diffe} {\ensuremath{{\partial}_{\bar{z}}}}\left(\hat{h}(z)+i\phi(z)\right)&={\ensuremath{{\partial}_{\bar{z}}}}h(z)+\sum_{j=1}^nm_j{\ensuremath{{\partial}_{\bar{z}}}}\log(z-z_j),\\ \label{eq:diffm} {\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(\hat{h}(z)+i\phi(z)\right)&={\ensuremath{{\tilde{\partial}}_{\bar{z}}}}h(z),\ \text{and}\\ \label{eq:exprule} e^{\hat{h}+i\phi}&=e^h\prod_{j=1}^m(z-z_j)^{m_j}.\end{aligned}$$ To see that ${\ensuremath{P}}_h$ is not gauge invariant it is enough to look at an example. Let $n=1$, $z_1=0$, $\alpha_1=-1/2$ and $m_1=1$, so the two magnetic fields are $B(z)=B_0(z)-\pi\delta_0$ and $\hat{B}(z)=B_0(z)+\pi\delta_0$. The scalar potentials are given by $h(z)=h_0(z)-\frac12\log|z|$ and $\hat{h}(z)=h_0(z)+\frac12\log|z|$ respectively, where $h_0(z)$ is a smooth function with asymptotics $\Phi_0\log|z|$ as $|z|\to\infty$. We should show that ${\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^{\hat{h}})$ is not given by $e^{-i\phi}{\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h)$, where $\phi(z)=\arg(z)$. Then it follows that ${\ensuremath{\pi}}^h$ and ${\ensuremath{\pi}}^{\hat{h}}$ do not define unitarily equivalent operators. Let $\psi_+\in{\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}_+^h)$. This means, in particular, that ${\ensuremath{{\partial}_{\bar{z}}}}(\psi_+e^{-h})$ belongs to $L_{1,\text{loc}}(\mathbb{R}^2)$. Now let $\hat{\psi}_+=e^{-i\phi}\psi_+$. Then, according to  we get $${\ensuremath{{\partial}_{\bar{z}}}}(\hat{\psi}_+e^{-\hat{h}})={\ensuremath{{\partial}_{\bar{z}}}}(\psi_+e^{-\hat{h}-i\phi})={\ensuremath{{\partial}_{\bar{z}}}}\left(\frac{\psi_+e^{-h}}{z}\right)={\ensuremath{{\partial}_{\bar{z}}}}(\psi_+e^{-h})\frac1z+\psi_+e^{-h}\pi\delta_0$$ which is not in $L_{1,\text{loc}}(\mathbb{R}^2)$ since it is a distribution involving $\delta_0$ (for non-smooth $\psi_+$ it is not even well-defined). 
Thus we have ${\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}_+^{\hat{h}})\neq e^{-i\phi}{\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}_+^h)$ and hence ${\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^{\hat{h}})\neq e^{-i\phi}{\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h)$, so ${\ensuremath{\pi}}^h$ and ${\ensuremath{\pi}}^{\hat{h}}$ do not define unitarily equivalent operators. Let us now study what happens to ${\ensuremath{\mathfrak{p}}}^h$ under gauge transformations. Let $\psi=(\psi_+,\psi_-)^t\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h)$. We should check that $e^{-i\phi}\psi$ belongs to ${\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^{\hat{h}})$, where $\phi(z)=\sum_{j=1}^n m_j\arg(z-z_j)$ is the harmonic conjugate to $\hat{h}(z)-h(z)$. We do this for ${\ensuremath{\mathfrak{p}}}_+^{\hat{h}}$. It is similar for ${\ensuremath{\mathfrak{p}}}_-^{\hat{h}}$. Since $\psi_+\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}_+^h)$ we know that ${\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(\psi_+e^{-h})\in L_{1,\text{loc}}(\mathbb{R}^2\setminus\Lambda)$. Let us check that ${\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(\hat{\psi}_+e^{-\hat{h}})\in L_{1,\text{loc}}(\mathbb{R}^2\setminus\Lambda)$. Again, by  we have $$\begin{aligned} {\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(\hat{\psi}_+e^{-\hat{h}})&={\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(\psi_+e^{-h}\prod_{j=1}^n(z-z_j)^{-m_j}\right)\\ &={\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(\psi_+e^{-h}\right)\prod_{j=1}^n(z-z_j)^{-m_j}+\psi_+e^{-h}{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(\prod_{j=1}^n(z-z_j)^{-m_j}\right),\end{aligned}$$ which clearly belongs to $L_{1,\text{loc}}(\mathbb{R}^2\setminus\Lambda)$.
Next we should check that $|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(\hat{\psi}_+e^{-\hat{h}})|e^{\hat{h}}$ belongs to $L_2(\mathbb{R}^2\setminus\Lambda)$ under the assumption that $|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(\psi_+e^{-h})|e^{h}$ belongs to $L_2(\mathbb{R}^2\setminus\Lambda)$. A calculation using  and  gives $$\begin{aligned} \nonumber\left|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-\hat{h}}\hat{\psi}_+\right)\right|e^{\hat{h}} &= \left|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-\hat{h}-i\phi}\psi_+(z)\right)\right|e^{\hat{h}}\\ \label{eq:ltwo} &= \left|\left({\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(-h(z))\psi_++{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\psi_+(z)\right)e^{-h}\prod_{j=1}^n(z-z_j)^{-m_j}\right|e^{h}\prod_{j=1}^n|z-z_j|^{m_j}\\ \nonumber&=\left|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_+\right)\right|e^{h}.\end{aligned}$$ Hence $\psi_+\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}_+^h)$ implies $\hat{\psi}_+=e^{-i\phi}\psi_+\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}_+^{\hat{h}})$. In a similar way it follows that $\psi_-\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}_-^h)$ implies that $\hat{\psi}_-=e^{-i\phi}\psi_-\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}_-^{\hat{h}})$. Thus $e^{-i\phi}{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h)\subset{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^{\hat{h}})$. In the same way we can show that $e^{i\phi}{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^{\hat{h}})\subset{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h)$, and thus we can conclude that $e^{-i\phi}{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h)={\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^{\hat{h}})$. 
From the calculation in  and a similar calculation for $\psi_-$ it also follows that $$\begin{aligned} {\ensuremath{\mathfrak{p}}}^{\hat{h}}\left(e^{-i\phi}\psi,e^{-i\phi}\psi\right) & = 4\int\left|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-\hat{h}-i\phi}\psi_+\right)\right|^2e^{2\hat{h}}+\left|{\ensuremath{{\tilde{\partial}}_z}}\left(e^{\hat{h}-i\phi}\psi_-\right)\right|^2e^{-2\hat{h}}d\lambda(z)\\ & = 4\int\left|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_+\right)\right|^2e^{2h}+\left|{\ensuremath{{\tilde{\partial}}_z}}\left(e^{h}\psi_-\right)\right|^2e^{-2h}d\lambda(z)\\ & = {\ensuremath{\mathfrak{p}}}^h(\psi,\psi).\end{aligned}$$ Hence we can conclude that if $\psi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{P}}}_h)$ and $\xi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h)$ then $e^{-i\phi}\psi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{P}}}_{\hat{h}})$ and $e^{-i\phi}\xi\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^{\hat{h}})$. If we denote by $U_\phi$ the unitary operator of multiplication by $e^{i\phi}$, then we get $$({\ensuremath{\mathfrak{P}}}_h\psi,\xi)={\ensuremath{\mathfrak{p}}}^h(\psi,\xi)={\ensuremath{\mathfrak{p}}}^{\hat{h}}(U_\phi^*\psi,U_\phi^*\xi)=({\ensuremath{\mathfrak{P}}}_{\hat{h}}U_\phi^*\psi,U_\phi^*\xi)=(U_\phi {\ensuremath{\mathfrak{P}}}_{\hat{h}}U_\phi^*\psi,\xi),$$ and hence ${\ensuremath{\mathfrak{P}}}_h$ and ${\ensuremath{\mathfrak{P}}}_{\hat{h}}$ are unitarily equivalent. We have proved the following proposition. Let $B$ and $\hat{B}$ be two singular magnetic fields as in , with difference $\hat{B}-B=\sum_{j=1}^n 2\pi m_j \delta_{z_j}$, where $m_j$ are integers, not all equal to zero. Then their corresponding Maximal Pauli operators defined by  and  are unitarily equivalent. 
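The identity $e^{\hat{h}+i\phi}=e^{h}\prod_{j=1}^n(z-z_j)^{m_j}$, whose single-valuedness for integer $m_j$ underlies the whole gauge argument, can be spot-checked numerically (the solenoid positions and shifts below are arbitrary sample values):

```python
import cmath
import math

zs = [1 + 1j, -2 + 0.5j]   # solenoid positions z_j (sample values)
ms = [2, -1]               # integer intensity shifts m_j

def lhs(z):
    # exp((hhat - h)(z) + i*phi(z)), with hhat - h = sum_j m_j log|z - z_j|
    # and phi = sum_j m_j arg(z - z_j)
    s = sum(m * (math.log(abs(z - zj)) + 1j * cmath.phase(z - zj))
            for m, zj in zip(ms, zs))
    return cmath.exp(s)

def rhs(z):
    # prod_j (z - z_j)^{m_j}, single-valued precisely because each m_j is an integer
    p = 1 + 0j
    for m, zj in zip(ms, zs):
        p *= (z - zj) ** m
    return p

for z in (3 + 4j, -1 - 2j, 0.5 + 7j):
    assert abs(lhs(z) - rhs(z)) < 1e-9 * abs(rhs(z))
```

For non-integer exponents the two sides would differ by a branch-dependent phase, which is exactly why only integer shifts give legitimate gauge transformations.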
Zero modes {#sec:zero} ---------- When studying spectral properties of the operator ${\ensuremath{\mathfrak{P}}}_h$ it is sufficient to consider AB intensities $\alpha_j$ that belong to the interval $(0,1)$, since the operator is gauge invariant. See the discussion after the proof of Theorem \[thm:AC\] for more details about what happens when we do gauge transformations. \[lemma:asymptot\] Let $c_j\in\mathbb{C}$ and $z_j\in\mathbb{C}$, $j=1,\ldots,n$, where $z_j\neq z_i$ if $j\neq i$ and not all $c_j$ are equal to zero. Then $$\label{eq:asympt} \sum_{j=1}^n \frac{c_j}{z-z_j}\sim |z|^{-l-1},\quad |z|\to\infty,$$ where $l$ is the smallest nonnegative integer such that $\sum_{j=1}^n c_j z_j^l\neq 0$. If $|z|$ is large in comparison with all $|z_j|$ we have $$\begin{aligned} \sum_{j=1}^n \frac{c_j}{z-z_j} &= \frac{1}{z}\sum_{j=1}^n \frac{c_j}{1-z_j/z}\\ &= \sum_{k=0}^\infty \left(\sum_{j=1}^n c_jz_j^k\right)\frac{1}{z^{k+1}}\\ &= \left(\sum_{j=1}^n c_jz_j^l\right)\frac{1}{z^{l+1}}+O(|z|^{-l-2})\end{aligned}$$ and thus $\sum_{j=1}^n \frac{c_j}{z-z_j}\sim |z|^{-l-1}$ as $|z|\to\infty$. We note that $l$ in Lemma \[lemma:asymptot\] may never be greater than $n-1$. Indeed, if $l\geq n$ then we would have the linear system of equations $\{\sum_{j=1}^nc_jz_j^k=0\}_{k=0}^{n-1}$. But the determinant of this system is $\prod_{i>j}(z_i-z_j)\neq 0$, and this would force all $c_j$ to be zero. Note also that for $l\leq n$ we have a system of $l$ equations $\{\sum_{j=1}^nc_jz_j^k=0\}_{k=0}^{l-1}$ with $n$ unknowns $c_j$, and that the $l\times n$ matrix $\{z_j^k\}$ has rank $l$. ------------------------------------------------------------------------ \[thm:AC\] Let $B(z)$ be the magnetic field  with all $\alpha_j\in(0,1)$, and let ${\ensuremath{\mathfrak{P}}}_h$ be the Pauli operator defined by  and in Section \[sec:def\] corresponding to $B(z)$. 
Then $$\dim\ker {\ensuremath{\mathfrak{P}}}_h= \left\{n-\Phi\right\}+\left\{\Phi\right\},$$ where $\Phi=\frac{1}{2\pi}\int B(z)d\lambda(z)$, and $\{x\}$ denotes the largest integer strictly less than $x$ if $x>1$ and $0$ if $x\leq 1$. Using the notations $Q_\pm$ introduced in Section \[sec:def\], we also have $$\dim\ker Q_+=\left\{n-\Phi\right\}\quad \text{and}\quad \dim\ker Q_-=\left\{\Phi\right\}.$$ We follow the reasoning originating in [@ac], with necessary modifications. First we note that $(\psi_+,\psi_-)^t$ belongs to $\ker {\ensuremath{\mathfrak{P}}}_h$ if and only if $\psi_+$ belongs to $\ker Q_+$ and $\psi_-$ belongs to $\ker Q_-$, which is equivalent to $${\ensuremath{{\tilde{\partial}}_{\bar{z}}}}\left(e^{-h}\psi_+\right)=0\quad \text{and}\quad {\ensuremath{{\tilde{\partial}}_z}}\left(e^{h}\psi_-\right)=0.$$ This means exactly that $f_+(z)=e^{-h}\psi_+(z)$ is holomorphic and $f_-(z)=e^h\psi_-(z)$ is antiholomorphic in $z\in\mathbb{R}^2\setminus\Lambda$. It is the change in the domain where the functions are holomorphic that influences the result. Let us start with the spin-up component $\psi_+$. The function $f_+$ is allowed to have poles of order at most one at $z_j$, $j=1,\ldots,n$, and no others, since $e^h\sim |z-z_j|^{\alpha_j}$ as $z\to z_j$ and $\psi_+=f_+e^h$ should belong to $L_2(\mathbb{R}^2)$. Hence there exist constants $c_j$ such that the function $f_+(z)-\sum_{j=1}^n\frac{c_j}{z-z_j}$ is entire. From the asymptotics $e^h\sim |z|^{\Phi}$, $|z|\to\infty$, it follows that $f_+-\sum_{j=1}^n\frac{c_j}{z-z_j}$ may only be a polynomial of degree at most $N=-\Phi-2$. Hence $$f_+(z)=\sum_{j=1}^n\frac{c_j}{z-z_j}+a_0+a_1z+\ldots+a_Nz^N,$$ where we let the polynomial part disappear if $N<0$. Now, the asymptotics for $\psi_+$ is $$\psi_+(z)\sim |z|^{-l-1+\Phi}+|z|^{N+\Phi},\quad |z|\to\infty,$$ where $l$ is the smallest nonnegative integer such that $\sum_{j=1}^n c_jz_j^l\neq0$.
To have $\psi_+$ in $L_2(\mathbb{R}^2)$ we take $l$ to be the smallest nonnegative integer strictly greater than $\Phi$. Remember also from the remark after Lemma \[lemma:asymptot\] that $l\leq n-1$. We get three cases. If $\Phi<-1$, then all complex numbers $c_j$ can be chosen freely, and a polynomial of degree $\{-\Phi\}-1$ may be added, which results in $\{n-\Phi\}$ degrees of freedom. If $-1\leq\Phi<n-1$ we have no contribution from the polynomial, and we now have to choose the coefficients $c_j$ such that $\sum_{j=1}^n c_jz_j^k=0$ for $k=0,1,\ldots,l-1$. The dimension of the null-space of the matrix $\{z_j^k\}$ is $n-l=\{n-\Phi\}$. If $\Phi\geq n-1$ then we must have all coefficients $c_j$ equal to zero and we get no contribution from the polynomial. Hence, in all three cases we have $\{n-\Phi\}$ spin-up zero modes. Let us now focus on the spin-down component $\psi_-$. The function $f_-$ may not have any singularities, since the asymptotics of $e^{-h}$ is $|z-z_j|^{-\alpha_j}$ as $z\to z_j$. Hence $f_-$ must be entire. Moreover, $f_-$ may grow no faster than a polynomial of degree $\Phi-1$ for $\psi_-$ to be in $L_2(\mathbb{R}^2)$. Thus $f_-$ has to be a polynomial of degree at most $\{\Phi\}-1$, which gives us $\{\Phi\}$ spin-down zero modes. The numbers of zero modes for ${\ensuremath{\mathfrak{P}}}_h$ and ${\ensuremath{P}}_h$ are not the same. The Aharonov-Casher theorem for the EV Pauli operator (Theorem 3.1 in [@ervo]) states for the field under consideration:
All zero modes belong only to the spin-up or only to the spin-down component (depending on the sign of $\Phi$). Below we explain by some concrete examples how the spectral properties of the two Pauli operators ${\ensuremath{\mathfrak{P}}}_h$ and ${\ensuremath{P}}_h$ differ. \[ex:gauge\] Since ${\ensuremath{P}}_h$ is not gauge invariant we must not expect that the number of zero modes of ${\ensuremath{P}}_h$ is invariant under gauge transforms. To see that this property in fact can fail, let us look at the Pauli operators ${\ensuremath{P}}_{h_1}$ and ${\ensuremath{P}}_{h_2}$ induced by the magnetic fields $$\begin{aligned} B_1(z)&=B_0(z)+\pi\delta_0,\ \text{and}\\ B_2(z)&=B_0(z)-\pi\delta_0\end{aligned}$$ respectively, where $B_0$ has compact support and $\Phi_0=\frac{1}{2\pi}\int B_0(z)d\lambda(z)=\frac{3}{4}$. Then $B_2$ is reduced (that is, its AB intensity belongs to $[-1/2,1/2)$), while $B_1$ is not and has to be reduced. Due to Theorem \[thm:ACEV\], the EV Pauli operators ${\ensuremath{P}}_{h_1}$ and ${\ensuremath{P}}_{h_2}$ corresponding to $B_1$ and $B_2$ have no zero modes. However, a direct computation for the non-reduced EV Pauli operator ${\ensuremath{\mathcal{P}}}_{h_1}$ corresponding to $B_1$ shows that it actually has one zero mode. The situation becomes more interesting when we look at the operator that should correspond to $B_3=B_0(z)+3\pi\delta_0$. The AB intensity for $B_3$ is too strong so we have to make a reduction. In [@ervo] the reduction is made to the interval $[-1/2,1/2)$, and we have followed this convention, but physically there is nothing that says that this is the natural choice. Reducing the AB intensity of $B_3$ to $-1/2$ gives an operator with no zero modes and reducing it to $1/2$ gives an operator with one zero mode. The Maximal Pauli operators ${\ensuremath{\mathfrak{P}}}_{h_1}$, ${\ensuremath{\mathfrak{P}}}_{h_2}$ and ${\ensuremath{\mathfrak{P}}}_{h_3}$ for these three magnetic fields all have one zero mode.
This is easily seen by applying Theorem \[thm:AC\] to ${\ensuremath{\mathfrak{P}}}_{h_1}$ and then using the fact that the operators are unitarily equivalent. However, more understanding is achieved when looking more closely at what the eigenfunctions for these three Maximal Pauli operators look like. Let $h_k$ be the scalar potential for $B_k$, $k=1,2,3$. Then, as we have seen before, $h_1(z)=h_0(z)+\frac12\log|z|$, $h_2(z)=h_0(z)-\frac12\log|z|$ and $h_3(z)=h_0(z)+\frac32\log|z|$, where $h_0(z)$ corresponds to $B_0(z)$. Following the reasoning from the proof of Theorem \[thm:AC\] we see that the solution space to ${\ensuremath{\mathfrak{P}}}_{h_1}\psi=0$ is spanned by $\psi=(0,e^{-h_1})^t$. Next, we see what the solutions to ${\ensuremath{\mathfrak{P}}}_{h_2}\psi=0$ look like. Now we have $\Phi_2=\frac{1}{2\pi}\int B_2(z)d\lambda(z)=1/4>0$. Let us begin with the spin-up component $\psi_+$. This time, the holomorphic $f_+=e^{-h_2}\psi_+$ may not have any poles since then $\psi_+$ would not belong to $L_2(\mathbb{R}^2)$, and $f_+(z)=e^{-h_2}\psi_+(z)\to0$ as $|z|\to\infty$, so we must have $f_+\equiv 0$, and thus $\psi_+\equiv 0$. For $\psi_-(z)$ to be in $L_2(\mathbb{R}^2)$ it is possible for $f_-$ to have a pole of order $1$ at the origin. Hence there exists a constant $c$ such that $f_-(z)-c/\bar{z}$ is antiholomorphic in the whole plane. The function $f_-(z)\to0$ as $|z|\to\infty$ since the total intensity $\Phi_2>0$. This implies, by Liouville’s theorem, that $f_-(z)\equiv c/\bar{z}$, so the solution space to ${\ensuremath{\mathfrak{P}}}_{h_2}\psi=0$ is spanned by $\psi(z)=(0,e^{-h_2}/\bar{z})^t$. Finally, let us determine the solutions to ${\ensuremath{\mathfrak{P}}}_{h_3}\psi=0$. Now $\Phi_3=\frac{1}{2\pi}\int B_3(z)d\lambda(z)=9/4$. Consider the spin-up part $\psi_+$. For $\psi_+$ to be in $L_2(\mathbb{R}^2)$ our function $f_+$ may have a pole of order no more than two at the origin.
As before, there exist constants $c_1$ and $c_2$ such that $f_+(z)-c_1/z-c_2/z^2$ is entire and its limit is zero as $|z|\to\infty$, and thus $f_+(z)\equiv c_1/z+c_2/z^2$. Again, both $c_1$ and $c_2$ must vanish for $\psi_+$ to be in $L_2(\mathbb{R}^2)$ (otherwise $\psi_+$ would fail to be square integrable at infinity). Thus $\psi_+\equiv 0$. On the other hand, the function $f_-$ may not have any poles (these poles would push $\psi_-$ out of $L_2(\mathbb{R}^2)$), so it is antiholomorphic in the whole plane. It also may grow no faster than $|z|^{5/4}$ as $|z|\to\infty$, and thus $f_-$ has to be a first order polynomial in $\bar{z}$, that is $f_-(z)=c_0+c_1\bar{z}$. Moreover, for $\psi_-$ to be in $L_2(\mathbb{R}^2)$ it must have a zero of order $1$ at the origin, and thus $f_-(z)=c_1\bar{z}$. We conclude that the solutions to ${\ensuremath{\mathfrak{P}}}_{h_3}\psi=0$ are spanned by $(0,\bar{z}e^{-h_3})^t$. ------------------------------------------------------------------------ A natural property one should expect of a reasonably defined Pauli operator is that its spectral properties are invariant under reversal of the direction of the magnetic field: $B\mapsto -B$. The corresponding operators are formally anti-unitary equivalent under the transformation $\psi\mapsto \bar{\psi}$ and interchanging of $\psi_+$ and $\psi_-$. \[ex:BminusB\] The number of zero modes for ${\ensuremath{P}}_h$ is not invariant under $B(z)\mapsto-B(z)$, which is not surprising since the interval $[-1/2,1/2)$ is not symmetric. We check this by showing that the number of zero modes is not the same. To see this, let $B(z)=B_0(z)+\pi\delta_0$, where $B_0$ has compact support and $\Phi_0=\frac{1}{2\pi}\int B_0(z)d\lambda(z)=\frac{3}{4}$. Then $B$ has to be reduced since the AB intensity at zero is $1/2\not\in[-1/2,1/2)$. After reduction we get the magnetic field $\hat{B}(z)=B_0(z)-\pi\delta_0$, and we can apply Theorem \[thm:ACEV\]. Now let $\hat{\Phi}=\frac{1}{2\pi}\int \hat{B}\,d\lambda(z)=\frac14$.
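For the Maximal operator, Theorem \[thm:AC\] makes this invariance transparent: the count $\{n-\Phi\}+\{\Phi\}$ is symmetric under $\Phi\mapsto n-\Phi$, which is exactly the substitution induced by reversing the field and gauging the AB intensities back into $(0,1)$, as carried out at the end of this section. The following sketch (an illustration of ours, not part of the paper; the helper names `braces` and `maximal_zero_modes` are our own) checks this numerically:

```python
import math

def braces(x):
    """{x}: the largest integer strictly less than x if x > 1, else 0
    (the counting function of the Aharonov-Casher formula above)."""
    if x <= 1:
        return 0
    return int(x) - 1 if float(x).is_integer() else math.floor(x)

def maximal_zero_modes(n, phi):
    """dim ker of the Maximal Pauli operator: {n - phi} + {phi}."""
    return braces(n - phi) + braces(phi)

# Invariance under field reversal, which amounts to phi -> n - phi:
for n in range(1, 5):
    for i in range(-16, 16):
        phi = i / 4
        assert maximal_zero_modes(n, phi) == maximal_zero_modes(n, n - phi)

# Example [ex:gauge]: one solenoid (n = 1), total flux 5/4 once the AB
# intensity is gauged into (0,1); the formula gives the single zero mode
# found there.
print(maximal_zero_modes(1, 1.25))  # -> 1
```

The symmetry holds term by term, so reversing the field never changes the dimension of the kernel of the Maximal operator.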
Thus the number of zero modes for ${\ensuremath{P}}_h$ is $0$. Now look at the Pauli operator ${\ensuremath{P}}_{-h}$ defined by the magnetic field $B_-(z)=-B(z)=-B_0(z)-\pi\delta_0$. This magnetic field is reduced and thus we can apply Theorem \[thm:ACEV\] directly. The total intensity is $\Phi_-=\frac{1}{2\pi}\int -B(z)d\lambda(z)=-\frac{5}{4}$, so the number of zero modes for ${\ensuremath{P}}_{-h}$ is $1$. If $B$ has several AB fluxes then the difference in the number of zero modes of ${\ensuremath{P}}_h$ and ${\ensuremath{P}}_{-h}$ can be made arbitrarily large. Now, let us check that the number of zero modes for ${\ensuremath{\mathfrak{P}}}_h$ is invariant under $B(z)\mapsto -B(z)$. Since it is clear that the number of zero modes is invariant under $z\mapsto \bar{z}$ we look instead at how the Pauli operators change when we do $B(z)\mapsto \hat{B}(z)=-B(\bar{z})$. If we set $\zeta=\bar{z}$ we get $\hat{B}(\zeta)=-B(z)$ and the scalar potentials satisfy $\hat{h}(\zeta)=-h(z)$. Now assume that $\psi=(\psi_+(z),\psi_-(z))^t\in{\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h)$. 
Then $$\begin{aligned} \mbox{}&{\ensuremath{\mathfrak{p}}}^{h(z)}\left(\begin{pmatrix}\psi_+(z)\\ \psi_-(z)\end{pmatrix},\begin{pmatrix}\psi_+(z)\\ \psi_-(z)\end{pmatrix}\right) = \\ \mbox{}&\quad=4\int \left|{\ensuremath{{\tilde{\partial}}_{\bar{z}}}}(\psi_+(z)e^{-h(z)})\right|^2e^{2h(z)}+\left|{\ensuremath{{\tilde{\partial}}_z}}(\psi_-(z)e^{h(z)})\right|^2e^{-2h(z)}d\lambda(z)\\ \mbox{}&\quad=4\int\left|{\tilde{\partial}}_\zeta(\psi_+(\bar{\zeta})e^{\hat{h}(\zeta)})\right|^2e^{-2\hat{h}(\zeta)}+\left|{\tilde{\partial}}_{\bar{\zeta}}(\psi_-(\bar{\zeta})e^{-\hat{h}(\zeta)})\right|^2e^{2\hat{h}(\zeta)}d\lambda(\zeta)\\ \mbox{}&\quad={\ensuremath{\mathfrak{p}}}^{\hat{h}(\bar{z})}\left(\begin{pmatrix}\psi_-(z)\\ \psi_+(z)\end{pmatrix},\begin{pmatrix}\psi_-(z)\\ \psi_+(z)\end{pmatrix}\right).\end{aligned}$$ Hence we see that $(\psi_+,\psi_-)^t$ belongs to ${\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{P}}}_{h(z)})$ if and only if $(\psi_-,\psi_+)^t$ belongs to ${\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{P}}}_{\hat{h}(\bar{z})})$ and then ${\ensuremath{\mathfrak{P}}}_{\hat{h}(\bar{z})}={\ensuremath{\mathfrak{P}}}_{h(z)}V$ where $V:L_2(\mathbb{R}^2)\otimes\mathbb{C}^2\to L_2(\mathbb{R}^2)\otimes\mathbb{C}^2$ is the isometric operator given by $V((\psi_+,\psi_-)^t)=(\psi_-,\psi_+)^t$. Hence it is clear that ${\ensuremath{\mathfrak{P}}}_{\hat{h}(\bar{z})}$ and ${\ensuremath{\mathfrak{P}}}_{h(z)}$ have the same number of zero modes. ------------------------------------------------------------------------ In the previous example we saw that the numbers of zero modes for the Maximal Pauli operators corresponding to $B$ and $-B$ are the same. This can easily be seen directly from the Aharonov-Casher formula in Theorem \[thm:AC\]. To be able to apply the theorem to $-B=-B_0-\sum_{j=1}^n 2\pi\alpha_j\delta_j$ we have to do gauge transformations, adding $1$ to all the AB intensities, resulting in $\hat{B}=-B_0+\sum_{j=1}^n 2\pi(1-\alpha_j)\delta_j$.
Now according to Theorem \[thm:AC\] the number of zero modes of ${\ensuremath{\mathfrak{P}}}_{-h}$ is equal to $$\dim\ker{\ensuremath{\mathfrak{P}}}_{-h}=\{\hat{\Phi}\}+\{n-\hat{\Phi}\}=\{n-\Phi\}+\{\Phi\}=\dim\ker{\ensuremath{\mathfrak{P}}}_h,$$ where we have used that $\hat{\Phi}=\frac{1}{2\pi}\int \hat{B}d\lambda(z)=n-\Phi$. ------------------------------------------------------------------------ Approximation by regular fields {#sec:approx} =============================== We have mentioned that the different Pauli extensions depend on which boundary conditions are induced at the AB fluxes. Let us now make this more precise. Since the self-adjoint extension only depends on the boundary condition at the AB solenoids, it is enough to study the case of one such solenoid and no smooth field. For simplicity, let the solenoid be located at the origin, with intensity $\alpha\in(0,1)$, that is, let the magnetic field be given by $B=2\pi\alpha\delta_0$. We consider self-adjoint extensions of the Pauli operator $P$ that can be written in the form $$P=\begin{pmatrix}P_+ & 0\\ 0 & P_-\end{pmatrix}= \begin{pmatrix} Q_+^*Q_+ & 0\\ 0 & Q_-^*Q_- \end{pmatrix},$$ with some explicitly chosen closed operators $Q_\pm$. It is exactly such extensions $P$ that can be defined by the quadratic form . A function $\psi_+$ belongs to ${\ensuremath{\mathscr{D}}}(P_+)$ if and only if $\psi_+$ belongs to ${\ensuremath{\mathscr{D}}}(Q_+)$ and $Q_+\psi_+$ belongs to ${\ensuremath{\mathscr{D}}}(Q_+^*)$, and similarly for $P_-$.
With each self-adjoint extension $P_\pm=Q_{\pm}^*Q_{\pm}$ one can associate (see [@dast; @exstvy; @gest; @ta]) functionals $c_{-\alpha}^\pm$, $c_{\alpha}^{\pm}$, $c_{\alpha-1}^\pm$ and $c_{1-\alpha}^\pm$, by $$\begin{aligned} c_{-\alpha}^{\pm}(\psi_\pm) &= \lim_{r\to 0}r^\alpha\frac{1}{2\pi}\int_{0}^{2\pi}\psi_\pm d\theta,\\ c_{\alpha}^{\pm}(\psi_\pm) &= \lim_{r\to 0}r^{-\alpha}\left(\frac{1}{2\pi}\int_{0}^{2\pi}\psi_\pm d\theta-r^{-\alpha}c_{-\alpha}^{\pm}(\psi_\pm)\right),\\ c_{\alpha-1}^{\pm}(\psi_\pm) &= \lim_{r\to 0}r^{1-\alpha}\frac{1}{2\pi}\int_{0}^{2\pi}\psi_\pm e^{i\theta} d\theta,\ \text{and}\\ c_{1-\alpha}^{\pm}(\psi_\pm) &= \lim_{r\to 0}r^{\alpha-1}\left(\frac{1}{2\pi}\int_{0}^{2\pi}\psi_\pm e^{i\theta}d\theta-r^{\alpha-1}c_{\alpha-1}^{\pm}(\psi_\pm)\right),\end{aligned}$$ such that $\psi_\pm\in{\ensuremath{\mathscr{D}}}(P_\pm)$ if and only if $$\psi_\pm\sim c_{-\alpha}^\pm r^{-\alpha}+c_{\alpha}^\pm r^{\alpha}+c_{\alpha-1}^\pm r^{\alpha-1}e^{-i\theta}+c_{1-\alpha}^\pm r^{1-\alpha}e^{-i\theta}+O(r^\gamma)$$ as $r\to 0$, where $\gamma=\min(1+\alpha,2-\alpha)$ and $z=re^{i\theta}$. Any two nontrivial independent linear relations between these functionals determine a self-adjoint extension. In order that the operator be rotation-invariant, none of these relations may involve both $\alpha$ and $1-\alpha$ terms simultaneously. Accordingly, the parameters $\nu_0^\pm=c_\alpha^\pm/c_{-\alpha}^\pm$ and $\nu_1^\pm=c_{1-\alpha}^\pm/c_{\alpha-1}^\pm$, with possible values in $(-\infty,\infty]$, are introduced in [@bopu], and it is proved that the operators $P_\pm$ can be approximated by operators with regularized magnetic fields in the norm resolvent sense if and only if $\nu_0^\pm=\infty$ and $\nu_1^\pm\in(-\infty,\infty)$ or if $\nu_0^\pm\in(-\infty,\infty)$ and $\nu_1^\pm=\infty$. We are now going to check what parameters the Maximal and EV Pauli operators correspond to.
Generally, for the function $\psi_+$ to be in ${\ensuremath{\mathscr{D}}}(P_+)$, it must belong to ${\ensuremath{\mathscr{D}}}(Q_+)$ and $Q_+\psi_+$ must belong to ${\ensuremath{\mathscr{D}}}(Q_+^*)$. We will find out what is required for a function $g$ to be in ${\ensuremath{\mathscr{D}}}(Q_+^*)$. Take any $\phi_+\in{\ensuremath{\mathscr{D}}}(Q_+)$; then integration by parts on the domain $|z|>\epsilon$ gives $$\begin{aligned} \langle g,Q_+\phi_+\rangle &= \lim_{\epsilon\to0}\int_{|z|>\epsilon}g(z)\overline{{\ensuremath{\frac{\partial}{\partial \bar{z}}}}(e^{-h}\phi_+(z))e^h} d\lambda(z)\\ \mbox{}& = -\lim_{\epsilon\to0}\int_{|z|>\epsilon}{\ensuremath{\frac{\partial}{\partial z}}}(g(z)e^h)e^{-h}\overline{\phi_+(z)}d\lambda(z)\\ \mbox{}&\quad\quad -\lim_{\epsilon\to0}\frac{\epsilon}{2}\int_0^{2\pi}g(\epsilon e^{i\theta})\overline{\phi_+(\epsilon e^{i\theta})}e^{-i\theta}d\theta\\ \mbox{}& = \langle -Q_-g,\phi_+\rangle-\lim_{\epsilon\to0}\frac{\epsilon}{2}\int_0^{2\pi}g(\epsilon e^{i\theta})\overline{\phi_+(\epsilon e^{i\theta})}e^{-i\theta}d\theta.\end{aligned}$$ Hence, for $g$ to belong to ${\ensuremath{\mathscr{D}}}(Q_+^*)$ it is necessary and sufficient that $$\lim_{\epsilon\to0}\epsilon\int_0^{2\pi}g(\epsilon e^{i\theta})\overline{\phi_+(\epsilon e^{i\theta})}e^{-i\theta}d\theta=0$$ for all $\phi_+\in{\ensuremath{\mathscr{D}}}(p_+)$, and thus for $Q_+\psi_+$ to belong to ${\ensuremath{\mathscr{D}}}(Q_+^*)$ it is necessary and sufficient that $$\lim_{\epsilon\to0}\epsilon\int_0^{2\pi}\left({\ensuremath{\frac{\partial}{\partial \bar{z}}}}(e^{-h}\psi_+)e^h\right)\Big|_{z=\epsilon e^{i\theta}}\overline{\phi_+(\epsilon e^{i\theta})}e^{-i\theta}d\theta=0$$ for all $\phi_+\in{\ensuremath{\mathscr{D}}}(p_+)$.
We know that $\psi_+$ has asymptotics $\psi_+\sim c_{-\alpha}^+r^{-\alpha}+c_{\alpha}^+r^{\alpha}+c_{\alpha-1}^+r^{\alpha-1}e^{-i\theta}+c_{1-\alpha}^+r^{1-\alpha}e^{-i\theta}+O(r^\gamma)$ and that ${\ensuremath{\frac{\partial}{\partial \bar{z}}}}=\frac{e^{i\theta}}{2}\left(\frac{\partial}{\partial r}+\frac{i}{r}\frac{\partial}{\partial\theta}\right)$ in polar coordinates. A calculation gives $$\epsilon{\ensuremath{\frac{\partial}{\partial \bar{z}}}}(e^{-h}\psi_+)e^he^{-i\theta}\Big|_{z=\epsilon e^{i\theta}}\sim -2\alpha c_{-\alpha}^+\epsilon^{-\alpha}+2(1-\alpha)c_{1-\alpha}^+\epsilon^{1-\alpha}e^{-i\theta}+O(\epsilon^\gamma),$$ hence we must have $$\label{eq:spinupkrav} \lim_{\epsilon\to0} \int_0^{2\pi}\left(-2\alpha c_{-\alpha}^+\epsilon^{-\alpha}+2(1-\alpha)c_{1-\alpha}^+\epsilon^{1-\alpha}e^{-i\theta}\right)\overline{\phi_+(\epsilon e^{i\theta})}d\theta=0$$ for all $\phi_+\in{\ensuremath{\mathscr{D}}}(p_+)$. A similar calculation for the spin-down component yields $$\label{eq:spindownkrav} \lim_{\epsilon\to0} \int_0^{2\pi}\left(2\alpha c_{\alpha}^-\epsilon^{\alpha}+2(\alpha-1)c_{\alpha-1}^-\epsilon^{\alpha-1}e^{i\theta}\right)\overline{\phi_-(\epsilon e^{i\theta})}d\theta=0.$$ We will now calculate what parameters $\nu_0^\pm$ and $\nu_1^\pm$ the Maximal and EV Pauli extensions correspond to. To do so, it is enough to study the asymptotics of the functions in the form core. Let us first consider the Maximal Pauli extension. Functions of the form $(\phi_0+c/z)e^h$ constitute a form core for ${\ensuremath{\mathfrak{p}}}^h_+$, where $\phi_0$ is smooth. Hence there are elements in ${\ensuremath{\mathscr{D}}}({\ensuremath{\mathfrak{p}}}^h_+)$ that asymptotically behave as $r^\alpha$ and also elements with asymptotics $r^{\alpha-1}e^{-i\theta}$. According to  this means that $c_{-\alpha}^+$ and $c_{1-\alpha}^+$ must be zero.
Similarly, the elements that behave like $r^{-\alpha}$ and elements that behave like $r^{1-\alpha}e^{i\theta}$ constitute a form core for ${\ensuremath{\mathfrak{p}}}^h_-$, which by  forces $c_{\alpha}^-$ and $c_{\alpha-1}^-$ to be zero. The parameters $\nu_0^\pm$ and $\nu_1^\pm$ are given by $\nu_0^+=c_{\alpha}^+/c_{-\alpha}^+=\infty$, $\nu_1^+=c_{1-\alpha}^+/c_{\alpha-1}^+=0$, $\nu_0^-=c_{\alpha}^-/c_{-\alpha}^-=0$ and $\nu_1^-=c_{1-\alpha}^-/c_{\alpha-1}^-=\infty$. Hence the Maximal Pauli operator ${\ensuremath{\mathfrak{P}}}_h$ can be approximated in the sense of [@bopu]. Let us now consider the EV Pauli extension, and study the case when $\alpha\in(0,1/2)$. The case $\alpha<0$ follows in a similar way. A form core for ${\ensuremath{\pi}}^h_+$ is given by $e^h\phi_0$ where $\phi_0$ is smooth, see [@ervo]. These functions have asymptotic behavior $r^\alpha$. From  it follows that $c_{-\alpha}^+$ must vanish. However, $\psi_+$ belonging to ${\ensuremath{\mathscr{D}}}(Q_+)$ must also belong to ${\ensuremath{\mathscr{D}}}({\ensuremath{\pi}}^h_+)$ and since the functions in the form core for ${\ensuremath{\pi}}^h_+$ behave as $r^{\alpha}$ or nicer, we see that the term $c_{\alpha-1}^+r^{\alpha-1}e^{-i\theta}$ gets too singular to be in ${\ensuremath{\mathscr{D}}}(Q_+)$ if $c_{\alpha-1}^+\neq 0$, and hence $c_{\alpha-1}^+$ must be zero. Similarly, a form core for ${\ensuremath{\pi}}^h_-$ is given by $e^{-h}\phi_0$, with $\phi_0$ smooth. Functions in this form core have asymptotic behavior $r^{-\alpha}$ or $r^{-\alpha+1}e^{i\theta}$, which forces $c_{\alpha}^-$ and $c_{\alpha-1}^-$ to be zero. Hence the parameters $\nu_0^\pm$ and $\nu_1^\pm$ are given by $\nu_0^+=c_{\alpha}^+/c_{-\alpha}^+=\infty$, $\nu_1^+=c_{1-\alpha}^+/c_{\alpha-1}^+=\infty$, $\nu_0^-=c_{\alpha}^-/c_{-\alpha}^-=0$ and $\nu_1^-=c_{1-\alpha}^-/c_{\alpha-1}^-=\infty$. We conclude that the spin-up part of ${\ensuremath{P}}_h$ cannot be approximated in the sense of [@bopu], while the spin-down part can.
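The bookkeeping of the last two paragraphs can be condensed into a few lines. The sketch below is our own illustration (not from the paper): `approximable` merely restates the criterion of [@bopu] quoted above, and the dictionaries record the parameters $(\nu_0,\nu_1)$ read off from the form cores. It confirms that both spin components of the Maximal extension pass the criterion, while the spin-up component of the EV extension fails it:

```python
INF = float("inf")

def approximable(nu0, nu1):
    # Criterion quoted above: approximation by operators with regularized
    # magnetic fields in the norm resolvent sense is possible iff exactly
    # one of nu0, nu1 is infinite (the values lie in (-inf, +inf]).
    return (nu0 == INF) != (nu1 == INF)

# Parameters (nu0, nu1) read off from the form cores above:
maximal = {"spin-up": (INF, 0), "spin-down": (0, INF)}
ev = {"spin-up": (INF, INF), "spin-down": (0, INF)}

assert all(approximable(*nu) for nu in maximal.values())  # both components
assert not approximable(*ev["spin-up"])                   # fails for EV
assert approximable(*ev["spin-down"])
print("Maximal: approximable; EV: spin-up component is not")
```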
Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank my supervisor Professor Grigori Rozenblum for introducing me to this problem and for giving me all the support I needed. [99]{} Adami, R. and Teta, A. *On the Aharonov-Bohm Hamiltonian*, Lett. Math. Phys. 43 (1998), no. 1, 43–53. Aharonov, Y. and Bohm, D. *Significance of electromagnetic potentials in the quantum theory*, Phys. Rev. 115 (1959), 485–491. Aharonov, Y. and Casher, A. *Ground state of spin-1/2 charged particle in a two-dimensional magnetic field*, Phys. Rev. A 19 (1979), 2461–2462. Bordag, M. and Voropaev, S. *Charged particle with magnetic moment in the Aharonov-Bohm potential*, J. Phys. A 26, no. 24 (1993), 7637–7649. Borg, J.L. and Pulé, J.V. *Pauli Approximations to the Self-Adjoint Extensions of the Aharonov-Bohm Hamiltonian*, J. Math. Phys. 44 (2003), no. 10, 4385–4410. Cycon, H.L., Froese, R.G., Kirsch, W. and Simon, B. *Schrödinger Operators with Application to Quantum Mechanics and Global Geometry*, Berlin–Heidelberg–New York, Springer-Verlag, (1987). Dabrowski, L. and Šťovíček, P. *Aharonov-Bohm effect with $\delta$-type interaction*, J. Math. Phys. 39 (1998), no. 1, 47–62. [Erdős]{}, L. and Vougalter, V. *Pauli Operator and Aharonov-Casher Theorem for Measure Valued Magnetic Fields*, Comm. Math. Phys. 225 (2002), 399–421. Exner, P., Šťovíček, P. and Vytřas, P. *Generalized boundary conditions for the Aharonov-Bohm effect combined with a homogeneous magnetic field*, J. Math. Phys. 43 (2002), no. 5, 2151–2168. Geyler, V. A. and Grishanov, E. N. *Zero Modes in a Periodic System of Aharonov-Bohm Solenoids*, JETP Letters, 75 (2002), no. 7, 354–356. Geyler, V. A. and Šťovíček, P. *On the Pauli operator for the Aharonov-Bohm effect with two solenoids*, J. Math. Phys. 45 (2004), no. 1, 51–75. Geyler, V. A. and Šťovíček, P. *Zero modes in a system of Aharonov-Bohm fluxes*, Rev. Math. Phys. 16 (2004), no. 7, 851–907. Hirokawa, M. and Ogurisu, O. 
*Ground state of a spin-$1/2$ charged particle in a two-dimensional magnetic field*, J. Math. Phys. 42 (2001), no. 8, 3334–3343. Landau, L. D. and Lifshitz, E. M. *Quantum Mechanics: Non-Relativistic Theory*, Pergamon Press, Oxford, (1977). Miller, K. *Bound states of Quantum Mechanical Particles in Magnetic Fields*, Ph.D. Thesis, Princeton University, (1982). Rozenblum, G. and Shirokov, N. *Infiniteness of zero modes for the Pauli operator with singular magnetic field*, Preprint, `xxx.lanl.gov/abs/math-ph/0501059`. Sobolev, A. *On the Lieb-Thirring estimates for the Pauli operator*, Duke J. Math. 82 (1996), 607–635. Tamura, H. *Resolvent convergence in norm for Dirac operator with Aharonov-Bohm field*, J. Math. Phys. 44 (2003), no. 7, 2967–2993.
Conformal scattering on the Schwarzschild metric\ [Jean-Philippe NICOLAS]{}\ [**Abstract.**]{} We show that existing decay results for scalar fields on the Schwarzschild metric are sufficient to obtain a conformal scattering theory. Then we re-interpret this as an analytic scattering theory defined in terms of wave operators, with an explicit comparison dynamics associated with the principal null geodesic congruences. The case of the Kerr metric is also discussed. [**Keywords.**]{} Conformal scattering, black holes, wave equation, Schwarzschild metric, Goursat problem. [**Mathematics subject classification.**]{} 35L05, 35P25, 35Q75, 83C57. Introduction ============ Conformal time-dependent scattering originates from the combination of the ideas of R. Penrose on spacetime conformal compactification [@Pe1963; @Pe1964; @Pe1965; @PeRi], the Lax-Phillips theory of scattering [@LaPhi] and F.G. Friedlander’s notion of radiation fields [@Fri1962; @Fri1964; @Fri1967]. The Lax-Phillips scattering theory for the wave equation is a construction on flat spacetime. It is based on a translation representer of the solution, which is re-interpreted as an asymptotic profile of the field along outgoing radial null geodesics, analogous to Friedlander’s radiation field[^1]. Observing this, Friedlander formulated the first version of conformal time-dependent scattering in 1980 [@Fri1980]. The framework was a static spacetime with a metric approaching the flat metric fast enough at infinity (like $1/r^2$) so as to ensure that the conformal spacetime has a regular null infinity (denoted ${{\mathscr I}}$). This allowed him to construct radiation fields as traces on ${{\mathscr I}}$ of conformally rescaled fields. The scattering theory as such was obtained by the resolution of a Goursat (characteristic Cauchy) problem on null infinity, whose data are the radiation fields. 
Then he went on to recover the analytically explicit aspects of the Lax-Phillips theory, in particular the translation representation of the propagator, a feature which is tied in with the staticity of the geometry[^2]. His ideas were taken up by J.C. Baez, I.E. Segal and Zhou Z.F. in 1989-1990 [@Ba1989a; @Ba1989b; @Ba1990; @BaSeZho1990; @BaZho1989] to develop conformal scattering theories on flat spacetime for non linear equations. Note that the resolution of the characteristic Cauchy problem was the object of a short paper by L. Hörmander in 1990 [@Ho1990], in which he described a method of resolution based entirely on energy estimates and weak compactness, for the wave equation on a general spatially compact spacetime. Friedlander himself came back to conformal scattering just before his death in a paper published posthumously in 2001 [@Fri2001]. It is on the whole quite surprising that his idea did not give rise to more active research in the domain. It is even more puzzling that the research it did inspire remained strictly focused on static geometries. In fact, the observation that a complete scattering theory in the physical spacetime amounts to the resolution of a Goursat problem on the compactified spacetime opens the door to the development of scattering theories on generic non stationary geometries. Probably Friedlander’s wish to recover all the analytic richness of the Lax-Phillips theory prevented him from pushing his theory this far. However, the door being open, somebody had to go through it one day. This was done in 2004 by L.J. Mason and the author in [@MaNi2004], a paper in which a conformal scattering theory was developed for scalar waves[^3], Dirac and Maxwell fields, on generically non stationary asymptotically simple spacetimes. A conformal scattering theory for a non linear wave equation on non stationary backgrounds was then obtained by J. Joudioux in 2012 [@Jo2012]. 
The purpose of the present work is to show how existing decay results can be used to obtain conformal scattering constructions on black hole backgrounds. We treat the case of the wave equation on the Schwarzschild metric, for which the analytic scattering theory is already known (see J. Dimock and B.S. Kay in 1985-1987 [@Di1985; @DiKa1986; @DiKa1987]). The staticity of the exterior of the black hole gives a positive definite conserved quantity on spacelike slices, which can be extended to the conformally rescaled spacetime; the known decay results (we use those of M. Dafermos and I. Rodnianski, see for example their lecture notes [@DaRoLN]) are then enough to obtain a complete scattering theory. It is in some sense unsatisfactory to use decay results, because they require a precise understanding of the trapping by the photon sphere, which is much more information than is needed for a scattering theory. However, such results should by nature be fairly robust under small perturbations. So the conformal scattering theories on stationary black hole backgrounds obtained using them can in principle be extended to non stationary perturbations. Not that this is at all trivial. This work is to be considered a first step in the development of conformal scattering theories on black hole backgrounds, to be followed by extensions to other equations and to more general, non stationary situations. The paper is organized as follows. Section \[GeomFrame\] contains the description of the geometrical framework for the case of the wave equation on the Schwarzschild metric. We describe the conformal compactification of the geometry and the corresponding rescaling of the wave equation. In section \[EnIdent\], we derive the main energy estimates on the compactified spacetime. Section \[Scattering\] is devoted to the conformal scattering construction and to its re-interpretation in terms of wave operators associated to a comparison dynamics. 
This type of structure, contrary to the translation representation, would survive in a non stationary situation (see [@MaNi2004] for an analogous construction on non stationary asymptotically simple spacetimes). This re-interpretation concerns the most difficult aspects of analytic scattering theory: the existence of inverse wave operators and asymptotic completeness. For the existence of direct wave operators, which is the easy part, we keep the analytic approach using Cook’s method; this is explained in appendix \[AppendixCook\]. The reason for this choice is the simplicity of the method and its easy extendibility to fairly general geometries, using a geometric transport equation as comparison dynamics, provided we have a precise knowledge of the asymptotic behaviour of the metric and good uniform energy estimates (which are in any case crucial for developing a conformal scattering theory). Some technical aspects of the resolution of the Goursat problem on the conformal boundary, which is at the core of the conformal scattering theory, are explained in appendix \[HormGP\]. Section \[Kerr\] is devoted to remarks concerning the extension of these results to the Kerr metric and some concluding comments. Since the first version of this work, this last section has been entirely re-written in order to take the new results by M. Dafermos, I. Rodnianski and Y. Shlapentokh-Rothman [@DaRoShla] into account. [**Notations and conventions.**]{} Given a smooth manifold $M$ without boundary, we denote by ${\cal C}^\infty_0 (M)$ the space of smooth compactly supported scalar functions on $M$ and by ${\cal D}' (M)$ its topological dual, the space of distributions on $M$. Concerning differential forms and Hodge duality, following R. Penrose and W. Rindler [@PeRi], we adopt the following convention: on a spacetime $({\cal M},g)$ (i.e. 
a $4$-dimensional Lorentzian manifold that is oriented and time-oriented), the Hodge dual of a $1$-form $\alpha$ is given by $$(*\alpha)_{abc} = e_{abcd} \alpha^d\, ,$$ where $e_{abcd}$ is the volume form on $({\cal M} , g)$, which in this paper we simply denote ${\mathrm{dVol}}$. We shall use two important properties of the Hodge star : - given two $1$-forms $\alpha$ and $\beta$, we have $$\label{HStarP1} \alpha \wedge * \beta = -\frac{1}{4} \alpha_a \beta^a\, {\mathrm{dVol}}\, ;$$ - for a $1$-form $\alpha$ that is differentiable, $$\label{HStarP2} {\mathrm{d}}* \alpha = -\frac{1}{4} (\nabla_a \alpha^a ) {\mathrm{dVol}}\, .$$ Throughout this work, we shall talk about analytic and conformal scattering as two different approaches to scattering theory. In most cases, we mean that the former is based on spectral techniques and the latter relies on a conformal compactification. The truly significant difference however is that conformal scattering understands the scattering construction as the resolution of a Goursat problem on the conformal boundary, described as a finite hypersurface, whereas analytic scattering sees the scattering channels as asymptotic regions. Geometrical framework {#GeomFrame} ===================== The Schwarzschild metric is given on ${\mathbb{R}}_t \times ]0,+\infty [_r \times S^2_\omega$ by $$g = F {\mathrm{d}}t^2 - F^{-1} {\mathrm{d}}r^2 - r^2 {\mathrm{d}}\omega^2 \, ,~ F = F(r) = 1 -\frac{2M}{r} \, ,$$ where ${\mathrm{d}}\omega^2$ (also denoted $e_{S^2}$ below) is the euclidean metric on $S^2$ and $M>0$ is the mass of the black hole. We work on the exterior of the black hole $\{ r>2M \}$, which is the only region of spacetime perceived by static observers at infinity (think for instance of a distant telescope pointed at the black hole). 
Introducing the Regge-Wheeler coordinate $r_* = r + 2M \log (r-2M)$, such that ${\mathrm{d}}r = F {\mathrm{d}}r_*$, the metric $g$ takes the form $$g = F ({\mathrm{d}}t^2 - {\mathrm{d}}r_*^2 ) - r^2 {\mathrm{d}}\omega^2 \, .$$ The Schwarzschild metric has a four-dimensional space of global Killing vector fields, generated by $$\label{KVF} K:=\partial_t \, ,~ X:=\sin \varphi \, \partial_\theta + \cot \theta \cos \varphi \, \partial_\varphi \, ,~Y:=\cos \varphi \, \partial_\theta - \cot \theta \sin \varphi \, \partial_\varphi \, ,~Z:=\partial_\varphi \, ,$$ which are the timelike (outside the black hole) Killing vector field $\partial_t$ and the three generators of the rotation group. Some other essential vector fields are the principal null vector fields (the vectors we give here are “unnormalized”, they are not the first two vectors of a normalized Newman-Penrose tetrad) $$\label{PND} l = \partial_t + \partial_{r_*} \, ,~ n = \partial_t - \partial_{r_*} \, .$$ We perform a conformal compactification of the exterior region using the conformal factor $\Omega = 1/r$, i.e. 
we put $$\hat{g} = \Omega^2 g \, .$$ To express the rescaled Schwarzschild metric, we use coordinates $u = t-r_*$, $R=1/r$, $\omega$ : $$\label{ghatu} \hat{g} = R^2 (1-2MR) {\mathrm{d}}u^2 - 2 {\mathrm{d}}u {\mathrm{d}}R - {\mathrm{d}}\omega^2 \, .$$ The inverse metric is $$\label{InvRescSchwaMet} \hat{g}^{-1} = - \partial_u \otimes \partial_R - \partial_R \otimes \partial_u - R^2 (1-2MR) \partial_R\otimes \partial_R - e^{-1}_{S^2} \, .$$ The non-zero Christoffel symbols for $\hat{g}$ in the coordinates $u,R,\omega$ are : $$\begin{gathered} {\hat\Gamma}^0_{00} = R (1-3MR) \, ,~ {\hat\Gamma}^1_{00} = R^3 (1-2MR)(1-3MR) \, ,~ {\hat\Gamma}^1_{01} = -R(1-3MR) \, , \\ {\hat\Gamma}^2_{33} = -\sin \theta \cos \theta \, ,~ {\hat\Gamma}^3_{23} = \cot \theta \, .\end{gathered}$$ If we use the coordinates $(t,r,\theta , \varphi )$, we get instead (still for the metric $\hat{g}$) $$\begin{gathered} {\hat\Gamma}^0_{01} = \frac{3M-r}{r(r-2M)} \, ,~ {\hat\Gamma}^1_{00} = \frac{(r-2M)(3M-r)}{r^3} \, ,~ {\hat\Gamma}^1_{11} = \frac{M-r}{r(r-2M)} \, , \\ {\hat\Gamma}^2_{33} = -\sin \theta \cos \theta \, ,~ {\hat\Gamma}^3_{23} = \cot \theta \, ,\end{gathered}$$ the others being zero. 
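The Christoffel symbols of $\hat{g}$ in the coordinates $(u,R,\theta,\varphi)$ quoted above can be checked symbolically. The following is a quick sketch using Python's `sympy` (an illustration, not part of the paper); the metric matrix is read off from the expression of $\hat{g}$ in these coordinates.

```python
import sympy as sp

# Coordinates (u, R, theta, phi) and the rescaled metric
# ĝ = R²(1-2MR)du² - 2 du dR - dω², written as a matrix.
u, R, th, ph, M = sp.symbols('u R theta phi M', positive=True)
x = [u, R, th, ph]
g = sp.Matrix([[R**2*(1 - 2*M*R), -1, 0, 0],
               [-1, 0, 0, 0],
               [0, 0, -1, 0],
               [0, 0, 0, -sp.sin(th)**2]])
ginv = g.inv()

def Gamma(a, b, c):
    """Christoffel symbol Γ^a_{bc} of ĝ."""
    return sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, c], x[b])
                                       + sp.diff(g[d, b], x[c])
                                       - sp.diff(g[b, c], x[d]))/2
                           for d in range(4)))

# The non-zero symbols listed in the text:
assert sp.simplify(Gamma(0, 0, 0) - R*(1 - 3*M*R)) == 0
assert sp.simplify(Gamma(1, 0, 0) - R**3*(1 - 2*M*R)*(1 - 3*M*R)) == 0
assert sp.simplify(Gamma(1, 0, 1) + R*(1 - 3*M*R)) == 0
assert sp.simplify(Gamma(2, 3, 3) + sp.sin(th)*sp.cos(th)) == 0
assert sp.simplify(Gamma(3, 2, 3) - sp.cos(th)/sp.sin(th)) == 0
print("Christoffel symbols of ĝ in (u,R,ω) verified")
```

The same loop, fed with the matrix of $\hat{g}$ in $(t,r,\theta,\varphi)$, reproduces the second list of symbols.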
Future null infinity ${{\mathscr I}}^+$ and the past horizon ${{\mathscr H}}^-$ are null hypersurfaces of the rescaled spacetime $${{\mathscr I}}^+ = {\mathbb{R}}_u \times \{ 0\}_R \times S^2_\omega \, ,~ {{\mathscr H}}^- = {\mathbb{R}}_u \times \{ 1/2M \}_R \times S^2_\omega \, .$$ If instead of $u,R,\omega$ we use the coordinates $v=t+r_*,R,\omega$, the metric $\hat{g}$ takes the form $$\label{ghatv} \hat{g} = R^2 (1-2MR) {\mathrm{d}}v^2 + 2 {\mathrm{d}}v {\mathrm{d}}R - {\mathrm{d}}\omega^2 \, .$$ In these coordinates we have access to past null infinity ${{\mathscr I}}^-$ and the future horizon ${{\mathscr H}}^+$ described as the null hypersurfaces $${{\mathscr I}}^- = {\mathbb{R}}_v \times \{ 0\}_R \times S^2_\omega \, ,~ {{\mathscr H}}^+ = {\mathbb{R}}_v \times \{ 1/2M \}_R \times S^2_\omega \, .$$ The compactification is not complete ; spacelike infinity $i^0$ and the timelike infinities $i^\pm$ remain at infinity for $\hat{g}$. The crossing sphere $S^2_\mathrm{c}$, which is the boundary of all level hypersurfaces of $t$ outside the black hole and the place where the future and past horizons meet, is not at infinity but it is not described by the coordinate systems $\{u,R,\omega \}$ and $\{v,R,\omega \}$ ; it is the only place in $\{ r\geq 2M \} \cup {{\mathscr I}}^\pm$ where $\partial_t$ vanishes. See Figure \[PenD\] for a Carter-Penrose diagram of the compactified exterior. ![Carter-Penrose diagram of the conformal compactification of the exterior of the black hole.[]{data-label="PenD"}](PenroseD.jpg){width="4in"} A crucial feature of the conformal compactification using the conformal factor $1/r$ is that it preserves the symmetries : the vector fields are still Killing for $\hat{g}$. 
In particular, the vector field $\partial_t$ becomes $\partial_u$ in the $(u,R,\omega )$ coordinate system, respectively $\partial_v$ in the $(v,R,\omega )$ coordinate system ; thus it extends as the future-oriented null generator of null infinities ${{\mathscr I}}^\pm$ and the future and past horizons ${{\mathscr H}}^\pm$. We shall denote by $\cal M$ the exterior of the black hole, ${\cal M}={\mathbb{R}}_t \times ]2M , +\infty [_r \times S^2$, and by $\bar{\cal M}$ its conformal compactification, i.e. $$\bar{\cal M} = {\cal M} \cup {{\mathscr I}}^+ \cup {{\mathscr H}}^+ \cup {{\mathscr I}}^- \cup {{\mathscr H}}^- \cup S^2_c \, .$$ The constructions of the horizons and of null infinities are of a very different nature. Understanding the horizons as smooth null hypersurfaces of the analytically extended Schwarzschild exterior only requires a change of coordinates, for instance the advanced and retarded Eddington-Finkelstein coordinates $(u,R,\omega)$ and $(v,R,\omega)$. For the construction of null infinities however, the conformal rescaling is necessary and ${{\mathscr I}}^\pm$ are boundaries of the exterior of the black hole endowed with the metric $\hat{g}$, not of the physical exterior $({\cal M},g)$. The main hypersurfaces that we shall use in this paper are the following : $$\begin{aligned} \Sigma_t &=& \{ t \} \times \Sigma \, ,~ \Sigma = ]2M , +\infty [_r \times S^2_\omega = {\mathbb{R}}_{ r_*} \times S^2_\omega \, , \label{Sigt} \\ S_T &=& \left\{ (t,r_* , \omega) \in {\mathbb{R}}\times {\mathbb{R}}\times S^2 \, ;~ t = T+ \sqrt{1+r_*^2} \right\} \label{ST} \, , \\ {{\mathscr I}}^+_T &=& {{\mathscr I}}^+ \cap \{ u \leq T\} = ]-\infty , T]_u \times \{ 0 \}_R \times S^2_\omega \, , \label{scriT} \\ {{\mathscr H}}^+_T &=& S^2_{\mathrm{c}} \cup ({{\mathscr H}}^+ \cap \{ v \leq T\} ) = S^2_{\mathrm{c}} \cup (]-\infty , T]_v \times \{ 1/2M \}_R \times S^2_\omega ) \, . 
\label{scrhT}\end{aligned}$$ For $T>0$, the hypersurfaces $\Sigma_0$, ${{\mathscr H}}^+_T$, $S_T$ and ${{\mathscr I}}^+_T$ form a closed — except for the part where ${{\mathscr I}}^+$ and $\Sigma_0$ touch $i^0$ — hypersurface on the compactified exterior (see Figure \[3surface\]). We make such an explicit choice for the hypersurface $S_T$ for the sake of clarity but it is not strictly necessary, all that is required of $S_T$ is that it is uniformly spacelike for the rescaled metric, or even achronal, and forms a closed hypersurface with $\Sigma_0$, ${{\mathscr H}}^+_T$, and ${{\mathscr I}}^+_T$. ![The main hypersurfaces represented on the compactified exterior.[]{data-label="3surface"}](3Surface.jpg){width="4in"} The scalar curvature of the rescaled metric $\hat{g}$ is $$\mathrm{Scal}_{\hat{g}} = 12MR \, .$$ So $\phi \in {\cal D}' ({\mathbb{R}}_t \times ]0,+\infty [_r \times S^2_\omega)$ satisfies $$\label{WEqPhys} \square_g \phi =0$$ if and only if $\hat{\phi} = \Omega^{-1} \phi$ satisfies $$\label{WEqResc} (\square_{\hat{g}} + 2MR ) \hat{\phi} =0 \, .$$ By the classical theory of hyperbolic partial differential equations (see Leray [@Le1953]), for smooth and compactly supported initial data $\hat{\phi}_0$ and $\hat{\phi}_1$ on $\Sigma_0$, we have the following properties : - there exists a unique $\hat{\phi} \in {\cal C}^\infty ({\cal M})$ solution of the rescaled equation such that $$\hat{\phi} \vert_{\Sigma_0} = \hat{\phi}_0 \mbox{ and } \partial_t \hat{\phi} \vert_{\Sigma_0} = \hat{\phi}_1 \, ,$$ - $\hat{\phi}$ extends as a smooth function on $\bar{\cal M}$ and therefore has a smooth trace on ${{\mathscr H}}^\pm \cup {{\mathscr I}}^\pm$. 
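The equivalence between the physical and rescaled equations amounts, with $\hat\phi = r\phi$, to the identity $(\square_{\hat g} + 2MR)(r\phi) = r^3 \square_g \phi$ (the conformal factor $\Omega^{-3}$ appearing on the right is the standard one, $\mathrm{Scal}_g$ being zero). For spherically symmetric fields, where the angular terms drop out, this can be checked symbolically; below is a sketch in Python's `sympy` (an illustration, not part of the paper), written in the variable $r$ via $\partial_{r_*} = F\partial_r$.

```python
import sympy as sp

t, r, M = sp.symbols('t r M', positive=True)
F = 1 - 2*M/r
phi = sp.Function('phi')(t, r)   # spherically symmetric field

def d_rstar(f):
    """∂_{r_*} = F ∂_r on functions of (t, r)."""
    return F*sp.diff(f, r)

# Spherically symmetric part of □_g φ:
box_g = (sp.diff(phi, t, 2) - d_rstar(r**2*d_rstar(phi))/r**2)/F

# (□_ĝ + 2MR) applied to φ̂ = rφ, with R = 1/r:
phihat = r*phi
box_ghat = (r**2/F)*(sp.diff(phihat, t, 2) - d_rstar(d_rstar(phihat)))
lhs = box_ghat + (2*M/r)*phihat

assert sp.simplify(lhs - r**3*box_g) == 0
print("(□_ĝ + 2MR)(rφ) = r³ □_g φ verified")
```

In particular, $\square_g \phi = 0$ forces $(\square_{\hat g} + 2MR)\hat\phi = 0$ and conversely, as stated above.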
The D’Alembertians for the metrics $g$ and $\hat{g}$ have the following expressions in variables $(t, r_*, \omega)$ : $$\begin{aligned} \square_g &=& \frac{1}{F} \left( \frac{\partial^2}{\partial t^2} - \frac{1}{r^2} \frac{\partial}{\partial r_*} r^2 \frac{\partial}{\partial r_*} \right) - \frac{1}{r^2} \Delta_{S^2} \, ,\\ \square_{\hat{g}} &=& \frac{r^2}{F} \left( \frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial r_*^2} \right) - \Delta_{S^2} \, .\end{aligned}$$ The volume forms associated with $g$ and $\hat{g}$ are $$\begin{aligned} {\mathrm{dVol}}_g &=& r^2 \sin \theta {\mathrm{d}}t \wedge {\mathrm{d}}r \wedge {\mathrm{d}}\theta \wedge {\mathrm{d}}\varphi = r^2 {\mathrm{d}}t \wedge {\mathrm{d}}r \wedge {\mathrm{d}}^2 \omega = r^2 F {\mathrm{d}}t \wedge {\mathrm{d}}r_* \wedge {\mathrm{d}}^2 \omega \, ,\\ {\mathrm{dVol}}_{\hat{g}} &=& \Omega^4 {\mathrm{dVol}}_g = R^2{\mathrm{d}}t \wedge {\mathrm{d}}r \wedge {\mathrm{d}}^2 \omega = R^2 F {\mathrm{d}}t \wedge {\mathrm{d}}r_* \wedge {\mathrm{d}}^2 \omega \, ,\end{aligned}$$ ${\mathrm{d}}^2 \omega$ being the euclidean area element on $S^2$. Energy identities {#EnIdent} ================= The usual stress-energy tensor for the wave equation is not conformally invariant. We have therefore two possible approaches to establish energy identities or inequalities. 1. Work with the rescaled quantities $\hat{\phi}$ and $\hat{g}$. The main advantage is that for all $T>0$, the four hypersurfaces $\Sigma_0$, ${{\mathscr H}}^+_T$, $S_T$ and ${{\mathscr I}}^+_T$ are finite hypersurfaces in our rescaled spacetime (except for the part of $\Sigma_0$ and ${{\mathscr I}}^+$ near $i^0$, but we will work with solutions supported away from $i^0$ to establish our energy identities). However, we encounter a problem of a different kind : the rescaled equation does not admit a conserved stress-energy tensor. 
Fortunately, it turns out that if we use the stress-energy tensor for the wave equation on the rescaled spacetime, and contract it with $\partial_t$, the error term is a divergence. Therefore, we recover an exact conservation law. 2. Work with the physical quantities $\phi$ and $g$. We have an immediate conserved stress energy tensor associated with the equation. The drawback here is that ${{\mathscr I}}$ is at infinity. So we must use our conservation law to get energy identities on finite closed hypersurfaces, then take the limit of these identities as some parts of the hypersurfaces approach ${{\mathscr I}}$. Both methods are in principle absolutely fine. We choose the first one since, thanks to the stationarity of Schwarzschild’s spacetime, it gives energy identities in a more direct manner[^4]. By the finite propagation speed, we know that for smooth compactly supported data on $\Sigma_0$, i.e. supported away from $i^0$, the associated solution of the rescaled equation vanishes in a neighbourhood of $i^0$. For such solutions, the singularity of the conformal metric at $i^0$ can be ignored and we obtain energy identities for all $T>0$ between the hypersurfaces $\Sigma_0$, ${{\mathscr H}}^+_T$, $S_T$ and ${{\mathscr I}}^+_T$. Then we show, using known decay results, that the energy flux through $S_T$ tends to zero as $T\rightarrow +\infty$. This yields an energy identity between $\Sigma_0$, ${{\mathscr H}}^+$ and ${{\mathscr I}}^+$, which carries over by density to initial data in a Hilbert space on $\Sigma_0$ (see section \[EnEstTInfinite\] for details). 
Conserved energy current for the rescaled field ----------------------------------------------- The stress-energy tensor for the wave equation associated with $\hat{g}$ is given by $$\label{SET} \hat{T}_{ab}= \hat\nabla_a \hat\phi \hat\nabla_b \hat\phi - \frac12 \langle \hat\nabla \hat\phi \, ,~ \hat\nabla \hat\phi \rangle_{\hat{g}} \, \hat{g}_{ab}\, .$$ When $\hat\phi$ is a solution of the rescaled equation, the divergence of $\hat{T}$ is $$\hat\nabla^a \hat{T}_{ab} = (\square_{\hat{g}} \hat\phi ) \hat\nabla_b \hat\phi = -2MR \hat\phi \hat\nabla_b \hat\phi \, .$$ The energy current $1$-form associated with static observers is obtained by contracting $\hat{T}$ with the timelike Killing vector $K=\partial_t$ : $$\hat{J}_a = K^b \hat{T}_{ab} \, .$$ This is not conserved since $$\label{DivCurrent} \hat\nabla^a \hat{J}_a = -2MR \hat\phi \partial_t \hat\phi \, .$$ Putting $$V = MR\hat\phi^2 \partial_t \, ,$$ it is easy to see that $$2MR \hat\phi \partial_t \hat\phi = \mathrm{div} V \, .$$ Indeed $$\mathrm{div} V = \hat\nabla_a V^a = \frac{\partial}{\partial t} \left( MR \hat\phi^2 \right) + {\hat\Gamma}^{\mathbf{a}}_{{\mathbf{a}}0} V^0$$ and in the coordinate system $(t,r,\theta,\varphi)$, all the Christoffel symbols ${\hat\Gamma}^{\mathbf{a}}_{{\mathbf{a}}0}$ are zero. So the divergence identity can be written as an exact conservation law $$\label{ConsLaw} \hat\nabla_a \left( \hat{J}^a + V^a \right) =0 \, ,~ \mbox{with } V = MR\hat\phi^2 \partial_t \, .$$ The vector $V$ is causal and future oriented on $\bar{\cal M}$, timelike on $\cal M$, and the stress-energy tensor $\hat{T}_{ab}$ satisfies the dominant energy condition. Therefore, the energy flux across achronal hypersurfaces will be non negative and that across spacelike hypersurfaces will be positive definite. We will observe these properties on the explicit expressions of the fluxes that we calculate in the next section. 
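The computation $\mathrm{div}\, V = 2MR\hat\phi\,\partial_t\hat\phi$ can be reproduced with the coordinate formula $\mathrm{div}\, W = |\hat g|^{-1/2}\partial_a (|\hat g|^{1/2} W^a)$ in the coordinates $(t,r,\theta,\varphi)$, where $|\hat g|^{1/2} = R^2\sin\theta$ (read off from $\mathrm{dVol}_{\hat g}$ above). A minimal symbolic sketch in Python's `sympy` (not part of the paper):

```python
import sympy as sp

t, r, th, M = sp.symbols('t r theta M', positive=True)
phihat = sp.Function('phihat')(t, r, th)

sqrt_det = sp.sin(th)/r**2     # |ĝ|^{1/2} = R² sinθ in coordinates (t, r, θ, φ)
Vt = (M/r)*phihat**2           # V = MR φ̂² ∂_t has only a t-component (R = 1/r)

# div V = |ĝ|^{-1/2} ∂_t(|ĝ|^{1/2} V^t); the other components of V vanish.
divV = sp.diff(sqrt_det*Vt, t)/sqrt_det
assert sp.simplify(divV - 2*(M/r)*phihat*sp.diff(phihat, t)) == 0
print("div V = 2MR φ̂ ∂_t φ̂ verified")
```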
Energy identity up to $S_T$ {#EnIdST} --------------------------- The conservation law gives an exact energy identity between the hypersurfaces $\Sigma_0$, ${{\mathscr H}}^+_T$, $S_T$ and ${{\mathscr I}}^+_T$, for solutions of the rescaled equation associated with smooth and compactly supported initial data. We denote by $\hat{\cal E}_{\partial_t , S}$ the rescaled energy flux, associated with $\partial_t$, across an oriented hypersurface $S$, i.e.[^5] $$\label{RescEnS} \hat{\cal E}_{\partial_t, S} = -4 \int_{S} * (\hat{J}_a + V_a ){\mathrm{d}}x^a \, .$$ For any $T>0$, we have $$\label{EnIdentityT} \hat{\cal E}_{\partial_t, \Sigma_0} = \hat{\cal E}_{\partial_t, {{\mathscr I}}^+_T} + \hat{\cal E}_{\partial_t, {{\mathscr H}}^+_T} + \hat{\cal E}_{\partial_t, S_T} \, .$$ The property of the Hodge star gives us an easy way to express the energy flux across an oriented $3$-surface $S$ $$\hat{\cal E}_{\partial_t, S} = -4\int_{S} * (\hat{J}_a + V_a ){\mathrm{d}}x^a = \int_S (\hat{J}_a+V_a)\hat{N}^a \, \hat{L}\lrcorner {\mathrm{dVol}}_{\hat{g}} \, ,$$ where $\hat{L}$ is a vector field transverse to $S$ and compatible with the orientation of the hypersurface, and $\hat{N}$ is the normal vector field to $S$ such that $\hat{g} (\hat{L},\hat{N})=1$. On $\Sigma_0$, we take $$\hat{L}= \frac{r^2}{F} \partial_t \, ,~\hat{N} = \partial_t \, .$$ On ${{\mathscr I}}^+$, we take for $\hat{L}$ the future-oriented null vector $\hat{L}_{{{\mathscr I}}^+} =-\partial_R$ in coordinates $u,R,\omega$. The vector field $-\partial_R$ in the exterior of the black hole is equal to $r^2 F^{-1} l$, with $l$ being the first principal null vector field given in , and extends smoothly to ${{\mathscr I}}^+$ : $$\hat{L}_{{{\mathscr I}}^+} = \left. r^2 F^{-1} l \right\vert_{{{\mathscr I}}^+} \, .$$ On ${{\mathscr H}}^+$, we choose $\hat{L}_{{{\mathscr H}}^+} = \partial_R$ (in coordinates $v,R,\omega$), i.e. $$\hat{L}_{{{\mathscr H}}^+} = \left. 
r^2 F^{-1} n \right\vert_{{{\mathscr H}}^+}\, ,$$ where $n$ is the second principal null vector field in . On both ${{\mathscr I}}^+$ and ${{\mathscr H}}^+$, we therefore have $\hat{N} = \partial_t$ (i.e. $\partial_v$ on ${{\mathscr H}}^+$ and $\partial_u$ on ${{\mathscr I}}^+$). Since $V \propto \partial_t$ and on ${{\mathscr I}}$ and ${{\mathscr H}}$ the vector field $\partial_t$ is null, we have $\hat{g} (V,\hat{N})=0$. The energy identity reads $$\begin{gathered} \int_{S_T} ((\hat{J}_a+V_a)\hat{N}^a) \, \hat{L}\lrcorner {\mathrm{dVol}}_{\hat{g}} + \int_{{{\mathscr I}}^+_T} (\hat{J}_a K^a) \, \hat{L}_{{{\mathscr I}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}} + \int_{{{\mathscr H}}^+_T} (\hat{J}_a K^a) \, \hat{L}_{{{\mathscr H}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}} \nonumber \\ = \int_{\Sigma_0} ((\hat{J}_a+V_a)K^a) \, r^2 F^{-1} \partial_t \lrcorner {\mathrm{dVol}}_{\hat{g}} \, . \label{EIT}\end{gathered}$$ We calculate the explicit expressions of the energy fluxes through ${{\mathscr I}}^+_T$, ${{\mathscr H}}^+_T$ and $\Sigma_0$ : $$\begin{aligned} \hat{\cal E}_{\partial_t, \Sigma_0} &=& \int_{\Sigma_0} (\hat{J}_a+V_a)K^a \, r^2 F^{-1} \partial_t \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ &=& \frac12 \int_{\Sigma_0} \left( (\partial_t \hat\phi )^2 + (\partial_{r_*} \hat\phi )^2 + R^2 F \vert \nabla_{S^2} \hat\phi \vert^2 + 2 MFR^3\hat\phi^2 \right) {\mathrm{d}}r_* {\mathrm{d}}^2 \omega \, ; \\ \hat{\cal E}_{\partial_t, {{\mathscr I}}^+_T} &=& \int_{{{\mathscr I}}^+_T} \hat{J}_a K^a \hat{L}_{{{\mathscr I}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}} = \int_{{{\mathscr I}}^+_T} (\hat\nabla_K \hat\phi )^2 \hat{L}_{{{\mathscr I}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ &=& \int_{{{\mathscr I}}^+_T} (\partial_u (\hat\phi \vert_{{{\mathscr I}}^+} ) )^2 {\mathrm{d}}u {\mathrm{d}}^2 \omega\, ; \\ \hat{\cal E}_{\partial_t, {{\mathscr H}}^+_T} &=& \int_{{{\mathscr H}}^+_T} \hat{J}_a K^a \hat{L}_{{{\mathscr H}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}} = \int_{{{\mathscr H}}^+_T} 
(\hat\nabla_K \hat\phi )^2 \hat{L}_{{{\mathscr H}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ &=& \int_{{{\mathscr H}}^+_T} (\partial_v (\hat\phi \vert_{{{\mathscr H}}^+}) )^2 {\mathrm{d}}v {\mathrm{d}}^2 \omega \, .\end{aligned}$$ We observe that the first flux defines a positive definite quadratic form and the two others non-negative quadratic forms. We now calculate the flux through $S_T$. To this purpose, we make explicit choices of vectors $\hat{L}$ and $\hat{N}$ on $S_T$. Let us denote $$\Psi (t,r_*,\omega ) = t - \sqrt{1+r_*^2} \, ,$$ so the hypersurface $S_T$ is $$S_T = \{ (t,r_*,\omega) \, ; ~\Psi (t,r_*,\omega ) = T \} \, .$$ A co-normal to $S_T$ is given by $$N_a {\mathrm{d}}x^a = {\mathrm{d}}\Psi = {\mathrm{d}}t - \frac{r_*}{\sqrt{1+r_*^2}} {\mathrm{d}}r_*$$ and the associated normal vector field is $$\hat{N}^a = \hat{g}^{ab} N_b \, ,~ \mbox{i.e. } \hat{N}^a \frac{\partial}{\partial x^a} = r^2 F^{-1} \left( \frac{\partial}{\partial t} + \frac{r_*}{\sqrt{1+r_*^2}} \frac{\partial}{\partial r_*} \right) \, .$$ For the transverse vector $\hat{L}$, we can take $$\hat{L}^a \frac{\partial}{\partial x^a} = \frac{1+r_*^2}{1+2r_*^2} \left( \frac{\partial}{\partial t} - \frac{r_*}{\sqrt{1+r_*^2}} \frac{\partial}{\partial r_*} \right) \, ,$$ which is future-oriented and satisfies $\hat{L}_a \hat{N}^a =1$. We can now calculate the energy flux through $S_T$. 
First we have $$(\hat{J}_a+V_a )\hat{N}^a = MR \hat\phi^2 + \frac{r^2}{2F} \left( (\partial_t \hat\phi )^2 + (\partial_{r_*} \hat\phi )^2 + \frac{2r_*}{\sqrt{1+r_*^2}} \partial_t \hat\phi \partial_{r_*} \hat\phi + R^2F \vert \nabla_{S^2} \hat\phi \vert^2 \right) \, .$$ The contraction of $\hat{L}$ into the volume form for $\hat{g}$ is as follows $$\hat{L}\lrcorner {\mathrm{dVol}}_{\hat{g}} = \frac{1+r_*^2}{1+2r_*^2} R^2 F \sin \theta \left( {\mathrm{d}}r_* \wedge {\mathrm{d}}\theta \wedge {\mathrm{d}}\varphi + \frac{r_*}{\sqrt{1+r_*^2}} {\mathrm{d}}t \wedge {\mathrm{d}}\theta \wedge {\mathrm{d}}\varphi \right) \, .$$ On $S_T$, we have $${\mathrm{d}}t = \frac{r_*}{\sqrt{1+r_*^2}} {\mathrm{d}}r_* \, ,$$ and therefore $$\begin{aligned} \hat{L}\lrcorner {\mathrm{dVol}}_{\hat{g}} \vert_{S_T} &=& \frac{1+r_*^2}{1+2r_*^2} R^2 F \sin \theta \left( 1+ \frac{r_*^{2}}{1+r_*^2} \right) {\mathrm{d}}r_* \wedge {\mathrm{d}}\theta \wedge {\mathrm{d}}\varphi \\ &=& R^2 F \sin \theta {\mathrm{d}}r_* \wedge {\mathrm{d}}\theta \wedge {\mathrm{d}}\varphi \, .\end{aligned}$$ So we obtain $$\begin{aligned} \hat{\cal E}_{\partial_t, S_T} &:=& \int_{S_T} ((\hat{J}_a+V_a)N^a) \, \hat{L}\lrcorner {\mathrm{dVol}}_{\hat{g}} \nonumber \\ &=& \int_{S_T} \bigg[ MR \hat\phi^2 + \frac{r^2}{2F} \bigg( (\partial_t \hat\phi )^2 + (\partial_{r_*} \hat\phi )^2 \nonumber \\ && \hspace{0.3in}+ \frac{2r_*}{\sqrt{1+r_*^2}} \partial_t \hat\phi \partial_{r_*} \hat\phi + R^2F \vert \nabla_{S^2} \hat\phi \vert^2 \bigg)\bigg] R^2 F {\mathrm{d}}r_* {\mathrm{d}}^2 \omega \, . \label{FluxST}\end{aligned}$$ This is positive definite since $\vert r_* \vert < \sqrt{1+r_*^2}$ (and degenerates asymptotically as $\vert r_* \vert \rightarrow +\infty$). The energy fluxes across ${{\mathscr I}}^+_T$ and ${{\mathscr H}}^+_T$ are increasing non negative functions of $T$ and their sum is bounded by $\hat{\cal E}_{\partial_t, \Sigma_0}$, by the energy identity and the positivity of $\hat{\cal E}_{\partial_t, S_T}$. 
Therefore they admit limits as $T \rightarrow +\infty$ and these limits are $\hat{\cal E}_{\partial_t, {{\mathscr I}}^+}$ and $\hat{\cal E}_{\partial_t, {{\mathscr H}}^+}$. We have the following result : For smooth and compactly supported initial data on $\Sigma_0$, the energy fluxes of the rescaled solution across ${{\mathscr I}}^+$ and ${{\mathscr H}}^+$ are finite and satisfy $$\hat{\cal E}_{\partial_t, {{\mathscr I}}^+} + \hat{\cal E}_{\partial_t, {{\mathscr H}}^+} \leq \hat{\cal E}_{\partial_t, \Sigma_0} \, .$$ We have equality in the estimate above if and only if $$\label{LidVanishing} \lim_{T \rightarrow +\infty} \hat{\cal E}_{\partial_t, S_T} =0 \, .$$ In order to construct a conformal scattering theory, we merely need to prove this vanishing for a dense class of data, say smooth and compactly supported. The final identity will extend to minimum regularity initial data by density. Moreover, we can allow any loss of derivatives in the proof of the limit for smooth compactly supported data, since we do not need to prove that $\hat{\cal E}_{\partial_t, S_T}$ tends to zero uniformly in terms of the data. This is the object of subsection \[EnEstTInfinite\]. Function space of initial data ------------------------------ The scattering theory we are about to construct will be valid for a function space of initial data defined by the finiteness of the rescaled energy $\hat{\cal E}_{\partial_t, \Sigma_0}$. The analytic scattering theory constructed in [@Di1985] was valid for a function space of initial data defined by the finiteness of the energy of the physical field. It is interesting to notice that although the stress-energy tensor is not conformally invariant, the physical energy and the rescaled energy on $\Sigma_0$ are the same. Therefore, the function space of initial data for our conformal scattering theory is the same as in the analytic scattering theory of Dimock. Let us prove this. 
Consider the stress-energy tensor for the wave equation on the Schwarzschild metric $$\label{PhysicalSET} T_{ab} = \nabla_a \phi \nabla_b \phi - \frac12 \langle \nabla \phi \, ,~ \nabla \phi \rangle_g g_{ab} \, ,$$ which satisfies $$\nabla^a T_{ab} =0$$ for $\phi$ a solution of the wave equation. The physical energy current $1$-form associated with static observers is $$J_a = K^b T_{ab}$$ where $K$ is the timelike Killing vector field $K=\partial_t$. This is conserved $$\nabla^a J_a = 0 \, .$$ The associated energy flux through an oriented hypersurface $S$ is given by[^6] $$\label{PhysEnS} {\cal E}_{\partial_t , S} = -4 \int_S * J_a {\mathrm{d}}x^a \, .$$ Similarly to what we saw for the rescaled energy fluxes, this flux can be expressed more explicitly as $${\cal E}_{\partial_t, S} = \int_S J_a N^a \, L\lrcorner {\mathrm{dVol}}_{g} \, ,$$ where $L$ is a vector field transverse to $S$ and compatible with the orientation of the hypersurface, and $N$ is the normal vector field to $S$ such that $g (L,N)=1$. The energy fluxes $\hat{\cal E}_{\partial_t, \Sigma_0}$ and ${\cal E}_{\partial_t, \Sigma_0}$ are the same. [**Proof.**]{} A direct calculation shows that the physical energy flux across $\Sigma_0$ can be expressed in terms of $\hat\phi$ as follows $$\begin{aligned} {\cal E}_{\partial_t , \Sigma_0} &=& \frac12 \int_{\Sigma_0} \left( (\partial_t \hat\phi )^2 + (\partial_{r_*} \hat\phi )^2 + \frac{F}{r^2} \vert \nabla_{S^2} \hat\phi \vert^2 + \frac{FF'}{r} \hat\phi^2 \right) {\mathrm{d}}r_* \wedge {\mathrm{d}}^2 \omega \, , \\ &=& \frac12 \int_{\Sigma_0} \left( (\partial_t \hat\phi )^2 + (\partial_{r_*} \hat\phi )^2 + \frac{F}{r^2} \vert \nabla_{S^2} \hat\phi \vert^2 + F\frac{2M}{r^3} \hat\phi^2 \right) {\mathrm{d}}r_* \wedge {\mathrm{d}}^2 \omega \, ,\end{aligned}$$ which is exactly the expression of the rescaled energy flux $\hat{\cal E}_{\partial_t , \Sigma_0}$. 
We denote by $\cal H$ the completion of ${\cal C}^\infty_0 (\Sigma ) \times {\cal C}^\infty_0 (\Sigma )$ in the norm $$\Vert (\hat{\phi}_0 \, ,~ \hat{\phi}_1 ) \Vert_{\cal H} = \frac{1}{\sqrt{2}} \left( \int_{\Sigma} \left( (\hat\phi_1 )^2 + (\partial_{r_*} \hat\phi_0 )^2 + \frac{F}{r^2} \vert \nabla_{S^2} \hat\phi_0 \vert^2 + F\frac{2M}{r^3} \hat\phi_0^2 \right) {\mathrm{d}}r_* \wedge {\mathrm{d}}^2 \omega \right)^{1/2} \, .$$ The following result is classic. Its second part can be proved by Leray’s theorem combined with energy identities. Its first part may be established by either the same method or by a spectral approach (showing that the Hamiltonian for the evolution is self-adjoint on $\cal H$ as was done in [@Di1985] and [@Ni1995]). \[CauchyPb\] The Cauchy problem for the wave equation on $\cal M$ (and therefore also for the rescaled equation) is well-posed in $\cal H$, i.e. for any $(\hat{\phi}_0 \, ,~ \hat{\phi}_1 ) \in {\cal H}$, there exists a unique $\phi \in {\cal D}' ({\cal M})$ solution of the wave equation such that : $$(r \phi \, ,~ r\partial_t \phi ) \in {\cal C} ({\mathbb{R}}_t \, ;~ {\cal H}) \, ; ~ r \phi \vert_{t=0} = \hat{\phi}_0 \, ;~ r \partial_t \phi \vert_{t=0} = \hat{\phi}_1 \, .$$ Moreover, $\hat{\phi}=r\phi$ belongs to $H^1_{\mathrm{loc}} (\bar{\cal M} )$ (see Remark \[Hsloc1\] and Definition \[Hsloc2\] below). \[Hsloc1\] The notation $H^s_\mathrm{loc} (\bar{\cal M})$ is perhaps not ideal, Sobolev spaces being defined on open sets. What we mean by this notation is merely that the conformal boundary is seen as a finite boundary : only the neighbourhoods of $i^\pm$ and $i^0$ are considered as asymptotic regions in $\cal M$. With this in mind the definition of $H^s_\mathrm{loc} (\bar{\cal M})$, $s\in [0,+\infty [$ is unambiguous and goes as follows. 
\[Hsloc2\] Let $s \in [0,+\infty [$, a scalar function $u$ on ${\cal M}$ is said to belong to $H^s_{\mathrm{loc}} (\bar{\cal M})$ if for any local chart $(\Omega , \zeta )$, such that $\Omega \subset \cal M$ is an open set with smooth compact boundary in $\bar{\cal M}$ (note that this excludes neighbourhoods of either $i^\pm$ or $i^0$ but allows open sets whose boundary contains parts of the conformal boundary) and $\zeta$ is a smooth diffeomorphism from $\Omega$ onto a bounded open set $U \subset {\mathbb{R}}^4$ with smooth compact boundary, we have $u \circ \zeta^{-1} \in H^s ( U )$. Energy identity up to $i^+$ and trace operator {#EnEstTInfinite} ---------------------------------------------- Here, we prove the vanishing of the energy flux through $S_T$ for smooth and compactly supported data, using the estimates obtained by M. Dafermos and I. Rodnianski [@DaRoLN]. Theorem 4.1 in [@DaRoLN] contains sufficient information : an estimate giving decay of energy with a loss of 3 angular derivatives and one order of fall-off, as well as uniform decay estimates for more regular solutions with sufficiently fast fall-off at infinity. These are expressed in terms of quantities on the physical spacetime, i.e. unrescaled quantities. We need to make sure that they give the correct information for our energy on $S_T$, which is entirely expressed in terms of rescaled quantities ; this is not completely direct since the usual stress-energy tensor for the wave equation is not conformally invariant. We start by translating their estimates using the notations we have adopted here. Theorem 4.1 in [@DaRoLN] is expressed for a spacelike hypersurface for the metric $\hat{g}$ that crosses ${{\mathscr H}}^+$ and ${{\mathscr I}}^+$, i.e. an asymptotically hyperbolic hypersurface for $g$, defined by translation along $\partial_t$ of a reference asymptotically hyperbolic hypersurface. Our hypersurface $S_T$ fits in this framework. The content of the theorem is the following. 
(i) : Consider the stress-energy tensor for the wave equation on the Schwarzschild metric : $T_{ab}$ given by and let $\phi$ be a solution to the wave equation associated with smooth compactly supported data. Consider also a timelike vector field $\tau$ that is transverse to the horizon and equal to $\partial_t$ for $r$ large enough ; the vector $\tau^a$ is of the form $$\tau^a \partial_a = \alpha \partial_t + \beta \frac{1}{F} (\partial_t - \partial_{r_*} ) \, ,$$ where $\alpha \geq 1$, $\alpha =1$ for $r$ large enough and $\beta \geq 0$, $\beta =0$ for $r$ large enough. Denote by $j_a$ the unrescaled energy current $1$-form associated with $\tau$, $$j_a = \tau^b T_{ab} \, .$$ The physical energy flux, associated with $\tau$, of the solution $\phi$ across $S_T$ is given by $${\cal E}_{\tau ,S_T} = \int_{S_T} j_a N^a L \lrcorner {\mathrm{dVol}}_g \, ,$$ where $N^a$ is the normal vector field to $S_T$ associated via the metric $g$ to the co-normal ${\mathrm{d}}\Psi$, $$N_a {\mathrm{d}}x^a = {\mathrm{d}}\Psi \, ,~ N^a \frac{\partial}{\partial x^a} = g^{ab} N_b \frac{\partial}{\partial x^a} = F^{-1} \left( \frac{\partial}{\partial t} + \frac{r_*}{\sqrt{1+r_*^2}} \frac{\partial}{\partial r_*} \right) = \frac{1}{r^2} \hat{N}^a \frac{\partial}{\partial x^a} \, ,$$ and $$L^a \frac{\partial}{\partial x^a} = \hat{L}^a \frac{\partial}{\partial x^a} = \frac{1+r_*^2}{1+2r_*^2} \left( \frac{\partial}{\partial t} - \frac{r_*}{\sqrt{1+r_*^2}} \frac{\partial}{\partial r_*} \right) \, ,$$ so that $L_a N^a = g_{ab} L^a N^b = \hat{g}_{ab} \hat{L}^a \hat{N}^b = 1$. 
The energy flux ${\cal E}_{\tau ,S_T}$ decays as follows : $$\label{31} {\cal E}_{\tau ,S_T} \lesssim 1/T^2 \, .$$ (ii) : The solution also satisfies the following uniform decay estimates : $$\label{32} \sup_{S_T} \sqrt{r} \phi \lesssim 1/T\, ,~ \sup_{S_T} r \phi \lesssim 1/\sqrt{T} \, .$$ The constants in front of the powers of $1/T$ in the estimates of Theorem 4.1 in [@DaRoLN] involve some higher order weighted energy norms (third order for (i) and sixth order for (ii)) of the data, which are all finite in our case. The details of these norms are not important to us here. We merely need to establish that for any smooth and compactly supported data, the energy of the rescaled field on $S_T$ tends to zero as $T\rightarrow +\infty$. For smooth and compactly supported data $\phi$ and $\partial_t \phi$ at $t = 0$, there exists $K>0$ such that for $T$ large enough, $${\cal E}_{\partial_t ,S_T} \leq \frac{K}{T} \, .$$ [**Proof.**]{} First note that since $\alpha \geq 1$ and $\beta \geq 0$, thanks to the dominant energy condition, we have $$T_{ab} \tau^a N^b = \alpha T_{ab} (\partial_t)^a N^b + \beta T_{ab} (\partial_u)^a N^b \geq T_{ab} (\partial_t)^a N^b \, .$$ Hence the physical energy on $S_T$ associated with the vector field $\tau^a$ controls the physical energy on $S_T$ associated with the vector field $\partial_t$ : $$\label{EstEnDtTau} {\cal E}_{\tau ,S_T} \geq {\cal E}_{\partial_t ,S_T} \, .$$ Let us now compare the physical energy flux ${\cal E}_{\partial_t ,S_T}$ and the rescaled energy flux $\hat{\cal E}_{\partial_t ,S_T}$ using the relation $\hat\phi = r \phi$. 
First, we have $$\begin{aligned} T_{ab} (\partial_t)^a N^b &=& \frac{1}{2F} \left( (\partial_t \phi )^2 + (\partial_{r_*} \phi )^2 + 2 \frac{r_*}{\sqrt{1+r_*^2}} \partial_t \phi \partial_{r_*} \phi + \frac{F}{r^2} \vert \nabla_{S^2} \phi \vert^2 \right) \\ &=& \frac{1}{2r^2F} \left( (\partial_t \hat\phi )^2 + F^2(\partial_{r} \hat\phi - \frac{\hat\phi}{r})^2 + 2 \frac{Fr_*}{\sqrt{1+r_*^2}} \partial_t \hat\phi (\partial_{r} \hat\phi - \frac{\hat\phi}{r}) + \frac{F}{r^2} \vert \nabla_{S^2} \hat\phi \vert^2 \right) \end{aligned}$$ and since $L^a = \hat{L}^a$, $$L \lrcorner {\mathrm{dVol}}_g = r^4 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \, .$$ Therefore $${\cal E}_{\partial_t , S_T} = \frac12 \int_{S_T} \left( (\partial_t \hat\phi )^2 + F^2(\partial_{r} \hat\phi - \frac{\hat\phi}{r})^2 + 2 \frac{Fr_*}{\sqrt{1+r_*^2}} \partial_t \hat\phi (\partial_{r} \hat\phi - \frac{\hat\phi}{r}) + \frac{F}{r^2} \vert \nabla_{S^2} \hat\phi \vert^2 \right) \frac{r^4}{r^2 F} \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}}$$ and comparing with , we obtain $$\begin{aligned} \hat{\cal E}_{\partial_t , S_T} &=& {\cal E}_{\partial_t , S_T} + \int_{S_T} \frac{M}{r} \hat\phi^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} - \frac12 \int_{S_T} \left( F \frac{\hat\phi^2}{r^2} - 2 \frac{\hat\phi}{r} \partial_{r_*} \hat\phi - 2 \frac{r_*}{\sqrt{1+r_*^2}} \partial_t \hat\phi \frac{\hat\phi}{r} \right) r^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ &=& {\cal E}_{\partial_t , S_T} + \frac12 \int_{S_T} (\frac{2M}{r} -F) \hat\phi^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} + \frac12 \int_{S_T} 2 \frac{\hat\phi}{r} \left( \partial_{r_*} \hat\phi + \frac{r_*}{\sqrt{1+r_*^2}} \partial_t \hat\phi \right) r^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ & \leq & {\cal E}_{\partial_t , S_T} + \frac12 \int_{S_T} (\frac{2M}{r} -F) \hat\phi^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ && + \frac12 \int_{S_T} \left( 2F \frac{\hat\phi^2}{r^2} + \frac{1}{2F} \left( \partial_{r_*} \hat\phi + 
\frac{r_*}{\sqrt{1+r_*^2}} \partial_t \hat\phi \right)^2 \right)r^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ &\leq & {\cal E}_{\partial_t , S_T} + \frac12 \int_{S_T} (\frac{2M}{r} +F) \hat\phi^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ && + \frac12 \int_{S_T} \frac{1}{2F} \left( ( \partial_{r_*} \hat\phi )^2+ \frac{2r_*}{\sqrt{1+r_*^2}} \partial_t \hat\phi \partial_{r_*} \hat\phi + ( \partial_{t} \hat\phi )^2 \right)r^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ &\leq & {\cal E}_{\partial_t , S_T} + \frac12 \int_{S_T} \hat\phi^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} + \frac12 \hat{\cal E}_{\partial_t , S_T} \, .\end{aligned}$$ This gives us $$\hat{\cal E}_{\partial_t , S_T} \leq 2 {\cal E}_{\partial_t , S_T} + \int_{S_T} \hat\phi^2 \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \, .$$ The second estimate in says exactly that $$\sup_{S_T} \hat\phi^2 \lesssim \frac{1}{T} \, .$$ Since moreover $$\int_{S_T} \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} = \int_{{\mathbb{R}}\times S^2} \frac{F}{r^2} \sin \theta {\mathrm{d}}r_* {\mathrm{d}}\theta {\mathrm{d}}\varphi = \int_{[2M , +\infty [ \times S^2} \frac{1}{r^2} \sin \theta {\mathrm{d}}r {\mathrm{d}}\theta {\mathrm{d}}\varphi =\frac{2\pi}{M} <\infty$$ and ${\cal E}_{\partial_t , S_T} \lesssim 1/T^2$ by and , this concludes the proof of the proposition. The finiteness of the last integral in the proof is strongly related to the finiteness of the volume of $S_T$ for the measure $\hat{\mu}_{S_T}$ induced by $\hat{g}$. As one can readily guess from the definitions of $S_T$ and $\hat{g}$, the volume of $S_T$ for the measure $\hat{\mu}_{S_T}$ is independent of $T$. Figure \[3surface\] may be a little misleading in giving the impression that $S_T$ shrinks to a point; we must not forget that due to the way $\hat{g}$ is rescaled, $i^+$ is still at infinity. The volume of $S_T$ for $\hat{\mu}_{S_T}$ is easy to calculate. 
First we restrict $\hat{g}$ to $S_T$ using the explicit dependence of $t$ on $r_*$ on $S_T$ : $$\hat{g}\vert_{S_T} = - \left[ \frac{R^2 F}{1+r_*^2} {\mathrm{d}}r_*^2 +{\mathrm{d}}\omega^2 \right] \, .$$ Then we calculate $\hat{\mu}_{S_T}$ : $${\mathrm{d}}\hat{\mu}_{S_T} = \frac{R\sqrt{F}}{\sqrt{1+r_*^2}} {\mathrm{d}}r_* {\mathrm{d}}^2 \omega = \frac{R}{\sqrt{F} \sqrt{1+r_*^2}} {\mathrm{d}}r {\mathrm{d}}^2 \omega\, .$$ So the volume of $S_T$ for $\hat{\mu}_{S_T}$ is $$\mathrm{Vol}_{\hat{g}} (S_T) = 4\pi \int_{2M}^{+\infty} \frac{1}{r\sqrt{F} \sqrt{1+r_*^2}} {\mathrm{d}}r = 4\pi \int_{2M}^{+\infty} \frac{{\mathrm{d}}r}{\sqrt{r^2-2Mr} \sqrt{1+r_*^2}} < +\infty \, .$$ Note that $${\mathrm{d}}\hat{\mu}_{S_T} = \sqrt{\hat{g}_{ab} \hat{N}^a \hat{N}^b} \hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \, .$$ The two measures $\hat{\mu}_{S_T}$ and $\hat{L} \lrcorner {\mathrm{dVol}}_{\hat{g}}$ on $S_T$ are not uniformly equivalent since $$\sqrt{\hat{g}_{ab} \hat{N}^a \hat{N}^b} = \frac{r}{\sqrt{F} \sqrt{1+r_*^2}} \left\{ \begin{array}{ccl} { \simeq 1} & {\mbox{as}} & { r\rightarrow +\infty \, ,} \\ {\rightarrow +\infty} & {\mbox{as}} & { r \rightarrow 2M \, ,} \end{array} \right.$$ but this is integrable in the neighbourhood of $2M$. 
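The two finiteness claims above (the measure integral $2\pi/M$ and the convergence of $\mathrm{Vol}_{\hat{g}}(S_T)$) are easy to check numerically. The following sketch is purely illustrative and not part of the argument; the value $M=1$ and all names are our own choices. It removes the inverse square root singularity at $r=2M$ by the substitution $r = 2M + s^2$, under which the volume integrand $4\pi\,{\mathrm{d}}r/(\sqrt{r^2-2Mr}\sqrt{1+r_*^2})$ becomes the regular $8\pi\,{\mathrm{d}}s/(\sqrt{r}\sqrt{1+r_*^2})$, and checks that truncated midpoint sums stabilize as the cutoff grows.

```python
import math

M = 1.0  # illustrative mass parameter; any M > 0 behaves the same way

def rstar(r):
    # Regge-Wheeler coordinate r_* = r + 2M log(r/(2M) - 1), with F = 1 - 2M/r
    return r + 2*M*math.log(r/(2*M) - 1)

# Measure integral over S_T: 4*pi * int_{2M}^{+inf} dr / r^2 = 2*pi/M (exact).
measure = 4*math.pi/(2*M)

# Volume integral 4*pi * int_{2M}^{+inf} dr / (sqrt(r^2 - 2Mr) sqrt(1 + r_*^2)).
# Substituting r = 2M + s^2 (dr = 2s ds, sqrt(r^2 - 2Mr) = sqrt(r)*s) gives the
# regular integrand 8*pi / (sqrt(r) sqrt(1 + r_*(r)^2)) in the variable s.
def volume(s_max, n):
    h = s_max/n
    total = 0.0
    for k in range(n):
        s = (k + 0.5)*h          # midpoint rule avoids the endpoint s = 0
        r = 2*M + s*s
        total += 8*math.pi/(math.sqrt(r)*math.sqrt(1 + rstar(r)**2))*h
    return total

v1, v2 = volume(200.0, 100_000), volume(400.0, 200_000)
print(measure, v1, v2)  # the two truncations agree: the integral converges
```

The integrand decays like $8\pi/s^3$ for large $s$, so the tail beyond the cutoff is of order $4\pi/s_{\max}^2$, which is why the two truncations agree to a few parts in $10^4$.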
If we had normalized $N$ to start with, $$\tilde{N}^a = \frac{1}{\sqrt{\hat{g}_{cd} \hat{N}^c \hat{N}^d}} \hat{N}^a \, ,$$ and put $$\tilde{L}^a = \sqrt{\hat{g}_{cd} \hat{N}^c \hat{N}^d} \hat{L}^a \, ,$$ so that $\hat{g}_{ab} \tilde{N}^a \tilde{L}^b =1$, then we would have $${\mathrm{d}}\hat{\mu}_{S_T} = \tilde{L} \lrcorner {\mathrm{dVol}}_{\hat{g}} \, .$$ So we have the following result : \[PropEnergyIdentityFuture\] For smooth and compactly supported initial data on $\Sigma_0$, we have $$\label{EnergyIdentityFuture} \hat{\cal E}_{\partial_t, {{\mathscr I}}^+} + \hat{\cal E}_{\partial_t, {{\mathscr H}}^+} = \hat{\cal E}_{\partial_t, \Sigma_0} \, ,$$ with $$\begin{aligned} \hat{\cal E}_{\partial_t, \Sigma_0} &=& \int_{\Sigma_0} (\hat{J}_a+V_a)K^a \, r^2 F^{-1} \partial_t \lrcorner {\mathrm{dVol}}_{\hat{g}} \\ &=& \frac12 \int_{\Sigma_0} \left( (\partial_t \hat\phi )^2 + (\partial_{r_*} \hat\phi )^2 + R^2 F \vert \nabla_{S^2} \hat\phi \vert^2 + 2 MFR^3\hat\phi^2 \right) {\mathrm{d}}r_* {\mathrm{d}}^2 \omega \, ; \\ \hat{\cal E}_{\partial_t, {{\mathscr I}}^+} &=& \int_{{{\mathscr I}}^+} (\hat\nabla_K \hat\phi )^2 \hat{L}_{{{\mathscr I}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}} = \int_{{{\mathscr I}}^+} (\partial_u (\hat\phi \vert_{{{\mathscr I}}^+} ) )^2 {\mathrm{d}}u {\mathrm{d}}^2 \omega\, ; \\ \hat{\cal E}_{\partial_t, {{\mathscr H}}^+} &=& \int_{{{\mathscr H}}^+} (\hat\nabla_K \hat\phi )^2 \hat{L}_{{{\mathscr H}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}} = \int_{{{\mathscr H}}^+} (\partial_v (\hat\phi \vert_{{{\mathscr H}}^+}) )^2 {\mathrm{d}}v {\mathrm{d}}^2 \omega \, .\end{aligned}$$ We can extend this result to minimum regularity initial data (i.e. data in $\cal H$) by standard density arguments, provided we give a meaning to the energy fluxes across ${{\mathscr I}}$ and the horizon. 
We define a trace operator that to smooth and compactly supported initial data associates the future scattering data : Let $(\hat\phi_0 , \hat\phi_1 ) \in {\cal C}^\infty_0 (\Sigma_0 ) \times {\cal C}^\infty_0 (\Sigma_0 )$. Consider the solution of $\hat{\phi} \in {\cal C}^\infty ( \bar{\cal M} )$ such that $$\hat{\phi} \vert_{\Sigma_0} = \hat{\phi}_0 \, ,~ \partial_t \hat{\phi} \vert_{\Sigma_0} = \hat{\phi}_1 \, .$$ We define the trace operator ${\cal T}^+$ from ${\cal C}^\infty_0 (\Sigma_0 ) \times {\cal C}^\infty_0 (\Sigma_0 )$ to ${\cal C}^\infty ({{\mathscr H}}^+ ) \times {\cal C}^\infty ({{\mathscr I}}^+ )$ as follows $${\cal T}^+ (\hat\phi_0 , \hat\phi_1 ) = (\hat\phi \vert_{{{\mathscr H}}^+} , \hat\phi \vert_{{{\mathscr I}}^+} ) \, .$$ Then we extend this trace operator by density to $\cal H$ with values in the natural function space on ${{\mathscr H}}^+ \cup {{\mathscr I}}^+$ inherited from . \[FuncSpaceScatt\] We define on ${{\mathscr H}}^+ \cup {{\mathscr I}}^+$ the function space ${\cal H}^+$, completion of ${\cal C}^\infty_0 ({{\mathscr H}}^+ ) \times {\cal C}^\infty_0 ({{\mathscr I}}^+ )$ in the norm $$\begin{aligned} \Vert (\xi , \zeta ) \Vert_{{\cal H}^+} &=& \sqrt{ \int_{{{\mathscr H}}^+} \left( \hat\nabla_K \xi \right)^2 \hat{L}_{{{\mathscr H}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}} + \int_{{{\mathscr I}}^+} \left( \hat\nabla_K \zeta \right)^2 \hat{L}_{{{\mathscr I}}^+} \lrcorner {\mathrm{dVol}}_{\hat{g}}} \\ &=& \sqrt{ \int_{{{\mathscr H}}^+} \left( \partial_v \xi \right)^2 {\mathrm{d}}v {\mathrm{d}}^2 \omega + \int_{{{\mathscr I}}^+} \left( \partial_u \zeta \right)^2 {\mathrm{d}}u {\mathrm{d}}^2 \omega} \, ,\end{aligned}$$ i.e. $${\cal H}^+\simeq \dot{H}^1 ({\mathbb{R}}_v \, ;~ L^2 (S^2_\omega)) \times \dot{H}^1 ({\mathbb{R}}_u \, ;~ L^2 (S^2_\omega)) \, .$$ The homogeneous Sobolev space (also referred to as the Beppo-Levi space) of order one on ${\mathbb{R}}$ is a delicate object. 
It is not a function space in the usual sense that its elements should belong to $L^1_\mathrm{loc}$, nor is it even a distribution space (see for example [@So1983] for a precise study of the one and two-dimensional cases). It is a space of classes of equivalence modulo constants. The reason is that constants have zero $\dot{H}^1$ norm and can in addition be approached in $\dot{H}^1$ norm by smooth and compactly supported functions on ${\mathbb{R}}$ (using a simple dilation of a given smooth compactly supported function whose value at the origin is the constant we wish to approach). The definition of $\dot{H}^1 ({\mathbb{R}})$ by completion of ${\cal C}^\infty_0 ({\mathbb{R}})$ in the $\dot{H}^1$ norm makes it the space of the limits of Cauchy sequences where indistinguishable limits are identified, i.e. a space of classes of equivalence modulo constants. If one is reluctant to use classes of equivalence as scattering data, a more comfortable solution is to consider that the scattering data are in fact the traces of $\partial_t \hat{\phi}$ on ${{\mathscr H}}^+$ and ${{\mathscr I}}^+$ and the function space in each case is then merely $L^2 ({\mathbb{R}}\times S^2 )$. This is what Friedlander did in his 1980 paper [@Fri1980]. It is however not clear to me that he did so for precisely this reason. He had, in my opinion, deeper motives for making this choice, guided as he was by the desire to recover the Lax-Phillips translation representer. Whether one chooses to consider the scattering data as the traces of $\hat{\phi}$ (in $\dot{H}^1 ({\mathbb{R}}\, ;~ L^2 (S^2 ))$), or as the traces of $\partial_t \hat{\phi}$ (in $L^2 ({\mathbb{R}}\times S^2 )$) is purely a matter of taste; the two choices are equivalent. We infer from Proposition \[PropEnergyIdentityFuture\] the following theorem : \[ThmPartialIsometry\] The trace operator ${\cal T}^+$ extends uniquely as a bounded linear map from $\cal H$ to ${\cal H}^+$. It is a partial isometry, i.e. 
for any $(\hat{\phi}_0 , \hat{\phi}_1 ) \in {\cal H}$, $$\Vert {\cal T}^+ (\hat{\phi}_0 , \hat{\phi}_1 ) \Vert_{{\cal H}^+} = \Vert (\hat{\phi}_0 , \hat{\phi}_1 ) \Vert_{{\cal H}} \, .$$ An interesting property of second order equations is that once extended to act on minimal regularity solutions, the operator ${\cal T}^+$ can still be understood as a trace operator acting on the solution. We have seen in Proposition \[CauchyPb\] that finite energy solutions of belong to $H^1_\mathrm{loc} (\bar{\cal M})$ (see Remark \[Hsloc1\] and Definition \[Hsloc2\] for the definition of this function space). Elements of $H^1_\mathrm{loc} (\bar{\cal M})$ admit a trace at the conformal boundary that is locally $H^{1/2}$ on ${{\mathscr H}}^+ \cup {{\mathscr I}}^+$. This is a consequence of a standard property of elements of $H^s (\Omega )$ for $\Omega$ a bounded open set of ${\mathbb{R}}^n$ with smooth boundary ; a function $f \in H^s (\Omega )$, $s > 1/2$, admits a trace on the boundary $\partial \Omega$ of $\Omega$ that is in $H^{s-1/2} (\partial \Omega )$. Hence the extended operator ${\cal T}^+$ is still a trace operator in a usual sense, i.e. $${\cal T}^+ (\hat{\phi}_0 , \hat{\phi}_1 ) = (\hat{\phi} \vert_{{{\mathscr H}}^+} , \hat{\phi} \vert_{ {{\mathscr I}}^+ } ) \, .$$ This is in sharp contrast with what happens when working with first order equations like Dirac or Maxwell. In this case, finite energy solutions are in $L^2_\mathrm{loc} (\bar{\cal M})$ but in general not in $H^s_\mathrm{loc} (\bar{\cal M})$ for $s>0$. The density argument used in Theorem \[ThmPartialIsometry\] would still give us an extension of the operator ${\cal T}^+$, whose range would be $L^2 ({{\mathscr H}}^+) \times L^2 ({{\mathscr I}}^+ )$. This extended operator could not however be understood as a trace operator in the usual sense mentioned above, the regularity of the solutions being too weak. 
Scattering theory {#Scattering} ================= The construction of a conformal scattering theory on the Schwarzschild spacetime consists in solving a Goursat problem for the rescaled field on ${{\mathscr H}}^- \cup {{\mathscr I}}^-$ and on ${{\mathscr H}}^+ \cup {{\mathscr I}}^+$. In this section, we first solve the Goursat problem on ${{\mathscr H}}^+ \cup {{\mathscr I}}^+$, the construction being similar in the past. Then we show that the conformal scattering theory entails a conventional analytic scattering theory defined in terms of wave operators. Since the exterior of a Schwarzschild black hole is static and the global timelike Killing vector $\partial_t$ extends as the null generator of ${{\mathscr I}}^\pm$ and ${{\mathscr H}}^\pm$, it is easy to show that the past (resp. future) scattering data, i.e. the trace of the rescaled field on ${{\mathscr H}}^- \cup {{\mathscr I}}^-$ (resp. on ${{\mathscr H}}^+ \cup {{\mathscr I}}^+$), is a translation representer of the scalar field. We have a natural link between the conformal scattering theory and the Lax-Phillips approach, analogous to the one Friedlander established in his class of spacetimes. The difference is that in our case, the scattering data consist of a pair of data : the trace of the rescaled field on null infinity (which is exactly the radiation field) and on the horizon. The Goursat problem and the scattering operator ----------------------------------------------- We solve the Goursat problem on ${{\mathscr H}}^+ \cup {{\mathscr I}}^+$ following Hörmander [@Ho1990] : the principle is to show that the trace operator ${\cal T}^+$ is an isomorphism between $\cal H$ and ${\cal H}^+$. Theorem \[ThmPartialIsometry\] entails that ${\cal T}^+$ is one-to-one and that its range is a closed subspace of ${\cal H}^+$. Therefore, all we need to do is to show that its range is dense in ${\cal H}^+$. Let $(\xi , \zeta) \in {\cal C}^\infty_0 ({{\mathscr H}}^+) \times {\cal C}^\infty_0 ({{\mathscr I}}^+)$, i.e. 
the support of $\xi$ remains away from both the crossing sphere and $i^+$ and the support of $\zeta$ remains away from both $i^+$ and $i^0$ ; in other words, the values of $v$ remain bounded on the support of $\xi$ and the values of $u$ remain bounded on the support of $\zeta$. We wish to show the existence of $\hat\phi$ solution of such that $$( \hat{\phi } , \partial_t \hat{\phi} ) \in {\cal C} ({\mathbb{R}}_t \, ;~ {\cal H} ) \mbox{ and } {\cal T}^+ (\hat{\phi} \vert_{\Sigma_0} \, ,~ \partial_t \hat{\phi} \vert_{\Sigma_0} ) = (\xi , \zeta ) \, .$$ For such data the singularity at $i^+$ is not seen. We must however deal with the singularity at $i^0$. We proceed in two steps. First, we consider $\cal S$ a spacelike hypersurface for $\hat{g}$ on $\bar{\cal M}$ that crosses ${{\mathscr I}}^+$ in the past of the support of $\zeta$ and meets the horizon at the crossing sphere. The compact support of the data on ${{\mathscr I}}^+$ allows us to apply the results of Hörmander [@Ho1990] even though we are not working with a spatially compact spacetime with a product structure (see Appendix \[HormGP\] for details). 
We know from [@Ho1990] that there exists a unique solution $\hat{\Phi}$ of such that : - $\hat{\Phi} \in H^1 ({\cal I}^+ ({\cal S}))$, where ${\cal I}^+ ({\cal S})$ is the causal future of $\cal S$ in $\bar{\cal M}$ ; here we do not need to distinguish between $H^1 ({\cal I}^+ ({\cal S}))$ and $H^1_\mathrm{loc} ({\cal I}^+ ({\cal S}))$ because, due to the compact support of the Goursat data, the solution vanishes in a neighbourhood of $i^+$ ; - given any foliation of ${\cal I}^+ ({\cal S})$ by $\hat{g}$-spacelike hypersurfaces $\{ {\cal S}_{\tau} \}_{\tau \geq 0}$, such that ${\cal S}_0 = {\cal S}$ (see figure \[Foliation\]), $\hat{\Phi}$ is continuous in $\tau$ with values in $H^1$ of the slices and ${\cal C}^1$ in $\tau$ with values in $L^2$ of the slices ; in fact we have a slightly stronger property, see Appendix \[HormGP\] for a precise statement ; - $\hat{\Phi} \vert_{{{{\mathscr I}}^+}} = \zeta$, $\hat{\Phi} \vert_{{{{\mathscr H}}^+}} = \xi$. ![A foliation $\{ {\cal S}_{\tau} \}_{\tau \geq 0}$ of $\overline{{\cal I}^+ ({\cal S})}$.[]{data-label="Foliation"}](Foliation.jpg){width="4in"} Second, we extend the solution down to $\Sigma_0$ in a manner that avoids the singularity at $i^0$. The crucial remark is that the restriction of $\hat{\Phi}$ to $\cal S$ is in $H^1 ({\cal S})$ and its trace on ${\cal S} \cap {{\mathscr I}}^+$ is also the trace of $\zeta$ on ${\cal S} \cap {{\mathscr I}}^+$, which is zero because of the way we have chosen $\cal S$. It follows that $\hat{\Phi} \vert_{\cal S}$ can be approached by a sequence $\{ \hat{\phi}^n_{0,{\cal S}} \}_{n\in {\mathbb{N}}}$ of smooth functions on $\cal S$ supported away from ${{\mathscr I}}^+$ that converge towards $\hat{\Phi} \vert_{\cal S}$ in $H^1 ({\cal S})$. 
And of course $\partial_t \hat{\Phi} \vert_{{\cal S}}$ can be approached by a sequence $\{ \hat{\phi}^n_{1,{\cal S}} \}_{n\in {\mathbb{N}}}$ of smooth functions on $\cal S$ supported away from ${{\mathscr I}}^+$ that converge towards $\partial_t \hat{\Phi} \vert_{{\cal S}}$ in $L^2 ({\cal S})$. Consider $\hat{\phi}^n$ the smooth solution of on $\overline{\cal M}$ with data $( \hat{\phi}^n_{0,{\cal S}}\, ,~ \hat{\phi}^n_{1,{\cal S}})$ on $\cal S$. This solution vanishes in the neighbourhood of $i^0$ and we can therefore perform energy estimates for $\hat{\phi}^n$ between $\cal S$ and $\Sigma_0$ : we have the energy identity $$\label{EnIdentSSigma0} {\cal E}_{\partial_t} ({\cal S} , \hat{\phi}^n ) = {\cal E}_{\partial_t} (\Sigma_0 , \hat{\phi}^n ) \, .$$ The $H^1 \times L^2$ norm on ${\cal S}$ is that induced by the rescaled metric $\hat{g}$. This is not equivalent to the norm induced by the energy ${\cal E}_{\partial_t}$ on $\cal S$, but the $H^1 \times L^2$ norm controls the other, which degenerates near null infinity and the crossing sphere. Consequently, $( \hat{\phi}^n_{0,{\cal S}}\, ,~ \hat{\phi}^n_{1,{\cal S}})$ is a Cauchy sequence also in the energy norm on $\cal S$. Similar energy identities between $\cal S$ and the hypersurfaces $\Sigma_t$ entail that $(\hat{\phi}^n , \partial_t \hat{\phi}^n )$ converges in ${\cal C} ({\mathbb{R}}_t \, ;~{\cal H} ) $ towards $(\hat{\phi} \, ,~ \partial_t \hat{\phi} )$, where $\hat{\phi}$ is a solution of . By local uniqueness $\hat{\phi}$ coincides with $\hat{\Phi}$ in the future of $\cal S$. Hence if we denote $$\hat{\phi}_0 = \hat{\phi} \vert_{\Sigma_0} \, ,~ \hat{\phi}_1 = \partial_t \hat{\phi} \vert_{\Sigma_0} \, ,$$ we have $$( \hat{\phi}_0 \, ,~ \hat{\phi}_1 ) \in {\cal H}$$ and $$(\xi , \zeta ) = {\cal T}^+ ( \hat{\phi}_0 \, ,~ \hat{\phi}_1 ) \, .$$ This shows that the range of ${\cal T}^+$ contains ${\cal C}^\infty_0 ({{\mathscr H}}^+) \times {\cal C}^\infty_0 ({{\mathscr I}}^+)$ and is therefore dense in ${\cal H}^+$. 
We have proved the following theorem. \[ThmGoursatPb\] The trace operator ${\cal T}^+$ is an isometry from $\cal H$ onto ${\cal H}^+$. We introduce in a similar manner the past trace operator ${\cal T}^-$ and the space ${\cal H}^-$ of past scattering data[^7]. We define the scattering operator $S$ as the operator that to the past scattering data associates the future scattering data, i.e. $$S := {\cal T}^+ ({\cal T}^-)^{-1} \, .$$ The scattering operator is an isometry from ${\cal H}^-$ onto ${\cal H}^+$. Wave operators {#WaveOps} -------------- A conformal scattering construction such as the one we have just established can be re-interpreted as a scattering theory defined in terms of wave operators. This re-interpretation is more an a posteriori embellishment than a fundamental aspect of the theory, but it is interesting to realize that such fundamental objects of analytic scattering as wave operators can be recovered from a purely geometrical construction which remains valid in time dependent geometries. To be completely precise, it is the inverse wave operators and the asymptotic completeness that we recover from the conformal scattering theory ; the direct wave operators are obtained in the classic analytic manner involving Cook’s method. This choice is guided by simplicity and the flexibility of the method. The proof of existence of direct wave operators using Cook’s method is the simplest part of analytic time-dependent scattering theory. Moreover, provided we have sufficiently explicit asymptotic information on our spacetime and good uniform energy estimates (without which we have in any case little hope of constructing a conformal scattering theory), it can be easily extended to fairly general non-stationary geometries, using a comparison dynamics that is defined geometrically, namely the flow of a family of null geodesics in the neighbourhood of the conformal boundary. 
The existence of inverse wave operators and asymptotic completeness, that we deduce from the conformal scattering construction in a direct manner, are the difficult aspects of analytic scattering. When constructing wave operators using a conformal scattering theory, there is, just as for analytic scattering, some freedom in the choice of comparison dynamics, as well as some complications inherent to the fact that the full and simplified dynamics often act on different function spaces, defined on different manifolds that may not have the same topology. The freedom of choice is two-fold. First we may choose different types of dynamics : for the wave equation, we may wish to compare near infinity with the wave equation on flat spacetime or with a geometrically defined transport equation. In analytic scattering, the choice of comparison dynamics essentially fixes the space of scattering data as the finite energy space for the simplified Hamiltonian. In contrast, in conformal scattering, the energy space of scattering data is imposed by the energy estimates ; that is to say, the choice of vector field that we contract the stress-energy tensor with in order to get an energy current, fixes the functional framework, for both the scattering data and the initial data in fact. The comparison dynamics is then an additional choice, not completely determined by the space of scattering data. For instance, with a rather strong control on scattering data that seems to indicate the full flat spacetime wave equation as a natural simplified dynamics, we may yet choose a transport equation. All we really need is that the function space and the dynamics are compatible : the comparison dynamics can usually be expressed as an evolution equation on the space of scattering data, whose coefficients are independent of the time parameter ; this compatibility then simply means that the Hamiltonian should be self-adjoint. Second, for a given type of dynamics, there may still be some freedom. 
Say, if we choose a transport equation along a family of curves whose end-points span the conformal boundary, two different families of curves with the same end-points would work just as well. In [@MaNi2004], a conformal scattering construction on asymptotically simple spacetimes was re-interpreted as an analytic scattering theory defined in terms of wave operators. The comparison dynamics was determined by a null geodesic congruence in the neighbourhood of ${{\mathscr I}}$, for which there are many choices. Also, some cut-off was required in a compact region in space, in order to avoid caustics. In the case we are considering here, the Schwarzschild geometry is sufficiently special that it singles out two congruences of null geodesics. Moreover, the topology of the spacetime (or equivalently the fact that the scattering data are specified on two disjoint null hypersurfaces instead of one in the asymptotically simple case) means that no cut-off is required. The Schwarzschild spacetime is algebraically special of Petrov type D ; the four roots of the Weyl tensor are grouped at each point as two double principal null directions : $\partial_t \pm \partial_{r_*}$. The two principal null congruences provide two preferred families of null curves along which to define a comparison dynamics. We now proceed to introduce the full and the comparison dynamics as well as the other ingredients of the wave operators. We denote by ${\cal U} (t)$ the propagator for the wave equation on the finite energy space $\cal H$, i.e. for data $(\hat{\phi}_0 \, ,~ \hat{\phi}_1 ) \in {\cal H}$ at $t=0$, given $(\hat{\phi} \, ,~ \partial_t \hat{\phi} ) \in {\cal C} ({\mathbb{R}}_t \, ;~ {\cal H} )$ the associated solution of , we have $${\cal U} (t) (\hat{\phi}_0 \, ,~ \hat{\phi}_1 ) = (\hat{\phi} (t) \, ,~ \partial_t \hat{\phi} (t) ) \, .$$ The propagator ${\cal U} (t) $ is a strongly continuous one-parameter group of isometries on $\cal H$. 
The comparison dynamics, denoted by ${\cal U}_0 (t)$, acts on pairs of functions on $\Sigma_0$ as the push-forward along the flow of the incoming principal null geodesics on the first function, and the push-forward along the flow of the outgoing principal null geodesics on the second function. Considered as an operator on pairs of functions on the generic slice $\Sigma$, it acts as a translation to the left on the first function and a translation to the right on the second : $${\cal U}_0 (t) (\xi , \zeta ) (r_*,\omega) = \left( \xi (r_*+t , \omega ) , \zeta (r_*-t , \omega ) \right) \, .$$ It is a strongly continuous one-parameter group of isometries on $$\label{EnSpaceFree} {\cal H}_0 = \dot{H}^1 ({\mathbb{R}}_{r_*} \, ;~ L^2 (S^2_\omega )) \times \dot{H}^1 ({\mathbb{R}}_{r_*} \, ;~ L^2 (S^2_\omega )) \, .$$ For our definition of direct and inverse wave operators, we need, in addition to the two dynamics ${\cal U} (t)$ and ${\cal U}_0 (t)$, an identifying operator, two cut-off functions and a pull-back operator between functions on the future conformal boundary and pairs of functions on $\Sigma_0$. 1. In order to obtain explicit formulae, we use on ${{\mathscr H}}^+$ the coordinates $(v,\omega)$, on ${{\mathscr I}}^+$ the coordinates $(-u,\omega)$ and on $\Sigma_0$ we use $(r_* , \omega )$. Both for functions on ${{\mathscr H}}^+$ and ${{\mathscr I}}^+$, we shall denote by $\partial_s$ the partial derivative with respect to their first variable, i.e. for $\xi$ a function on ${{\mathscr H}}^+$, $$\partial_s \xi = \partial_v \xi$$ and for a function $\zeta$ on ${{\mathscr I}}^+$, $$\partial_s \zeta = - \partial_u \zeta \, .$$ 2. 
We define the identifying operator $${\cal J} \, :~ {\cal C}^\infty_0 (\Sigma ) \times {\cal C}^\infty_0 (\Sigma ) \rightarrow {\cal C}^\infty_0 (\Sigma ) \times {\cal C}^\infty_0 (\Sigma )$$ by $${\cal J} (\xi, \zeta ) (r_* , \omega ) = \left( \xi (r_* , \omega ) + \zeta (r_* , \omega ) \, ,~ \partial_s \xi (r_* , \omega ) - \partial_s\zeta (r_* , \omega ) \right) \, .$$ It combines pairs of functions on $\Sigma$ into initial data for equation . 3. We also define two cut-off functions $\chi_\pm \in {\cal C}^\infty ({\mathbb{R}}) $, $\chi_+$ non decreasing on ${\mathbb{R}}$, $\chi_+ \equiv 0$ on $]-\infty , -1]$, $\chi_+ \equiv 1$ on $[1,+\infty [$, $\chi_- = 1-\chi_+$. They will be used with the variable $r_*$ in order to isolate a part of the field living in a neighbourhood of either null infinity or the horizon. They can also be understood as functions on the exterior of the black hole ; we shall usually simply denote $\chi_\pm$ without referring explicitly to their argument. 4. We introduce the operator $$P^+ \, : ~ {\cal C}^\infty_0 ({{\mathscr H}}^+) \times {\cal C}^\infty_0 ({{\mathscr I}}^+) \longrightarrow {\cal C}^\infty_0 (\Sigma_0) \times {\cal C}^\infty_0 (\Sigma_0)$$ that pulls back the first function along the flow of incoming principal null geodesics and the second along the flow of outgoing principal null geodesics. By the definition of the variables $u=t-r_*$ and $v=t+r_*$, in terms of coordinates $(r_*,\omega)$ on $\Sigma_0$, $(v,\omega )$ on ${{\mathscr H}}^+$ and $(-u,\omega)$ on ${{\mathscr I}}^+$, the action of $P^+$ can be described very simply : take $(\xi (v,\omega) , \zeta (-u,\omega)) \in {\cal C}^\infty_0 ({{\mathscr H}}^+) \times {\cal C}^\infty_0 ({{\mathscr I}}^+)$, $$P^+ (\xi , \zeta ) (r_* , \omega ) = (\xi (r_*, \omega) , \zeta (r_*, \omega) ) \, .$$ The operator $P^+$ is an isometry from ${\cal H}^+$ onto ${\cal H}_0$ (see and Definition \[FuncSpaceScatt\]). 
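The statement that ${\cal U}_0 (t)$ is a group of isometries of ${\cal H}_0$ amounts to the elementary fact that translations preserve the $\dot{H}^1 ({\mathbb{R}}\, ;~ L^2 (S^2 ))$ norm of each slot. As a quick sanity check (purely illustrative, all names and parameter values are ours), the following sketch discretizes a single angular mode on a uniform $r_*$ grid, applies the two opposite translations, and verifies that the discrete $\dot{H}^1$ norms are unchanged.

```python
import math

# Uniform r_* grid and a C^infty bump profile (one angular mode only).
h = 0.01
grid = [-20.0 + k*h for k in range(4001)]     # r_* in [-20, 20]

def bump(x, c=0.0):
    # smooth profile supported in (c - 1, c + 1)
    u = x - c
    return math.exp(-1.0/(1.0 - u*u)) if abs(u) < 1.0 else 0.0

def hdot1(f):
    # discrete homogeneous H^1 norm: L^2 norm of the difference quotient
    return math.sqrt(sum(((f[k+1] - f[k])/h)**2 for k in range(len(f)-1))*h)

xi   = [bump(x) for x in grid]                # first slot (horizon datum)
zeta = [bump(x, 5.0) for x in grid]           # second slot (infinity datum)

# U_0(t): left translation on the first slot, right translation on the second
t = 3.0                                       # an exact multiple of h
xi_t   = [bump(x + t) for x in grid]          # xi(r_* + t, omega)
zeta_t = [bump(x - t, 5.0) for x in grid]     # zeta(r_* - t, omega)

# both discrete Hdot^1 norms are preserved (up to rounding)
print(abs(hdot1(xi) - hdot1(xi_t)), abs(hdot1(zeta) - hdot1(zeta_t)))
```

Since $t$ is an exact multiple of the grid spacing, the translated samples are a pure index shift and the two norms agree to machine precision; the supports stay well inside the grid, mimicking the compact support assumption on the scattering data.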
The operator $P^+$ provides an identification between the conformal scattering data (that are functions defined on the conformal boundary) and initial data for the comparison dynamics (seen as acting between the slices $\Sigma_t$). \[WaveOps\] The direct future wave operator, defined for smooth compactly supported scattering data $$(\xi , \zeta) \in {\cal C}^\infty_0 ({{\mathscr H}}^+ ) \times {\cal C}^\infty_0 ({{\mathscr I}}^+ )$$ by $$W^+ (\xi , \zeta) := \lim_{t\rightarrow +\infty} {\cal U} (-t) {\cal J} \, {\cal U}_0 (t) P^+ (\xi , \zeta) \, ,$$ extends as an isometry from ${\cal H}^+$ onto $\cal H$. The inverse future wave operator, defined for smooth compactly supported initial data for $$(\hat{\phi}_0 , \hat{\phi}_1 ) \in {\cal C}^\infty_0 (\Sigma_0 ) \times {\cal C}^\infty_0 (\Sigma_0 )$$ by $$\tilde{W}^+ (\hat{\phi}_0 , \hat{\phi}_1 ) = \lim_{t\rightarrow +\infty} (P^+)^{-1} \, {\cal U}_0 (-t) \left( \begin{array}{cc} \chi_- & 0 \\ \chi_+ & 0 \end{array} \right) {\cal U} (t) (\hat{\phi}_0 , \hat{\phi}_1 ) \, ,$$ extends as an isometry from $\cal H$ onto ${\cal H}^+$. Moreover, we have $$\begin{gathered} \tilde{W}^+ = {\cal T}^+ = \operatorname*{\mathrm{s}\,--\lim ~}_{t\rightarrow +\infty} (P^+)^{-1} \, {\cal U}_0 (-t) \left( \begin{array}{cc} \chi_- & 0 \\ \chi_+ & 0 \end{array} \right) {\cal U} (t) \, , \label{InvWOpSLim} \\ \tilde{W}^+ = (W^+)^{-1} \, .\label{InvWOp}\end{gathered}$$ It is important to understand that as soon as we have proved that $\tilde{W}^+ = {\cal T}^+$, we have established the asymptotic completeness, since ${\cal T}^+$ is an isometry from $\cal H$ onto ${\cal H}^+$. The proof of only relies on the conformal scattering construction. Once is established, all that remains to prove is the existence of the direct wave operator, which we do using Cook’s method. The fact that $W^+$ is the inverse of $\tilde{W}^+$ is an immediate consequence of as we shall see. 
The expressions of the wave operators can be simplified a little if we consider ${{\mathscr I}}^+$ as the family of outgoing principal null geodesics and ${{\mathscr H}}^+$ as the family of incoming principal null geodesics. With this viewpoint, the comparison dynamics seen as acting on functions on ${{\mathscr H}}^+ \cup {{\mathscr I}}^+$ reduces to the identity. We introduce a family of projections ${\cal P}_t$ that to a pair of functions $(\xi , \zeta) \in {\cal C}^\infty_0 ({{\mathscr H}}^+ ) \times {\cal C}^\infty_0 ({{\mathscr I}}^+)$ associates its realization as a pair of functions on $\Sigma_t$, which as functions of $(r_*,\omega)$ have the following expression : $$( \xi (r_*+t,\omega ) , \zeta (r_*-t , \omega )) \, .$$ The direct and inverse wave operators acting on $(\xi , \zeta)$ then become : $$\begin{aligned} W^+ (\xi , \zeta) &=& \lim_{t\rightarrow +\infty} {\cal U} (-t) {\cal J}{\cal P}_t (\xi , \zeta) \, ;\\ \tilde{W}^+ (\hat{\phi}_0 , \hat{\phi}_1 ) &=& \lim_{t\rightarrow +\infty} {\cal P}_t^{-1} \left( \begin{array}{cc} \chi_- & 0 \\ \chi_+ & 0 \end{array} \right) {\cal U} (t) (\hat{\phi}_0 , \hat{\phi}_1 ) \, .\end{aligned}$$ We keep the version of the theorem however in order to get a closer similarity with the usual analytic expression of wave operators. [**Proof of Theorem \[WaveOps\].**]{} All we need to do is prove that on a dense subspace of $\cal H$, $\tilde{W}^+$ is well-defined and coincides with ${\cal T}^+$, and that similarly, on a dense subspace of ${\cal H}^+$, $W^+$ is well-defined and coincides with $({\cal T}^+)^{-1}$. Let us consider $(\hat{\phi}_0 , \hat{\phi}_1 ) \in {\cal C}^\infty_0 (\Sigma_0 ) \times {\cal C}^\infty_0 (\Sigma_0 ) \subset {\cal H}$. We denote by $\hat{\phi}$ the associated solution of such that $(\hat{\phi} , \partial_t \hat{\phi}) \in {\cal C} ({\mathbb{R}}_t ; {\cal H})$ and put $(\xi , \zeta ) = {\cal T}^+ (\hat{\phi}_0 , \hat{\phi}_1 )$. 
For $t>0$, the operator $$(P^+)^{-1} \, {\cal U}_0 (-t) \left( \begin{array}{cc} \chi_- & 0 \\ \chi_+ & 0 \end{array} \right) {\cal U} (t)$$ first propagates the solution $\hat{\phi}$ up to the slice $\Sigma_t$, then cuts off, using $\chi_-$ (resp. $\chi_+$), the part of $\hat{\phi} (t)$ near infinity (resp. near the horizon) and puts the result in the first (resp. second) slot. Finally, the combination $(P^+)^{-1} \, {\cal U}_0 (-t)$ is the push-forward of the function in the first slot onto ${{\mathscr H}}^+$ along the flow of incoming principal null geodesics, and the push-forward of the function in the second slot onto ${{\mathscr I}}^+$ along the flow of outgoing principal null geodesics. Since the support of the non-constant part of the cut-off functions $\chi_\pm$ on $\Sigma_t$ remains away from both ${{\mathscr I}}^+$ and ${{\mathscr H}}^+$ and accumulates at $i^+$ as $t \rightarrow +\infty$ (see figure \[SuppDerivativeCutOff\]), we have the following pointwise limit ![The support of the derivatives of the cut-off functions $\chi_\pm$[]{data-label="SuppDerivativeCutOff"}](SupportCutOff.jpg){width="4in"} $$\lim_{t\rightarrow +\infty} (P^+ )^{-1} {\cal U}_0 (-t) \left( \begin{array}{cc} \chi_- & 0 \\ \chi_+ & 0 \end{array} \right) {\cal U} (t) (\hat{\phi}_0 , \hat{\phi}_1 ) = (\xi , \zeta ) \, .$$ This already proves that $\tilde{W}^+$ is well-defined on smooth compactly supported initial data and coincides with ${\cal T}^+$ on this dense subset of $\cal H$. Therefore $\tilde{W}^+$ extends as the isometry ${\cal T}^+$ from $\cal H$ onto ${\cal H}^+$. Let us now prove that the convergence above takes place in ${\cal H}^+$. 
This means that $$\begin{aligned} \lim_{t \rightarrow +\infty} \int_{{\mathbb{R}}\times S^2} \left( \frac{\partial}{\partial v} \left( \chi_- (-t+v)\hat{\phi} (t, -t+v, \omega) - \xi (v , \omega) \right) \right)^2 {\mathrm{d}}v {\mathrm{d}}\omega &=&0 \, , \label{CvHorCInfty} \\ \lim_{t \rightarrow +\infty} \int_{{\mathbb{R}}\times S^2} \left( \frac{\partial}{\partial u} \left( \chi_+ (t-u)\hat{\phi} (t, t-u, \omega) - \zeta (-u , \omega) \right) \right)^2 {\mathrm{d}}u {\mathrm{d}}\omega &=&0 \, . \label{CvInftyCInfty}\end{aligned}$$ We prove \[CvHorCInfty\] ; the proof of \[CvInftyCInfty\] is similar. Since $\hat{\phi} \in {\cal C}^\infty ( \bar{\cal M})$, we have $$\frac{\partial}{\partial v} \left( \chi_- (-t+v)\hat{\phi} (t, -t+v, \omega) - \xi (v , \omega) \right) \rightarrow 0 \mbox{ in } L^2_\mathrm{loc} ({\mathbb{R}}_{v} \, ;~ L^2 (S^2 )) \, .$$ In particular, due to the compact support of the initial data, for any $v_0 \in {\mathbb{R}}$, $$\lim_{t \rightarrow +\infty} \int_{]-\infty , v_0 [ \times S^2} \left( \frac{\partial}{\partial v} \left( \chi_- (-t+v)\hat{\phi} (t, -t+v, \omega) - \xi (v , \omega) \right) \right)^2 {\mathrm{d}}v {\mathrm{d}}\omega =0 \, . \label{ConvCompact}$$ Let $\varepsilon >0$ and consider $T>0$ large enough that $\hat{\cal E}_{\partial_t , S_T} < \varepsilon$. As a consequence, the energy flux across the part of ${{\mathscr H}}^+$ in the future of $S_T$ is also less than $\varepsilon$ : $$\hat{\cal E}_{\partial_t , ({{\mathscr H}}^+ \setminus {{\mathscr H}}^+_T)} < \varepsilon \, .$$ We choose $t_0 >0$ large enough that for all $t>t_0$, the intersection of $\Sigma_t$ with the support of $\chi_-'$ is entirely in the future of $S_T$ ; we also choose $v_0 >0$ such that the null hypersurface $\{ v=v_0 \}$ intersects all $\Sigma_t$, $t>t_0$, entirely in the future of $S_T$ (see figure \[StrongLimit\] for an illustration of both choices). 
Then we have $$\begin{aligned} \int_{] v_0 , +\infty [ \times S^2} \left( \frac{\partial \xi}{\partial v} (v , \omega) \right)^2 {\mathrm{d}}v {\mathrm{d}}\omega &<& \varepsilon \, , \label{Small1}\\ \int_{] v_0 , +\infty [ \times S^2} \left( \frac{\partial}{\partial v} \left( \chi_- (-t+v)\hat{\phi} (t, -t+v, \omega) \right) \right)^2 {\mathrm{d}}v {\mathrm{d}}\omega &<& \varepsilon \, ,\hspace{0.1in} \mbox{ for all } t > t_0 \, . \label{Small2}\end{aligned}$$ Now, thanks to \[ConvCompact\], we can choose $t_1 > t_0$ such that for all $t>t_1$ we have $$\label{Small3} \int_{]-\infty , v_0 [ \times S^2} \left( \frac{\partial}{\partial v} \left( \chi_- (-t+v)\hat{\phi} (t, -t+v, \omega) - \xi (v , \omega) \right) \right)^2 {\mathrm{d}}v {\mathrm{d}}\omega < \varepsilon \, .$$ Putting \[Small1\], \[Small2\] and \[Small3\] together, we obtain that for $t>t_1$ $$\int_{{\mathbb{R}}\times S^2} \left( \frac{\partial}{\partial v} \left( \chi_- (-t+v)\hat{\phi} (t, -t+v, \omega) - \xi (v , \omega) \right) \right)^2 {\mathrm{d}}v {\mathrm{d}}\omega < 5 \varepsilon \, .$$ Indeed, on $] v_0 , +\infty [ \times S^2$ the elementary inequality $(a-b)^2 \leq 2a^2 + 2b^2$ combined with \[Small1\] and \[Small2\] bounds the integral by $4 \varepsilon$, while \[Small3\] controls the integral over $]-\infty , v_0 [ \times S^2$ by $\varepsilon$. This proves \[CvHorCInfty\] for data in ${\cal C}^\infty_0 (\Sigma_0 ) \times {\cal C}^\infty_0 (\Sigma_0 ) \subset {\cal H}$. ![The ingredients of the proof of \[CvHorCInfty\].[]{data-label="StrongLimit"}](StrongLimit.jpg){width="4in"} Let us now consider initial data $(\hat{\phi}_0 , \hat{\phi}_1 ) \in {\cal H}$. Still denoting by $\hat\phi$ the associated solution and by $(\xi , \zeta ) = {\cal T}^+ (\hat{\phi}_0 , \hat{\phi}_1 )$ the scattering data, we prove \[CvHorCInfty\] for such data. Let $\varepsilon >0$ and consider $(\hat{\Phi}_0 , \hat{\Phi}_1 ) \in {\cal C}^\infty_0 (\Sigma_0 ) \times {\cal C}^\infty_0 (\Sigma_0 )$, $\hat{\Phi}$ the associated solution and $(\Xi , \mathrm{Z} ) = {\cal T}^+ (\hat{\Phi}_0 , \hat{\Phi}_1 )$, such that $$\Vert (\hat{\phi}_0 , \hat{\phi}_1 ) - (\hat{\Phi}_0 , \hat{\Phi}_1 ) \Vert_{{\cal H}}^2 < \varepsilon \, .$$ Then the energy fluxes, on ${{\mathscr H}}^+$ and on $\Sigma_t$ for all $t$, of $\hat{\phi} - \hat{\Phi}$ are all less than $\varepsilon$. 
Since \[CvHorCInfty\] is valid for $\hat\Phi$, we can find $t_0 >0$ such that for all $t >t_0$ we have $$\int_{{\mathbb{R}}\times S^2} \left( \frac{\partial}{\partial v} \left( \chi_- (-t+v)\hat{\Phi} (t, -t+v, \omega) - \Xi (v , \omega) \right) \right)^2 {\mathrm{d}}v {\mathrm{d}}\omega < \varepsilon \, .$$ It follows that for $t>t_0$, we have $$\int_{{\mathbb{R}}\times S^2} \left( \frac{\partial}{\partial v} \left( \chi_- (-t+v)\hat{\phi} (t, -t+v, \omega) - \xi (v , \omega) \right) \right)^2 {\mathrm{d}}v {\mathrm{d}}\omega < 9 \varepsilon \, .$$ This proves \[CvHorCInfty\] for finite energy data. We have therefore established \[InvWOpSLim\]. Let us now consider $(\xi , \zeta ) \in {\cal C}^\infty_0 ({{\mathscr H}}^+ ) \times {\cal C}^\infty_0 ({{\mathscr I}}^+ ) \subset {\cal H}^+$. For $t>0$, the operator $${\cal U} (-t) {\cal J} \, {\cal U}_0 (t) P^+$$ first (by the combination ${\cal U}_0 (t) P^+$) pulls back $\xi$ along the flow of incoming principal null geodesics and $\zeta$ along the flow of outgoing principal null geodesics, as a pair of functions on $\Sigma_t$. Then ${\cal J}$ combines these two functions to obtain the initial data on $\Sigma_t$ for the wave equation : $$\hat{\phi} \vert_{\Sigma_t} (r_*,\omega)= \xi (t+r_* , \omega) + \zeta (r_*-t , \omega) \, ,~ \partial_t \hat{\phi} \vert_{\Sigma_t} (r_* , \omega)= \partial_s \xi (t+r_* , \omega) - \partial_s \zeta (r_*-t , \omega)\, .$$ After which ${\cal U} (-t)$ propagates the corresponding solution of the rescaled wave equation down to $\Sigma_0$. In order to prove that ${\cal U} (-t) {\cal J} \, {\cal U}_0 (t) P^+ (\xi , \zeta )$ converges in $\cal H$ as $t \rightarrow +\infty$, we use Cook’s method ; the details of the proof can be found in Appendix \[AppendixCook\]. Then it is easy to conclude that $W^+$ is the inverse of $\tilde{W}^+$. 
Let us consider for $(\xi , \zeta ) \in {\cal C}^\infty_0 ({{\mathscr H}}^+ ) \times {\cal C}^\infty_0 ({{\mathscr I}}^+ )$ the quantity $$\label{CandidateInverse} (P^+)^{-1} \, {\cal U}_0 (-t) \left( \begin{array}{cc} \chi_- & 0 \\ \chi_+ & 0 \end{array} \right) {\cal U} (t) {\cal U} (-t) {\cal J} \, {\cal U}_0 (t) P^+ (\xi , \zeta ) \, .$$ By the strong convergence part of \[InvWOpSLim\] and the convergence in $\cal H$ of $${\cal U} (-t) {\cal J} \, {\cal U}_0 (t) P^+ (\xi , \zeta ) \, ,$$ the quantity \[CandidateInverse\] converges in ${\cal H}^+$ towards $\tilde{W}^+ W^+ (\xi , \zeta )$. But \[CandidateInverse\] simplifies as $$(P^+)^{-1} \, {\cal U}_0 (-t) \left( \begin{array}{cc} \chi_- & 0 \\ \chi_+ & 0 \end{array} \right) {\cal J} \, {\cal U}_0 (t) P^+ (\xi , \zeta ) = (P^+)^{-1} \, {\cal U}_0 (-t) \left( \begin{array}{cc} \chi_- & \chi_- \\ \chi_+ & \chi_+ \end{array} \right) \, {\cal U}_0 (t) P^+ (\xi , \zeta ) \, .$$ Thanks to the compact support of $\xi$ and $\zeta$, this is equal to $(\xi , \zeta)$ for $t$ large enough. This concludes the proof. Let us define similarly the past wave operators $W^-$ and $\tilde{W}^-$. We have $$\tilde{W}^- = (W^-)^{-1} ={\cal T}^- \, .$$ The scattering operator is related to the wave operators as follows $$S = \tilde{W}^+ W^- = (W^+)^{-1} W^- \, .$$ Translation representer, scattering data, radiation field --------------------------------------------------------- The conformal scattering theory we have constructed allows us, using the staticity of the exterior of a Schwarzschild black hole, to re-interpret immediately the scattering data as the crucial structure of the Lax-Phillips theory : the translation representer. This is expressed in the following theorem. The scattering data are a translation representer of the associated scalar field. 
More precisely, consider $(\hat\phi , \partial_t \hat\phi ) \in {\cal C} ({\mathbb{R}}_t \, ;~ {\cal H} ) $ a solution to the rescaled wave equation, put $\hat\phi_0 := \hat\phi \vert_{\Sigma_0}$, $\hat\phi_1 := \partial_t \hat\phi \vert_{\Sigma_0}$ and $$(\xi , \zeta ) := {\cal T}^+ (\hat\phi_0 , \hat\phi_1 ) \, .$$ Then (expressing the functions using variables $(v,\omega)$ on ${{\mathscr H}}^+$ and $(-u,\omega )$ on ${{\mathscr I}}^+$), $${\cal T}^+ (\hat\phi \vert_{\Sigma_t} , \partial_t \hat\phi \vert_{\Sigma_t} ) = (\xi (v+t , \omega ) , \zeta (-u-t , \omega )) \, .$$ [**Proof.**]{} If instead of $(\hat\phi_0 , \hat\phi_1 ) $ we take $(\hat\phi \vert_{\Sigma_t} , \partial_t \hat\phi \vert_{\Sigma_t} )$ for initial data, since $\partial_t$ is Killing, this is equivalent to pulling back the whole solution $\hat\phi$ by a time interval $t$ along the flow of $\partial_t$. Moreover $\partial_t$ extends as $\partial_v$ on ${{\mathscr H}}^+$ and as $\partial_u$ on ${{\mathscr I}}^+$. This concludes the proof. Note that the part of the scattering data on ${{\mathscr I}}^+$ is the trace of $\hat\phi = r \phi$ on ${{\mathscr I}}^+$ and is therefore exactly the future radiation field. The essential difference from the theory of Lax-Phillips and the construction of Friedlander in 1980 [@Fri1980] is that we have a scattering theory with two scattering channels and therefore we need one extra piece of scattering data. The important thing to understand here is that the translation representer is intimately related to the stationarity of the spacetime. If we give up stationarity, we also have to give up the translation representer but the conformal scattering construction would still be valid provided we have good estimates and a well-defined conformal boundary. 
Extension to the Kerr metric and concluding remarks {#Kerr} =================================================== At the time when this paper first appeared on the arXiv, the analysis in the Kerr framework was not as advanced as in the Schwarzschild setting. A variety of decay results were available for scalar waves and one for Maxwell fields, some of them establishing Price’s law (which is the decay generically expected both in timelike directions and up the generators of null infinity, see R. Price [@Pri1972a] for scalar fields and [@Pri1972b] for zero-rest-mass fields) : these results were due to L. Andersson and P. Blue [@ABlu], M. Dafermos and I. Rodnianski [@DaRoLN], F. Finster, N. Kamran, J. Smoller and S.T. Yau [@FiKaSmoYa1; @FiKaSmoYa2], F. Finster and J. Smoller [@FiSmo], J. Metcalfe, D. Tataru and M. Tohaneanu [@MeTaTo], D. Tataru and M. Tohaneanu [@TaTo] for the wave equation, and to L. Andersson and P. Blue [@ABlu2] for Maxwell fields. All these papers, except [@FiKaSmoYa1; @FiKaSmoYa2; @FiSmo], deal with slowly rotating Kerr black holes. Decay estimates are useful in our conformal scattering construction in order to prove that the energy on the far future hypersurface $S_T$ tends to zero as $T \rightarrow +\infty$ (see subsection \[EnEstTInfinite\]). This step however relies on the solidity of the foundations laid in subsection \[EnIdST\] : uniform energy estimates both ways, without loss, between a Cauchy hypersurface and the union of $S_T$ and the parts of ${{\mathscr H}}^+$ and ${{\mathscr I}}^+$ in the past of $S_T$. Among the works we have just cited, the only one providing, if not exactly this kind of estimate, at least a way of obtaining such estimates using the symmetry of the Kerr metric and the decay estimates, is [@ABlu]. They are the only ones establishing uniform estimates, for a positive definite energy, on a foliation by Cauchy hypersurfaces of the Kerr exterior. Many of the other works use the redshift effect near the horizon, see M. 
Dafermos and I. Rodnianski [@DaRo2009]. This is perfectly fine for proving decay, but the estimates cannot be reversed because when we go backwards in time, it is a blueshift effect that we have to deal with. The works of F. Finster, N. Kamran, J. Smoller and S.T. Yau rely on a different technique, which is an integral representation of the propagator for the wave equation ; they do not however obtain the type of estimate we need. The main drawback of the energy used by L. Andersson and P. Blue is that it is of too high order to be convenient for scattering theory. In fact, this is more an aesthetic consideration than a serious scientific objection, and it would be interesting to try and develop a conformal scattering theory based on their energy. Since then, M. Dafermos, I. Rodnianski and Y. Shlapentokh-Rothman [@DaRoShla] have obtained the missing uniform energy equivalence, without a slow rotation assumption, and used it to construct a complete analytic scattering theory for the wave equation on the Kerr metric. They make the comment that it is crucial to choose an energy that does not see the redshift effect. Such an energy is based on a vector field that reduces at ${{\mathscr H}}^+$ to the null generator of the horizon, i.e. that is timelike outside the black hole but tangent to the horizon. This has interesting connections with our comments in section \[WaveOps\]. It appears that with the results of [@DaRoShla], our construction in the Schwarzschild case can now be extended to Kerr black holes essentially without change. It could be interesting to write this in detail provided we use only the relevant estimates and not the full scattering theory. Indeed, the re-interpretation of an analytic scattering theory as a conformal one is in many cases easy and purely a matter of understanding the scattering data as radiation fields (see A. Bachelot [@Ba1991] and D. Häfner and J.-P. Nicolas [@HaNi2004]). 
In the case of [@DaRoShla] the re-interpretation would be totally trivial since their scattering data are already described as radiation fields. The question of inferring an analytic scattering theory, defined in terms of wave operators, from a conformal scattering theory is more delicate, however. It has been addressed in [@MaNi2004] and in the present work but still needs to be understood precisely in general. As far as the development of conformal scattering theory per se is concerned, it appears essential to find a way of replacing pointwise decay estimates, such as Price’s law, by integrated decay estimates that require a less precise knowledge of the local geometry and are closer in nature to the minimal velocity estimates one obtains in the spectral approach to scattering theory (involving Mourre estimates and commutator methods). Acknowledgments =============== I would like to thank Dean Baskin, Fang Wang and Jérémie Joudioux for interesting discussions that contributed to improving this paper. I am also indebted to the anonymous referee for his/her useful comments. This research was partly supported by the ANR funding ANR-12-BS01-012-01. Cook’s method for the direct wave operator {#AppendixCook} ========================================== In this proof we represent the free dynamics in a slightly different but equivalent manner. The space ${\mathbb H}_0 = \dot{H}^1 ({\mathbb{R}}_{r_*} \, ;~ L^2 (S^2 ) ) \times L^2 ({\mathbb{R}}_{r_*} \times S^2 )$ is the direct orthogonal sum of two complementary subspaces : $${\mathbb H}_0^\pm = \{ (\psi_0 , \psi_1 ) \, ;~ \psi_1 = \mp \partial_{r_*} \psi_0 \} \, ;$$ via the operator $P^+$, ${\mathbb H}^+_0$ corresponds to $\dot{H}^1 ({\mathbb{R}}_{u} \, ;~ L^2 (S^2 ) )$ on ${{\mathscr I}}^+$ and ${\mathbb H}^-_0$ to $\dot{H}^1 ({\mathbb{R}}_{v} \, ;~ L^2 (S^2 ) )$ on ${{\mathscr H}}^+$. 
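For concreteness, the corresponding decomposition of a data pair can be written explicitly; it is the classical d’Alembert splitting into incoming and outgoing parts (a routine verification, not spelled out in the text) :

```latex
(\psi_0 , \psi_1 )
 = \underbrace{(\psi_0^- , \partial_{r_*} \psi_0^- )}_{\in \, {\mathbb H}_0^-}
 + \underbrace{(\psi_0^+ , -\partial_{r_*} \psi_0^+ )}_{\in \, {\mathbb H}_0^+} \, ,
 \qquad
 \partial_{r_*} \psi_0^\pm = \frac12 \left( \partial_{r_*} \psi_0 \mp \psi_1 \right) .
```

The two summands are orthogonal for the natural inner product of $\dot{H}^1 \times L^2$, since $\langle \partial_{r_*} \psi_0^- , \partial_{r_*} \psi_0^+ \rangle_{L^2} + \langle \partial_{r_*} \psi_0^- , -\partial_{r_*} \psi_0^+ \rangle_{L^2} = 0$ ; under the free evolution the first summand travels towards $r_* = -\infty$ (the horizon) and the second towards $r_* = +\infty$ (null infinity).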
On ${\mathbb H}_0$, we consider the free Hamiltonian $$H_0 = -i \left( \begin{array}{cc} 0 & 1 \\ {\partial_{r_*}^2 } & 0 \end{array} \right) \, .$$ The equation $\partial_t U = iH_0 U$ is the Hamiltonian form of the free equation $$\partial_t^2 \psi - \partial_{r_*}^2 \psi =0 \, .$$ The operator $H_0$ is self-adjoint on ${\mathbb H}_0$ and the free propagator ${\cal U}_0 (t)$ is just the group $e^{itH_0}$ conjugated by the identifying operator : $${\cal J} {\cal U}_0 (t) = e^{itH_0} {\cal J} \, .$$ With this description of the comparison dynamics, we need neither $P^+$ nor the identifying operator in the expression of the limit defining the direct wave operator $W^+$. On $\cal H$ we consider the operator $$H = -i \left( \begin{array}{cc} 0 & 1 \\ {\partial_{r_*}^2 + \frac{F}{r^2} \Delta_{S^2} - \frac{FF'}{r}} & 0 \end{array} \right) \, ;$$ the equation $\partial_t U = iHU$ is the Hamiltonian form of the rescaled wave equation. The operator $H$ is self-adjoint on $\cal H$ and the propagator ${\cal U} (t)$ is equal to $e^{itH}$. 
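As a quick numerical sanity check (not part of the argument; the Gaussian profile below is an arbitrary choice), the d’Alembert formula confirms that free data in ${\mathbb H}_0^+$, i.e. with $\psi_1 = -\partial_{r_*} \psi_0$, evolve under the comparison dynamics by pure translation towards $r_* = +\infty$ :

```python
import numpy as np

# d'Alembert solution of  d_t^2 psi = d_{r_*}^2 psi :
#   psi(t,x) = (psi0(x+t) + psi0(x-t))/2 + (1/2) int_{x-t}^{x+t} psi1(s) ds.
# With psi1 = -psi0' the integral telescopes and psi(t,x) = psi0(x-t),
# a pure right-mover, consistent with H_0^+ being the channel of scri^+.

psi0 = lambda x: np.exp(-x**2)            # arbitrary smooth profile
psi1 = lambda x: 2.0 * x * np.exp(-x**2)  # psi1 = -psi0'

def dalembert(t, x, n=20001):
    s = np.linspace(x - t, x + t, n)
    f = psi1(s)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))  # trapezoid rule
    return 0.5 * (psi0(x + t) + psi0(x - t)) + 0.5 * integral

t = 3.0
err = max(abs(dalembert(t, x) - psi0(x - t)) for x in np.linspace(-5, 5, 11))
print(err)  # quadrature error only
```

The analogous computation with $\psi_1 = +\partial_{r_*} \psi_0$ produces the left-mover $\psi_0 (r_* + t)$, the ${\mathbb H}_0^-$ channel associated with the horizon.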
For all $(U^h , U^\infty ) \in {\mathbb H}_0^- \times {\mathbb H}_0^+$, smooth and compactly supported, the following limits exist in $\cal H$ : $$\begin{gathered} \lim_{t\rightarrow +\infty} e^{-itH} e^{itH_0} U^h \, , \label{LimHorizon} \\ \lim_{t\rightarrow +\infty} e^{-itH} e^{itH_0} U^\infty \, .\label{LimInfinity}\end{gathered}$$ [**Proof.**]{} Take $$U^h = \left( \begin{array}{c} \psi_0 \\ \psi_1 = \partial_{r_*} \psi_0 \end{array} \right) \, ,~ \psi_0 \in {\cal C}^\infty_0 ( {\mathbb{R}}\, ;~ {\cal C}^\infty (S^2) ) \, .$$ A sufficient condition for the limit \[LimHorizon\] to exist is that $$\frac{{\mathrm{d}}}{{\mathrm{d}}t} e^{-itH} e^{itH_0} U^h = e^{-itH} \left( -i H + i H_0 \right) e^{itH_0} U^h \in L^1 ({\mathbb{R}}_t^+ \, ;~ {\cal H} ) \, .$$ Since $e^{-itH}$ is a group of unitary operators on $\cal H$, the condition is equivalent to $$\left( -i H + i H_0 \right) e^{itH_0} U^h \in L^1 ({\mathbb{R}}_t^+ \, ;~ {\cal H} ) \, .$$ This is easy to check : $$\Vert \left( -i H + i H_0 \right) e^{itH_0} U^h \Vert^2_{{\cal H}} = \frac12 \int_{{\mathbb{R}}\times S^2} \left( -\frac{F}{r^2} \Delta_{S^2} \psi_0 (t+r_*) + \frac{FF'}{r} \psi_0 (t+r_*) \right)^2 {\mathrm{d}}r_* {\mathrm{d}}^2 \omega$$ and this falls off exponentially fast as $t \rightarrow +\infty$ thanks to the compact support and the smoothness of $\psi_0$ and to the fact that $$F(r) = 1 - \frac{2M}{r} = \frac{1}{r} e^{(r_*-r)/2M}$$ falls off exponentially fast as $r_* \rightarrow -\infty$. The proof of the existence of the other limit is similar, but we do not get exponential decay in this case. 
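Before turning to the second limit, the exponential fall-off just used can be checked numerically; the short script below assumes the Regge–Wheeler coordinate is normalized as $r_* = r + 2M \log (r - 2M)$, which is the convention for which the displayed identity for $F$ is exact :

```python
import math

M = 1.0

def rstar(r):
    # tortoise coordinate, normalized so that F(r) = exp((rstar-r)/2M)/r exactly
    return r + 2.0 * M * math.log(r - 2.0 * M)

def F(r):
    return 1.0 - 2.0 * M / r

# the identity F(r) = exp((rstar(r)-r)/2M)/r at several radii outside the horizon
for r in [2.001, 2.5, 3.0, 10.0, 100.0]:
    assert abs(F(r) - math.exp((rstar(r) - r) / (2.0 * M)) / r) < 1e-12

# exponential decay as rstar -> -infinity: F is comparable to exp(rstar/2M)
r = 2.0 + 1e-10
print(F(r), rstar(r))  # F tiny, rstar large and negative
```

With a different additive normalization of $r_*$ the identity only holds up to a constant factor, which does not affect the exponential rate.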
Take $$U^\infty = \left( \begin{array}{c} \psi_0 \\ \psi_1 = -\partial_{r_*} \psi_0 \end{array} \right) \, ,~ \psi_0 \in {\cal C}^\infty_0 ( {\mathbb{R}}\, ;~ {\cal C}^\infty (S^2) ) \, .$$ This time we have $$\Vert \left( -i H + i H_0 \right) e^{itH_0} U^\infty \Vert^2_{{\cal H}} = \frac12 \int_{{\mathbb{R}}\times S^2} \left( -\frac{F}{r^2} \Delta_{S^2} \psi_0 (r_*-t) + \frac{FF'}{r} \psi_0 (r_*-t) \right)^2 {\mathrm{d}}r_* {\mathrm{d}}^2 \omega$$ and this falls off like $1/t^4$ as $t\rightarrow +\infty$, thanks to the compact support and the smoothness of $\psi_0$ and to the fact that $$\frac{F}{r^2} \simeq \frac{1}{r^2} \mbox{ and } r_* \simeq r \mbox{ at infinity.}$$ The other term falls off faster since $$\frac{FF'}{r} \simeq \frac{2M}{r^3} \mbox{ at infinity.}$$ So we still obtain the integrability in time of $\Vert \left( -i H + i H_0 \right) e^{itH_0} U^\infty \Vert_{{\cal H}}$ and this concludes the proof. As a consequence, for all $U_0 \in \mathbb{H}_0$, smooth and compactly supported, the limit $$\lim_{t\rightarrow +\infty} e^{-itH} e^{itH_0} U_0$$ exists in $\cal H$. This is equivalent to the existence for smooth and compactly supported scattering data of the limit defining $W^+$. Applying L. Hörmander’s results in the Schwarzschild setting {#HormGP} ============================================================ ![A cut-extend construction for the solution of the Goursat problem from ${{\mathscr I}}^+$.[]{data-label="CutExtend"}](CutExtendBis.jpg){width="80.00000%"} The work of L. Hörmander [@Ho1990] is a proof of the well-posedness of a weakly spacelike Cauchy problem, for a wave equation $$\label{ModWEq} \partial_t^2 u - \Delta u + L_1 u = f \, ,$$ on a Lorentzian spacetime that is a product ${\mathbb{R}}_t \times X$, with metric ${\mathrm{d}}t^2 - g$, $X$ being a compact manifold without boundary of dimension $n$ and $g(t)$ a Riemannian metric on $X$ smoothly varying with $t$. 
In \[ModWEq\], $\Delta$ is a modified version of the Laplace-Beltrami operator in which the volume density associated with the metric is replaced by a given smooth density on $X$ ; the operator $L_1$ is a first-order differential operator with smooth coefficients and $f$ is a source. Any non-degenerate change in the metric or the volume density can be absorbed in the operator $L_1$, so the results of [@Ho1990] are valid for the wave equation on any spatially compact globally hyperbolic spacetime. The data for the Cauchy problem are set on a hypersurface $\Sigma$ that is assumed Lipschitz and achronal (i.e. weakly spacelike), meaning that the normal vector field (which in the case of a Lipschitz hypersurface is defined almost everywhere) is causal wherever it is defined. In the present work, we are not dealing with a spatially compact spacetime, but an easy construction allows us to understand the resolution of the Goursat problem for compactly supported data on the future conformal boundary as a Goursat problem on a cylindrical spacetime, for which Hörmander’s results are valid. The construction, described schematically in figure \[CutExtend\], goes as follows. The data on ${{\mathscr H}}^+ \cup {{\mathscr I}}^+$ are compactly supported, which guarantees that the past of their support remains away from $i^+$. We simply consider the future ${\cal I}^+ ({\cal S})$ of the hypersurface $\cal S$ in $\bar{\cal M}$ (recall that $\cal S$ is a spacelike hypersurface on ${\bar{\cal M}}$ whose intersection with the horizon is the crossing sphere and which crosses ${{\mathscr I}}^+$ strictly in the past of the support of the data) and we cut off the future $\cal V$ of a point in $\cal M$ lying in the future of the past of the support of the Goursat data (see figure \[CutExtend\]). We denote by $\mathfrak{M}$ the resulting spacetime. Then we extend $\mathfrak{M}$ as a cylindrical globally hyperbolic spacetime $({\mathbb{R}}_t \times S^3 \, ,~ \mathfrak{g})$. 
We also extend the part of ${{\mathscr I}}^+ \cup {{\mathscr H}}^+$ inside ${\cal I}^+ ({\cal S}) \setminus {\cal V}$ as a null hypersurface $\cal C$ (see figure \[CutExtend\]) that is the graph of a Lipschitz function over $S^3$, and we extend the data by zero on the rest of the extended hypersurface. The Goursat problem for equation $$\square_\mathfrak{g} \psi + \frac16 \mathrm{Scal}_\mathfrak{g} \psi =0 \, ,$$ with the data we have just constructed has a unique solution (see [@Ho1990]) $$\psi \in {\cal C} ({\mathbb{R}}\, ;~ H^1 (S^3 )) \cap {\cal C}^1 ({\mathbb{R}}\, ;~ L^2 (S^3 )) \, .$$ Then by local uniqueness and causality, using in particular the fact that, as a consequence of the finite propagation speed, the solution $\psi$ vanishes in ${\cal I}^+ ({\cal S}) \setminus \mathfrak{M}$, the Goursat problem that we are studying has a unique smooth solution in the future of $\cal S$, namely the restriction of $\psi$ to $\mathfrak{M}$. [100]{} L. Andersson, P. Blue, [*Hidden symmetries and decay for the wave equation on the Kerr spacetime*]{}, arXiv:0908.2265. L. Andersson, P. Blue, [*Uniform energy bound and asymptotics for the Maxwell field on a slowly rotating Kerr black hole exterior*]{}, arXiv:1310.2664. A. Bachelot, [*Gravitational scattering of electromagnetic field by Schwarzschild black hole*]{}, Annales de l’I.H.P. A [**54**]{} (1991), 261–320. J.C. Baez, [*Scattering and the geometry of the solution manifold of $\square f+ \lambda f^3 =0$*]{}, J. Funct. Anal. [**83**]{} (1989), 317–332. J.C. Baez, [*Scattering for the Yang-Mills equations*]{}, Trans. Amer. Math. Soc. [**315**]{} (1989), 2, 823–832. J.C. Baez, [*Conserved quantities for the Yang-Mills equations*]{}, Adv. Math. [**82**]{} (1990), 1, 126–131. J.C. Baez, I.E. Segal, Zhou Z.F., [*The global Goursat problem and scattering for nonlinear wave equations*]{}, J. Funct. Anal. [**93**]{} (1990), 2, 239–269. J.C. Baez, Zhou Z.F., [*The global Goursat problem on ${\mathbb{R}}\times S^1$*]{}, J. Funct. Anal. 
[**83**]{} (1989), 364–382. M. Dafermos, I. Rodnianski, [*Lectures on black holes and linear waves*]{}, Evolution equations, 97–205, Clay Math. Proc., 17, Amer. Math. Soc., Providence, RI, 2013, arXiv:0811.0354. M. Dafermos, I. Rodnianski, [*The red-shift effect and radiation decay on black hole spacetimes*]{}, Comm. Pure Appl. Math. [**62**]{} (2009), 7, 859–919. M. Dafermos, I. Rodnianski, Y. Shlapentokh-Rothman, [*A scattering theory for the wave equation on Kerr black hole exteriors*]{}, arXiv:1412.8379. J. Dimock, [*Scattering for the wave equation on the Schwarzschild metric*]{}, Gen. Rel. Grav. [**17**]{} (1985), 4, 353–369. J. Dimock, B.S. Kay, [*Classical and quantum scattering theory for linear scalar fields on the Schwarzschild metric. II*]{}, J. Math. Phys. [**27**]{} (1986), 10, 2520–2525. J. Dimock, B.S. Kay, [*Classical and quantum scattering theory for linear scalar fields on the Schwarzschild metric I*]{}, Ann. Phys. [**175**]{} (1987), 366–426. F. Finster, N. Kamran, J. Smoller, S.-T. Yau, [*Decay of solutions of the wave equation in the Kerr geometry*]{}, Comm. Math. Phys. [**264**]{} (2006), 2, 465–503. F. Finster, N. Kamran, J. Smoller, S.-T. Yau, [*Erratum: “Decay of solutions of the wave equation in the Kerr geometry”*]{} \[Comm. Math. Phys. [**264**]{} (2006), no. 2, 465–503; MR2215614\]. Comm. Math. Phys. [**280**]{} (2008), no. 2, 563–573. F. Finster, J. Smoller, [*A time-independent energy estimate for outgoing scalar waves in the Kerr geometry*]{}, J. Hyperbolic Differ. Equ. [**5**]{} (2008), 1, 221–255. F.G. Friedlander, [*On the radiation field of pulse solutions of the wave equation*]{}, Proc. Roy. Soc. Ser. A [**269**]{} (1962), 53–65. F.G. Friedlander, [*On the radiation field of pulse solutions of the wave equation II*]{}, Proc. Roy. Soc. Ser. A [**279**]{} (1964), 386–394. F.G. Friedlander, [*On the radiation field of pulse solutions of the wave equation III*]{}, Proc. Roy. Soc. Ser. A [**299**]{} (1967), 264–278. F.G. 
Friedlander, [*Radiation fields and hyperbolic scattering theory*]{}, Math. Proc. Camb. Phil. Soc. [**88**]{} (1980), 483–515. F.G. Friedlander, [*Notes on the Wave Equation on Asymptotically Euclidean Manifolds*]{}, J. Funct. Anal. [**184**]{} (2001), 1–18. D. Häfner, J.-P. Nicolas, [*Scattering of massless Dirac fields by a Kerr black hole*]{}, Rev. Math. Phys. [**16**]{} (2004), 1, 29–123. L. Hörmander, [*A remark on the characteristic Cauchy problem*]{}, J. Funct. Anal. [**93**]{} (1990), 270–277. J. Joudioux, [*Conformal scattering for a nonlinear wave equation*]{}, J. Hyperbolic Differ. Equ. [**9**]{} (2012), 1, 1–65. P.D. Lax, R.S. Phillips, [*Scattering theory*]{}, Academic Press 1967. J. Leray, [*Hyperbolic differential equations*]{}, lecture notes, Institute for Advanced Study, Princeton, 1953. L.J. Mason, [*On Ward’s integral formula for the wave equation in plane-wave spacetimes*]{}, Twistor Newsletter [**28**]{} (1989), 17–19. L.J. Mason, J.-P. Nicolas, [*Conformal scattering and the Goursat problem*]{}, J. Hyperbolic Differ. Equ. [**1**]{} (2004), 2, 197–233. L.J. Mason, J.-P. Nicolas, [*Regularity at space-like and null infinity*]{}, J. Inst. Math. Jussieu [**8**]{} (2009), 1, 179–208. J. Metcalfe, D. Tataru, M. Tohaneanu, [*Price’s law on nonstationary space-times*]{}, Adv. Math. [**230**]{} (2012), 3, 995–1028. J.-P. Nicolas, [*Non linear Klein-Gordon equation on Schwarzschild-like metrics*]{}, J. Math. Pures Appl. [**74**]{} (1995), 35–58. J.-P. Nicolas, [*On Lars Hörmander’s remark on the characteristic Cauchy problem*]{}, Annales de l’Institut Fourier, [**56**]{} (2006), 3, 517–543. R. Penrose, [*Asymptotic properties of fields and spacetime*]{}, Phys. Rev. Lett. [**10**]{} (1963), 66–68. R. Penrose, [*Conformal approach to infinity*]{}, in Relativity, groups and topology, Les Houches 1963, ed. B.S. De Witt and C.M. De Witt, Gordon and Breach, New-York, 1964. R. Penrose, [*Zero rest-mass fields including gravitation : asymptotic behaviour*]{}, Proc. 
Roy. Soc. London [**A284**]{} (1965), 159–203. R. Penrose, W. Rindler, [*Spinors and space-time*]{}, Vol. I & II, Cambridge monographs on mathematical physics, Cambridge University Press 1984 & 1986. R. Price, [*Nonspherical Perturbations of Relativistic Gravitational Collapse. I. Scalar and Gravitational Perturbations*]{}, Phys. Rev. D [**5**]{} (1972), 10, 2419–2438. R. Price, [*Nonspherical Perturbations of Relativistic Gravitational Collapse. II. Integer-Spin, Zero-Rest-Mass Fields*]{}, Phys. Rev. D [**5**]{} (1972), 10, 2439–2454. H. Soga, [*Singularities of the scattering kernel for convex obstacles*]{}, J. Math. Kyoto Univ. [**22**]{} (1983), 4, 729–765. D. Tataru, M. Tohaneanu, [*A local energy estimate on Kerr black hole backgrounds*]{}, Int. Math. Res. Not. IMRN 2011, 2, 248–292. R. Ward, [*Progressing waves in flat spacetime and in plane-wave spacetimes*]{}, Class. Quantum Grav. [**4**]{} (1987), 775–778. E.T. Whittaker, [*On the partial differential equations of mathematical physics*]{}, Mathematische Annalen [**57**]{} (1903), 333–355. [^1]: It is interesting to note that the integral formula, obtained by Lax and Phillips, that recovers the field in terms of its scattering data, was in fact discovered by E.T. Whittaker in 1903 [@Whi]. This does not seem to have been known to them or to Friedlander. The Lax-Phillips theory gave Whittaker’s formula its rightful interpretation as a scattering representation of the solutions of the wave equation. There is an interesting extension of this formula to plane wave spacetimes due to R.S. Ward [@Wa], developed further by L.J. Mason [@Ma]. [^2]: More precisely, the existence of a translation representation of the propagator is tied in with the existence of a timelike Killing vector field that extends as the null generator of null infinity. 
[^3]: The treatment of the wave equation was not completed in this paper; the additional ingredients required can be found in another work by the same authors, dealing with the peeling of scalar fields, published in 2009 [@MaNi2009]. [^4]: We will however still work with both the rescaled and the physical field when comparing our energy norms with those used by other authors. Of course the indices of vectors and $1$-forms will have to be raised and lowered using the rescaled metric $\hat{g}$ when working with rescaled quantities and using the physical metric $g$ when working with unrescaled quantities. [^5]: The factor $-4$ comes from the identity applied to $\hat{J}_a + V_a$, i.e. $${\mathrm{d}}* ((\hat{J}_a + V_a ){\mathrm{d}}x^a) = -(1/4) \nabla_a (\hat{J}^a + V^a ) {\mathrm{dVol}}\, .$$ [^6]: Of course the Hodge star in equation is associated with the physical metric, whereas in the expression of the rescaled energy, it is associated with the rescaled metric. [^7]: Note that the spaces ${\cal H}^\pm$ are naturally identified via a time reflection $t \mapsto -t$.
--- abstract: 'The <span style="font-variant:small-caps;">Graph Motif</span> problem was introduced in 2006 in the context of biological networks. It consists of deciding whether or not a multiset of colors occurs in a connected subgraph of a vertex-colored graph. <span style="font-variant:small-caps;">Graph Motif</span> has been mostly analyzed from the standpoint of parameterized complexity. The main parameters which came into consideration were the size of the multiset and the number of colors. In the many utilizations of <span style="font-variant:small-caps;">Graph Motif</span>, however, the input graph originates from real-life applications and has structure. Motivated by this prosaic observation, we systematically study its complexity relative to graph structural parameters. For a wide range of parameters, we give new or improved FPT algorithms, or show that the problem remains intractable. For the FPT cases, we also give some kernelization lower bounds as well as some ETH-based lower bounds on the worst-case running time. Interestingly, we establish that <span style="font-variant:small-caps;">Graph Motif</span> is $\wone$-hard (while in $\wpp$) for the parameter max leaf number, which is, to the best of our knowledge, the first problem to behave this way.' author: - 'Édouard Bonnet[^1]' - 'Florian Sikora[^2]' bibliography: - 'structural.bib' title: 'The Graph Motif problem parameterized by the structure of the input graph[^3]' --- Introduction ============ Preliminaries and previous work {#sec:prelim} =============================== $\fpt$ algorithms, kernelization and ETH-based lower bounds {#sec:easy} =========================================================== Parameters for which is hard {#sec:hard} ============================ Conclusion and open problems {#sec:conclusion} ============================ Figure \[fig:recap\] sums up the parameterized complexity landscape of <span style="font-variant:small-caps;">Graph Motif</span> with respect to structural parameters. For the parameter maximum independent set, the complexity status of <span style="font-variant:small-caps;">Graph Motif</span> remains unknown. 
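To fix ideas, here is a brute-force sketch of the decision problem itself, as defined in the abstract (illustrative only: it enumerates all vertex subsets of the motif's size and is thus exponential; it is not one of the algorithms discussed in this paper, and the colored path below is a made-up example):

```python
from itertools import combinations
from collections import Counter

def graph_motif(adj, color, motif):
    """Brute force: is there a connected set S of |motif| vertices
    whose color multiset equals the motif exactly?"""
    k = len(motif)
    target = Counter(motif)
    for S in combinations(list(adj), k):
        if Counter(color[v] for v in S) != target:
            continue
        # connectivity check: depth-first search restricted to S
        S_set, seen, stack = set(S), {S[0]}, [S[0]]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w in S_set and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) == k:
            return True
    return False

# a path a - b - c - d colored red, blue, red, green
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
color = {'a': 'red', 'b': 'blue', 'c': 'red', 'd': 'green'}
print(graph_motif(adj, color, ['red', 'blue', 'red']))  # a, b, c are connected
print(graph_motif(adj, color, ['blue', 'green']))       # b and d are not adjacent
```

The exhaustive subset enumeration is exactly what the FPT algorithms surveyed above avoid, by exploiting either the motif size or the structure of the input graph.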
Even when the problem is in $\fpt$, polynomial kernels tend to be unlikely; be it for the natural parameter even on comb graphs [@Ambalath2010] or for the vertex cover number or the distance to clique (Theorem \[th:nokernelvc\]). Is this also the case for parameter cluster editing number? On the one hand, we saw that our algorithm running in $O^*(3^k)$ for parameter distance to clique is probably close to optimal, since $O^*((2-\varepsilon)^k)$ is unlikely. On the other hand, for parameter vertex cover number, for instance, we have more room for improvement between the $2^{O(k \log k)}$-upper bound and the $2^{o(k)}$-lower bound under ETH. Can we improve the algorithm to run in time $2^{O(k)}$, or, on the contrary, show a stronger lower bound of $2^{o(k \log k)}$ (potentially with the framework developed by Lokshtanov *et al.* [@Lokshtanov11])? A possible future work would be to see if the FPT algorithms presented in the article can be extended to the more general <span style="font-variant:small-caps;">List Graph Motif</span>, where a vertex can choose its color among a private list of colors, without worsening their running time too much. Finally, one could consider more restricted versions (when, for instance, the number of colors, or the maximum multiplicity of the motif, or the maximum number of occurrences of a color in the graph, is bounded). This line of work is sometimes called *multi-parameter analysis*, where one seeks FPT algorithms with respect to subsets of parameters. Let us recall, as an example, that <span style="font-variant:small-caps;">Graph Motif</span> is in $\xp$ if the parameter is the treewidth of the graph plus the number of colors in the motif [@fellows2011]. Acknowledgments {#acknowledgments .unnumbered} =============== The work of the first author is supported by the European Research Council (ERC) grant “PARAMTIGHT: Parameterized complexity and the search for tight complexity results,” reference 280152. 
[^1]: Institute for Computer Science and Control, Hungarian Academy of Sciences, (MTA SZTAKI) [^2]: Université Paris-Dauphine, PSL Research University, CNRS, LAMSADE, Paris, France, [^3]: An extended abstract of this work appears in [@DBLP:conf/iwpec/BonnetS15].
--- abstract: 'We apply the morphological descriptors of a two-dimensional contour map, the so-called Minkowski functionals (the area fraction, circumference, and Euler characteristic), to the convergence field $\kappa({\mbox{\boldmath$\theta$}})$ of the large-scale structure reconstructed from the shear map produced by the ray-tracing simulations. The perturbation theory of structure formation has suggested that the non-Gaussian features on the Minkowski functionals with respect to the threshold in the weakly nonlinear regime are induced by the three skewness parameters of $\kappa$ that are sensitive to the density parameter of matter, $\Omega_{\rm m}$. We show that, in the absence of noise due to the intrinsic ellipticities of source galaxies, in which case the perturbation theory results can be recovered, the accuracy of $\Omega_{\rm m}$ determination is improved by $\sim 20\%$ using the Minkowski functionals compared to the conventional method of using the direct measure of skewness.' author: - 'Jun’ichi Sato$^{1}$, Masahiro Takada$^{1}$, Y. P. Jing$^{2}$ and Toshifumi Futamase$^{1}$' title: | Implication of $\Omega_m$ through the Morphological Analysis\ of Weak Lensing Fields --- Introduction ============ Weak gravitational lensing caused by the large-scale structure (LSS) of the universe distorts the images of distant galaxies. This phenomenon is the so-called [*cosmic shear*]{}, which offers us the unique opportunity to measure directly the projected power spectrum of dark matter fluctuations regardless of the relation between dynamical states of the dark matter and luminous matter ([@Blandford]; [@Miralda]; [@Kaiser92]). Recently, several independent measurements of cosmic shear have been made from deep ’blank-field’ CCD imaging surveys, which reported significant detections of shear variance ([@Ludvic00]; [@Wittman]; [@Bacon]; [@KNL00]; [@Maoli]). 
Due to the nonlinear evolution of the density fluctuation field in the large-scale structure, the cosmic shear field on small angular scales is expected to display significant non-Gaussian features. Even in this case, for the second-moment analysis it has been shown that the numerical results from the ray-tracing simulations are in remarkably good agreement with the theoretical predictions using the fitting formula for the nonlinear matter power spectrum ([@JSW]; [@Hamana]). On the other hand, higher order statistics can provide additional cosmological information associated with the non-Gaussian features. In particular, the normalized skewness parameter of the convergence field can be a sensitive indicator of the density parameter of matter, $\Omega_{m}$ ([@BWM]). Unfortunately, however, the highly nonlinear evolution of third-order statistics cannot be simply described by the fitting formula for the nonlinear power spectrum alone. Recently, an extended method that allows us to perform the skewness calculations in the strongly nonlinear regime has been developed using “hyper-extended perturbation theory” (HEPT) ([@Hui]; [@Scoccimarro99]). Nevertheless, several works using ray-tracing simulations have revealed that the value predicted by HEPT at the relevant angular scales does not agree well with the numerical results for the skewness parameter ([@JSW]; [@White]; [@Hamanab]). Moreover, we would like to stress that it is difficult to attach a physical meaning to the fitting formula beyond an empirical one. Therefore, it is worth exploring a new method for effectively extracting the non-Gaussian features of the convergence field in the weakly nonlinear regime based on the perturbation theory, which rests on a firmer physical basis of structure formation. 
A possible method we propose is to use the Minkowski functionals with respect to the level threshold; this is motivated by the fact that the functionals give the complete morphological description of a considered field ([@SB]). For a two-dimensional case, the Minkowski functionals consist of the area fraction, circumference, and Euler characteristic of the isocontour curves, where the Euler characteristic is equivalent to the genus statistic often used in cosmology ([@Gott86]). Recently, Matsubara & Jain (2000) applied the genus curve to the convergence field reconstructed from the ray-tracing simulations, and found that the nonlinear evolution of convergence induces a deviation from the specific curve of genus for the Gaussian case. On the other hand, the theoretical predictions based on the perturbation theory have shown that the non-Gaussian features on the Minkowski functionals are completely characterized by the skewness parameters of the convergence field in the weakly nonlinear regime ([@Matsu00] and see also equation (\[eqn:mink\])). These results offer the possibility of extracting the skewness parameters using the Minkowski functionals of the reconstructed convergence field. The purpose of this Letter is thus to investigate how accurately $\Omega_{\rm m}$ can be determined from the skewness parameters estimated by fitting the numerical results to theoretical predictions of the Minkowski functionals. The Ray-Tracing Simulation and the Minkowski Functionals {#method} ======================================================== We use shear and convergence fields modeled from the ray-tracing simulations through the dark matter distribution of N-body simulations following the previous methods by [@Hamana] and [@White]. The original N-body simulations of the large-scale structure were performed with the P$^{3}$M code (see [@Jing94] and [@Jing98] for details). 
The following discussions focus on two cosmological models, summarized in Table 1, and we used three different realizations for each model. As for the power spectrum of matter fluctuations, we assume the cold dark matter (CDM) model with the transfer function given by Bardeen et al. (1986) and the shape parameter $\Gamma=\Omega_{0}h$. All the simulations employ $256^{3}$ ($\approx17$ million) particles in a ($100h^{-1}$Mpc)$^{3}$ comoving box and start at redshift $z_{i}=36$. The gravitational softening length $\epsilon$ is $39h^{-1}$kpc. We use the multiple lens-plane algorithm to follow the propagation of light rays through the simulated matter distributions. In this algorithm, the matter content of each box at a certain redshift is projected onto a single plane perpendicular to the line of sight. We typically use $\sim20$ equally spaced lens-planes in the comoving distance between source and observer. The particle positions on each plane are interpolated onto a grid of size $2048^{2}$. In order to avoid possible correlations between different lens-planes, in each plane we choose one of the three realizations at the considered redshift and then project the mass distribution along a randomly chosen one of the three coordinate axes, translate the mass distribution by a random vector, and randomly rotate it in units of $\pi/2$. We consider a set of lens-planes between the source and observer as a different realization and use ten such realizations to estimate the cosmic variance associated with the measurements of weak lensing fields. Further details of the ray-tracing simulation are given in Hamana, Martel & Futamase (2000). The fields we use are $3^{\circ}$ on a side. Each light ray is traced in the Born approximation and hence can be handled as a straight line that extends radially from the observer. Throughout this Letter, we assume that all source galaxies are at a redshift of $z_{s}=1$ and that their number density is $n=30 \rm{~arcmin}^{-2}$. 
We then construct the cosmic shear field, $\gamma({\mbox{\boldmath$\theta$}})$, on each grid from the ray-tracing simulations, and smooth $\gamma({\mbox{\boldmath$\theta$}})$ with a top-hat filter. Using the relation between the Fourier transforms of $\kappa({\mbox{\boldmath$\theta$}})$ and $\gamma({\mbox{\boldmath$\theta$}})$, $ \hat{\kappa}(\mbox{\boldmath$l$}) = [(l_{1}^{2}-l_{2}^{2})\hat{\gamma}_{1}(\mbox{\boldmath$l$}) + 2l_{1}l_{2}\hat{\gamma}_{2}(\mbox{\boldmath$l$})]/l^2, $ and assuming the periodic boundary condition, $\kappa({\mbox{\boldmath$\theta$}})$ is reconstructed on each grid from the cosmic shear field. Figure \[kappamap\] shows examples of the reconstructed convergence field. To compute the Minkowski functionals, we label the convergence field by the threshold value $\nu({\mbox{\boldmath$\theta$}})$ that is defined by $\nu({\mbox{\boldmath$\theta$}})\equiv \kappa({\mbox{\boldmath$\theta$}})/\sigma_{0}$, where $\sigma_0$ is the rms of $\kappa$ defined by $\sigma^2_0\equiv{\langle{\kappa^2}\rangle}$. In a two-dimensional case, the Minkowski functionals are the area fraction $v_{0}(\nu)$, circumference $v_{1}(\nu)$, and Euler characteristic $v_{2}(\nu)$ of the isocontour curves with threshold $\nu$, which fully characterize the morphology of the field. The Euler characteristic is a purely topological quantity, which is equal to the number of isolated high-threshold regions minus the number of isolated low-threshold regions. To calculate the Minkowski functionals for the reconstructed convergence field given as pixel data, we employed the method developed by Winitzki & Kosowsky (1997). 
On the other hand, under the hypothesis that the initial perturbations are Gaussian as supported by the inflationary scenario, Matsubara (2000) recently derived the analytical formula of the Minkowski functionals based on the perturbation theory that can be applied to the weakly nonlinear convergence field: $$\begin{aligned} v_{0}(\nu)&=&\frac{1}{2}\mbox{erfc}\left( \frac{\nu}{\sqrt{2}} \right) + \frac{1}{6 \sqrt{2 \pi}}\mbox{e}^{-\nu^{2}/2} \sigma_{0}s_{0}\mbox{H}_{2}(\nu),\nonumber \\ v_{1}(\nu)&=&\frac{1}{8\sqrt{2}}\frac{\sigma_{1}}{\sigma_{0}} \mbox{e}^{-\nu^{2}/2}\left\{1+\sigma_0\left( \frac{s_{0}\mbox{H}_{3}(\nu)}{6}+\frac{s_{1}\mbox{H}_{1}(\nu)}{3} \right)\right\},\nonumber\\ v_{2}(\nu)&=&\frac{1}{2(2 \pi)^{\frac{3}{2}}} \frac{\sigma_{1}^{2}}{\sigma_{0}^{2}} \mbox{e}^{-\nu^{2}/2} \left\{ \mbox{H}_{1}(\nu)+\sigma_{0}\left( \frac{s_{0}\mbox{H}_{4}(\nu)}{6} +\frac{2s_{1}\mbox{H}_{2}(\nu)}{3} +\frac{s_{2}}{3} \right)\right\}, \label{eqn:mink}\end{aligned}$$ where $\sigma_{1}$ is defined by $\sigma^2_1\equiv{\langle{(\nabla\kappa)^2}\rangle}$ and H$_{n}(\nu)$ is the $n$th order Hermite polynomial. $s_{0}$, $s_{1}$, and $s_{2}$ denote the skewness parameters defined by $s_0\equiv{\langle{\kappa^3}\rangle}/\sigma_0^4$, $s_1\equiv-(3/4){\langle{\kappa^2(\nabla^2\kappa)}\rangle}/(\sigma_0^2\sigma_1^2)$, and $s_2\equiv-3{\langle{(\nabla\kappa\cdot\nabla\kappa)(\nabla^2\kappa)}\rangle} /\sigma_1^4$, respectively, where the quantity $s_{0}$ is the skewness parameter conventionally used in the previous works of weak lensing. Equation (\[eqn:mink\]) indicates that those skewness parameters can be new statistical indicators of the deviations from the specific Gaussian predictions of $v_0(\nu)$, $v_1(\nu)$ and $v_2(\nu)$ with $s_0=s_1=s_2=0$. 
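For reference, equation (\[eqn:mink\]) can be evaluated directly. The sketch below assumes the Hermite convention $\mathrm{H}_2(\nu)=\nu^2-1$, $\mathrm{H}_3(\nu)=\nu^3-3\nu$, etc. (which should be checked against Matsubara 2000); all function names are ours.

```python
import math

def H(n, x):
    """Hermite polynomials in the convention H_1 = x, H_2 = x^2 - 1, ...
    (assumed here to be the convention of eq. [eqn:mink])."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def minkowski(nu, s0, s1, s2, sig0, sig1):
    """Weakly nonlinear predictions (v0, v1, v2) of eq. [eqn:mink]."""
    g = math.exp(-nu**2 / 2.0)
    v0 = (0.5 * math.erfc(nu / math.sqrt(2.0))
          + g / (6.0 * math.sqrt(2.0 * math.pi)) * sig0 * s0 * H(2, nu))
    v1 = (sig1 / (8.0 * math.sqrt(2.0) * sig0) * g
          * (1.0 + sig0 * (s0 * H(3, nu) / 6.0 + s1 * H(1, nu) / 3.0)))
    v2 = (sig1**2 / (2.0 * (2.0 * math.pi)**1.5 * sig0**2) * g
          * (H(1, nu) + sig0 * (s0 * H(4, nu) / 6.0
                                + 2.0 * s1 * H(2, nu) / 3.0 + s2 / 3.0)))
    return v0, v1, v2
```

In the Gaussian limit $s_0=s_1=s_2=0$ this reduces to $v_0(0)=1/2$ and $v_2(0)=0$, as expected for symmetric isocontours.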
It should be noted that $s_0$, $s_1$ and $s_2$ themselves can be given as functions of the cosmological parameters for the CDM model and the smoothing scale of the top-hat filter ([@BWM]), which reveals that the skewness parameters are particularly sensitive to $\Omega_m$. Therefore, we propose that comparing the theoretical predictions (\[eqn:mink\]) to their numerical (or observed) results could place constraints on the cosmological parameters. In some previous work (e.g., [@MJ]), the area fraction has been used to label the Minkowski functionals instead of the threshold in order to cancel out the horizontal shift of those functionals that is due to the nonlinear evolution of the underlying density fluctuations on the high threshold side. However, this operation merely means that the area fraction $v_0(\nu)$ for the non-Gaussian field is transformed closer to its specific curve for the Gaussian case. For this reason, we do not employ this operation and simply use the threshold $\nu$. Results ======= Figure \[figv0v1v2\] shows both the analytical and numerical results of the area fraction $v_{0}(\nu)$ (left panels), circumference $v_{1}(\nu)$ (middle panels), and Euler characteristic $v_{2}(\nu)$ (right panels) per square arcmin as a function of the threshold $\nu$ for the convergence fields with two different smoothing scales of $\theta=1'$ (upper panels) and $\theta=8'$ (lower panels), respectively. In those plots, normalizations of the analytical predictions except $v_0(\nu)$ are determined by minimizing the $\chi^{2}$ value of the fit between the predictions (\[eqn:mink\]) and the numerical results. The mean values and error bars in each bin of $\nu$ are estimated from the ten different realizations with the area of $3\times 3$ square degrees, and the error corresponds to the cosmic variance associated with the measurements of the Minkowski functionals. 
Non-Gaussian features on the functionals for the noise-free convergence field are due to nonlinear gravitational clustering; at negative $\nu$ the field has a cutoff related to the minimum $\kappa$ resulting from empty beams, and at positive $\nu$ it has a tail due to collapsed halos. For the small smoothing scale of $\theta=1'$, there are large differences between the analytical predictions and the numerical results. This is because the highly nonlinear evolution of the density field has a large effect on the convergence field. For the large smoothing scale of $\theta=8'$, on which the convergence field is expected to be in the weakly nonlinear regime, the numerical results are broadly consistent with the analytical predictions. Note that the result for $\theta=8'$ has larger error bars than that for $\theta=1'$ because of the smaller number of statistical samples. Figure \[fig:skew\] shows the values of $s_0$, $s_1$ and $s_2$ calculated by the perturbation theory, the direct measurement of $s_0$ (top left) from the reconstructed convergence field and the estimations of $s_0$ (top right), $s_1$ (bottom left) and $s_2$ (bottom right) obtained from the $\chi^2$ fitting between the theoretical predictions (\[eqn:mink\]) of the Minkowski functionals and their simulation results. Here we have used only the simulation data in the range of $-1.5\le\nu\le 1.5$, because we expect that the convergence field in this range is still in the weakly nonlinear regime and therefore the perturbation theory predictions can be applied. In these figures, assuming that the weak lensing survey is performed over the area of $9\times 9$ square degrees, we estimated the error bars by multiplying the variance directly obtained from the ten realizations as shown in Figure \[fig:mink\] by a factor of $1/3$. We have confirmed that the measurement of the Euler characteristic, $v_2$, is also sensitive to the discreteness effect of pixel data. 
Therefore, to minimize the unresolved uncertainties, we determined the parameters $s_0$, $s_1$, and $s_2$ by the following procedure. First, we determine $s_0$ from the fitting of $v_0(\nu)$ because the non-Gaussian features of $v_0(\nu)$ in the theoretical prediction (\[eqn:mink\]) depend on $s_0$ and $\sigma_0$, where $\sigma_0$ is also computed directly from the reconstructed convergence field according to the definition $\sigma_0^2={\langle{\kappa^2}\rangle}$. Similarly, by using the already determined value of $s_0$, we determine $s_1$ from the shape of $v_1$. Finally, we use the shape of the Euler characteristic $v_2(\nu)$ to determine the $s_2$ parameter. Note that this fitting procedure leads to the large error in $s_2$. The top left panel in Figure \[fig:skew\] shows that for all smoothing scales the direct measurement of $s_0$ tends to largely overestimate the value of $s_0$ calculated by the perturbation theory. This is because the direct measurement is more sensitive to strongly nonlinear rare events in the convergence distribution such as halos of dark matter. On the other hand, for the SCDM model with $\theta=2',4'$ and $8'$, the values of $s_{0}$ obtained from our method using $v_0(\nu)$ recover fairly well the values of $s_0$ predicted by the perturbation theory. For comparison, thin lines in the top left panel of Figure \[fig:skew\] show the direct measurement of $s_{0}$ in the same range of $\nu$ ($-1.5 \le\nu\le 1.5$) as used in our method. It is still clear that the modified direct measurement of $s_0$ also fails to reproduce the perturbation theory value for all the smoothing scales. Similarly, the values of $s_1$ obtained from our method are very similar to the values of $s_1$ from the perturbation theory for the smoothing scales of $\theta{\lower.5ex\hbox{$\; \buildrel > \over \sim \;$}}2'$. 
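The first step of this procedure — fitting $s_0$ from $v_0(\nu)$ with $\sigma_0$ fixed to its directly measured value — can be sketched with SciPy on synthetic data; the numbers below, including the "true" $s_0$, are arbitrary illustrations rather than values from the simulations:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

SIGMA0 = 0.01   # rms of kappa, taken as measured directly from the map

def v0_model(nu, s0):
    """Area fraction v0(nu) of eq. [eqn:mink], with H_2(nu) = nu^2 - 1."""
    return (0.5 * erfc(nu / np.sqrt(2.0))
            + np.exp(-nu**2 / 2.0) / (6.0 * np.sqrt(2.0 * np.pi))
            * SIGMA0 * s0 * (nu**2 - 1.0))

# synthetic "measurement" over the weakly nonlinear range -1.5 <= nu <= 1.5
nu = np.linspace(-1.5, 1.5, 31)
v0_obs = v0_model(nu, 40.0)                  # pretend the true s0 is 40
(s0_fit,), _ = curve_fit(v0_model, nu, v0_obs, p0=[0.0])
# s1 would next be fitted from v1(nu) with s0 held fixed, then s2 from v2(nu)
```

Because the model is linear in $s_0$, the least-squares fit recovers the input value exactly on noise-free data.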
However, one can see that the result of $s_2$ from our method cannot reproduce the value of the perturbation theory mainly because of the fitting procedure described above, and the results of $s_0$ and $s_1$ for the smallest smoothing scale of $\theta=1'$ do not work well. For these reasons, we will not use the results of $s_0$ and $s_1$ for $\theta=1'$ and $s_2$ for the determination of $\Omega_m$. On the other hand, it appears that the errors of the skewness determinations for the $\Lambda$CDM model are larger than those for SCDM. This result comes from the fact that the skewness variation $\Delta s_0=10$ for the flat universe models around the $\Lambda$CDM model corresponds to $\mid \Delta \Omega_m \mid=0.05$, while around the SCDM model $\Delta s_{0}=0.9$ corresponds to the same $\mid \Delta \Omega_m \mid$. In fact, as shown below, the relative accuracy of the $\Omega_m$ determination is not so different between the SCDM and $\Lambda$CDM models. Table \[tab:om\] summarizes the results for the $\Omega_m$ determination with a best-fit value and $1\sigma$ error, which are obtained from the direct measurements of $s_0$ and from the estimations of $s_0$ and $s_1$ using the Minkowski functionals for the smoothing scales of $\theta=2', 4'$ and $8'$, respectively. Here we employed the currently favored flat universe models with $\Omega_m+\Omega_\Lambda=1$. The table clearly shows that our method improves the accuracy of the $\Omega_m$ determination by $\sim 20\%$ compared to the determination from the direct measure of skewness. Discussion ========== In this Letter we addressed the issue of how accurately the density parameter, $\Omega_{m}$, can be determined from the non-Gaussian signatures in the simulated weak lensing field based on the perturbation theory of structure formation instead of the empirical fitting formula. For this purpose, we have shown that the Minkowski functionals of convergence maps reconstructed from the cosmic shear field can be a useful new method. 
This is because the Minkowski functionals can effectively pick up the weakly nonlinear non-Gaussian features in the appropriate range of threshold, in which the perturbation theory can be safely applied. In fact, our numerical results have shown that the $\Omega_m$ determination using the Minkowski functionals yields a best-fit value $\sim 20\%$ more accurate with respect to the input value of $\Omega_m$ than that using the direct measurement of skewness. However, we still have to investigate possible uncertainties due to the limited number of numerical realizations used in this Letter by increasing that number; this will be our future work. In this Letter, we have not considered the effect of intrinsic ellipticities of source galaxies on our method. Nevertheless, for practical purposes, it is critical to take this effect into account, and therefore we will need the theoretical predictions of the Minkowski functionals including the noise effect. This study is now in progress and will be presented elsewhere. In practice, it will also be necessary to take into account the redshift distribution of source galaxies. However, previous works have quantitatively shown that, even if using a more realistic model for the redshift distribution of source galaxies as expressed by $n(z)\propto z^{2}\exp[-(z/z_{0})^{2.5}]$ with the mean redshift of unity, the magnitude of the cosmic shear signal is changed only by $\sim 10\%$ compared to the result of using all the sources distributed at $z_s=1$ (e.g., [@JSW]). Therefore, we expect that the change of source distribution does not significantly affect our results. Acknowledgments {#acknowledgments .unnumbered} =============== The authors would like to thank T. Hamana, K. Umetsu and J. Schmalzing for useful discussions and valuable comments. M.T. also acknowledges support from a JSPS fellowship. Y.P.J. is supported in part by the One-Hundred-Talent Program and by NKBRSF (G19990754). Bacon, D., Refregier, A. 
& Ellis, R., 2000, MNRAS, 318, 625 Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S., 1986, , 304, 15 Bernardeau, F., van Waerbeke, L., & Mellier, Y., 1997, , 322, 1 Blandford, R. D., Saust, R. D., Brainerd, A. B., & Villumsen, J. V., 1991, , 251, 600 Gott, J. R., Melott, A. L., & Dickinson, M., 1986, , 306, 341 Hamana, T., Martel, H., & Futamase, T., 2000, , 529, 56 Hamana, T., Colombi, S. T., Thion, A., Devrient, J. E. G. T., Mellier, Y., & Bernardeau, F., 2000, (astro-ph/0012200) Hui, L., 1999, , 519, 9 Jain, B., Seljak, U., & White, S., 2000, , 530, 547 Jing, Y. P., 1998, , 503, L9 Jing, Y. P., & Fang, L. Z., 1994, , 432, 438 Kaiser, N., 1992, , 388, 272 Kaiser, N., Wilson, G., & Luppino, G. A., 2000, (astro-ph/0003338) Maoli, R., et al., 2000, (astro-ph/0011251) Matsubara, T., 2000, (astro-ph/0006269) Matsubara, T., & Jain, B., 2000, (astro-ph/0009402) Miralda-Escude, J., 1991, , 380, 1 Scoccimarro, R., & Frieman, J., 1999, , 520, 35 Schmalzing, J., & Buchert, T., 1997, , 482, L1 van Waerbeke, L., et al., 2000, , 358, 30 White, M., & Hu, W., 2000, , 537, 1 Winitzki, S., & Kosowsky, A., 1997, New Astronomy, 3, 75 Wittman, D. M., Tyson, J. A., Kirkman, D., Dell’Antonio, I., & Bernstein, G., 2000, , 405, 143 \[kappamap\] \[figv0v1v2\] \[figparams\] [l|\*[5]{}[c]{}]{} Model & $\Omega_m$ & $\Omega_\Lambda$ & $\Gamma$ & $\sigma_8$ & m$_{\rm p}$ ($h^{-1}M_{\odot}$)\ SCDM & 1.0 & 0.0 & 0.5 & 0.6 & 1.7$\times$10$^{10}$\ $\Lambda$CDM & 0.3 & 0.7 & 0.21 & 1.0 & 5.0$\times$10$^{9}$\ \[tab:1\] [l|\*[2]{}[c]{}]{} Model & $\Omega_m$ from the direct measure of $s_0$ & $\Omega_m$ from Minkowski functionals\ SCDM ($\Omega_{m}=1.0$) & 0.50 $\pm$ 0.19 & 0.78 $\pm$ 0.22\ $\Lambda$CDM ($\Omega_{m}=0.3$) & 0.24 $\pm$ 0.05 & 0.31 $\pm$ 0.07\ \[tab:om\]
--- author: - 'A. G. Istrate [^1], T. M. Tauris, N. Langer' - 'J. Antoniadis' bibliography: - 'tauris\_refs.bib' date: 'Received July 25, 2014; accepted October 17, 2014' title: 'The timescale of low-mass proto-helium white dwarf evolution' --- [A large number of low-mass ($<0.20\;M_{\odot}$) helium white dwarfs (He WDs) have recently been discovered. The majority of these are orbiting another WD or a millisecond pulsar (MSP) in a close binary system; a few examples are found to show pulsations or to have a main-sequence star companion. There appear to be discrepancies between the current theoretical modelling of such low-mass He WDs and a number of key observed cases, indicating that their formation scenario yet remains to be fully understood. ]{} [Here we investigate the formation of detached proto-He WDs in close-orbit low-mass X-ray binaries (LMXBs). Our prime focus is to examine the thermal evolution and the contraction phase towards the WD cooling track and investigate how this evolution depends on the WD mass. Our calculations are then compared to the most recent observational data.]{} [Numerical calculations with a detailed stellar evolution code were used to trace the mass-transfer phase in a large number of close-orbit LMXBs with different initial values of donor star mass, neutron star mass, orbital period, and strength of magnetic braking. Subsequently, we followed the evolution of the detached low-mass proto-He WDs, including stages with residual shell hydrogen burning and vigorous flashes caused by unstable CNO burning.]{} [We find that the time from Roche-lobe detachment until the low-mass proto-He WD reaches the WD cooling track is typically $\Delta t_{\rm proto}=0.5-2\;{\rm Gyr}$, depending systematically on the WD mass and therefore on its luminosity. 
The lowest WD mass for developing shell flashes is $\sim\!0.21\;M_{\odot}$ for progenitor stars of mass $M_2 \le 1.5\;M_{\odot}$ (and $\sim\!0.18\;M_{\odot}$ for $M_2=1.6\;M_{\odot}$).]{} [The long timescale of low-mass proto-He WD evolution can explain a number of recent observations, including some MSP systems hosting He WD companions with very low surface gravities and high effective temperatures. We find no evidence that $\Delta t_{\rm proto}$ depends on the occurrence of flashes and thus question the suggested dichotomy in the thermal evolution of proto-WDs. ]{} Introduction {#sec:intro} ============ In recent years, the number of detected low-mass ($\la 0.20\;M_{\odot}$) helium white dwarfs (He WDs) has increased dramatically, mainly as a result of multiple survey campaigns such as WASP, ELM, HVS, [*Kepler*]{}, and SDSS [@psc+06; @rbk+10; @bgkk05; @bkak10; @bka+13; @sob+12; @kba+12; @hmg+13; @mbh+14]. The existence of low-mass He WDs in close binaries with a radio millisecond pulsar (MSP), however, has been known for a few decades [e.g. @vbjj05 and references therein]. Several attempts have been made to calibrate WD cooling models for such systems on the basis of the spin-down properties of the MSP [e.g. @asvp96; @hp98; @dsbh98; @asb01; @pach07]. The idea is that the characteristic spin-down age of the MSP ($\tau_{\rm PSR}\equiv P/(2\dot{P})$, where $P$ is the spin period and $\dot{P}$ is the period derivative) should be equivalent to the cooling age of the WD ($\tau_{\rm cool}$), assuming that the radio MSP is activated at the same time as the WD is formed, following an epoch of mass transfer in a low-mass X-ray binary (LMXB). Unfortunately, this method is highly problematic since $\tau_{\rm PSR}$ is generally a poor estimator of the true age. It can easily be incorrect by a factor of 10 or more [@ctk94; @llfn95; @tau12; @tlk12]. 
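As a quick numerical illustration of the characteristic age defined above, $\tau_{\rm PSR}= P/(2\dot{P})$ follows directly from the timing parameters; the pulsar numbers below are hypothetical round values, not a measured system:

```python
SEC_PER_YR = 3.156e7   # approximate number of seconds per year

def spin_down_age_yr(P, Pdot):
    """Characteristic spin-down age tau_PSR = P / (2 Pdot), in years.
    P: spin period in seconds; Pdot: period derivative (dimensionless, s/s)."""
    return P / (2.0 * Pdot) / SEC_PER_YR

# hypothetical MSP-like values: P = 5 ms, Pdot = 1.5e-20 s/s
tau = spin_down_age_yr(5.0e-3, 1.5e-20)      # a few Gyr
```

A Gyr-scale characteristic age thus emerges from millisecond periods and tiny period derivatives, which is why even modest errors in the assumed spin evolution translate into large absolute age errors.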
Determining the true age of MSPs, however, is important for studying their spin evolution and constraining the physics of their previous recycling phase [@ltk+14]. The discovery of the intriguing PSR J1012+5307 [@nll+95] sparked an intense discussion about WD cooling ages and MSP birthrates [@llfn95] given that $\tau_{\rm PSR} > 20\;\tau_{\rm cool}$. Soon thereafter, it was suggested [@asvp96; @dsbh98; @vbjj05] that He WDs with a mass $\la 0.20\;M_{\odot}$ avoid hydrogen shell flashes, whereby their relatively thick ($\sim 10^{-2}\;M_{\odot}$) hydrogen envelope remains intact, causing residual hydrogen shell burning to continue on a very long timescale. Despite significant theoretical progress [e.g. @amc13 and references therein], our understanding of the thermal evolution of (proto) He WDs remains uncertain. In particular, a number of recent observations of apparently bloated WDs call for an explanation. In this Letter, we study the formation of a large number of low-mass He WDs by modelling close-orbit LMXBs. We carefully investigate the properties of the resulting proto-WDs and follow their evolution until and beyond settling on the WD cooling track. Finally, we compare our results with observations. Numerical methods and physical assumptions {#sec:BEC} =========================================== Numerical calculations with a detailed stellar evolution code were used to trace the mass-transfer phase following the same prescription as outlined in @itl14. We investigated models with a metallicity of $Z=0.02$, a mixing-length parameter $\alpha = l/H_p = 2.0$, and a core convective overshooting parameter of $\delta_{\rm OV}=0.10$. A wide range of LMXB systems was investigated with different initial values of donor star mass ($M_2$), neutron star mass, orbital period, and the so-called $\gamma$-index of magnetic braking. 
The evolution of the low-mass (proto) He WD was calculated including chemical diffusion (mixing), hydrogen shell flashes (CNO burning), and residual shell hydrogen burning. Convective, semi-convective, and overshoot mixing processes were treated via diffusion. Thermohaline mixing was included as well, whereas gravitational settling and radiative levitation were neglected, as was stellar wind mass loss. Results {#sec:Results} ======= In Fig. \[fig:logg-T\] we have plotted a selection of our calculated evolutionary tracks, from the moment of Roche-lobe detachment until the end of our calculations, for (proto) He WDs with masses of $0.15-0.28\;M_{\odot}$. In general, our models fit the observations quite well. The few cases with discrepancies are sources with large uncertainties in the WD mass. Vigorous single or multiple cycle hydrogen shell flashes explain the large loops in the diagram, whereas mild thermal instabilities are seen e.g. for the $0.25\;M_{\odot}$ proto-WDs at $\log g\simeq 4.5$. It has been known for many years that a thermal runaway flash may develop through unstable CNO burning when a proto-WD evolves towards the cooling track [@kw67; @web75; @it86]. During these flashes the luminosity becomes very high, whereby the rate of hydrogen burning is significantly increased [e.g. @ndm04; @gau13 and references therein]. Our models with strong flashes often experience an additional episode of mass loss via Roche-lobe overflow [RLO, see also @it86; @seg00; @prp02; @ndm04]. For progenitor stars with $M_2 \le 1.5\;M_{\odot}$ we find hydrogen shell flashes in WDs with masses of $0.21 \le M_{\rm WD}/M_{\odot} \le 0.28$. Hence, the lowest mass for which flashes occur is $M_{\rm flash}=0.21\;M_{\odot}$. However, we find a lower value of $M_{\rm flash}=0.18\;M_{\odot}$ for $M_2 = 1.6\;M_{\odot}$. It has been argued [e.g. 
@vbjj05] that the value of $M_{\rm flash}$ is important since it marks a dichotomy for the subsequent WD cooling such that WDs with a mass $M_{\rm WD} < M_{\rm flash}$ remain hot on a Gyr timescale as a result of continued residual hydrogen shell burning, whereas WDs with $M_{\rm WD} > M_{\rm flash}$ cool relatively fast as a result of the shell flashes that erode the hydrogen envelope. We find that this transition is smooth, however, and that the thermal evolution timescale mainly depends on the proto-He WD luminosity and not on the occurrence or absence of flashes. In Fig. \[fig:delta-t\] we have plotted the time, $\Delta t_{\rm proto}$, it takes from Roche-lobe detachment until the star reaches its highest value of $T_{\rm eff}$. (For WDs that undergo hydrogen shell flashes we used the time until the highest $T_{\rm eff}$ is reached on the last loop in the HR–diagram.) The plot shows a very strong dependence on $M_{\rm WD}$. For very low-mass He WDs (i.e. those with $M_{\rm WD}<M_{\rm flash}$, which therefore avoid hydrogen shell flashes), $\Delta t_{\rm proto}$ may last up to 2 Gyr. This result has important consequences for their thermal evolution and contraction (see below). There is a well-known correlation between the degenerate core mass of an evolved low-mass star and its luminosity, $L$ [@rw71]. After terminating the RLO, the star moves to the far left in the HR–diagram – (initially) roughly at constant $L$ – while burning the residual hydrogen in the envelope at a rate proportional to $L$. We find that the total amount of hydrogen left in the envelope is always $\sim\!0.01\pm0.005\;M_{\odot}$, in agreement with @seg00, and is correlated in a variable manner with $M_{\rm WD}$ (especially for $M_2\ge1.5\;M_{\odot}$, explaining the peak in Fig. \[fig:delta-t\]). Therefore, the increase in $\Delta t_{\rm proto}$ seen in Fig. 
\[fig:delta-t\] for decreasing values of $M_{\rm WD}$ can simply be understood from their much lower luminosities following the Roche-lobe detachment [see also Figs. 5 and 10 in @itl14]. Based on our calculated proto-He WD models, we find (see Appendix B) $$\Delta t_{\rm proto} \simeq 400 \;\;{\rm Myr}\;\;\left(\frac{0.20\;M_{\odot}}{M_{\rm WD}}\right)^7 . \label{eq:t_proto}$$ The conclusion that $\Delta t_{\rm proto}$ can reach $\sim\!{\rm Gyr}$ was found previously for a few single models [e.g. @dsbh98; @seg00; @asb01]. Here we show, for the first time, its systematic dependence on $M_{\rm WD}$. Fig. \[fig:radius\] shows the contraction phase for three proto-He WDs. The value of $\Delta t_{\rm proto}$ increases significantly when $M_{\rm WD}$ decreases from 0.24 to $0.17\;M_{\odot}$. Hence, low-mass ($\la 0.20\;M_{\odot}$) proto-He WDs can remain bloated on a very long timescale. It is important to notice that no pronounced discontinuity in $\Delta t_{\rm proto}$ is seen at $M_{\rm flash}\simeq 0.21\;M_{\odot}$ (cf. Figs. \[fig:delta-t\], \[fig:radius\], and \[fig:t-proto-theo\]). Although the peak luminosity (and thus the rate of eroding hydrogen) is high during a flash, the star only spends a relatively short time ($\sim 10^3-10^6\;{\rm yr}$) at high $L$ when making a loop in the HR–diagram. Comparison with observational data of He WDs {#sec:obs} ============================================ In Table \[table:obs\] (Appendix A) we list observed low-mass He WDs included among the plotted data in Fig. \[fig:logg-T\]. We now discuss recent interesting sources in view of our theoretical modelling. MSPs with low-mass (proto) He WDs in tight orbits {#subsec:MSPs} ------------------------------------------------- The companion star to PSR J1816+4510 ($P_{\rm orb}=8.7\;{\rm hr}$) was recently observed by @ksr+12 [@kbv+13]. 
They obtained optical spectroscopy and found an effective temperature of $T_{\rm eff}=16\,000\pm500\;{\rm K}$, a surface gravity of $\log g=4.9\pm0.3$, and a companion mass of $M_{\rm WD}\,\sin ^3i=0.193\pm 0.012 \;M_{\odot}$, where $i$ is the orbital inclination angle of the binary. They concluded that while the spectrum is rather similar to that of a low-mass He WD, it has a much lower surface gravity (i.e. larger radius) than a WD on the cooling track. They suggested that PSR J1816+4510 may represent a redback system [cf. @ccth13 for a formation scenario] where pulsar irradiation of the hydrogen-rich, bloated companion causes evaporation of material, which can explain the observed eclipses of the radio pulses for $\sim\!$10% of the orbit. However, the very hot surface temperature of this companion (16$\,$000 K) cannot be explained by a redback scenario. Redbacks typically have illuminated dayside temperatures of only $T_{\rm eff}\simeq 6\,000\;{\rm K}$ [@bvr+13]. Here we suggest that this companion star is simply a low-mass proto-He WD. As we have demonstrated, such a star takes several $100\;{\rm Myr}$ to reach the cooling track, and our models match the observed values of $T_{\rm eff}$ and $\log g$. [Note, for $P_{\rm orb}=8.7\;{\rm hr}$ one usually expects $M_{\rm WD}\la0.18\;M_{\odot}$, cf. @itl14]. Another case is the triple system PSR J0337+1715 recently discovered by @rsa+14, which raises fundamental questions about its formation [@tv14]. One open question is the order of formation of the two WDs orbiting the MSP (with $P_{\rm orb}=1.6$ and 327 days). Spectroscopy of the inner companion by @kvk+14 verified that this is a $0.197\;M_{\odot}$ He WD, as known from pulsar timing. They measured a low surface gravity of $\log g = 5.82\pm0.05$ and noted that its very high surface temperature, $T_{\rm eff}=15\,800\pm100\;{\rm K}$, could indicate that it had just experienced a flash. This would suggest a surprisingly short lifetime for this object.
However, our modelling of $\sim\!0.20\;M_{\odot}$ He WDs shows that these stars avoid flashes. Instead we find that for such a star it takes $400-600\;{\rm Myr}$ (Fig. \[fig:delta-t\]) to reach the WD cooling track. Therefore, we conclude that it is reasonable to detect such a WD at an early, bloated stage of its evolution. NLTT 11748 and other low-mass (proto) He WD binaries {#subsec:11748} ---------------------------------------------------- A large number of low-mass proto-He WDs (also classified as sdB stars) are found in binaries with another WD. These systems probably formed via stable RLO in cataclysmic variable systems resembling our calculations, but with a $\sim\!0.7\;M_{\odot}$ CO WD accretor instead of a NS. NLTT 11748 was discovered by @sks+10, with follow-up observations made by @kmw+14. This eclipsing detached binary consists of a $0.71-0.74\;M_{\odot}$ CO WD with a very low-mass He WD and $P_{\rm orb}\simeq 5.6\;{\rm hr}$. Our evolutionary tracks for a $0.16\;M_{\odot}$ He WD are indeed consistent with their observed values of $\log g = 6.35$ and $T_{\rm eff}=7\,600~{\rm K}$ (and their estimated mass of $0.136-0.162\;M_{\odot}$). @bka+13 recently detected four binaries with low-mass WDs having $\log g\simeq 5$, in accordance with our modelling of proto-He WDs presented here (cf. Figs. \[fig:logg-T\] and  \[fig:logg-T-time\]). Bloated, hot, low-mass He WDs detected by [*Kepler*]{} ------------------------------------------------------ Four (proto) He WDs have been found with A-star companions in the combined transit and eclipse data from the [*Kepler*]{} mission [@vrb+10; @crf11; @brvc12]. Three of these He WDs (KOI–74, KIC 10657664, KOI–1224) have $M_{\rm WD}\la 0.26\;M_{\odot}$ and are also plotted in Fig. \[fig:logg-T\]. The mass estimates of these WDs are not very precise. 
However, within 1–2$\sigma$, the characteristics of these objects also seem to match our evolutionary tracks reasonably well.\ The question now is why we see all these bloated proto-WDs given that WDs spend significantly more time on the subsequent cooling tracks. This is simply a selection effect. The WDs are only seen to eclipse A-stars in the [*Kepler*]{} data as long as they are bloated proto-WDs (and thus also more luminous than ordinary WDs, which have already settled on the cooling track). Discussion and conclusions {#sec:discussions} ========================== We have demonstrated that low-mass ($\la 0.20\;M_{\odot}$), detached proto-He WDs may spend up to $\sim\!2\;{\rm Gyr}$ in the contraction (transition) phase from the Roche-lobe detachment until they reach the WD cooling track. This is important for an age determination of He WDs in general, and for recycled MSPs in particular. We expect a fair number of He WDs to be observed in this (bloated) phase, in agreement with recent observations. The duration of this contraction phase ($\Delta t_{\rm proto}$) decreases strongly with increasing mass of the proto-He WD, $M_{\rm WD}$. This can be understood from the well-known correlation between degenerate core mass and luminosity of an evolved low-mass star. Therefore, after Roche-lobe detachment, the rate at which the residual ($0.01\pm0.005\;M_{\odot}$) hydrogen in the envelope is consumed is directly proportional to the luminosity and thus $M_{\rm WD}$. The value of $\Delta t_{\rm proto}$ is not particularly sensitive to the occurrence or absence of flashes. Whether or not hydrogen shell flashes occur depends on the WD mass, its chemical composition, and the treatment of diffusion (mixing) of the chemical elements [e.g. @dsbh98; @seg00; @asb01; @ndm04; @amc13]. In general, we find flashes in our models with $0.21\le M_{\rm WD}/M_{\odot} \le 0.28$ for $M_2\le 1.5\;M_{\odot}$. 
This result is in excellent agreement with the interval found by @ndm04 for donors with solar metallicity, and also with the earlier work of @dsbh98. For $M_2=1.6\;M_{\odot}$ we find that WDs down to $\sim\!0.18\;M_{\odot}$ experience flashes. Detailed studies by @asb01 [@amc13] found hydrogen shell flashes for a much broader range of final WD mass ($0.18 < M_{\rm WD}/M_{\odot} < 0.41$). However, as pointed out by @ndm04, diffusion is an extremely fragile process, and turbulence can mitigate its effects. More importantly, @ndm04 find that both $M_2$ and the mode of angular momentum losses may also affect the range for which hydrogen shell flashes occur. Indeed, we found a lower value of $M_{\rm flash}=0.18\;M_{\odot}$ for our models with $M_2 = 1.6\;M_{\odot}$. It has previously been shown that $M_{\rm flash}$ strongly increases with lower metallicity [e.g. @seg00; @ndm04]. The work of @amc13 was calculated for a constant $M_2=1.0\;M_{\odot}$ ($Z=0.01$). We have excluded such models with $M_2< 1.1\;M_{\odot}$ ($Z=0.02$) since these progenitor stars do not detach from their LMXB and evolve onto the WD cooling track within a Hubble time. Chemical diffusion via gravitational settling and radiative levitation was not included in this work. These effects seem to slightly increase $\Delta t_{\rm proto}$ compared with models without diffusion (L. Nelson et al., in prep.). A systematic investigation of these and other effects on $\Delta t_{\rm proto}$ and $M_{\rm flash}$ will be addressed in a future work. AGI acknowledges discussions with L. Nelson, P. Marchant, R. Stancliffe and L. Grassitelli. JA acknowledges financial support from the ERC Starting Grant no. 279702 (BEACON, led by P. Freire). Observational data and time evolution in the ($T_{\rm eff},\log g$)–diagram {#appendix:A} =========================================================================== In Fig.
\[fig:logg-T-time\] we have plotted points for fixed time intervals of evolution along a number of selected tracks from Fig. \[fig:logg-T\]. The density of points along these curves combined with the (proto) WD luminosities at these epochs can be used to evaluate the probability of detecting them. For a direct comparison with data, population synthesis needs to be included to probe the distribution of WD masses. The observational data plotted in Fig. \[fig:logg-T\] were taken partly from the sources given in Table \[table:obs\] (primarily He WDs with MSP companions, main-sequence A-star companions, or He WDs that have been detected to show pulsations). Additional data for the plotted symbols can be found in @sob+12 [@hmg+13; @bka+13].

| He WD | $\log g\;({\rm cm\,s}^{-2})$ | $T_{\rm eff}$ (K) | $M_{\rm WD}\;(M_{\odot})$ | $P_{\rm orb}$ (hr) | Optical data |
|-----------------------|------------------|------------------|----------------------|--------|---------------------|
| PSR J0337+1715        | $5.82\pm0.05$    | $15\,800\pm100$  | $0.197\pm 0.0002$    | 39.12  | @kvk+14 |
| PSR J0348+0432        | $6.035\pm0.06$   | $10\,120\pm90$   | $0.172\pm0.003$      | 2.46   | @afw+13 |
| PSR J0751+1807        | $7.41\pm0.48$    | $3\,900\pm400$   | $0.138\pm 0.0006^a$  | 6.31   | @bvk06 |
| PSR J1012+5307        | $6.75\pm0.07$    | $8\,550\pm25$    | $0.16\pm0.02$        | 14.51  | @vbk96 [@cgk98] |
| PSR J1738+0333        | $6.45\pm 0.07$   | $9\,130\pm150$   | $0.182\pm 0.016$     | 8.52   | @avk+12 |
| PSR J1816+4510        | $4.9\pm 0.3$     | $16\,000\pm500$  | $\sim\!0.21\pm 0.02^b$ | 8.66 | @ksr+12 [@kbv+13] |
| PSR J1909$-$3744      | $6.77\pm 0.04$   | $9\,050\pm50$    | $0.2038\pm 0.0022$   | 36.72  | @ant13 |
| PSR J0024$-$7204U$^c$ | $\sim\!5.6$      | $\sim\!11\,000$  | $\sim\!0.17$         | 10.29  | @egh+01 |
| PSR J1911$-$5958A$^c$ | $6.44\pm 0.20$   | $10\,090\pm150$  | $0.175\pm 0.010$     | 20.64  | @bvkv06 |
| NLTT 11748            | $6.35\pm 0.03$   | $7\,600\pm120$   | $0.149\pm 0.013$     | 5.64   | @kmw+14 |
| KOI$-$74              | $6.51\pm 0.14$   | $13\,000\pm1000$ | $0.22\pm 0.03$       | 125.53 | @vrb+10 |
| KOI$-$1224            | $5.75\pm 0.06$   | $14\,700\pm1000$ | $0.22\pm 0.02$       | 64.75  | @brvc12 |
| KIC 10657664          | $5.50\pm 0.02$   | $14\,600\pm300$  | $0.26\pm 0.04^d$     | 78.55  | @crf11 |
| SDSS J184037.78       | $6.49\pm 0.06$   | $9\,390\pm140$   | $\sim\!0.17$         | 4.59   | @hmg+13$^{e,f}$ |
| SDSS J111215.82       | $6.36\pm 0.06$   | $9\,590\pm140$   | $\sim\!0.17$         | 4.14   | @hmg+13$^{e,f}$ |
| SDSS J151826.68       | $6.90\pm 0.05$   | $9\,900\pm140$   | $\sim\!0.23$         | 14.62  | @hmg+13$^{e,f}$ |
| J1614                 | $6.66\pm 0.14$   | $8\,800\pm170$   | $\sim\!0.19$         | –      | @hmg+13$^f$ |
| J2228                 | $6.03\pm 0.08$   | $7\,870\pm120$   | $\sim\!0.16$         | –      | @hmg+13$^f$ |

$^a$ D. Nice, private comm. (2014).\
$^b$ Based on @kbv+13. See also @itl14 for further comments on the component masses of this source.\
$^c$ The WD is most likely to have formed in this globular cluster binary given that the eccentricity is $e<10^{-5}$, as expected from recycling.\
$^d$ @crf11 found two possible solutions for $M_{\rm WD}$ ($0.26\;M_{\odot}$ and $0.37\;M_{\odot}$). This WD has $P_{\rm orb}=3.3\;{\rm days}$ and thus we adopt the lower value of $M_{\rm WD}$ since it agrees much better with the known $(M_{\rm WD},P_{\rm orb}$)-correlation [see @ts99 for discussions].\
$^e$ See additional references therein.\
$^f$ Pulsating He WDs, see @ca14 for recent theoretical modelling.

\[table:obs\]

The (proto) WD contraction phase {#appendix:B} ================================ Fig. \[fig:t-proto-theo\] shows the time $\Delta t_{\rm proto}$ it takes from Roche-lobe detachment until the proto-He WD reaches its highest value of $T_{\rm eff}$ and settles on the cooling track. Shown in this plot are all our calculated models for progenitor stars of 1.2 and $1.4\;M_{\odot}$ (i.e. a subset of the models plotted in Fig. \[fig:delta-t\]). The black line (Eqn.
\[eq:t\_proto\]) is an analytical result obtained from a somewhat steep core mass–luminosity function ($L\propto M_{\rm WD}^{\,7}$) combined with the assumption (for simplicity) that in all cases $0.01\;M_{\odot}$ of hydrogen is burned before reaching the highest $T_{\rm eff}$. The figure shows that this line also serves as a good approximate fit to our calculated models. For a given He WD mass, the fit to $\Delta t_{\rm proto}$ calculated from our models is accurate to within 50%. Nuclear burning during flashes {#appendix:C} ============================== To compare the burning of residual envelope hydrogen for a case with and without large thermal instabilities (hydrogen shell flashes), we have plotted tracks in the HR–diagram shown in Fig. \[fig:HR-flash\_appendix\]. The age of the stars and the total amount of hydrogen remaining in their envelopes are given in Table \[table:HR-flash\] for the points marked in the figure. These models were chosen very close to (but on each side of) $M_{\rm flash}\simeq 0.21\;M_{\odot}$, in both cases for a $1.3\;M_{\odot}$ progenitor star. As discussed in the main text, although the peak luminosity is high during a flash (and thereby the rate at which hydrogen is burned), the star only spends a relatively short time ($\sim\!10^6\;{\rm yr}$) in this epoch. (For more massive He WDs it is even less time – for example, it only lasts $\sim\!10^3{\rm yr}$ for a $0.27\;M_{\odot}$ He WD.) Therefore, the amount of additional hydrogen burned as a result of flashes is relatively small. In the example shown in Fig. \[fig:HR-flash\_appendix\] it amounts to about 12% of the total amount of hydrogen at the point of Roche-lobe detachment. Hence, the flashes may appear to reduce $\Delta t_{\rm proto}$ by $\sim\!100\;{\rm Myr}$. However, one must bear in mind that the proto-WDs that experience flashes are also the WDs with the least amount of hydrogen in their envelopes after RLO. 
For a star that experiences flashes, the residual hydrogen present in the envelope following the LMXB-phase is processed roughly as follows: 70% during the epoch from Roche-lobe detachment until reaching highest $T_{\rm eff}$, 10% during the flashes, and 20% after finally settling on the WD cooling track.

| Point | Relative age$^a$ | Total age$^b$ | Hydrogen$^c$ $(10^{-3}\;M_{\odot})$ |
|-------|------------------|---------------|-------------------------------------|
| N1    | 0                | 0             | 13.68 |
| N2    | 341 Myr          | 341 Myr       | 2.94 |
| N3    | 1900 Myr         | 2240 Myr      | 0.79 |
| N4    | 8231 Myr         | 10470 Myr     | 0.67 |
| F1    | 0                | 0             | 7.78 |
| F2    | 107 Myr          | 107 Myr       | 2.71 |
| F3    | 31 Myr           | 138 Myr       | 2.45 |
| F4    | 1536 yr          | 138 Myr       | 2.45 |
| F5    | 5.1 Myr          | 143 Myr       | 2.22 |
| F6    | 20 Myr           | 163 Myr       | 2.06 |
| F7    | 1536 yr          | 163 Myr       | 2.05 |
| F8    | 3.6 Myr          | 166 Myr       | 1.75 |
| F9    | 2089 Myr         | 2255 Myr      | 0.85 |

$^a$ Age relative to the previous point along the track.\
$^b$ Cumulated age relative to the first point on the track (since the time of Roche-lobe detachment).\
$^c$ Total amount of hydrogen remaining in the envelope of the (proto)-He WD.

\[table:HR-flash\]

[^1]: e-mail: aistrate@astro.uni-bonn.de
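As a closing numerical illustration of the Appendix B fit, the short Python sketch below evaluates Eq. (\[eq:t\_proto\]) for a few proto-He WD masses; the function name and mass grid are our own illustrative choices, and the $\sim$50% scatter of the fit should be kept in mind.

```python
# Sketch: evaluate the Appendix B fit,
#   Delta t_proto ~ 400 Myr * (0.20 Msun / M_WD)^7,
# for a handful of proto-He WD masses.

def dt_proto_myr(m_wd):
    """Contraction time (Myr) from Roche-lobe detachment to highest T_eff."""
    return 400.0 * (0.20 / m_wd) ** 7

for m_wd in (0.16, 0.17, 0.20, 0.24, 0.28):
    print(f"M_WD = {m_wd:.2f} Msun -> Delta t_proto ~ {dt_proto_myr(m_wd):6.0f} Myr")
```

For $0.16\;M_{\odot}$ this gives roughly $1.9\;{\rm Gyr}$, consistent with the $\sim\!2\;{\rm Gyr}$ quoted in the main text, while a $0.28\;M_{\odot}$ proto-WD contracts in under $40\;{\rm Myr}$.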
--- abstract: 'A class of Hamiltonians that are experimentally feasible in several contexts within the quantum optics area, and which lead to the so-called cooling by heating for fermionic as well as for bosonic systems, has been analyzed numerically. We have found a large range of parameters for which cooling by heating can be observed either for the fermionic system alone or for the combined fermionic and bosonic systems. Finally, analyzing the experimental requirements, we conclude that cooling by heating is achievable with present-day technology, especially in the context of trapped ions/cavity QED, thus contributing to the understanding of this interesting and counterintuitive effect.' author: - 'D. Z. Rossatto$^{1}$, A. R. de Almeida$^{2,3}$, T. Werlang$^{1}$, C. J. Villas-Boas$^{1}$, N. G. de Almeida$^{3}$' title: Cooling by heating in Quantum Optics Domain --- Recently, two schemes were proposed for cooling by heating [@Mari2012; @Cleuren2012], *i.e*., a given physical system in contact with a thermal reservoir has its energy decreased when one increases the temperature of the reservoir. A. Mari *et al*. [@Mari2012] introduced the idea of cooling a quantum system using incoherent thermal light. They proposed a scheme based on an optomechanical system, demonstrating that by driving the system with thermal noise the interaction with other modes can be enhanced to assist in cooling the optomechanical system. In another work, B. Cleuren, B. Rutten, and C. Van den Broeck [@Cleuren2012] proposed a scheme to cool a system powered by photons. Their system is based on a nanosized solid-state device, with no moving parts and no net electric currents, which can be refrigerated directly by using thermal photons.
In this brief report we investigate numerically a class of well-known Hamiltonians in the quantum optics domain and show that these Hamiltonians can lead to cooling by heating. Differently from both schemes above, which investigate cooling by heating in solid-state or optomechanical devices, our work brings this striking effect to the quantum optics context, where techniques to manipulate systems at the level of individual atoms and bosonic modes are routinely demonstrated, thus opening the possibility of experimentally observing this phenomenon in a very controllable scenario. *Model.* In order to find a system which allows us to cool it by raising the temperature of its reservoir, we must first note that any system which thermalizes with the environment cannot present this phenomenon (for example, a single two-level atom or a single bosonic mode interacting with a thermal reservoir). Thus, to see cooling by heating, some external force must be employed to drive the system out of equilibrium with the environment. To this end we explore the well-known generalized anti-Jaynes-Cummings model (JCM) (which will be derived below), in the coupling regime where the effective Rabi frequency (atom-boson coupling) is much smaller than the bosonic and atomic transition frequencies. To implement such Hamiltonians in the trapped-ion domain, for instance, one can use a two-level ion characterized by the transition frequency $\omega_{0}$ between the ground $\left\vert g\right\rangle $ and excited $\left\vert e\right\rangle $ states and trap frequency $\nu$ (bosonic mode) [@Wineland03]. The transition $\left\vert g\right\rangle $ $\leftrightarrow$ $\left\vert e\right\rangle $ is driven by a classical field of frequency $\omega_{L}$, wave vector $k_{L}=\omega_{L}/c$, and Rabi frequency $\Omega$ [@Wineland03].
In the Schrödinger picture, the Hamiltonian which describes such a system reads ($\hbar=1$) $H=H_{f}+H_{a}+H_{int}\left( t\right) $, with $H_{f}=\nu a^{\dagger}a$, $H_{a}=\omega_{0}\sigma_{z}/2$ and $$H_{int}\left( t\right) =\frac{\Omega}{2}\sigma_{-}e^{i\left( k_{L}\widehat{x}+\omega_{L}t\right) }+H.c.\text{,} \label{00}$$ $\sigma_{+}$($\sigma_{-}$) being the usual raising (lowering) Pauli operator for a two-level atomic system, $\sigma_{z}=\sigma_{+}\sigma_{-}-\sigma _{-}\sigma_{+}$, $a$ ($a^{\dagger}$) is the annihilation (creation) operator in the Fock space for the bosonic mode (vibrational motion of the ion),  $H.c.$ means Hermitian conjugate, $k_{L}\widehat{x}=$ $\eta_{L}\left( a+a^{\dagger}\right) $, with $\eta_{L}=$ $k_{L}/\sqrt{2m\nu}$ being the Lamb-Dicke parameter [@Wineland03]. Working in the limit $\eta_{L}\ll1$ and applying the rotating wave approximation, the Hamiltonian $H$ in the interaction picture can be written as [@Wineland03] $$H_{I}=g_{k}(\sigma_{-}a^{k}+\sigma_{+}a^{\dagger k}),\text{ }k=0,1,2. \label{1}$$ By adjusting the frequency $\omega_{L}$ on resonance with the two-level ion we can have the carrier interaction ($k=0$, $g_{0}=\Omega/2$); adjusting $\delta=\omega_{L}-\omega_{0}=k\nu$, we can also have the first ($k=1$, $g_{1}=i\Omega\eta_{L}/2$) and second ($k=2$, $g_{2}=-\Omega\eta_{L}^{2}/4$) blue sideband interactions [@Wineland03]. The quantum of vibrational energy of the center of mass of the ion is then described by $a^{\dagger}a$. 
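As a concrete illustration of Eq. (\[1\]), the NumPy sketch below builds the carrier and sideband Hamiltonians in a truncated Fock basis. The truncation $N$, the basis ordering $\left\vert g\right\rangle =(1,0)^{T}$, and all variable names are our own illustrative choices.

```python
import numpy as np

N = 12                                     # Fock-basis truncation for the mode
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator a
sm = np.array([[0.0, 1.0], [0.0, 0.0]])    # sigma_- = |g><e|, basis |g> = (1,0)^T

def H_I(g, k):
    """Eq. (1): H_I = g (sigma_- a^k + sigma_+ a^dagger^k), for k = 0, 1, 2."""
    term = np.kron(sm, np.linalg.matrix_power(a, k))
    return g * (term + term.conj().T)

# First blue sideband (k = 1): |e, n> couples to |g, n-1> with strength g*sqrt(n)
H1 = H_I(0.5, 1)
n = 3
print(H1[0 * N + (n - 1), 1 * N + n])      # 0.5 * sqrt(3) ≈ 0.866
```

The same construction gives the carrier ($k=0$) and the two-phonon ($k=2$) interactions by changing $k$; Hermiticity of $H_{I}$ is guaranteed by adding the conjugate transpose.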
In the quantum optics area, the dynamics of this model under the Born and Markov approximations (weak system-reservoir coupling) is provided by the master equation formalism [@Scully], which for the Hamiltonian (\[1\]) reads $$\begin{aligned} \frac{\partial\rho}{\partial t} & =-i\left[ H_{I},\rho\right] +\kappa\left( n_{th}+1\right) \mathcal{D}[a]\rho+\kappa n_{th}\mathcal{D}[a^{\dag}]\rho\nonumber\\ & +\gamma\left( m_{th}+1\right) \mathcal{D}[\sigma_{-}]\rho+\gamma m_{th}\mathcal{D}[\sigma_{+}]\rho\label{2}\end{aligned}$$ where $\kappa$ and $\gamma$ are the spontaneous emission rates for the vibrational motion and internal levels of the ion, respectively, $n_{th}$ ($m_{th}$) is the mean number of phonons (photons) of the reservoir coupled to the vibrational mode (internal levels of the ion), and $\mathcal{D}[A]\rho\equiv2A\rho A^{\dagger}-A^{\dagger}A\rho-\rho A^{\dagger}A$ [@Lindblad]. Below we proceed to solve this master equation numerically (or analytically, for the carrier interaction) to obtain the steady state of the system (at $t\rightarrow\infty$ or $\partial\rho/\partial t=0$), in order to be able to calculate the corresponding thermodynamical properties. To this aim, we first note that the master equation gives rise to an infinite set of coupled differential equations for the elements of the density matrix of the whole system. To solve it numerically, we must therefore truncate the Fock basis of the bosonic field somewhere. This truncation depends on the mean number of excitations in the vibrational mode, *i.e.*, the matrix elements corresponding to highly excited Fock states (compared to the mean number of excitations of the vibrational mode) must be virtually zero. We then integrate numerically the system of coupled differential equations for the elements of the density matrix of the system following the method presented in [@Tan99].
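A minimal sketch of this numerical procedure is given below. Instead of time integration, it obtains the steady state directly as the null vector of the Liouvillian superoperator (with the density matrix vectorized by column stacking); the truncation $N$ and all parameter values are illustrative, not tied to the figures of this report.

```python
import numpy as np

def liouvillian(H, collapse):
    """Superoperator for drho/dt = -i[H,rho] + sum_j c_j D[A_j]rho, with
    D[A]rho = 2 A rho A+ - A+A rho - rho A+A (the convention of Eq. (2)),
    acting on the column-stacked density matrix."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for c, A in collapse:
        AdA = A.conj().T @ A
        L += c * (2.0 * np.kron(A.conj(), A)
                  - np.kron(I, AdA) - np.kron(AdA.T, I))
    return L

def steady_state(L, d):
    """Eigenvector of L with eigenvalue closest to zero, trace-normalized."""
    w, v = np.linalg.eig(L)
    rho = v[:, np.argmin(np.abs(w))].reshape((d, d), order='F')
    return rho / np.trace(rho)

N = 15                                         # Fock truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])        # sigma_-, basis |g> = (1,0)^T
Ia, If = np.eye(2), np.eye(N)

def collapse_ops(kappa, gamma, n_th, m_th):
    return [(kappa * (n_th + 1), np.kron(Ia, a)),
            (kappa * n_th,       np.kron(Ia, a.conj().T)),
            (gamma * (m_th + 1), np.kron(sm, If)),
            (gamma * m_th,       np.kron(sm.conj().T, If))]

# First blue sideband, k = 1 (illustrative parameters, in units of gamma)
g1, kappa, gamma, n_th, m_th = 1.0, 0.1, 1.0, 0.2, 0.2
term = np.kron(sm, a)
H = g1 * (term + term.conj().T)
rho_ss = steady_state(liouvillian(H, collapse_ops(kappa, gamma, n_th, m_th)), 2 * N)
n_mean = np.trace(np.kron(Ia, a.conj().T @ a) @ rho_ss).real
print(f"steady-state <a+a> = {n_mean:.4f} (reservoir n_th = {n_th})")
```

With the coupling switched off ($g_{1}=0$) the routine reproduces the thermal state of each subsystem, which is a convenient consistency check before scanning the reservoir occupations to look for a negative response.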
As we are working with two distinct reservoirs, there will be different response functions: one for the atom and another for the vibrational mode [@Zia02]. Working with the Hamiltonian (\[1\]) we can distinguish three situations where cooling by heating (i.e., by raising the temperature of the reservoir) can be observed: *(i)* looking at the variation of the internal energy of the ion only; *(ii)* looking at the variation of the vibrational energy of the ion; *(iii)* looking at the variation of both internal and vibrational energies of the ion. *Variation of the internal energy of the ion only*. Firstly we assume a carrier interaction ($k=0$) in Eq.(\[1\]), which corresponds to a single two-level ion driven by a classical field. Since the dynamics of the system does not involve the vibrational mode, we can fix $\kappa=0$ without loss of generality. From Eq.(\[2\]), we can easily obtain the average internal energy $E_{a}=\left\langle H\right\rangle =\left\langle H_{a}+H_{int}\right\rangle =\left\langle H_{a}\right\rangle $ of the ion in the steady state as a function of the mean number of thermal photons (temperature of the reservoir). Then we can calculate the response function ($C_{a}$) of the internal energy with respect to the temperature of its reservoir ($T$), which we define as $$C_{a}=\frac{\partial E_{a}}{\partial T}. \label{RF}$$ This equation resembles the usual definition of specific heat. However, note that the temperature appearing in the equation above is that of the reservoir, which is different from the effective temperature of the system, since the system is not in thermal equilibrium with its environment.
With the steady state solution for the internal energy $E_{a}$, we can analytically derive the response function $$\begin{aligned} C_{a} & =-2k_{B}m_{th}\left( m_{th}+1\right) \nonumber\\ & \times\left[ \ln\left( \frac{m_{th}+1}{m_{th}}\right) \right] ^{2}\frac{\left[ 2\left( g_{0}/\gamma\right) ^{2}-\left( 2m_{th}+1\right) ^{2}\right] }{\left[ 2\left( g_{0}/\gamma\right) ^{2}+\left( 2m_{th}+1\right) ^{2}\right] ^{2}}, \label{cv1}\end{aligned}$$ $k_{B}$ being the Boltzmann constant. Clearly, we see from Eq.(\[cv1\]) that $C_{a}\leq0$ for $$m_{th}\leq\frac{1}{\sqrt{2}}\frac{g_{0}}{\gamma}-\frac{1}{2}. \label{rangenegative}$$ Note that $C_{a}\rightarrow0$ when $m_{th}\rightarrow0$ (similar to what occurs for the third law of thermodynamics) or $m_{th}\rightarrow\infty$ (system saturation). For a sample of $N$ non-interacting atoms [@Natoms], the response function is $C_{N}=NC_{a}$, and thus the negative response can be observed even for an ensemble of two-level atoms. *Variation of the vibrational and internal energy of the ion*. Considering the ion coupled to the vibrational mode, we note that using the average energy of an individual subsystem instead of the total energy (the sum of the subsystem and interaction average energies) does not change our conclusions, since there is a region where the response function, Eq.(\[RF\]), becomes negative for both systems simultaneously, and $Tr\left( H_{I}\rho\right) =0$ in all cases studied here, $H_{I}$ being the interaction Hamiltonian (\[1\]). To see that this is so, in Fig. 1(a) ($k=1$) and Fig. 2(a) ($k=2$) we plot the stationary average energy for both the bosonic mode ($\left\langle H_{f}\right\rangle /\nu$) and the atomic system ($\left\langle H_{a}\right\rangle /\omega_{0}$). In all simulations we assumed zero atomic energy in the lower state $\left\vert g\right\rangle $. To reliably calculate the region where cooling by heating can occur, we have limited our numerical analysis to the range $0\leq g_{k},\kappa\leq2\gamma$ ($k=1,2$).
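Returning briefly to the carrier-interaction result, Eq. (\[cv1\]) and the threshold of Eq. (\[rangenegative\]) are simple to evaluate numerically; the sketch below (with the illustrative choice $g_{0}/\gamma=2$ and $k_{B}=1$, both our own) makes the sign change of $C_{a}$ explicit.

```python
import numpy as np

def C_a(m_th, g0_over_gamma, k_B=1.0):
    """Response function of Eq. (cv1), in units of k_B."""
    r2 = 2.0 * g0_over_gamma**2            # 2 (g0 / gamma)^2
    s = (2.0 * m_th + 1.0)**2              # (2 m_th + 1)^2
    log_sq = np.log((m_th + 1.0) / m_th)**2
    return -2.0 * k_B * m_th * (m_th + 1.0) * log_sq * (r2 - s) / (r2 + s)**2

g0 = 2.0                                   # g0 / gamma (illustrative value)
m_star = g0 / np.sqrt(2.0) - 0.5           # threshold of Eq. (rangenegative)
print(f"C_a < 0 for m_th < {m_star:.3f}")
print(f"C_a(0.5) = {C_a(0.5, g0):+.4f},  C_a(1.5) = {C_a(1.5, g0):+.4f}")
```

Below the threshold $m_{th}^{*}=g_{0}/(\sqrt{2}\gamma)-1/2$ the response is negative (cooling by heating); above it, the response is positive, as stated in the text.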
Also, in all figures, the average atomic energy was multiplied by a factor of ten for the sake of clarity. Assuming the two reservoirs at a common temperature ($m_{th}=n_{th}$), first we set $\kappa=0.1\gamma$ and $g_{1}=1.0\gamma$ in Fig. 1(a), and $\kappa=0.1\gamma$ and $g_{2}=0.2\gamma$ in Fig. 2(a). Remarkably, note from those figures that there is a region where the response of both the atomic and the bosonic systems to the rising reservoir temperature is negative (falling energy), thus supporting our assertion that cooling by heating can be observed even if we adopt a definition, different from Eq.(\[RF\]), taking into account the total energy. Also, note that the final temperature of the bosonic system differs from that of its reservoir $\left( \left\langle a^{\dag}a\right\rangle \neq n_{th}\right) $, thus indicating the existence of non-equilibrium steady states [@Lynden-Bell99; @Zia02]. It is worth mentioning that it is not surprising to have a *non-equilibrium* steady state, since the system is driven by an external force (the external laser). In Fig. 1(b) (Fig. 2(b)) we plot the response function versus $n_{th}$ for the bosonic and the atomic systems for the model $k=1$ ($k=2$). Both figures show that cooling by heating for the bosonic system can occur in a wider region than that for the atomic system. Let us now explore the fact that the reservoirs for the atomic and bosonic systems can have different mean numbers of thermal photons. This is particularly relevant for trapped-ion experiments since the transition frequency $\omega_{0}$ (of the order of a few GHz) of the electronic levels involved is usually much larger than the frequency $\nu$ of the ionic motion ($\sim$MHz) [@Meekhof96], resulting in different mean numbers of thermal photons for the electronic levels and the ionic motion at a given temperature. In Fig.
3(a) we show the behavior of the response function for the atomic system when the mean photon number of the bosonic reservoir is fixed at $n_{th}=0.0$, $1.0$, $2.0$, for the model $k=1$, using the same parameters as those in Fig. 1. Note that cooling by heating can occur when $m_{th}\lesssim1$ irrespective of the fixed $n_{th}$. Fig. 3(b) does the same for the model $k=2$, with the parameters used in Fig. 2. Cooling by heating can now occur when $m_{th}\lesssim0.5$. It is noteworthy that when we fix the average number of thermal photons for the atomic reservoir and investigate the behavior of the response function for the bosonic system as a function of temperature, we do not see regions where cooling by heating can occur. Besides, for the range of parameters used in our numerical simulations, we have found regions where cooling by heating can occur in the atomic system for some values of the effective Rabi frequency $g_{k}$, irrespective of the ratio $\kappa/\gamma$. On the other hand, for the system under study, to observe cooling by heating in the bosonic system for some effective Rabi frequency $g_{k}$, not only must the reservoirs have the same average photon number ($m_{th}=n_{th}$) but the atomic decay must also be stronger than the bosonic mode decay; our numerical simulations point to the ratio $\kappa/\gamma\lesssim0.3$ for the model $k=1$ and $\kappa/\gamma\lesssim0.4$ for the model $k=2$. At first sight, one might think that we should have the same response for the bosonic and the fermionic systems, since the equation of motion (3) is completely symmetric in the fermionic and bosonic operators. However, the nature of those operators is completely different, i.e., the fermionic operators are restricted to a two-dimensional Hilbert space while the bosonic ones act in an infinite-dimensional Hilbert space.
So, the physical difference lies in the number of accessible states of each subsystem: the fermionic subsystem has only two accessible states, while the bosonic subsystem can access infinitely many states. *Experimental proposal*: We now comment on the parameters appearing in the effective Hamiltonians discussed above and on how cooling by heating could be observed with present-day technology. In the trapped-ion domain, for instance, Hamiltonians Eq.(\[1\]) were obtained and used to engineer nonclassical motional states [@Wineland03]. For the anti-JCM ($k=1$) and the so-called two-phonon anti-JCM ($k=2$), the effective couplings are, respectively, $\left\vert g_{1}\right\vert =\left\vert \eta_{L}\Omega \right\vert /2$ and $\left\vert g_{2}\right\vert =\left\vert \eta_{L}^{2}\Omega\right\vert /4$, where $\eta_{L}$ is the Lamb-Dicke parameter and $\Omega$ is the Rabi frequency of the classical field driving the two-level ion, which can easily be adjusted [@Wineland03]. For the hyperfine ground states of a single $^{9}Be^{+}$ ion, one can adjust $\eta_{L}=0.2$, thus lying in the Lamb-Dicke regime [@Meekhof96]. We note that typical starting values of the average number of thermal phonons in the mode of interest are between $0$ and $2$ and that the decay rate of the vibrational motion of the ion can be much smaller than $g_{k}$ [@Turchette00]. Thus, a trapped ion seems to be an appropriate physical system in which to observe cooling by heating. The existence of cooling by heating in this context provides an interesting and counterintuitive application: the ion motion can be reduced as the reservoir temperature increases. The anti-JCM can also be engineered in a cavity QED setup [@marcelo]. As the usual atom-field coupling in the microwave domain is $\lambda\sim10^{5}s^{-1}$, the effective coupling for the anti-JCM can be $g_{1}\sim10^{3}s^{-1}$ [@marcelo].
The cavity decay rate $\kappa$ ranges from $10s^{-1}$ to $10^{2}s^{-1}$ [@Gleyzes07] and, therefore, the condition $0\leq g_{k}/\kappa\lesssim10$ is easily attained. Taking realistic temperatures into account, the effective mean occupation number at microwave frequencies is $n_{th}\sim0.7$, according to cavity QED experiments [@harocheRev2001]. This mean number of thermal photons can be reduced down to $0.1$ by sending atoms resonant with the cavity mode to absorb the thermal field [@harocheRev2001]. *Conclusion*. We have studied a class of Hamiltonians well known in the quantum optics domain and shown that cooling by heating can occur for a large range of parameters, including some achievable with present-day techniques. We numerically solved the master equation and calculated the response function of the internal energy for two systems interacting with a thermal bath as the corresponding reservoir temperature is varied: $a)$ a single two-level atom (or even a sample of $N$ two-level atoms) driven by a classical field, and $b)$ a bosonic mode interacting with a two-level ion/atom. We hope this work will trigger a search for experimental verification of cooling by heating in the quantum optics area, thus contributing to the understanding of this interesting and counter-intuitive effect. The authors acknowledge financial support from the Brazilian agency CNPq and the Brazilian National Institute of Science and Technology for Quantum Information (INCT-IQ). C. J. V. B. also acknowledges support from FAPESP (Proc. 2012/00176-9). [99]{} A. Mari and J. Eisert, Phys. Rev. Lett. **108**, 120602 (2012). B. Cleuren, B. Rutten, and C. Van den Broeck, Phys. Rev. Lett. **108**, 120603 (2012). D. Leibfried *et al.*, Rev. Mod. Phys. **75**, 281 (2003). H.-P. Breuer and F. Petruccione, *The Theory of Open Quantum Systems* (Oxford University Press, Oxford, 2007). G. Lindblad, Commun. Math. Phys. **48**, 119 (1976). S. M. Tan, J. Opt. B: Quantum Semiclass. Opt. **1**, 424 (1999). R.
K. P. Zia, E. L. Praestgaard, and O. G. Mouritsen, American Journal of Physics **70**, 384 (2002). The Hamiltonian for $N$ identical non-interacting atoms is obtained by replacing $\sigma_{+}\rightarrow S_{+}=\sum_{i=1}^{N}\sigma_{+}^{i}$ in Eq. (\[1\]) for $k=0$ $\left( S_{-}=S_{+}^{\dagger}\right)$. Here $\sigma_{+}^{i}$ is the Pauli operator acting on the $i$-th atom. The calculation of the response function for $N$ atoms is almost identical to the one performed for a single atom. D. Lynden-Bell, Physica A **263**, 293 (1999). D. M. Meekhof *et al.*, Phys. Rev. Lett. **76**, 1796 (1996). Q. A. Turchette *et al.*, Phys. Rev. A **61**, 063418 (2000). M. França Santos, E. Solano, and R. L. de Matos Filho, Phys. Rev. Lett. **87**, 093601 (2001). S. Gleyzes *et al.*, Nature **446**, 297 (2007). J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. **73**, 565 (2001). **Figure Caption** Fig. 1: (color online) (a) Average energy of the atomic ($\left\langle H_{a}\right\rangle /\omega_{0}$ - dashed line (x10)) and bosonic ($\left\langle H_{f}\right\rangle /\nu$ - solid line) systems versus a common mean number of thermal photons $n_{th}=m_{th}$, for the model $k=1$. (b) Response function versus mean number of thermal photons. Cooling by heating can occur for $m_{th}=n_{th}\lesssim1.4$ for the bosonic system and $m_{th}=n_{th}\lesssim0.9$ for the atomic system. The parameters used are $\kappa=0.1\gamma$ and $g_{1}=1.0\gamma$. Fig. 2: (color online) (a) Average energy of the atomic (dashed line (x10)) and bosonic (solid line) systems versus a common mean number of thermal photons $n_{th}=m_{th}$, for the model $k=2$. The mean value of the interaction Hamiltonian $H_{I}$ (not shown in this figure) is always zero. (b) Response function versus mean number of thermal photons. Cooling by heating can occur for $m_{th}=n_{th}\lesssim1.2$ for the bosonic system and $m_{th}=n_{th}\lesssim0.4$ for the atomic system. The parameters used are $\kappa=0.1\gamma$ and $g_{2}=0.2\gamma$. Fig.
3: (color online) Response function of the atomic system versus the average photon number $m_{th}$ of its reservoir, with the average photon number of the bosonic reservoir fixed at $n_{th}=0.0$ (solid line), $n_{th}=1.0$ (dashed line), and $n_{th}=2.0$ (dotted line), for the model (a) $k=1$, using $\kappa=0.1\gamma$ and $g_{1}=1.0\gamma$; and (b) $k=2$, using $\kappa=0.1\gamma$ and $g_{2}=0.2\gamma$.
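For readers who wish to reproduce the qualitative behavior shown in Figs. 1-3, the steady-state computation can be sketched as follows. This is a minimal sketch, not the authors' code: we assume the interaction-picture anti-JCM Hamiltonian $H=g\left(\sigma_{+}a^{\dagger}+\sigma_{-}a\right)$ for $k=1$, with standard thermal Lindblad dissipators for the two reservoirs and a truncated Fock space of dimension $N$.

```python
import numpy as np

def steady_state(g, kappa, gamma, n_th, m_th, N=12):
    """Steady state of a two-level atom coupled to a bosonic mode via the
    anti-JCM interaction H = g (sigma_+ a^dag + sigma_- a), with the atom
    damped by a thermal reservoir (rate gamma, mean excitation m_th) and
    the mode damped by a thermal reservoir (rate kappa, mean photon n_th).
    Returns (<a^dag a>, <sigma_+ sigma_->)."""
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # truncated annihilation operator
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])      # sigma_-
    A = np.kron(np.eye(2), a)                    # promote to atom (x) mode space
    Sm = np.kron(sm, np.eye(N))
    Ad, Sp = A.conj().T, Sm.conj().T
    H = g * (Sp @ Ad + Sm @ A)                   # anti-JCM interaction
    collapse = [np.sqrt(gamma * (m_th + 1)) * Sm, np.sqrt(gamma * m_th) * Sp,
                np.sqrt(kappa * (n_th + 1)) * A, np.sqrt(kappa * n_th) * Ad]
    d = 2 * N
    Id = np.eye(d)
    # Liouvillian superoperator, column-stacking convention vec(AXB) = (B^T kron A) vec(X)
    L = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
    for C in collapse:
        CdC = C.conj().T @ C
        L += np.kron(C.conj(), C) - 0.5 * np.kron(Id, CdC) - 0.5 * np.kron(CdC.T, Id)
    # the steady state spans the null space of L; normalize it to unit trace
    _, _, Vh = np.linalg.svd(L)
    rho = Vh[-1].conj().reshape(d, d, order='F')
    rho /= np.trace(rho)
    return np.real(np.trace(Ad @ A @ rho)), np.real(np.trace(Sp @ Sm @ rho))

# example point from the k=1 discussion: kappa = 0.1*gamma, g_1 = 1.0*gamma
print(steady_state(g=1.0, kappa=0.1, gamma=1.0, n_th=0.5, m_th=0.5))
```

Scanning $m_{th}$ (alone, or together with $n_{th}$) and numerically differentiating the steady-state energies then yields the response functions discussed in the text.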
--- abstract: 'We consider the problem of contextual bandits with stochastic experts, which is a variation of the traditional stochastic contextual bandit with experts problem. In our problem setting, we assume access to a class of [*stochastic experts*]{}, where each expert is a conditional distribution over the arms given a context. We propose upper-confidence bound (UCB) algorithms for this problem, which employ two different importance sampling based estimators for the mean reward for each expert. Both these estimators leverage *information leakage* among the experts, thus using samples collected under all the experts to estimate the mean reward of any given expert. This leads to *instance dependent* regret bounds of $\mathcal{O}\left(\lambda(\pmb{\mu})\mathcal{M}\log T/\Delta \right)$, where $\lambda(\pmb{\mu})$ is a term that depends on the mean rewards of the experts, $\Delta$ is the smallest gap between the mean reward of the optimal expert and the rest, and $\mathcal{M}$ quantifies the information leakage among the experts. We show that under some assumptions $\lambda(\pmb{\mu})$ is typically $\mathcal{O}(\log N)$. We implement our algorithm with stochastic experts generated from cost-sensitive classification oracles and show superior empirical performance on real-world datasets, when compared to other state of the art contextual bandit algorithms.' author: - Rajat Sen - Karthikeyan Shanmugam - Sanjay Shakkottai bibliography: - 'experts.bib' title: Contextual Bandits with Stochastic Experts ---
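The "information leakage" among stochastic experts can be made concrete: because each expert is a known conditional distribution over the arms, samples collected under any one expert can be reweighted to estimate the mean reward of every other expert. A minimal importance-sampling sketch (our illustration with a hypothetical two-arm setup; the paper's two estimators refine this basic idea):

```python
import random

def is_estimate(samples, pi_target, pi_behavior):
    """Importance-sampling estimate of the target expert's mean reward
    from (arm, reward) samples collected under the behavior expert."""
    return sum(r * pi_target[a] / pi_behavior[a] for a, r in samples) / len(samples)

random.seed(0)
# two arms; arm 0 pays 1, arm 1 pays 0 (deterministic rewards, for illustration)
pi_A = [0.5, 0.5]     # behavior expert
pi_B = [0.2, 0.8]     # target expert, true mean reward = 0.2
samples = []
for _ in range(200_000):
    a = 0 if random.random() < pi_A[0] else 1
    samples.append((a, 1.0 if a == 0 else 0.0))

print(is_estimate(samples, pi_B, pi_A))   # close to the true value 0.2
```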
--- abstract: | We derive a new formulation of the compressible Euler equations exhibiting remarkable structures, including surprisingly good null structures. The new formulation comprises covariant wave equations for the Cartesian components of the velocity and the logarithmic density coupled to a transport equation for the specific vorticity, defined to be vorticity divided by density. The equations allow one to use the full power of the geometric vectorfield method in treating the “wave part” of the system. A crucial feature of the new formulation is that all derivative-quadratic inhomogeneous terms verify the strong null condition. The latter is a nonlinear condition signifying the complete absence of nonlinear interactions involving more than one differentiation in a direction transversal to the acoustic characteristics. Moreover, the same good structures are found in the equations verified by the Euclidean divergence and curl of the specific vorticity. This is important because one needs to combine estimates for the divergence and curl with elliptic estimates to obtain sufficient regularity for the specific vorticity, whose derivatives appear as inhomogeneous terms in the wave equations. The above structures collectively open the door for our forthcoming results: exhibiting a stable regime in which initially smooth solutions develop a shock singularity (in particular the first Cartesian coordinate partial derivatives of the velocity and density blow up) while, relative to a system of geometric coordinates adapted to the acoustic characteristics, the solution (including the vorticity) remains many times differentiable, all the way up to the shock. The good null structures, which are often associated with global solutions, are in fact key to proving that the shock singularity forms. Our secondary goal in this article is to overview the central role that the structures play in the proof.
**Keywords**: characteristics; eikonal equation; eikonal function; null condition; null hypersurface; null structure; singularity formation; vectorfield method; vorticity; wave breaking **Mathematics Subject Classification (2010)** Primary: 35L67 - Secondary: 35L05, 35Q31, 76N10 author: - 'Jonathan Luk$^{*}$ and Jared Speck$^{** \dagger}$' bibliography: - 'JBib.bib' title: The Hidden Null Structure of the Compressible Euler Equations and a Prelude to Applications --- [^1] [^2] [^3] Introduction {#S:INTRO} ============ In this article, we study the compressible Euler equations for a perfect fluid in three spatial dimensions under a barotropic *equation of state*, that is, when the pressure $p$ is a function of the density $\rho$: $$\begin{aligned} \label{E:BAROTROPICEOS} p = p(\rho).\end{aligned}$$ In this setting, the compressible Euler equations are evolution equations for the velocity $v:\mathbb{R}^{1+3} \rightarrow \mathbb{R}^3$ and the density $\rho:\mathbb{R}^{1+3} \rightarrow [0,\infty)$. Our main result in this paper is a reformulation of the equations as a coupled system of (quasilinear) wave and transport equations with inhomogeneous terms exhibiting remarkable structures. As we will show in [@jLjS2016b; @jLjS2017], this allows for a precise mathematical understanding of the formation of shock singularities in the presence of vorticity, starting from regular initial conditions. Basic background ---------------- ### Definitions Before stating the equations, we first provide some definitions. We use the following notation[^4] for the Euclidean divergence and curl of a $\Sigma_t-$tangent vectorfield $V$, where $\Sigma_t$ denotes the hypersurface of constant Cartesian time $t$: $$\begin{aligned} \label{E:FLATDIVANDCURL} {\mbox{\upshape div}\mkern 1mu}V & := \partial_a V^a, \qquad ({\mbox{\upshape curl}\mkern 1mu}V)^i := \epsilon_{iab} \partial_a V^b. 
\end{aligned}$$ In \eqref{E:FLATDIVANDCURL}, $\epsilon_{ijk}$ is the fully antisymmetric symbol normalized by $$\begin{aligned} \label{E:EPSILONNORMALIZATION} \epsilon_{123} = 1.\end{aligned}$$ The vorticity $\omega: \mathbb{R}^{1+3} \rightarrow \mathbb{R}^3$ is the vectorfield $$\begin{aligned} \label{E:VORTICITYDEFINITION} \omega^i & := ({\mbox{\upshape curl}\mkern 1mu}v)^i.\end{aligned}$$ Rather than formulating the equations in terms of the density and the vorticity, we find it convenient to use the *logarithmic density* ${\uprho}$ and the *specific vorticity* ${\upomega}$. To define these quantities, we first fix a constant background density $\bar{\rho}$ such that $$\begin{aligned} \label{E:BACKGROUNDDENSITY} \bar{\rho} > 0.\end{aligned}$$ In applications, one may choose any convenient value[^5] of $\bar{\rho}$. \[D:MODIFIEDVARIABLES\] $$\begin{aligned} \label{E:MODIFIEDVARIABLES} {\uprho}& := \ln \left(\frac{\rho}{\bar{\rho}} \right), \qquad {\upomega}:= \frac{\omega}{(\rho/\bar{\rho})} = \frac{\omega}{\exp {\uprho}}.\end{aligned}$$ We assume throughout that[^6] $$\begin{aligned} \label{E:DENSITYPOSITIVE} \rho > 0.\end{aligned}$$ In particular, the variable ${\uprho}$ (see \eqref{E:MODIFIEDVARIABLES}) is finite assuming \eqref{E:BACKGROUNDDENSITY} and \eqref{E:DENSITYPOSITIVE}. ### A standard first-order formulation of the compressible Euler equations We now state a standard formulation of the compressible Euler equations; see, for example, [@dCsM2014] for a discussion of the physical origin of the equations. Specifically, relative to Cartesian coordinates, the compressible Euler equations can be expressed[^7] as follows: $$\begin{aligned} \label{E:TRANSPORTDENSRENORMALIZEDRELATIVETORECTANGULAR} {B}{\uprho}& = - {\mbox{\upshape div}\mkern 1mu}v, \\ {B}v^i & = - {c_s}^2 \partial_i {\uprho}= - {c_s}^2 \delta^{ia} \partial_a {\uprho}.
\label{E:TRANSPORTVELOCITYRELATIVETORECTANGULAR}\end{aligned}$$ Above, $\delta^{ia}$ is the standard Kronecker delta, $$\begin{aligned} \label{E:MATERIALVECTORVIELDRELATIVETORECTANGULAR} {B}& := \partial_t + v^a \partial_a\end{aligned}$$ is the *material derivative vectorfield*, and $$\begin{aligned} \label{E:SOUNDSPEED} {c_s}& := \sqrt{\frac{dp}{d \rho}}\end{aligned}$$ is a fundamental quantity known as the *speed of sound*. From now on, we view $$\begin{aligned} \label{E:SPEEDOFSOUNDISAFUNCTIONOFRENORMALIZEDDENSITY} {c_s}& = {c_s}({\uprho}),\end{aligned}$$ and, for future use, we set $$\begin{aligned} \label{E:DEFINITIONOFSPEEDPRIME} {c_s}' = {c_s}'({\uprho}) & := \frac{d}{d {\uprho}} {c_s}.\end{aligned}$$ Summary of the main results and preliminary discussion {#SS:PRELIMINARYDISCUSSION} ------------------------------------------------------ Note that neither the vorticity $\omega$ nor the specific vorticity ${\upomega}$ appears in the system \eqref{E:TRANSPORTDENSRENORMALIZEDRELATIVETORECTANGULAR}-\eqref{E:TRANSPORTVELOCITYRELATIVETORECTANGULAR}. However, ${\upomega}$ plays a central role in the main results of the present article, which we now summarize. We refer the reader to Theorems \[T:GEOMETRICWAVETRANSPORTSYSTEM\] and \[T:STRONGNULL\] for the precise statements. **Summary of the main results.** The compressible Euler equations can be reformulated as a system of covariant wave equations for the Cartesian components $\lbrace v^i \rbrace_{i=1,2,3}$ of the velocity and the logarithmic density ${\uprho}$ coupled to a transport equation for the specific vorticity ${\upomega}$, a transport equation for ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$, and an identity for ${\mbox{\upshape div}\mkern 1mu}{\upomega}$. Moreover, the inhomogeneous terms exhibit remarkable structures, *including good null structures that can be viewed as extensions of the standard null forms adapted to the acoustical metric $g$*.
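Two elementary identities underlying the above definitions can be checked symbolically (a sketch assuming sympy, not part of the paper's argument): the transport equation ${B}{\uprho}= - {\mbox{\upshape div}\mkern 1mu}v$ is simply the continuity equation $\partial_t \rho + {\mbox{\upshape div}\mkern 1mu}(\rho v) = 0$ rewritten for the logarithmic density, and the Euclidean identity ${\mbox{\upshape div}\mkern 1mu}({\mbox{\upshape curl}\mkern 1mu}V) = 0$ underlies the identity satisfied by ${\mbox{\upshape div}\mkern 1mu}{\upomega}$.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# 1) B(ln(rho/rho_bar)) = -div v is the continuity equation in disguise
#    (checked here in one spatial dimension, via the chain rule).
rho = sp.Function('rho', positive=True)(t, x)
v = sp.Function('v')(t, x)
rho_t = -sp.diff(rho * v, x)                     # continuity: d_t rho = -d_x(rho v)
B_log_rho = (rho_t + v * sp.diff(rho, x)) / rho  # B ln(rho/rho_bar)
assert sp.simplify(B_log_rho + sp.diff(v, x)) == 0

# 2) The Euclidean identity div(curl V) = 0.
V = [sp.Function(f'V{i}')(x, y, z) for i in range(3)]
curl = [sp.diff(V[2], y) - sp.diff(V[1], z),
        sp.diff(V[0], z) - sp.diff(V[2], x),
        sp.diff(V[1], x) - sp.diff(V[0], y)]
div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
assert sp.expand(div_curl) == 0
```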
It has been well known since the foundational work of Riemann [@bR1860] in one spatial dimension that solutions to the compressible Euler equations can form shocks in finite time, even if the initial data are smooth and small. This occurs in spite of the fact that solutions enjoy a conserved energy. That is, the energy is supercritical, even in one spatial dimension, and does not prevent singularity formation. The formation of shocks is connected in part to the failure of a null condition; see Subsect. \[SS:STRONGNULLVSOTHERNULL\] for further discussion about “null conditions.” Put differently, there are Riccati-type interaction terms in the equations satisfied by the solution’s first derivatives, and these terms can drive the formation of a singularity tied to the intersection of characteristics. More precisely, the Riccati-type terms drive the blowup of the first Cartesian coordinate partial derivatives of the density and velocity, while the velocity and density themselves remain bounded; this is the crudest picture of the formation of a shock singularity. On the other hand, in our new formulation of the equations, all of the terms that violate the null condition are on the left-hand side of the equations, “hidden” in the terms $\square_g v^i$ and $\square_g {\uprho}$, where $\square_g$ is a covariant wave operator (see Def. \[D:COVWAVEOP\]). We derive the new formulation by differentiating the system \eqref{E:TRANSPORTDENSRENORMALIZEDRELATIVETORECTANGULAR}-\eqref{E:TRANSPORTVELOCITYRELATIVETORECTANGULAR} with suitable operators and observing cancellations. In more than one spatial dimension, the approach of hiding the difficult terms in the operator $\square_g$ turns out to be crucial for understanding the formation of the shock. Put differently, if one writes the wave equations in divergence form, then all explicitly written inhomogeneous terms satisfy a null condition (distinct from the one mentioned earlier in this paragraph), *even in the presence of vorticity*! We devote all of Sect.
\[S:PROOFOFTHMSTRONGNULL\] to discussing this null condition and its relation to other null conditions that have appeared in the literature. A similar – but much simpler – structure had previously been found by Christodoulou–Miao [@dCsM2014] in their proof of shock formation in irrotational (that is, vorticity free) regions. In the irrotational case, the dynamics reduces to a single quasilinear wave equation for the fluid potential.[^8] In fact, as we further explain in Remark \[rmk.irrot\], for irrotational solutions, our wave equation for the velocity $v^i$ follows as an implicit consequence of the calculations in [@dCsM2014]. More precisely, Christodoulou–Miao showed that appropriately defined variations[^9] of the fluid potential satisfy *homogeneous* covariant quasilinear wave equations and thus *all* of their nonlinearities are hidden in the covariant wave operator $\square_g$; see Subsect. \[SSS:PRELIMINARYDICUSSION\] for further discussion. We note that the first observation and use of this kind of good structure was made by Christodoulou in his breakthrough work on (small-data) shock formation [@dC2007] for solutions to the relativistic Euler equations in $(1+3)$ dimensions in irrotational regions. There is a long, rich history of prior results leading up to the works [@dC2007; @dCsM2014] and their recent extensions. Readers may consult the survey article [@gHsKjSwW2016] for more details; here we discuss only the works that are most relevant for the present article. Alinhac was the first [@sA1999a; @sA1999b; @sA2001b; @sA2002] to prove shock formation results for hyperbolic PDEs in more than one spatial dimension without symmetry assumptions.
Specifically, in two and three spatial dimensions, he proved small-data shock formation results for wave equations of the form $ (g^{-1})^{\alpha \beta} (\partial \Phi) \partial_{\alpha} \partial_{\beta} \Phi = 0 $ whenever the nonlinear terms fail to satisfy Klainerman’s null condition (which we describe in more detail in Subsubsect. \[sss.small.data\]). More precisely, Alinhac exhibited a set of small data such that $\partial^2 \Phi$ blows up in finite time due to the intersection of characteristics, while $\Phi$ and $\partial \Phi$ remain bounded, all the way up to the singularity. After appropriate renormalizations, it may be seen that all wave equations treated by Christodoulou [@dC2007] and Christodoulou–Miao [@dCsM2014] essentially fall under the scope of Alinhac’s work. However, the approach developed by Christodoulou in [@dC2007] was a big advancement over that of Alinhac for the following main reasons. - For the wave equations of irrotational compressible fluid mechanics, Christodoulou and Christodoulou–Miao gave a fully geometric description of the singularity formation that, for small data, exactly ties singularity formation to the intersection of characteristics. That is, unlike Alinhac’s framework, Christodoulou’s yields that shocks are the only possible kinds of singularities that can in principle occur when the data are small. - Christodoulou’s framework yields sharp information about the maximal classical development[^10] of the data, including the behavior of the solution up to the boundary. As is described in [@dC2007; @dCaL2016], this information is essential even to properly set up the shock development problem, which is the problem of weakly continuing the solution past the singularity. 
In contrast, due to fundamental technical limitations tied to his use of a Nash-Moser energy estimate framework, Alinhac’s proof breaks down precisely at the time of first blowup and cannot be extended to yield information about the boundary of the maximal development. For similar reasons, Alinhac’s proof relies on a non-degeneracy assumption on the initial data that ensures that there is a unique blowup point in the constant-time hypersurface of first blowup. - Many features of Christodoulou’s framework are robust[^11] and have the potential to be applied to other equations. In view of the above remarks, it is clear why Christodoulou’s approach to proving shock formation in the irrotational case served as the starting point for our study of shock formation in the presence of vorticity. Another seed idea was found in the work [@jS2014b], in which Speck proved shock formation results similar to those of [@dC2007; @dCsM2014] for a large class of quasilinear wave equations, which are not necessarily homogeneous, *as long as the inhomogeneous terms satisfy a null condition*. Roughly speaking, the null condition from [@jS2014b] allows one to show that the nonlinear inhomogeneous terms do not interfere with the shock formation mechanisms and that for small compactly supported data, no other kinds of singularities can occur prior to shock formation. We clarify that the null condition from [@jS2014b] is visible when the wave equations are written in covariant form; if one expresses the wave equations relative to the standard Cartesian coordinates, then the nonlinear terms *fail* to satisfy Klainerman’s classic null condition; see Subsect. \[SS:STRONGNULLVSOTHERNULL\] for further discussion of the different null conditions. 
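For orientation, we recall (in our own schematic notation) the classic null forms relative to a Lorentzian metric; Klainerman's condition employs them with the Minkowski metric $m$, while the strong null condition of Sect. \[S:PROOFOFTHMSTRONGNULL\] involves forms adapted instead to the acoustical metric $g$: $$\begin{aligned} Q_0(\partial \phi, \partial \psi) & := (m^{-1})^{\alpha \beta} \partial_{\alpha} \phi \partial_{\beta} \psi, \qquad Q_{\alpha \beta}(\partial \phi, \partial \psi) := \partial_{\alpha} \phi \partial_{\beta} \psi - \partial_{\beta} \phi \partial_{\alpha} \psi.\end{aligned}$$ The key feature of a null form is that it vanishes when both of its arguments are differentiated in the same null direction; schematically, products verifying the strong null condition contain at most one differentiation transversal to the $g$-null acoustic characteristics.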
Our new formulation of the compressible Euler equations, for which an analog of such a null condition is also verified, *even in the presence of vorticity*, opens the door to our forthcoming results [@jLjS2016b; @jLjS2017]: showing that there is an open (relative to an appropriate Sobolev topology) set of regular initial data such that the solution forms a shock in finite time. The main novel feature of our work, compared to [@dC2007; @dCsM2014], is that the vorticity is *not required to vanish at the shock*. That is, we have to control the vorticity in a neighborhood of the first singularity caused by compression. In particular, we must rule out the onset of “wild instabilities” that could in principle be caused by the interaction of vorticity flow and shocks. We summarize our results as follows. **Summary of forthcoming results [@jLjS2016b; @jLjS2017].** In two or three spatial dimensions,[^12] for any physical equation of state except that of the Chaplygin gas,[^13] there exists an open set of regular initial data, with elements close to the data of a subset of simple plane wave solutions, that leads to finite-time shock formation. The specific vorticity, which is non-vanishing at the shock for some of our solutions, remains uniformly bounded, all the way up to the shock. Moreover, the dynamics are “well-described” by the irrotational Euler equations. All prior blowup results for the compressible Euler equations that allow for non-zero vorticity are either non-constructive in nature or are such that the potential formulation of the Euler equations was used near the shock because the vorticity was *provably zero there*; see [@tS1985; @sA1993; @dC2007; @dCsM2014; @jS2014b; @jSgHjLwW2016]. We refer the reader to the survey paper [@gHsKjSwW2016] as well as our companion paper [@jLjS2016b] for further discussions of related previous works.
Importantly, we note that our work may be relevant for the shock development problem,[^14] that is, the problem of continuing the solution as a weak solution after a shock has formed. The reason is that upon weakly continuing the solution, vorticity is typically generated across the shock hypersurface [@dC2007], even if the solution is irrotational up to the time of first blowup. In our forthcoming works [@jLjS2016b; @jLjS2017], we are able to give a complete description of the behavior of the solution up to the onset of the first shock, including that of the vorticity. In particular, relative to a system of geometric coordinates adapted to the acoustic characteristics (which are null hypersurfaces corresponding to sound wave propagation), *the solution, including the vorticity, remains many times differentiable, all the way up to the shock*. Moreover, in the case of two spatial dimensions, our methods can in principle also give a description of a portion of the boundary of the maximal classical development of the solution, at least for a subclass of solutions verifying non-degeneracy conditions of the type assumed in [@dC2007; @dCsM2014].[^15] Preliminary overview of the role of the present work in our forthcoming proofs of shock formation ------------------------------------------------------------------------------------------------- Our secondary goal in this paper is to overview our forthcoming proofs of shock formation in the presence of vorticity and to highlight the role played by our new formulation of the equations. For reasons to be explained, we will treat the case of two and three spatial dimensions in separate works. 
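Before outlining the framework, it is instructive to recall the simplest model of the mechanism described above. For Burgers' equation $u_t + u u_x = 0$, the solution is constant along characteristics, so in the characteristic ("geometric") coordinates $(t, x_0)$ it stays smooth, while the Cartesian derivative $u_x = u_0'(x_0)/(1 + t\, u_0'(x_0))$ blows up when characteristics cross, at time $T_* = -1/\min u_0'$. A toy sketch (our illustration, not part of the paper's argument):

```python
import math

def u0(x):       # smooth initial datum for Burgers' equation u_t + u u_x = 0
    return -math.sin(x)

def du0(x):
    return -math.cos(x)

# Along the characteristic through x0, u is constant: u(t, x0 + t*u0(x0)) = u0(x0).
# Here min u0' = -1 is attained at x0 = 0, so the blowup time is T* = 1.
x0 = 0.0
for t in [0.0, 0.5, 0.9, 0.99]:
    ux = du0(x0) / (1.0 + t * du0(x0))
    print(t, u0(x0), ux)   # u stays bounded while u_x grows without bound
```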
Our proofs are based in part on the framework developed by Christodoulou [@dC2007] and Christodoulou–Miao [@dCsM2014] in their study of shock formation in three spatial dimensions in the irrotational case,[^16] on an extended version of the notion of good null structure observed in [@jS2014b], and on the framework of [@jSgHjLwW2016], in which the authors extended Christodoulou’s results[^17] to a new solution regime in which the solutions are close to simple plane symmetric waves. To control the vorticity up to the singularity, we exploit all of the geo-analytic structures revealed by Theorems \[T:GEOMETRICWAVETRANSPORTSYSTEM\] and \[T:STRONGNULL\], structures which are compatible with an extended version of Christodoulou’s framework. We now briefly summarize our new formulation of the compressible Euler equations and its implications for the study of shock formation. We revisit these issues in extended detail in Sect. \[S:IDEASBEHINDSHOCKFORMATION\]. 1. **(New formulation of the equations)** The system comprises *covariant* wave equations for the Cartesian components $\lbrace v^i \rbrace_{i=1,2,3}$ of the velocity and the logarithmic density ${\uprho}$ coupled to a transport equation for the specific vorticity ${\upomega}$; see Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\]. 2. **(Structure of the inhomogeneous terms)** The system features inhomogeneous terms that can be split into three distinct classes: 1. Quadratic terms in the derivatives of $v^i$, ${\uprho}$ and ${\upomega}^i$ obeying the *strong null condition* relative to the acoustical metric $g = g({\uprho},v)$ (which is the Lorentzian metric corresponding to the propagation of sound waves, see Def. \[D:ACOUSTICALMETRIC\]). We exhibit the good null structure enjoyed by these terms in Theorem \[T:STRONGNULL\]. 2. Products that are linear in $\lbrace \partial v^i \rbrace$ or $\partial {\uprho}$, where $\partial$ denotes the spacetime gradient with respect to the Cartesian coordinates. 3.
Products that are linear in $\underline{\partial} {\upomega}$, where $\underline{\partial}$ denotes the spatial gradient with respect to the Cartesian coordinates. Importantly, these good structures also hold for ${\mbox{\upshape div}\mkern 1mu}{\upomega}$ and ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$, where ${\mbox{\upshape div}\mkern 1mu}$ and ${\mbox{\upshape curl}\mkern 1mu}$ are the usual Euclidean operators. That is, these terms satisfy equations with inhomogeneous terms enjoying the *same structure* highlighted above, which we need in order to obtain suitable estimates for ${\mbox{\upshape div}\mkern 1mu}{\upomega}$ and ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$. Moreover, even though a general Cartesian spatial derivative $\underline{\partial} {\upomega}$ does *not* obey an equation with such a good structure, the information that one can obtain for ${\mbox{\upshape div}\mkern 1mu}{\upomega}$ and ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ (using the good structures of their equations) is sufficient to close, via elliptic estimates, a top-order estimate for ${\upomega}$. Put differently, to control $\underline{\partial} {\upomega}$, we first control the “good terms” ${\mbox{\upshape div}\mkern 1mu}{\upomega}$ and ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ and then use elliptic estimates. 3. **(Null structure is a key ingredient in the proof of shock formation)** As we have mentioned, in [@dC2007], Christodoulou introduced a foundational geometric framework for proving shock formation in regions with vanishing vorticity. The main ingredient in his approach was an *eikonal function* $u$, which solved the eikonal equation $(g^{-1})^{\alpha \beta} \partial_{\alpha} u \partial_{\beta} u=0$ (see ) corresponding to the acoustical metric $g$, which also appears in the (quasilinear) covariant wave operator $\square_g$ mentioned above. 
Christodoulou completed $u$ and the standard Cartesian time function $t$ to a new set of geometric coordinates on spacetime and constructed related geometric vectorfields, adapted to the acoustic characteristics (that is, the level sets of $u$, which are $g$-null hypersurfaces). On the one hand, these geometric vectorfields can degenerate with respect to the Cartesian[^18] coordinate partial vectorfields as the shock forms. However, the big gain is that the nonlinearities in the equations exhibit good null structure when decomposed relative to the geometric coordinates and/or vectorfields. By taking advantage of this null structure, Christodoulou [@dC2007] and Christodoulou–Miao [@dCsM2014] were able to prove that relative to the geometric coordinates, the solution remains many times differentiable. At the same time, they proved that for an open set of data, in finite time, the geometric coordinates *degenerate* relative to the Cartesian ones, which causes a singularity in the Cartesian coordinate partial derivatives of the solution. One might say that the geometric coordinates “hide” the singularity, which allows one to prove the estimates needed to show that the singularity does in fact form. As the above discussion has suggested, in order to extend Christodoulou’s framework to allow for the presence of vorticity, it is important that the inhomogeneous terms should have a certain good null structure. We formulate this precisely in Sect. \[S:PROOFOFTHMSTRONGNULL\], where we refer to the good null structure as the “strong null condition.” The definition of the strong null condition is motivated by the result that one eventually proves: one shows that the singularity forms in a derivative of the solution in a direction transversal to the acoustic characteristics, while the solution’s tangential derivatives remain uniformly bounded.
Hence, the good structure enjoyed by an inhomogeneous term product verifying the strong null condition is roughly the following: it contains at most one differentiation in a direction transversal to the acoustic characteristics. Near the singularity, such terms are *weaker* than the Riccati-type interactions (which are quadratic in the solution’s transversal derivatives) that are hidden in the action of the wave operator $\square_g$. Hence, these good terms do not interfere with the shock formation mechanisms. We stress that *the strong null condition is fully nonlinear in nature and is not based on Taylor expanding nonlinearities to quadratic order*. We will explain the necessity of such a structure in Subsubsect. \[SSS:SOMEREMARKSONTHEPROOF\]. 4. **(Null structure and top-order singular estimates)** In quasilinear hyperbolic PDEs in one spatial dimension, such as Burgers’ equation, it is often easy to construct geometric coordinates such that with respect to these coordinates, the solution remains as regular as the initial data, all the way up to the singularity (see, for example, Footnote \[FN:BURGER\]). However, Christodoulou’s framework is fraught with a challenging technical difficulty: even with the help of the geometric coordinates, it does not seem possible to “hide” the singularity at very high derivative levels;[^19] in all known results on the formation of shocks in more than one spatial dimension, the best estimates available allow for the possibility that the high-order geometric energies might blow up as the singularity forms. However, it is not known with certainty whether or not the high-order energy blowup does in fact occur.[^20] A key part of the analysis is to control the *possible* blowup-rate of these energies and to show, with a careful order-by-order analysis, that the lower-order energies remain uniformly bounded, all the way up to the shock.
In our study of shock formation with vorticity, we in particular need to accommodate the singular high-order energy estimates for the “wave part” of the system, which are inherited from the irrotational case. Perhaps not surprisingly, the energy estimates for the transport part of the system (that is, for the specific vorticity) are also allowed to blow up at the high orders. A critically important part of the analysis is understanding how the different blowup-rates for the wave and transport parts are tied to each other, in view of the fact that the wave and transport variables are coupled at the level of the equations. Put differently, we need to simultaneously study the wave and transport parts of the system, perform an order-by-order analysis of the singular high-order energy estimates, and close a Gronwall-type energy estimate that accounts for the distinct singular behavior of each part of the system at distinct derivative levels. The fact that we can close such an argument is intimately tied to the good null structures found in the coupling terms. 5. **(Difficulties related to multiple speeds)** To prove shock formation in regions with vorticity, we encounter all of the same difficulties that Christodoulou encountered plus two challenging new ones. The first of these is that in the presence of vorticity, the equations contain *multiple speeds*: the speed of sound, which corresponds to the acoustic characteristics (which were present in Christodoulou’s work) and the speed of vorticity transport, which corresponds to the integral curves (also known as the flow lines) of the material derivative vectorfield, along which the specific vorticity is transported. This new difficulty is present in both two and three spatial dimensions. In an effort to isolate the new ideas needed to handle it, we prove shock formation for solutions with vorticity in two spatial dimensions in the separate work [@jLjS2016b]; see Subsubsect.
\[SSS:GEOMETRICVECINTERACTWITHTRANSPORT\] for an overview of these new ideas. Analytically, the challenge is that the material derivative vectorfield ${B}$, the Euclidean divergence, and the Euclidean curl do not have any relationship to the geometric vectorfields needed to commute the wave equations, which makes it difficult to obtain estimates for the geometric derivatives of ${\upomega}$. However, it turns out that the geometric vectorfields have just enough structure such that their commutator with an appropriately weighted, but otherwise arbitrary, first-order differential operator[^21] produces controllable error terms, consistent with “hiding the singularity” relative to the geometric coordinates at the lower derivative levels. This is important because *we found a procedure that avoids, in the evolution equation-type estimates for ${\upomega}$, having to commute through a second-order operator*. To derive suitable estimates for the specific vorticity, we crucially rely on the geometric fact that ${B}$ is transversal to the acoustic characteristics. This basic fact allows us to derive energy estimates for ${\upomega}$ along the characteristics in which *the energies do not feature any degenerate weights,* which is critically important for controlling error terms. A related fact is that the transversality condition allows us to avoid a potential logarithmic divergence; see Subsubsect. \[sss.inho.vor\] for further discussion. However, we cannot rely on the energy of ${\upomega}$ along the characteristics at the top order since, at the top order, we are forced to derive elliptic estimates with a degenerate weight along constant-time hypersurfaces; see the next point. 6. **(Top order elliptic estimates for the vorticity)** The second new difficulty compared to the work of Christodoulou is that in the presence of vorticity, one needs to use elliptic estimates on $\Sigma_t$ to control the top derivatives of ${\upomega}$. 
More precisely, this difficulty is present only in three or more spatial dimensions since in two spatial dimensions, the “vorticity stretching” term responsible for the difficulty is absent (that is, $\mbox{{\upshape RHS}~\eqref{E:RENORMALIZEDVORTICTITYTRANSPORTEQUATION}} \equiv 0$ in two spatial dimensions). Because of the significant innovations needed to close the elliptic estimates near the singularity, we will prove shock formation in the case of three spatial dimensions in a separate work. In particular, in three spatial dimensions, one must derive elliptic estimates for derivatives of the vorticity in a direction transversal to the acoustic characteristics, which, near the singularity, is a severe technical difficulty that is not present in the irrotational case. Moreover, when expressed relative to the geometric coordinates, the specific vorticity energies along $\Sigma_t$ contain degenerate weights. Ultimately, these degenerate weights contribute to the fact that the top-order $L^2$ estimates for ${\upomega}$ can blow up as the shock forms, much like the energy estimates in the irrotational case. To close the proof, *we must show that the blowup-rate for the transport variable ${\upomega}$ is not too severe*. In particular, we must show that the blowup-rate is compatible with the corresponding blowup-rates for the wave variables, whose top-order energies, as it turns out, are no more singular than they are in the irrotational case; see Subsubsect. \[SSS:ELLIPTICESTIMATESFORVORTICITY\] for an overview of the main new ideas behind these elliptic estimates. Paper outline {#SS:OUTLINE} ------------- In Sect. \[sec.main.theorem\], we provide definitions and state the two main theorems. In Sect. \[S:PROOFOFTHMSTRONGNULL\], we discuss some basic concepts from Lorentzian geometry and prove the second theorem, which exhibits the good null structure enjoyed by the inhomogeneous terms in the equations. 
We also compare and contrast these good null structures to different null structures found in the literature. In Sect. \[S:IDEASBEHINDSHOCKFORMATION\], we provide a preview of how the new formulation of the equations can be used to prove a sharp shock-formation result in the presence of non-zero vorticity in three spatial dimensions. To provide context, we overview how to prove shock formation in the irrotational case using a version of Christodoulou’s framework adapted to initial data that are close in spirit to the data considered in [@jLjS2016b; @jLjS2017]. In Sect. \[E:PROOFOFMAINTHEOREM\], we prove our main Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\] via a series of calculations. Statement of the main theorems {#sec.main.theorem} ============================== Our main goal in this section is to give a precise statement of the two main theorems. Before stating them, however, we will first introduce appropriate notations, as well as some basic geometric constructions necessary for the statements of the theorems. Notation {#SS:NOTATION} -------- Throughout, $\lbrace x^{\alpha} \rbrace_{\alpha=0,1,2,3}$ denotes the usual Cartesian coordinate system on $\mathbb{R} \times \mathbb{R} \times \mathbb{T}^2$. More precisely, $x^0 \in \mathbb{R}$ is the time coordinate and $(x^1,x^2,x^3) \in \mathbb{R} \times \mathbb{T}^2$ are spatial coordinates. $ \displaystyle \partial_{\alpha} := \frac{\partial}{\partial x^{\alpha}} $ denotes the corresponding coordinate partial derivative vectorfields. We often use the alternate notation $x^0 = t$ and $\partial_0 = \partial_t$. Lowercase Greek “spacetime” indices such as $\alpha$ vary over $0,1,2,3$ while lowercase Latin “spatial” indices such as $a$ vary over $1,2,3$. In later sections, we will use the convention that uppercase Greek indices, associated to the array of solution functions, vary over $0,1,\dots,6$ and upper case Latin indices, associated to null frames, vary over $1,2,3,4$. 
We use Einstein’s summation convention in that repeated indices are summed over their respective ranges. $\Sigma_t$ denotes the usual flat hypersurface of constant time $t$. Preliminary ingredients in the new formulation of the equations {#SS:PRELIMINARYINGREDIENTS} --------------------------------------------------------------- ### Assumptions on the equation of state {#SSS:EOS} We make the following physical assumptions, which ensure the hyperbolicity of the system when $\rho > 0$: - ${c_s}\geq 0$ . - ${c_s}> 0$ when $\rho > 0$. ### Geometric tensorfields associated to the flow {#SSS:GEOMETRICTENSORFIELDS} Roughly, there are two kinds of motion associated to compressible Euler flow: the transporting of vorticity and the propagation of sound waves. We now discuss the tensorfields associated to these phenomena. The material derivative vectorfield ${B}$, defined in , is associated to the transporting of vorticity. We now define the Lorentzian metric $g$ corresponding to the propagation of sound waves. \[D:ACOUSTICALMETRIC\] We define the *acoustical metric* $g$ and the *inverse acoustical metric* $g^{-1}$ relative to the Cartesian coordinates as follows: $$\begin{aligned} g & := - dt \otimes dt + {c_s}^{-2} \sum_{a=1}^3(dx^a - v^a dt) \otimes (dx^a - v^a dt), \label{E:ACOUSTICALMETRIC} \\ g^{-1} & := - {B}\otimes {B}+ {c_s}^2 \sum_{a=1}^3 \partial_a \otimes \partial_a. \label{E:INVERSEACOUSTICALMETRIC} \end{aligned}$$ \[R:GINVERSEISTHEINVERSE\] It is straightforward to verify that $g^{-1}$ is the matrix inverse of $g$, that is, we have $(g^{-1})^{\mu \alpha} g_{\alpha \nu} = \delta_{\nu}^{\mu}$, where $\delta_{\nu}^{\mu}$ is the standard Kronecker delta. Other authors have defined the acoustical metric to be ${c_s}^2 g$. We prefer our definition because it implies that $(g^{-1})^{00} = - 1$, which simplifies the presentation of many formulas. The vectorfield ${B}$ enjoys some simple but important geometric properties, which we provide in the next lemma. 
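The claim of Remark \[R:GINVERSEISTHEINVERSE\] and the identities of the next lemma can be verified by a short symbolic computation. The sketch below is our own illustrative aside, not part of the paper's argument; it assumes the standard Cartesian components $B = \partial_t + v^a \partial_a$ for the material derivative vectorfield, whose definition is referenced above but given in an earlier section:

```python
import sympy as sp

c, v1, v2, v3 = sp.symbols('c_s v1 v2 v3', positive=True)
v = sp.Matrix([v1, v2, v3])

# Cartesian components of the acoustical metric of Def. [D:ACOUSTICALMETRIC]:
# g = -dt⊗dt + c_s^{-2} Σ_a (dx^a - v^a dt)⊗(dx^a - v^a dt).
g = sp.zeros(4, 4)
g[0, 0] = -1 + v.dot(v) / c**2
for a in range(3):
    g[0, a + 1] = g[a + 1, 0] = -v[a] / c**2
    g[a + 1, a + 1] = 1 / c**2

# Material derivative vectorfield (assumed components): B = ∂_t + v^a ∂_a.
B = sp.Matrix([1, v1, v2, v3])

# Claimed inverse: g^{-1} = -B⊗B + c_s^2 Σ_a ∂_a⊗∂_a.
ginv = -B * B.T + sp.diag(0, c**2, c**2, c**2)

# Remark [R:GINVERSEISTHEINVERSE]: (g^{-1})^{μα} g_{αν} = δ^μ_ν and (g^{-1})^{00} = -1.
assert sp.simplify(ginv * g - sp.eye(4)) == sp.zeros(4, 4)
assert ginv[0, 0] == -1

# Lemma [L:BASICPROPERTIESOFTRANSPORT]: g(B,B) = -1 and g(B,∂_i) = 0 for i = 1,2,3.
assert sp.simplify((B.T * g * B)[0, 0]) == -1
assert all(sp.simplify((g * B)[i]) == 0 for i in (1, 2, 3))
```

The check also makes transparent why the normalization $(g^{-1})^{00} = -1$ is convenient: it is immediate from the expression $-B \otimes B$ since $B^0 = 1$.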
\[L:BASICPROPERTIESOFTRANSPORT\] ${B}$ is timelike, future-directed,[^22] $g-$orthogonal to $\Sigma_t$, and unit-length:[^23] $$\begin{aligned} \label{E:TRANSPORTISUNITLENGTH} g({B},{B}) & = - 1. \end{aligned}$$ Clearly ${B}$ is future-directed. The identity (which also implies that ${B}$ is timelike) follows from a simple calculation based on and . Similarly, we compute that $ \displaystyle g({B},\partial_i) := g_{\alpha i}{B}^{\alpha} = 0 $ for $i=1,2,3$, from which it follows that ${B}$ is $g-$orthogonal to $\Sigma_t$. Statement of the main result I: Reformulation of the equations {#SS:MAINRESULT.I} -------------------------------------------------------------- We first recall the standard definition of the covariant wave operator $\square_g$. \[D:COVWAVEOP\] Relative to arbitrary coordinates, the covariant wave operator $\square_g$ acts on scalar-valued functions $\phi$ as follows: $$\begin{aligned} \label{E:WAVEOPERATORARBITRARYCOORDINATES} \square_g \phi = \frac{1}{\sqrt{|\mbox{\upshape det} g|}} \partial_{\alpha} \left\lbrace \sqrt{|\mbox{\upshape det} g|} (g^{-1})^{\alpha \beta} \partial_{\beta} \phi \right\rbrace.\end{aligned}$$ Our first main result is the following theorem, which provides the new formulation of the equations. 
\[T:GEOMETRICWAVETRANSPORTSYSTEM\] In three spatial dimensions under a barotropic equation of state , the compressible Euler equations - imply the following system (see Footnote \[F:VECTORFIELDSACTONFUNCTIONS\]) in $({\uprho},v^1,v^2,v^3,{\upomega}^1,{\upomega}^2,{\upomega}^3)$, where the Cartesian component functions $v^i$ are **treated as scalar-valued functions under covariant differentiation** on LHS  and ${B}$ is the material derivative vectorfield defined in : $$\begin{aligned} \square_g v^i & = - {c_s}^2 \exp({\uprho}) ({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i + 2 \exp({\uprho}) \epsilon_{iab} ({B}v^a) {\upomega}^b + \mathscr{Q}^i, \label{E:VELOCITYWAVEEQUATION} \\ \square_g {\uprho}& = \mathscr{Q}, \label{E:RENORMALIZEDDENSITYWAVEEQUATION} \\ {B}{\upomega}^i & = {\upomega}^a \partial_a v^i. \label{E:RENORMALIZEDVORTICTITYTRANSPORTEQUATION} \end{aligned}$$ Above, $\mathscr{Q}^i$ and $\mathscr{Q}$ are the **null forms** relative to $g$, which are defined by $$\begin{aligned} \mathscr{Q}^i & := -(1+ {c_s}^{-1} {c_s}') (g^{-1})^{\alpha \beta} \partial_{\alpha} {\uprho}\partial_{\beta} v^i, \label{E:VELOCITYNULLFORM} \\ \mathscr{Q} & := - 3 {c_s}^{-1} {c_s}' (g^{-1})^{\alpha \beta} \partial_{\alpha} {\uprho}\partial_{\beta} {\uprho}+ 2 \sum_{1 \leq a < b \leq 3} \left\lbrace \partial_a v^a \partial_b v^b - \partial_a v^b \partial_b v^a \right\rbrace. 
\label{E:DENSITYNULLFORM} \end{aligned}$$ In addition, ${\mbox{\upshape div}\mkern 1mu}{\upomega}$ and the scalar-valued functions $({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i$ verify the following equations: $$\begin{aligned} \label{E:FLATDIVOFRENORMALIZEDVORTICITY} {\mbox{\upshape div}\mkern 1mu}{\upomega}& = - {\upomega}^a \partial_a {\uprho}, \\ {B}({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i & = (\exp {\uprho}) {\upomega}^a \partial_a {\upomega}^i - (\exp {\uprho}) {\upomega}^i {\mbox{\upshape div}\mkern 1mu}{\upomega}+ \mathscr{P}_{({\upomega})}^i, \label{E:EVOLUTIONEQUATIONFLATCURLRENORMALIZEDVORTICITY}\end{aligned}$$ where $\mathscr{P}_{({\upomega})}^i$ is defined by $$\begin{aligned} \label{E:VORTICITYNULLFORM} \mathscr{P}_{({\upomega})}^i & := \epsilon_{iab} \left\lbrace (\partial_a {\upomega}^c) \partial_c v^b - (\partial_a v^c) \partial_c {\upomega}^b \right\rbrace.\end{aligned}$$ \[R:DELICATEVORTICITYQUDRATICTERM\] As written, the term $\mathscr{P}_{({\upomega})}^i$ from does not have the special null structure that is essential for applications to shock formation. However, by using equation for substitution, one can show that cancellations occur, which yields the desired null structure; see the proof of Theorem \[T:STRONGNULL\]. \[R:SIMPLIFIEDEQUATIONSIN2D\] In two spatial dimensions, the equations simplify considerably due to the absence of vorticity stretching. Specifically, $\mbox{{\upshape RHS}~\eqref{E:RENORMALIZEDVORTICTITYTRANSPORTEQUATION}} \equiv 0$ for solutions that are independent of $x^3$ and have $v^3 \equiv 0$. Consequently, one does not need to use equations - when deriving estimates in two spatial dimensions. \[rmk.irrot\] For data with vanishing vorticity, the solution verifies ${\upomega}\equiv 0$, as long as it remains $C^1$. 
For such solutions, the system of equations from Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\] becomes a system of quasilinear wave equations whose right-hand sides consist only of quadratic null forms relative to the acoustical metric $g$. We note that in particular, the equations from Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\] yield, in the irrotational case, the wave equations derived in [@dCsM2014], but without the need to introduce a fluid potential. More precisely, in [@dCsM2014], Christodoulou–Miao showed that all Cartesian coordinate partial derivatives of the fluid potential $\Phi$, which verifies[^24] $\partial_i\Phi=-v^i$, satisfy homogeneous covariant quasilinear wave equations, where the metric is conformal to the acoustical metric $g$ of Def. \[D:ACOUSTICALMETRIC\]. We note that it is easy to show that a conformal change of the metric changes the wave equation, but only by generating a semilinear term that is proportional to a $g$-null form (as defined in Def. \[D:NULLFRAME\]). Thus, in the irrotational case, $-v^i$, being a Cartesian derivative of $\Phi$, satisfies a quasilinear wave equation whose inhomogeneous terms exhibit the desired null structure.[^25] We also note that in the irrotational case, the calculations of [@dCsM2014] could be extended to yield our wave equation for ${\uprho}$; however, the calculations would be slightly more involved since ${\uprho}$ is a nonlinear function of the spacetime Cartesian coordinate partial derivatives $\partial_{\alpha} \Phi$. \[R:CONSTRAINTS\] If we think of $({\uprho},v^1,v^2,v^3,{\upomega}^1,{\upomega}^2,{\upomega}^3)$ as independent scalar-valued functions, then the initial data for the mixed-order system - and - are $({\uprho},v^1,v^2,v^3,{\upomega}^1,{\upomega}^2,{\upomega}^3)|_{t=0}$ and $(\partial_t {\uprho},\partial_t v^1,\partial_t v^2,\partial_t v^3)|_{t=0}$. 
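The identity ${\mbox{\upshape div}\mkern 1mu}{\upomega} = - {\upomega}^a \partial_a {\uprho}$ from equation \[E:FLATDIVOFRENORMALIZEDVORTICITY\] can also be checked symbolically. The sketch below is our own aside; it assumes that the specific vorticity is the renormalization ${\upomega} = e^{-{\uprho}} \, {\mbox{\upshape curl}\mkern 1mu} v$ (the precise definition is given in an earlier section), in which case the identity reduces to the vanishing of the divergence of a curl:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
rho = sp.Function('rho')(*X)
v = [sp.Function('v%d' % i)(*X) for i in (1, 2, 3)]

def curl(u):
    return [sp.diff(u[2], x2) - sp.diff(u[1], x3),
            sp.diff(u[0], x3) - sp.diff(u[2], x1),
            sp.diff(u[1], x1) - sp.diff(u[0], x2)]

# Assumed renormalization of the vorticity: ω = exp(-ρ) curl v.
w = [sp.exp(-rho) * comp for comp in curl(v)]

div_w = sum(sp.diff(w[a], X[a]) for a in range(3))
rhs = -sum(w[a] * sp.diff(rho, X[a]) for a in range(3))

# div ω = -ω^a ∂_a ρ; the exact terms cancel because div curl v = 0.
assert sp.simplify(div_w - rhs) == 0
```

Note that the computation is purely kinematic: it uses only the equality of mixed partial derivatives, not the evolution equations.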
However, in order to be consistent with the compressible Euler equations -, the data must verify “constraints.” Specifically, $\lbrace {\upomega}^i \rbrace_{i=1,2,3}|_{t=0}$ is determined in terms of ${\uprho}$ and $\lbrace \partial_j v^i \rbrace_{i,j=1,2,3}|_{t=0}$ by equation , while $\partial_t {\uprho}|_{t=0}$ and $\lbrace \partial_t v^i \rbrace_{i=1,2,3}|_{t=0}$ are determined in terms of ${\uprho}|_{t=0}$, $\lbrace v^i \rbrace_{i=1,2,3}|_{t=0}$, $\lbrace \partial_i {\uprho}\rbrace|_{i=1,2,3}|_{t=0}$, and $\lbrace \partial_j v^i \rbrace_{i,j=1,2,3}|_{t=0}$ via the compressible Euler equations -. In our forthcoming work on shock formation, we consider initial data for the system - and - that verify tensorial smallness/largeness conditions; see Subsect. \[SS:PREVIEWONSHOCKS\] for more details. We of course must ensure that our smallness/largeness conditions are consistent with the constraints. Aside from that, the fact that the data are constrained is a minor issue since our main interest in Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\] is that it provides equations that are useful for deriving a priori estimates for solutions. Statement of the main result II: Strong null condition {#SS:MAINRESULT.II} ------------------------------------------------------ Our second main theorem sharply characterizes the null structure of the inhomogeneous terms in the above system. The theorem refers to the “strong null condition,” which we rigorously define in Def. \[D:STRONGNULLCONDITION\]. As we have mentioned, the strong null condition roughly states that none of the inhomogeneous term products contain two factors involving differentiations transversal to the acoustic characteristics. 
\[T:STRONGNULL\] For *solutions*[^26] to - and -, the inhomogeneous terms on the right-hand sides of the equations consist of two types: **i)** terms that are manifestly linear in the first derivatives of $({\uprho},v^1,v^2,v^3,{\upomega}^1,{\upomega}^2,{\upomega}^3)$ and **ii)** terms that can be expressed as products that are quadratic in the first derivatives of $({\uprho},v^1,v^2,v^3,{\upomega}^1,{\upomega}^2,{\upomega}^3)$ and that **verify the strong null condition** (see Def. \[D:STRONGNULLCONDITION\]) relative to the acoustical metric $g$ (see Def. \[D:ACOUSTICALMETRIC\]). \[R:SIGNIFICANCEOFSTRONGNULL\] Given Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\], Theorem \[T:STRONGNULL\] is a simple result. However, its significance for the study of shock formation is profound. Roughly speaking, terms verifying the strong null condition do not prevent the shock from forming. The special structures associated to the strong null condition become visible only relative to an exact (as opposed to approximate) null frame adapted to $g$. That is, when proving shock formation, *there seems to be no room for error when decomposing nonlinear terms*. This is in stark contrast to many problems for nonlinear wave equations in which small-data global existence holds. In those problems, there is often room for error in the decompositions and it is often possible to prove small-data global existence by decomposing nonlinear terms relative to a null frame adapted to a background metric. The background geometry allows for a drastically simplified approach to deriving estimates. We explore these issues in more detail in Remark \[R:STRONGNULLFULLYNONLINEAR\], Subsect. \[SS:STRONGNULLVSOTHERNULL\], and Subsubsect. \[SSS:SOMEREMARKSONTHEPROOF\]. 
The strong null condition and proof of Theorem \[T:STRONGNULL\] {#S:PROOFOFTHMSTRONGNULL} =============================================================== In this section, we provide some basic geometric background and prove Theorem \[T:STRONGNULL\], which shows that the appropriate inhomogeneous terms from Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\] verify the strong null condition. We also compare and contrast the strong null condition to distinct null structures found in other problems. Null frames, null forms, and the strong null condition {#SS:NULLSTUFFANDPROOFOFSECONDTHEOREM} ------------------------------------------------------ Our main goal in this subsection is to define the strong null condition. We first provide some standard background material. \[D:NULLFRAME\] Let $g$ be a Lorentzian metric on[^27] $\mathbb{R} \times \mathbb{R} \times \mathbb{T}^2$. A $g-$*null frame* (“null frame” for short, when the metric is clear) at a point $p$ is a set of vectors $$\begin{aligned} \label{E:NULLFRAME} \mathscr{N} := \lbrace {L},{\underline{L}},e_1,e_2 \rbrace \end{aligned}$$ belonging to the tangent space of $\mathbb{R} \times \mathbb{R} \times \mathbb{T}^2$ at $p$ with $$\begin{aligned} g({L},{L}) & = g({\underline{L}},{\underline{L}}) = 0, && \label{E:NULLFRAMEVECTORFIELDSLENGTHZERO} \\ g({L},{\underline{L}}) &= -2, && \label{E:NULLPAIRINNERPRODUCTMINUS2} \\ g({L},e_A) &= g({\underline{L}},e_A) = 0, && (A=1,2), \label{E:NULLVECTORFIELDSORTHOGONALTOSPACELIKEONES} \\ g(e_A,e_B) &= \delta_{AB}, && (A,B=1,2), \label{E:SPACELIKEVECTORFIELDSORTHONORMAL} \end{aligned}$$ where $\delta_{AB}$ is the standard Kronecker delta. The following lemma is a consequence of Def. \[D:NULLFRAME\]; we omit the simple proof. 
\[L:DECOMPOFGINVERSERELATIVETONULLFRAME\] Relative to an arbitrary $g-$null frame, we have $$\begin{aligned} \label{E:DECOMPOFGINVERSERELATIVETONULLFRAME} g^{-1} & = - \frac{1}{2} {L}\otimes {\underline{L}}- \frac{1}{2} {\underline{L}}\otimes {L}+ \sum_{A=1}^2 e_A \otimes e_A. \end{aligned}$$ \[D:DECOMPOSINGNONLINEARTERMRELATIVETONULLFRAME\] Let $\vec{V} := ({\uprho}, v^1,v^2,v^3,{\upomega}^1,{\upomega}^2,{\upomega}^3)$ be the array of unknowns in the system - and -. We label the components of $\vec{V}$ by $V^0={\uprho}$, $V^i=v^i$, $V^{i+3}={\upomega}^i$ for $i=1,2,3$. Let $\mathcal{N}(\vec{V},\partial \vec{V})$ be a smooth nonlinear term that is quadratically nonlinear in $\partial \vec{V}$. That is, we assume that $\mathcal{N}(\vec{V},\partial \vec{V}) ={\mathrm{f}}(\vec V)_{\Theta\Gamma}^{\alpha \beta}\partial_{\alpha} V^\Theta \partial_{\beta} V^\Gamma$, where ${\mathrm{f}}(\vec V)_{\Theta\Gamma}^{\alpha \beta}$ is symmetric in $\Theta$ and $\Gamma$ and is a smooth function of $\vec V$ (*not* necessarily vanishing at $0$) for $\alpha,\beta=0,1,2,3$ and $\Theta,\Gamma=0,1,\dots,6$. Given a null frame $\mathscr{N}$ as defined in Def. 
\[D:NULLFRAME\], we denote $$\mathscr{N} := \lbrace e_1, e_2, e_3:= {\underline{L}}, e_4 := {L}\rbrace.$$ Moreover, we let $M_{\alpha}^A$ be the scalar functions corresponding to expanding the Cartesian coordinate partial derivative vectorfield $\partial_{\alpha}$ at $p$ relative to the null frame, that is, $$\partial_{\alpha} = \sum_{A=1}^4 M_{\alpha}^A e_A.$$ Then[^28] $$\begin{aligned} \label{E:NONLINEARTERMDECOMPOSEDRELATIVETONULLFRAME} \mathcal{N}_{\mathscr{N}} :={\mathrm{f}}(\vec V)_{\Theta\Gamma}^{\alpha \beta} M_{\alpha}^A M_{\beta}^B (e_A V^\Theta)(e_B V^\Gamma) \end{aligned}$$ denotes the nonlinear term obtained by expressing $\mathcal{N}(\vec{V},\partial \vec{V})$ in terms of the derivatives of $\vec{V}$ with respect to the elements of $\mathscr{N}$, that is, by expanding $\partial \vec{V}$ as a linear combination of the derivatives of $\vec{V}$ with respect to the elements of $\mathscr{N}$ and substituting the expression for the factor $\partial \vec{V}$ in $\mathcal{N}(\vec{V},\partial \vec{V})$. We are now ready to state our main definition. \[D:STRONGNULLCONDITION\] Let $\mathcal{N}(\vec{V},\partial \vec{V})$ be as in Def. \[D:DECOMPOSINGNONLINEARTERMRELATIVETONULLFRAME\]. We say that $\mathcal{N}(\vec{V},\partial \vec{V})$ verifies the *strong null condition* relative to $g$ if the following condition holds: for *every* $g-$null frame $\mathscr{N}$, $\mathcal{N}_{\mathscr{N}}$ can be expressed in a form that depends linearly (or not at all) on ${L}\vec{V}$ and ${\underline{L}}\vec{V}$. 
That is, there exist scalars $\overline{{\mathrm{f}}}_{\Theta\Gamma}^{AB}(\vec V)$ and $\underline{{\mathrm{f}}}_{\Theta\Gamma}^{AB}(\vec V)$ such that $$\overline{{\mathrm{f}}}_{\Theta\Gamma}^{33}(\vec V)=\overline{{\mathrm{f}}}_{\Theta\Gamma}^{44}(\vec V)=0,\quad \underline{{\mathrm{f}}}_{\Theta\Gamma}^{33}(\vec V)=\underline{{\mathrm{f}}}_{\Theta\Gamma}^{44}(\vec V)=0$$ and such that the following hold: $$\label{strong.null.def} \begin{split} {\mathrm{f}}(\vec V)_{\Theta\Gamma}^{\alpha \beta} M_{\alpha}^3 M_{\beta}^3 (e_3 V^\Theta)(e_3 V^\Gamma)=&\overline{{\mathrm{f}}}_{\Theta\Gamma}^{AB}(\vec V)(e_A V^\Theta)(e_B V^\Gamma),\\ {\mathrm{f}}(\vec V)_{\Theta\Gamma}^{\alpha \beta} M_{\alpha}^4 M_{\beta}^4 (e_4 V^\Theta)(e_4 V^\Gamma)=&\underline{{\mathrm{f}}}_{\Theta\Gamma}^{AB}(\vec V)(e_A V^\Theta)(e_B V^\Gamma). \end{split}$$ \[R:STRONGNULLFULLYNONLINEAR\] Since the equations of Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\] are such that the inhomogeneous terms are at most quadratic in the derivatives of the unknowns, we have given a definition of the strong null condition (Def. \[D:STRONGNULLCONDITION\]) only for such nonlinearities. If one were trying to study shock formation in a larger class of systems, then one could extend the definition of the strong null condition to higher-order nonlinear terms. However, if the definition were to be relevant for the proof of shock formation, then it would have to account for the exact structure of the higher-order terms. The reason is that one generally expects that in systems featuring quadratic *and* cubic-or-higher-order (in the solution’s derivatives) terms, *all terms* need to have special structure in order for a proof of shock formation to go through. 
This is in contrast to Klainerman’s original formulation [@sK1984] of a null condition in the context of small-data global-existence problems in $(1+3)$ dimensions, which is based on truncated Taylor expansions of the nonlinearities in which the cubic and higher-order terms do not matter; see discussion in Section \[SS:STRONGNULLVSOTHERNULL\]. That is, the structures needed to close a proof of shock formation are less stable and are close in spirit to the ones that seem to be needed in low-regularity problems (see Subsubsect. \[sss.low.regularity\] for further discussion). One should perhaps not be too surprised by this, since, even in the simple case of the Riccati ODE $\dot{y}= y^2$, the nature of solutions can be drastically altered by the addition of terms proportional to $y^3$, $y^4$, $y^5$, etc. By definition, the strong null condition depends on the acoustical metric $g$. Prior works on quasilinear wave equations indicate that such a structure is useful, and often indispensable, for handling the wave part of the system. However, it is not a priori obvious that a null structure *adapted to $g$* is also crucial for controlling the inhomogeneous nonlinear terms in the transport equation for ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$; the principal part of the transport equation has no obvious connection to the covariant wave operator $\square_g$. Nonetheless, as we will show in our forthcoming works on shock formation, the strong null condition is indeed the right condition, since the singularity formation is driven by the wave part of the system and not the transport part. It is well-known that there is a class of nonlinearities, associated to the standard null forms, which obey the strong null condition. We now recall the definition of the standard null forms. 
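Before doing so, we make the Riccati ODE analogy above concrete with a small numerical aside (our own illustration; the cubic correction $-y^3$ is an arbitrary choice): the solution of $\dot{y} = y^2$ with $y(0) = y_0 > 0$ is $y(t) = y_0/(1 - y_0 t)$, which blows up at $t = 1/y_0$, whereas for $\dot{y} = y^2 - y^3$, the point $y = 1$ is a stable equilibrium and solutions with $y_0 \in (0,1]$ exist globally:

```python
# dy/dt = y^2 blows up at t = 1/y0 (exact solution y0/(1 - y0*t)), while
# the cubic correction -y^3 removes the blowup for y0 in (0, 1].
def rk4(f, y0, t1, n):
    """Integrate dy/dt = f(y) on [0, t1] with n classical RK4 steps; return y(t1)."""
    h, y = t1 / n, y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

y0 = 0.5
# Pure Riccati: exact blowup time is t = 2; just before it, y is already huge.
near_blowup = rk4(lambda y: y * y, y0, 1.99, 200_000)
# With the cubic term, y(t) monotonically approaches the equilibrium y = 1.
damped = rk4(lambda y: y * y - y**3, y0, 100.0, 200_000)
assert near_blowup > 10.0 and abs(damped - 1.0) < 1e-3
```

The point of the aside is precisely the instability described above: a single higher-order term, harmless for small-data decay arguments, completely changes the blowup behavior.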
\[D:NULLFORMS\] The standard null forms $\mathscr{Q}^g(\cdot,\cdot)$ (relative to $g$) and $\mathscr{Q}_{(\alpha \beta)}(\cdot,\cdot)$ act on pairs $(\phi,\widetilde{\phi})$ of scalar-valued functions as follows: $$\begin{aligned} \mathscr{Q}^g(\partial \phi, \partial \widetilde{\phi}) &:= (g^{-1})^{\alpha \beta} \partial_{\alpha} \phi \partial_{\beta} \widetilde{\phi}, \label{E:Q0NULLFORM} \\ \mathscr{Q}_{(\alpha \beta)}(\partial \phi, \partial \widetilde{\phi}) & := \partial_{\alpha} \phi \partial_{\beta} \widetilde{\phi} - \partial_{\alpha} \widetilde{\phi} \partial_{\beta} \phi. \label{E:QALPHABETANULLFORM} \end{aligned}$$ It is well-known that the standard null forms obey the strong null condition. For completeness, we will give a proof of this fact as part of the proof of Theorem \[T:STRONGNULL\], given below in Subsect. \[SS:PROOFOFTHEOREMSTRONGNULL\]. \[rmk.notallnullforms\] As we mentioned above, the standard null forms (relative to $g$) of Def. \[D:NULLFORMS\] satisfy the strong null condition of Def. \[D:STRONGNULLCONDITION\]. In fact, if one requires the stronger condition that the cancellation structure occurs for all functions instead of just solutions to the system, that is, if one requires (compare with ) $${\mathrm{f}}(\vec V)_{\Theta\Gamma}^{\alpha \beta} M_{\alpha}^3 M_{\beta}^3 ={\mathrm{f}}(\vec V)_{\Theta\Gamma}^{\alpha \beta} M_{\alpha}^4 M_{\beta}^4=0,\quad\mbox{for all }\Theta, \Gamma=0,1,\dots, 6,$$ then it is an easy exercise to show that the nonlinearities must be linear combinations of the standard null forms relative to $g$, with coefficients depending on $\vec{V}$. In our setting, while most of the nonlinear terms in the system - and - have the desired good null structure because they are linear combinations of the standard null forms relative to $g$, this is *not* the case for all of them. 
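For intuition, the vanishing mechanism behind this cancellation can be checked concretely in the simplest setting (our own illustrative aside): the Minkowski metric, which is the acoustical metric of Def. \[D:ACOUSTICALMETRIC\] with $v \equiv 0$ and ${c_s} \equiv 1$, together with its canonical null frame. The computation below verifies the frame relations of Def. \[D:NULLFRAME\], the decomposition of Lemma \[L:DECOMPOFGINVERSERELATIVETONULLFRAME\], and the vanishing of the coefficients of $(e_3 \phi)(e_3 \widetilde{\phi})$ and $(e_4 \phi)(e_4 \widetilde{\phi})$ for the null form $\mathscr{Q}^g$:

```python
import sympy as sp

# Minkowski metric: the acoustical metric with v ≡ 0 and c_s ≡ 1.
eta = sp.diag(-1, 1, 1, 1)
eta_inv = eta.inv()

# Canonical null frame: e1 = ∂_2, e2 = ∂_3, e3 = Lbar = ∂_t - ∂_1, e4 = L = ∂_t + ∂_1.
e1 = sp.Matrix([0, 0, 1, 0])
e2 = sp.Matrix([0, 0, 0, 1])
Lb = sp.Matrix([1, -1, 0, 0])
L = sp.Matrix([1, 1, 0, 0])

pair = lambda X, Y: (X.T * eta * Y)[0, 0]
# Frame relations of Def. [D:NULLFRAME]:
assert pair(L, L) == 0 and pair(Lb, Lb) == 0 and pair(L, Lb) == -2
assert all(pair(L, e) == 0 and pair(Lb, e) == 0 for e in (e1, e2))

# Lemma: g^{-1} = -(1/2) L⊗Lbar - (1/2) Lbar⊗L + e1⊗e1 + e2⊗e2.
decomp = -(L * Lb.T + Lb * L.T) / 2 + e1 * e1.T + e2 * e2.T
assert decomp == eta_inv

# Coefficients M_α^A in ∂_α = Σ_A M_α^A e_A: the columns of `frame` are e1, e2, e3, e4,
# so the matrix M with entries M[A, α] = M_α^A is the inverse of `frame`.
frame = sp.Matrix.hstack(e1, e2, Lb, L)
M = frame.inv()

# For Q^g(∂φ, ∂ψ) = (g^{-1})^{αβ} ∂_α φ ∂_β ψ, the coefficients of (e3 φ)(e3 ψ) and
# (e4 φ)(e4 ψ), namely (g^{-1})^{αβ} M_α^A M_β^A for A = 3, 4, vanish identically.
for A in (2, 3):  # 0-indexed rows corresponding to e3 and e4
    assert (M.row(A) * eta_inv * M.row(A).T)[0, 0] == 0
```

In the quasilinear problem, the same cancellation must hold for *every* $g$-null frame of the dynamic acoustical metric, which is exactly what Def. \[D:STRONGNULLCONDITION\] encodes.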
In particular, the term $\mathscr{P}_{({\upomega})}^i$ from equation verifies the strong null condition *only when $\vec V$ is a solution to the compressible Euler equations*; see the proof in Sect. \[SS:PROOFOFTHEOREMSTRONGNULL\] for further details. Comparing and contrasting the strong null condition to null structures found in other contexts {#SS:STRONGNULLVSOTHERNULL} ---------------------------------------------------------------------------------------------- The notion of a “null condition” was first introduced by Klainerman [@sK1984] in his study of small-data global existence for systems of nonlinear wave equations in three spatial dimensions. Ever since, his null condition and related ones have been ubiquitous in the analysis of nonlinear wave equations. They lie at the heart of many spectacular advances, including the stability of Minkowski spacetime, global regularity for critical geometric wave equations, and the formation of trapped surfaces, just to name a few. As we have mentioned and will further discuss, the shock formation results of Christodoulou [@dC2007; @dCsM2014] and Speck [@jS2014b] also rely on a type of null condition,[^29] and obtaining a deeper understanding of it lies at the heart of our forthcoming work on shock formation in the presence of vorticity. In this subsection, we briefly describe various notions of null conditions and compare/contrast them with the strong null condition of Def. \[D:STRONGNULLCONDITION\]. ### Small-data global existence problems {#sss.small.data} As we mentioned above, a notion of a null condition was first introduced [@sK1984] in the context of small-data global existence problems in $(1+3)$ dimensions. His notion, which we will call the *classic null condition*, is based on Taylor expanding the nonlinear terms up to quadratic order. 
A foundational result, due independently to Christodoulou [@dC1986a] and Klainerman [@sK1986], states that if the nonlinearities in the equation satisfy the classic null condition, then all sufficiently small initial data give rise to global solutions. In the wake of [@sK1984; @sK1986; @dC1986a], many extensions of these results have been proved. Perhaps the most spectacular of these is the monumental work of Christodoulou–Klainerman [@dCsK1993], who showed that Minkowski spacetime is stable as a solution to the Einstein vacuum equations. In particular, a key to the result is that for solutions to Einstein’s equations, the Bianchi equations, viewed as a system of evolution equations for the Weyl curvature tensor, exhibit a good null structure, similar to the structure introduced in [@sK1984], but adapted to the dynamic spacetime metric. The key insights behind Klainerman’s classic null condition are **i)** solutions to the linear wave equation on $\mathbb{R}^{1+3}$, when differentiated with respect to different elements of a (canonical) null frame,[^30] decay with different rates, and **ii)** most importantly from the point of view of analysis, the classic null condition excludes the presence of the most slowly decaying quadratic terms. In fact, in small-data global existence problems, cubic and higher-order terms decay much faster and thus are easier to control. It is for the latter reason that the classic null condition is concerned only with quadratic terms obtained from Taylor expanding the nonlinearities. In addition to the classic null condition, other kinds of null structures have been identified as being relevant in the context of small-data global existence problems. Moreover, like the classic null condition, these notions of a null structure typically allow for[^31] a “margin of error.” An important example is found in the study of the Einstein vacuum equations in the wave coordinate gauge. 
In this gauge, the equations violate the classic null condition, but still possess a structure that is a particular case of Lindblad–Rodnianski’s *weak null condition* [@hLiR2003]; see also [@sA2003; @hL2008]. In [@hLiR2010], Lindblad–Rodnianski exploited the weak null condition to give a proof of the stability of Minkowski spacetime in the wave coordinate gauge. Remarkably, while the true dynamic metric is not the Minkowski metric (in fact, the true null cones provably *diverge logarithmically* from their Minkowskian counterparts!), they nonetheless are able to control the nonlinear terms by relying on a weak null condition whose formulation was tied to the geometry of the background Minkowski metric. In addition, their weak null condition was not sensitive to the presence of most cubic terms. Moreover, their proof of small-data global existence relied only on vectorfields adapted to the Minkowskian characteristics (that is, the standard flat light cones) – and not the characteristics of the dynamic metric. Their approach, which was drastically simpler than the original approach of Christodoulou-Klainerman [@dCsK1993], was viable in part because even though some error terms are allowed to grow in time, the growth is sufficiently slow and can be suitably controlled. This is in stark contrast to the situation encountered in the proof of shock formation, which we describe in Subsubsect. \[sec.null.euler\]; near the shock singularity, one seems to need a null condition adapted exactly to the relevant metric (that is, the acoustical metric of Def. \[D:ACOUSTICALMETRIC\]) with no margin of error. ### Low-regularity problems {#sss.low.regularity} Another class of problems for which standard null forms play an important role is low-regularity problems. Specifically, many remarkable global low-regularity results have been achieved for various semilinear wave equations with standard null form nonlinearities. 
Examples include wave maps, Maxwell–Klein–Gordon equations, and Yang–Mills equations [@sKmM1994; @sKmM1995; @tT20082009; @jSdT2010; @jKwS2012; @jKjL2015; @sjOdT2015]. A crucial ingredient in these results is bilinear estimates, for which the full structure of the nonlinearity, as opposed to only its quadratic part, has to be exploited. In fact, typical derivative-cubic terms, while completely benign in the context of Subsubsect. \[sss.small.data\], would invalidate the proofs if the equations were modified to include them. In a recent breakthrough, Klainerman–Rodnianski–Szeftel [@sKiRjS2015] extended the above low-regularity techniques to the Einstein vacuum equations, which, in an appropriate gauge, constitute a *quasilinear* system of wave equations for which the semilinear terms are standard null forms. Their main result was a proof of the bounded $L^2$ curvature conjecture, which asserts that local existence of solutions to the Einstein vacuum equations holds true as long as the initial data have curvature in $L^2$. Their proof crucially relies on the fact that the nonlinear terms are standard null forms adapted to the dynamic metric $g$ (which occurs in the principal part of the equation), with no margin of error. In particular, the weak null condition of Lindblad–Rodnianski, while useful for small data global existence problems, seems irrelevant for these kinds of problems. Indeed, Ettinger–Lindblad recently showed [@bEhL2016] that in the wave coordinate gauge, such a low-regularity local existence result fails. As a final example of a quasilinear wave equation for which a null condition plays a crucial role, we mention the monumental work of Christodoulou [@dC2009] on the Einstein vacuum equations, in which he showed that trapped surfaces can form dynamically and, moreover, that their formation is stable. In this work, Christodoulou introduced the *short pulse method*.
More precisely, he introduced a small parameter $\delta$ such that the data are supported in a region of $\delta$-size null affine length and obey a tensorial hierarchy of smallness-largeness estimates, where sizes are measured in terms of powers of $\delta^{-1}$. Christodoulou showed [@dC2009] that due to the remarkable null structure of the equations in the double null foliation gauge, this hierarchy of large and small quantities can be propagated by the flow of the equations long enough for a trapped surface to form. In his work, it was important that the good null structure was adapted exactly to the dynamic metric $g$ in a manner similar to the discussions in the previous two paragraphs. In fact, as was pointed out in [@jLiR2013], this problem can be viewed as a low-regularity problem, since it is only for a very rough norm that the data are bounded independent of $\delta$. ### Null condition in the setting of compressible Euler equations with vorticity {#sec.null.euler} As we have already mentioned in the introduction, in the previous works on shock formation as well as in our forthcoming work, the special null structure of the inhomogeneous terms is one of the key ingredients in the proofs. In those works, although the initial data are regular, some low-order standard Sobolev norm (defined with respect to the Cartesian coordinate partial derivative vectorfields) of the solution blows up (for example, the standard $H^1$ norms of $v^i$ and $\rho$ blow up in [@jLjS2016b]). For this reason, the authors need to control the solution up to a time when this low-order standard Sobolev norm blows up; it is only relative to a special *low-regularity* norm involving directionally dependent powers of a geometric weight (specifically, the weight $\upmu$ defined in ), designed specifically to capture the geometry of the shock, that the solution remains bounded. It is therefore not surprising that our strong null condition (see Def. 
\[D:STRONGNULLCONDITION\]) shares many similarities with the null conditions mentioned in Subsubsect. \[sss.low.regularity\] (as opposed to the null condition of Subsubsect. \[sss.small.data\]). In particular, it is not surprising that our definition of the strong null condition refers to exact $g$–null frames, where $g$ is the acoustical metric of Def. \[D:ACOUSTICALMETRIC\]. Indeed, in the proof of shock formation, one must use vectorfields adapted to the true characteristics (as opposed to approximate ones) in order to avoid incurring uncontrollable error terms. Moreover, our condition cannot be based on truncated Taylor expansion, since the proof is very sensitive to cubic and higher-order terms. We note that there are two new features of the strong null condition for the compressible Euler equations. First, it appears to be the first instance of a null condition that is *not* based on truncations and that plays a crucial role in a problem involving quasilinear wave equations coupled to another quasilinear equation of a *different characteristic speed*. Second, as we already emphasized in Remark \[rmk.notallnullforms\], the strong null condition can accommodate some new nonlinearities that are not standard null forms. However, the cancellations needed to exhibit the good null structure of these new nonlinearities occur only for *solutions* to the system. This is somewhat reminiscent of the cancellations tied to the use of the wave coordinate gauge in the Lindblad–Rodnianski proof [@hLiR2010] of the stability of Minkowski spacetime. That is, in both cases, some of the special null structures found in the equations (which are needed to close the proofs) occur only for solutions. We note, however, that in our formulation of the compressible Euler equations, the cancellations are fully nonlinear and not tied to a gauge choice, which is different from the situation encountered in [@hLiR2010].
Proof of Theorem \[T:STRONGNULL\] {#SS:PROOFOFTHEOREMSTRONGNULL} --------------------------------- It is easy to see that the terms on the right-hand sides of equations - and - consist of three types: type **i)** terms (as defined in the statement of the theorem), quadratic terms consisting of linear combinations (with coefficients depending on $\vec{V}$) of the standard null forms - acting on the elements of $\vec{V}$, and the terms $\mathscr{P}_{({\upomega})}^i$ defined in . It thus suffices to consider the standard null forms and the term $\mathscr{P}_{({\upomega})}^i$. From the formula , which is valid for an arbitrary $g-$null frame, it is clear that the terms of the form $\mathscr{Q}^g(\cdot,\cdot)$ verify the strong null condition. To handle the terms of the form $\mathscr{Q}_{(\alpha \beta)}(\cdot,\cdot)$, we denote the null frame by $$\mathscr{N} := \lbrace e_1, e_2, e_3:= {\underline{L}}, e_4 := {L}\rbrace.$$ Since the null frame spans the tangent space at each point where it is defined, we can express, for $\alpha = 0,1,2,3$, $$\label{M.def} \partial_{\alpha} = \sum_{A=1}^4 M_{\alpha}^A e_A,$$ where the $M_{\alpha}^A$ are scalar-valued functions. Then $$\mathscr{Q}_{(\alpha \beta)}(\partial \phi,\partial \widetilde{\phi}) = \sum_{A,B=1}^4 \left\lbrace M_{\alpha}^A M_{\beta}^B - M_{\alpha}^B M_{\beta}^A \right\rbrace (e_A \phi) e_B \widetilde{\phi}.$$ The term in braces is antisymmetric in $A$ and $B$ and thus there are no diagonal terms $(e_A \phi) e_A \widetilde{\phi}$ present in the sum. In particular, terms proportional to $({\underline{L}}\phi) {\underline{L}}\widetilde{\phi}$ and $({L}\phi) {L}\widetilde{\phi}$ are not present. It follows that the terms $\mathscr{Q}_{(\alpha \beta)}(\cdot,\cdot)$ verify the strong null condition.
It remains for us to analyze the terms $$\begin{aligned} \label{E:VORTICITYQUADRATICTERMNULLFRAMEEXPANDED} \mathscr{P}_{({\upomega})}^i & = \sum_{A,B=1}^4 \epsilon_{iab} M_a^A M_c^B \left\lbrace (e_A {\upomega}^c) e_B v^b - (e_A v^c) e_B {\upomega}^b \right\rbrace\end{aligned}$$ from , where $M_a^A$ is as in . Note that all terms on RHS  are allowable under the strong null condition except for $$\begin{aligned} \label{E:EXCEPTIONALTERMS} & \epsilon_{iab} M_a^3 M_c^3 (e_3 {\upomega}^c) e_3 v^b - \epsilon_{iab} M_a^3 M_c^3 (e_3 v^c) e_3 {\upomega}^b \\ & \ \ + \epsilon_{iab} M_a^4 M_c^4 (e_4 {\upomega}^c) e_4 v^b - \epsilon_{iab} M_a^4 M_c^4 (e_4 v^c) e_4 {\upomega}^b. \notag\end{aligned}$$ To handle the terms in , we will use equation for substitution. We start by expanding the material derivative vectorfield (see ) relative to the null frame: $ {B}= \sum_{A=1}^4 \upbeta^A e_A $, where the $\upbeta^A$ are scalar-valued functions. From and -, we find that the product $\upbeta^3 \upbeta^4$ verifies $$\upbeta^3 \upbeta^4 \neq 0,$$ and thus both $\upbeta^3$ and $\upbeta^4$ are non-vanishing. Hence, using the expansion $ {B}= \sum_{A=1}^4 \upbeta^A e_A $, we can replace the factors $e_3 {\upomega}^c$, $e_3 {\upomega}^b$, $e_4 {\upomega}^c$, and $e_4 {\upomega}^b$ in respectively with $(1/\upbeta^3) {B}{\upomega}^c$, $(1/\upbeta^3) {B}{\upomega}^b$, $(1/\upbeta^4) {B}{\upomega}^c$, and $(1/\upbeta^4) {B}{\upomega}^b$, up to terms that are allowable under the strong null condition. It remains for us to analyze $$\begin{aligned} \label{E:MOREEXCEPTIONALTERMS} & \epsilon_{iab} (1/\upbeta^3) M_a^3 M_c^3 ({B}{\upomega}^c) e_3 v^b - \epsilon_{iab} (1/\upbeta^3) M_a^3 M_c^3 (e_3 v^c) {B}{\upomega}^b \\ & \ \ + \epsilon_{iab} (1/\upbeta^4) M_a^4 M_c^4 ({B}{\upomega}^c) e_4 v^b - \epsilon_{iab} (1/\upbeta^4) M_a^4 M_c^4 (e_4 v^c) {B}{\upomega}^b. 
\notag\end{aligned}$$ To handle the terms on the first line of , we use equation to replace $ {B}{\upomega}^c $ with $ \upomega^d M_d^3 e_3 v^c $ and $ {B}{\upomega}^b $ with $ \upomega^d M_d^3 e_3 v^b $, up to terms that are allowable under the strong null condition. After substitution, the terms on the first line of become, up to terms that are allowable under the strong null condition, $ \epsilon_{iab} (1/\upbeta^3) M_a^3 M_c^3 \upomega^d M_d^3 (e_3 v^c) e_3 v^b - \epsilon_{iab} (1/\upbeta^3) M_a^3 M_c^3 \upomega^d M_d^3 (e_3 v^c) e_3 v^b = 0 $; this identity is the key cancellation in the proof. Similarly, to handle the terms on the second line of , we can use equation to replace $ {B}{\upomega}^c $ with $ \upomega^d M_d^4 e_4 v^c $ and $ {B}{\upomega}^b $ with $ \upomega^d M_d^4 e_4 v^b $, up to terms that are allowable under the strong null condition, and then argue as above. This completes the proof of Theorem \[T:STRONGNULL\]. $\hfill \qed$ Ideas behind the proof of shock formation and its connection to null structure {#S:IDEASBEHINDSHOCKFORMATION} ============================================================================== In this section, we overview, without proof, how the structures revealed by Theorems \[T:GEOMETRICWAVETRANSPORTSYSTEM\] and \[T:STRONGNULL\] are used in our forthcoming results [@jLjS2016b; @jLjS2017] on stable shock formation for the compressible Euler equations in regions containing vorticity. That discussion is located in Subsect. \[SS:PREVIEWONSHOCKS\]. In the preliminary Subsect. \[SS:GOODNULLINPROOFOFSHOCKFORMATION\], we recall Christodoulou’s framework [@dC2007] for proving shock formation in the irrotational case; the framework also plays an important role in our works [@jLjS2016b; @jLjS2017].
We do not provide any proofs in the irrotational case either; detailed proofs, tailored to the discussion below, are located in [@jSgHjLwW2016], which provided an extension of Christodoulou’s result [@dC2007] to treat a new regime of initial data (see below for more details). We focus mainly on the work [@jSgHjLwW2016] rather than [@dC2007] because, for reasons explained below, some aspects of it are simpler to implement. Readers may also consult the survey article [@gHsKjSwW2016] for additional discussion on Christodoulou’s result [@dC2007] and related ones. The case of the irrotational wave equations and related quasilinear wave equations {#SS:GOODNULLINPROOFOFSHOCKFORMATION} ---------------------------------------------------------------------------------- In this subsection, we describe the main ideas behind the proof of shock formation in solutions to a general class of quasilinear wave equations that includes, as a special case, the irrotational compressible Euler equations. ### Preliminary discussion concerning the equations {#SSS:PRELIMINARYDICUSSION} In [@dC2007], Christodoulou provided a complete description of the formation of shocks for perturbations (belonging to a suitable high-order Sobolev space) of the non-vacuum constant state solutions to the equations of (special) relativistic fluid mechanics in three spatial dimensions in regions with vanishing vorticity. His results hold for any barotropic equation of state[^32] and were extended to the non-relativistic compressible Euler equations in [@dCsM2014]. In both the relativistic and non-relativistic cases, under an arbitrary barotropic equation of state, the dynamics in the irrotational case reduce to a quasilinear wave equation of Euler-Lagrange type for a potential function $\Phi$. 
The equation can be written relative to Cartesian coordinates in the following non-Euler-Lagrange form: $$\begin{aligned} \label{E:CHRISTODOULOUWAVEEQN} (g^{-1})^{\alpha \beta}(\partial \Phi) \partial_{\alpha} \partial_{\beta} \Phi & = 0,\end{aligned}$$ where the form of the Cartesian metric component functions $g_{\alpha\beta} = g_{\alpha\beta}(\partial \Phi)$ is determined by the equation of state. The Lorentzian “spacetime” metric $g$, whose inverse appears in , may be viewed as a $4 \times 4$ symmetric matrix of signature $(-,+,+,+)$. The metric $g$ is the exact analog of the acoustical metric from Def. \[D:ACOUSTICALMETRIC\]. It turns out that to prove shock formation for solutions to equation , it is convenient to differentiate the equation one time with Cartesian coordinate partial derivatives. This motivates the following definition, $(\nu = 0,1,2,3)$: $$\begin{aligned} \label{E:PSICOMPONENTSDEF} \vec{\Psi} & := (\Psi_0,\Psi_1,\Psi_2,\Psi_3), \qquad \Psi_{\nu} := \partial_{\nu} \Phi.\end{aligned}$$ In [@dC2007], Christodoulou showed that by differentiating the irrotational wave equations of relativistic fluid mechanics with Cartesian coordinate partial derivatives (see [@dCsM2014] for the same result in the case of the non-relativistic compressible Euler equations), one obtains the following system[^33] on $\mathbb{R}^{1+3}$: $$\begin{aligned} \label{E:DIFFCHRISTODOULOUWAVEEQN} \square_{g(\vec{\Psi})} \Psi_{\nu} & = 0.\end{aligned}$$ The nonlinearities in are hidden in the covariant wave operator $\square_{g(\vec{\Psi})}$ on the LHS. In [@jS2014b], Speck showed that the vanishing of RHS  is tied to the Euler-Lagrange structure of the original irrotational Euler wave equation. 
Moreover, he showed that *all* equations of the form (not necessarily of Euler-Lagrange type), upon differentiation, yield a system of the form $$\begin{aligned} \label{E:DIFFSPECKWAVEEQN} \square_{g(\vec{\Psi})} \Psi_{\nu} & = \mathscr{Q}_{\nu}(\vec{\Psi},\Psi_{\nu}),\end{aligned}$$ where $\mathscr{Q}_{\nu}(\cdot,\cdot)$ *verifies the strong null condition* of Def. \[D:STRONGNULLCONDITION\]. It is important for the proof of shock formation that $\square_{g(\vec{\Psi})}$ is the covariant wave operator of $g(\vec{\Psi})$ (see Def. \[D:COVWAVEOP\]); the operator $\square_{g(\vec{\Psi})}$ enjoys particularly good commutation properties with appropriately constructed vectorfields, which we describe in Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\]. As we have mentioned, in various solution regimes, the nonlinear terms $\mathscr{Q}_{\nu}$ on RHS  have a negligible effect[^34] on the dynamics; we explain this in more detail at the end of Subsubsect. \[SSS:SOMEREMARKSONTHEPROOF\]. For this reason, we ignore the $\mathscr{Q}_{\nu}$ for most of this subsection. Moreover, as is described in [@jS2014b], the proof of shock formation for solutions to the system is not much more difficult than the proof in the case of a single scalar wave equation. For this reason, until Subsect. \[SS:PREVIEWONSHOCKS\], we restrict our attention to the scalar covariant wave equation $$\begin{aligned} \label{E:SCALARMODELWAVE} \square_{g(\Psi)} \Psi & = 0.\end{aligned}$$ In [@jS2014b], Speck proved a small-data[^35] shock formation result, in the spirit of Christodoulou’s work [@dC2007], for all equations of type , , and in three spatial dimensions whenever the nonlinear terms *fail* to satisfy Klainerman’s null condition [@sK1984]. We recall that, as we described in Subsect. \[SS:PRELIMINARYDISCUSSION\], a similar but less precise result had been proved for equations of type by Alinhac. 
### Preliminary remarks on solution regimes {#SSS:DATA} It is by now well-understood that the geometric framework introduced in [@dC2007] can be applied to show stable shock formation for large classes of scalar quasilinear wave equations in different solution regimes. Specifically, the works [@dC2007; @dCsM2014; @jS2014b] prove shock formation results for various scalar quasilinear wave equations in $1+3$ dimensions for small compactly supported data, [@jSgHjLwW2016] treats the regime of nearly simple outgoing plane symmetric solutions, and [@sMpY2014] treats a special “short pulse” regime, which is a large-data regime. Although the fine details behind propagating the estimates are distinct in each case, all of these works rely on a similar geometric framework that is able to accommodate and sharply describe the formation of the shock. In our discussion here, we will focus on the solution regime of [@jSgHjLwW2016], since we will study a similar regime in our forthcoming works on shock formation in the presence of non-zero vorticity. We chose this solution regime in part because there is no dispersion, which simplifies some parts of the proof. That is, one does not need to keep track of decay in time or space, which simplifies some aspects of the analysis. However, we expect that our forthcoming work could be generalized to other solution regimes as long as one makes appropriate smallness assumptions. In [@jSgHjLwW2016], the authors proved a two-space-dimensional shock formation result for initial data posed on the Cauchy hypersurface $\mathbb{R} \times \mathbb{T}$ (where $\mathbb{T}$ is the torus), which were assumed to be close to that of plane symmetric simple[^36] outgoing[^37] waves, where the $\mathbb{T}$ direction corresponds to a breaking of the plane symmetry. By plane symmetric, we mean $\Psi = \Psi(t,x^1)$, while by nearly plane symmetric, we mean $\Psi = \Psi(t,x^1,x^2)$ with small initial dependence on $x^2 \in \mathbb{T}$. 
To propagate smallness in the problem, in particular the smallness of the perturbation away from simple plane symmetry, the authors of [@jSgHjLwW2016] introduced the data-size parameters $\mathring{\upepsilon}$ and $\mathring{\updelta}$, where $\mathring{\upepsilon}$ is small relative to $\mathring\updelta^{-1}$. The geometric meanings of $\mathring{\upepsilon}$ and $\mathring{\updelta}$ are easy to describe: $\mathring{\upepsilon}$ measures the size (in appropriate norms) of $\Psi$ itself and its derivatives in directions *tangent* to the characteristics,[^38] while $\mathring{\updelta}$ measures the size of the *purely transversal* (that is, transversal to the characteristics) derivatives of $\Psi$. It was also assumed that the mixed transversal-tangent derivatives are of small size $\mathring{\upepsilon}$. In the following, we will consider this solution regime, including the $\mathring{\updelta}$-$\mathring{\upepsilon}$ size hierarchy, but *adapted to three spatial dimensions*[^39] with the spatial manifold equal to $\mathbb R\times \mathbb T^2$. As we will discuss in Subsubsect. \[SSS:FEATURESINCOMMONWITHIRROTATIONALCASE\], we will consider a similar solution regime in our forthcoming work on shock formation in the presence of vorticity. ### Some preliminary remarks on the proof, including the significance of the strong null condition {#SSS:SOMEREMARKSONTHEPROOF} For the nearly simple outgoing plane symmetric solutions described in Subsubsect.
\[SSS:PRELIMINARYDICUSSION\], the shock-producing homogeneous quasilinear wave equations of type can be caricatured[^40] by the following equation on $\mathbb{R} \times \mathbb{R} \times \mathbb{T}^2$ (where $\Sigma_t \simeq \mathbb{R} \times \mathbb{T}^2$), when the derivatives are decomposed with respect to the Cartesian frame $\{{L}_{(Flat)} : = \partial_t + \partial_1,\, {\partial}_1, \, {\partial}_2,\, {\partial}_3\}$: $$\begin{aligned} \label{E:EQUATIONCARICATURE} {L}_{(Flat)} \partial_1 \Psi & = (\partial_1 \Psi)^2 + \mbox{\upshape Error}.\end{aligned}$$ In , $\mbox{\upshape Error}$ consists of quasilinear and semilinear terms depending on the up-to-second (principal!) order derivatives of $\Psi$ and, by assumption, $\mbox{\upshape Error}$ is *initially* small. Equation suggests that $\partial_1 \Psi$, if initially large with respect to $\mbox{\upshape Error}$, should experience Riccati-type blow up along the integral curves of ${L}_{(Flat)}$. However, the approach of writing the equation in the form does not seem to actually allow one to prove that ${\partial}_1\Psi$ blows up; it is difficult to guarantee that $\mbox{\upshape Error}$ is small throughout the evolution. To explain why the above scheme should fail, we must explain some basic facts about the nature of the blowup. Specifically, a fundamental aspect of Christodoulou’s framework is that the blowup occurs for derivatives of $\Psi$ in directions transversal to the characteristics, whose intersection is tied to the blowup. The relevant characteristics in the nearly plane symmetric regime are *perturbations* of the level sets of $u_{(Flat)} := 1 - x^1 + t$, to which ${L}_{(Flat)}$ is tangential. 
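For orientation, the Riccati-type blowup mechanism alluded to above can be illustrated by an elementary ODE computation (an illustration only, obtained by dropping the Error term and restricting to a single integral curve of ${L}_{(Flat)}$):

```latex
% Along an integral curve of L_{(Flat)}, set y(t) := \partial_1 \Psi and drop Error:
\frac{d}{dt} y(t) = y^2(t), \qquad y(0) = y_0 > 0.
% Separation of variables gives the explicit solution
y(t) = \frac{y_0}{1 - t\, y_0},
% which blows up at the finite time t_* = 1/y_0; larger initial data blow up sooner.
```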
In particular, terms such as $\partial_2^2 \Psi$, which have been relegated to the term $\mbox{\upshape Error}$ on RHS , generally blow up at the shock since $\partial_2$ is generally transversal to the characteristics (even though it is tangent to the characteristics $\lbrace 1 - x^1 + t = \mbox{\upshape const} \rbrace$ of the global background solution). It is for this reason that the scheme from the previous paragraph seems to be insufficient for proving that blowup occurs. A key idea behind Christodoulou’s approach in [@dC2007] is to “hide” the singularity via a dynamic change of coordinates, adapted to the characteristics. This can be viewed as a high-dimensional analogue of the well-known hodograph transformation in one spatial dimension, in which one introduces a new system of geometric coordinates in which the solution remains regular; the singularity reveals itself only in the degeneration of the map from geometric to Cartesian coordinates. This same phenomenon of hiding the singularity can be exhibited in a much simpler context via Burgers’ equation $\partial_t \Psi + \Psi \partial_x \Psi = 0$: shock-forming solutions to Burgers’ equation remain smooth (that is, $C^{\infty}$ if the data are) *relative to Lagrangian coordinates*.[^41] More precisely, in three spatial dimensions, one constructs a new system of *geometric coordinates* $(t,u,\vartheta^1,{\vartheta}^2)$ (with corresponding coordinate partial derivative vectorfields[^42] $ \displaystyle \left\lbrace \frac{\partial}{\partial t}, \frac{\partial}{\partial u}, \frac{\partial}{\partial \vartheta^1}, \frac{\partial}{\partial \vartheta^2} \right\rbrace $) on the spacetime $\mathbb{R} \times \mathbb{R} \times \mathbb{T}^2$, relative to which the solution remains regular, all the way up to the shock, except at the very high derivative levels, where the corresponding energies are allowed to blow up in a controlled fashion.
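To make the Burgers example explicit, consider initial data $\Psi(0,x) = f(x)$; the following standard computation (an illustration only, not taken from the works under discussion) exhibits the hiding of the singularity:

```latex
% The characteristics of \partial_t \Psi + \Psi \partial_x \Psi = 0 are straight
% lines, and \Psi is constant along them:
x(t,x_0) = x_0 + t f(x_0), \qquad \Psi\big(t, x(t,x_0)\big) = f(x_0).
% Relative to the Lagrangian coordinates (t,x_0), the solution is the smooth
% function f(x_0) for all time. The shock is visible only in the degeneration
% of the change of variables map (t,x_0) \mapsto (t,x):
\frac{\partial x}{\partial x_0}(t,x_0) = 1 + t f'(x_0),
% which first vanishes at t_* = \inf \lbrace -1/f'(x_0) \ : \ f'(x_0) < 0 \rbrace;
% at that time, \partial_x \Psi = f'(x_0)/(1 + t f'(x_0)) blows up.
```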
These degenerate energy estimates are the source of almost all of the technical difficulties that one faces; see Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\]. One might say that relative to the geometric coordinates, the singularity is *renormalizable* except at the very high derivative levels. The singularity occurs in the Cartesian coordinate partial derivatives $\partial_{\alpha} \Psi$ because the geometric coordinates degenerate (in a precise fashion that lies at the heart of the proof) relative to the Cartesian ones. The most important geometric coordinate is the eikonal function $u$, which we later describe in great detail. The eikonal function is constructed so that its level sets are null hypersurfaces (which we also refer to as “characteristics” or, in the context of the compressible Euler equations, as “acoustic characteristics” in view of their connection to sound wave propagation) and thus $ \displaystyle \left\lbrace \frac{\partial}{\partial t}, \frac{\partial}{\partial \vartheta^1}, \frac{\partial}{\partial \vartheta^2} \right\rbrace $ are tangent to the acoustic characteristics. It turns out that upon being re-expressed relative to the geometric coordinates, equation can be caricatured as $$\begin{aligned} \label{E:GEOMETRICCOORDINATEEQUATIONCARICATURE} \frac{\partial}{\partial t} \frac{\partial}{\partial u} \Psi & = \mbox{\upshape Error},\end{aligned}$$ where all terms in *remain highly differentiable relative to the geometric coordinates, all the way up to the shock*.[^43] As we mentioned above, the singularity in the first Cartesian coordinate partial derivatives of $\Psi$ is tied to the degeneration of the change of variables map between the geometric coordinates and the Cartesian ones.
To capture the degeneration, one defines a geometric weight $\upmu$ such that $\upmu \to 0$ corresponds to the intersection of the acoustic characteristics, the formation of a shock, and the blow up of the solution’s Cartesian coordinate partial derivatives. Thus, proving shock formation is equivalent to showing that $\upmu$ vanishes in finite time. It turns out that $\upmu$ verifies an evolution equation that can be caricatured, relative to geometric coordinates, as follows:[^44] $$\begin{aligned} \label{E:UPMUEVOLUTIONCARICATURE} \frac{\partial}{\partial t} \upmu & \sim \frac{\partial}{\partial u} \Psi + \mbox{\upshape Error},\end{aligned}$$ where $\mbox{\upshape Error}$ remains small, all the way up to the shock. The interplay between equations and is the key to understanding the shock formation. We now describe the relationship between the geometric and Cartesian coordinate partial derivative vectorfields. If $u$ is appropriately constructed, then, under appropriate assumptions on the data, one can show that for nearly plane symmetric solutions, we have $$\begin{aligned} \label{E:PARTIALUPARTIAL1CARICATURE} \frac{\partial}{\partial u} & = - \upmu \partial_1 + \upmu \mbox{\upshape Error},\end{aligned}$$ where in and , $\upmu \mbox{\upshape Error}$ remains small up to the shock; see Figure \[F:FRAME\] below to obtain insight on the vectorfield $ \displaystyle \frac{\partial}{\partial u} $, which is well-approximated by the vectorfield ${\breve{X}}$ appearing in that figure. Hence, to prove finite-time shock formation, one considers data such that $ \displaystyle \frac{\partial}{\partial u} \Psi $ is sufficiently negative at some point. By integrating in time, one can propagate this negativity for a long time, as long as $\mbox{\upshape Error}$ remains small. 
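The interplay between the caricature equations above can be summarized by a back-of-the-envelope computation (schematic only: we drop the Error terms and treat $\displaystyle \frac{\partial}{\partial u} \Psi$ as frozen at its initial value):

```latex
% Suppose (\partial/\partial u)\Psi \approx -D < 0 and \upmu|_{t=0} \approx 1.
% The caricature evolution equation for \upmu then gives
\frac{\partial}{\partial t} \upmu \sim \frac{\partial}{\partial u} \Psi \approx -D
\qquad \Longrightarrow \qquad
\upmu(t) \approx 1 - D\, t,
% so \upmu vanishes at the finite time t_* \approx 1/D. Inverting the relation
% \partial/\partial u = -\upmu \partial_1 + \upmu\,\mbox{Error}, we find
\partial_1 \Psi \approx - \frac{1}{\upmu} \frac{\partial}{\partial u} \Psi
\approx \frac{D}{\upmu} \longrightarrow \infty
\qquad \mbox{as } \upmu \downarrow 0,
% that is, a Cartesian derivative of \Psi blows up precisely when \upmu vanishes.
```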
Then by integrating in time, we see that $\upmu$ will vanish in finite time and, by dividing by $\upmu$, that some Cartesian coordinate partial derivative of $\Psi$ will blow up (in particular because $ \displaystyle \frac{\partial}{\partial u} \Psi $ is strictly non-zero at the points where $\upmu$ vanishes). To make this argument precise in more than one spatial dimension, one of course needs to derive energy estimates. As we mentioned above, this is the difficult part of the proof; see Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\]. Based on the above discussion, it is easy to explain the significance of the strong null condition of Def. \[D:STRONGNULLCONDITION\] in the case where the nonlinear terms on the right hand side of are precisely quadratic in the solution’s derivatives. We first note that one can construct a null[^45] frame[^46] that is expressible relative to the geometric coordinates as follows: $$\begin{aligned} \label{E:SHOCKSNULLFRAME} \left\lbrace {L}= \frac{\partial}{\partial t},\, {\breve{\underline{L}}}= \upmu \frac{\partial}{\partial t} + 2 \frac{\partial}{\partial u} - 2 \upmu \Xi,\, e_1,\, e_2 \right\rbrace.\end{aligned}$$ Above, $\Xi$ is a vectorfield in the span of $ \displaystyle \left\lbrace \frac{\partial}{\partial \vartheta^1}, {\frac}{{\partial}}{{\partial}{\vartheta}^2} \right\rbrace$, ${\breve{\underline{L}}}$ is $g$-null and $g$–orthogonal to the constant-$(t,u)$ tori, and $\lbrace e_1, e_2 \rbrace$ is an arbitrary $g$–orthonormal frame in the span[^47] of $ \displaystyle \left\lbrace \frac{\partial}{\partial \vartheta^1}, {\frac}{{\partial}}{{\partial}{\vartheta}^2} \right\rbrace $. 
To proceed, we note that by , a typical quadratic semilinear term $\sum_{\alpha,\beta= 0}^3 C_{\alpha \beta} (\partial_{\alpha} \Psi) \partial_{\beta} \Psi$, with $C_{\alpha \beta}$ constants and $\partial_{\alpha}$ the Cartesian coordinate partial derivative vectorfields, if present on RHS , would yield (on RHS , after multiplying by the factor $\upmu$ as described above), relative to the geometric coordinates, a term proportional to $ \displaystyle \frac{1}{\upmu} \left(\frac{\partial}{\partial u} \Psi \right)^2 $. The main problem with such a term is that the factor of $1/ \upmu$ prevents one from proving that $ \displaystyle \frac{\partial}{\partial u} \Psi $ remains bounded all the way up to the singularity (which is caused by the vanishing of $\upmu$) and therefore obstructs the basic philosophy of the approach: showing that $\Psi$ remains regular relative to the geometric coordinates. Similarly, generic cubic terms in $\partial \Psi$ would yield an even worse term $ \displaystyle \frac{1}{\upmu^2} \left(\frac{\partial}{\partial u} \Psi \right)^3 $. 
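The origin of the dangerous factor can be seen by direct substitution; the following schematic computation (using only the relation $\partial_1 \sim -\upmu^{-1} \frac{\partial}{\partial u}$, up to small errors) shows how generic quadratic and cubic terms degenerate:

```latex
% Substitute \partial_1 \sim -\upmu^{-1}(\partial/\partial u) into a generic
% Cartesian quadratic term and multiply the equation by \upmu (the multiplication
% by \upmu arises when passing to the geometric form of the wave equation):
\upmu \, (\partial_1 \Psi)^2
\sim \upmu \cdot \frac{1}{\upmu^2} \Big( \frac{\partial}{\partial u} \Psi \Big)^2
= \frac{1}{\upmu} \Big( \frac{\partial}{\partial u} \Psi \Big)^2.
% A generic cubic term is worse by another factor of \upmu^{-1}:
\upmu \, (\partial_1 \Psi)^3
\sim - \frac{1}{\upmu^2} \Big( \frac{\partial}{\partial u} \Psi \Big)^3.
```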
However, terms verifying the strong null condition do not suffer from these problems; it is easy to see from that quadratic terms verifying the strong null condition can yield, for example, terms on RHS  of the form $ \displaystyle \frac{\partial}{\partial t} \Psi \cdot \frac{\partial}{\partial u} \Psi $, $ \displaystyle {\frac}{{\partial}}{{\partial}{\vartheta}^1} \Psi \cdot \frac{\partial}{\partial u} \Psi $, $ \displaystyle \upmu \left({\frac}{{\partial}}{{\partial}{\vartheta}^1} \Psi \right)^2 $, or $ \displaystyle \upmu \left({\frac}{{\partial}}{{\partial}{\vartheta}^2} \Psi \right)^2 $, which do not incur the dangerous factor $1/\upmu$ and which involve at least one differentiation with respect to an element of $ \displaystyle \left\lbrace {{\frac}{{\partial}}{{\partial}t}},{{\frac}{{\partial}}{{\partial}{\vartheta}^1}},{{\frac}{{\partial}}{{\partial}{\vartheta}^2}} \right\rbrace, $ which are tangent to the characteristics. Moreover, in the context of nearly simple outgoing plane symmetric waves (which we described in Subsubsect. \[SSS:DATA\]), not only are the quadratic terms verifying the strong null condition regular near the shock, they are also *small* all the way up to the shock. This is because each term in their decomposition relative to the geometric coordinates contains a “tangential” factor that is differentiated with respect to an element of $ \displaystyle \left\lbrace {{\frac}{{\partial}}{{\partial}t}},{{\frac}{{\partial}}{{\partial}{\vartheta}^1}},{{\frac}{{\partial}}{{\partial}{\vartheta}^2}} \right\rbrace $; tangential factors enjoy the $\mathcal{O}(\mathring{\upepsilon})$ smallness described in Subsubsect. \[SSS:DATA\]. ### Normalization choices and assumptions on the nonlinearities to ensure shock formation {#SSS:ASSUMPTIONSONNONLINEARITIES} We recall that we are studying shock formation in nearly simple outgoing plane symmetric solutions to the wave equation . 
After appropriate normalization choices and rescaling, we may assume[^48] that the Cartesian components of the metric from verify $$\begin{aligned} \label{E:LITTLEGDECOMPOSED} g_{\mu \nu} = g_{\mu \nu}(\Psi) & := m_{\mu \nu} + g_{\mu \nu}^{(Small)}(\Psi), && (\mu, \nu = 0,1,2,3),\end{aligned}$$ where $m_{\mu \nu} = \mbox{diag}(-1,1,1,1)$ is the standard Minkowski metric on $\mathbb{R} \times \mathbb{R} \times \mathbb{T}^2$ and $g_{\mu \nu}^{(Small)}(\Psi)$ are given smooth functions of $\Psi$ with $$\begin{aligned} \label{E:METRICPERTURBATIONFUNCTION} g_{\mu \nu}^{(Small)}(\Psi = 0) = 0, \qquad (g^{-1})^{00}(\Psi) \equiv -1.\end{aligned}$$ To ensure that shocks form, we assume that $$\begin{aligned} \label{E:NONVANISHINGNONLINEARCOEFFICIENT} G_{\alpha \beta}(\Psi = 0) {L}_{(Flat)}^{\alpha} {L}_{(Flat)}^{\beta} \neq 0,\end{aligned}$$ where $$\begin{aligned} \label{E:LFLAT} G_{\alpha \beta} = G_{\alpha \beta}(\Psi) & := \frac{d}{d \Psi} g_{\alpha \beta}(\Psi),\qquad {L}_{(Flat)} := \partial_t + \partial_1.\end{aligned}$$ The assumptions and are equivalent to the assumption that Klainerman’s null condition fails for equation for solutions depending only on $(t,x^1)$. Roughly, this implies that relative to Cartesian coordinates, there are quadratic Riccati-type semilinear terms present in the wave equation, as we caricatured with the model term “$(\partial_1 \Psi)^2$” in Subsubsect. \[SSS:SOMEREMARKSONTHEPROOF\]. ### The eikonal function and related geometric constructions {#SSS:GEOMETRICCONSTRUCTIONS} As we explained above, the main idea behind the proof of shock formation under Christodoulou’s framework [@dC2007] is that one can construct a new system of *geometric coordinates* $(t,u,\vartheta^1,{\vartheta}^2)$ (see Def. \[D:GEOMETRICCOORDS\]) relative to which the solution remains regular, all the way up to the shock, except at the very high derivative levels. As in the rest of the article, $t$ is the standard Cartesian time function. 
The most important geometric coordinate is the eikonal function $u$, which solves the eikonal equation, a nonlinear hyperbolic PDE coupled to the wave equation: $$\begin{aligned} \label{E:EIKONAL} (g^{-1})^{\alpha \beta}(\Psi) \partial_{\alpha} u \partial_{\beta} u & = 0, \qquad \partial_t u > 0,\end{aligned}$$ where $g = g(\Psi)$ is the Lorentzian metric appearing in . We supplement with the initial conditions $$\begin{aligned} \label{E:ICEIKONAL} u|_{\Sigma_0} = 1 - x^1.\end{aligned}$$ The choice is motivated by the (assumed) approximate plane symmetry of the initial data for the wave equation. The following regions of spacetime are determined by $t$ and $u$ and play an important role in the analysis. They are depicted in Figure \[F:SOLIDREGION\]. \[**Subsets of spacetime**\] \[D:HYPERSURFACESANDCONICALREGIONS\] We define the following spacetime subsets: $$\begin{aligned} \Sigma_{t'} & := \lbrace (t,x^1,x^2,x^3) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T}^2 \ | \ t = t' \rbrace, \label{E:SIGMAT} \\ \Sigma_{t'}^{u'} & := \lbrace (t,x^1,x^2,x^3) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T}^2 \ | \ t = t', \ 0 \leq u(t,x^1,x^2,x^3) \leq u' \rbrace, \label{E:SIGMATU} \\ \mathcal{P}_{u'} & := \lbrace (t,x^1,x^2,x^3) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T}^2 \ | \ u(t,x^1,x^2,x^3) = u' \rbrace, \label{E:PU} \\ \mathcal{P}_{u'}^{t'} & := \lbrace (t,x^1,x^2,x^3) \in \mathbb{R} \times \mathbb{R} \times \mathbb{T}^2 \ | \ 0 \leq t \leq t', \ u(t,x^1,x^2,x^3) = u' \rbrace, \label{E:PUT} \\ \ell_{t',u'} &:= \mathcal{P}_{u'}^{t'} \cap \Sigma_{t'}^{u'},\\ \mathcal{M}_{t',u'} & := \cup_{u \in [0,u']} \mathcal{P}_u^{t'}. \label{E:MTUDEF}\end{aligned}$$ The most important of these subsets are the $\mathcal{P}_u$, which we describe below in more detail. 
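Before describing the $\mathcal{P}_u$ further, it may help to record what these constructions reduce to in the trivial case $\Psi \equiv 0$ (a simple check; in this case $g = m$ and the eikonal problem can be solved by hand):

```latex
% Flat-background solution of the eikonal equation
% with the data u|_{\Sigma_0} = 1 - x^1:
u(t,x^1,x^2,x^3) = 1 + t - x^1.
% Check: since g = m,
(m^{-1})^{\alpha \beta} \partial_{\alpha} u \, \partial_{\beta} u
  = - (\partial_t u)^2 + (\partial_1 u)^2
  = - 1 + 1
  = 0,
\qquad
\partial_t u = 1 > 0.
% The level sets are the parallel flat null hyperplanes
\mathcal{P}_{u'} = \lbrace x^1 = t + (1 - u') \rbrace,
% which move to the right at unit speed and never intersect.
```

For small data the $\mathcal{P}_u$ are perturbations of these null hyperplanes; shock formation corresponds to the perturbed characteristics compressing toward one another, that is, to the vanishing of the inverse foliation density $\upmu$ defined below.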
[Solidregion.pdf]{} *(Figure: the spacetime region $\mathcal{M}_{t,u}$, bounded by the characteristic portions $\mathcal{P}_u^t$ and $\mathcal{P}_0^t$ (with $\Psi \equiv 0$ to the right of $\mathcal{P}_0^t$), the hypersurface portions $\Sigma_0^u$ and $\Sigma_t^u$, and the tori $\ell_{0,u}$, $\ell_{0,0}$, $\ell_{t,u}$, $\ell_{t,0}$; the horizontal axis is $x^1 \in \mathbb{R}$ and the vertical axis is $x^2 \in \mathbb{T}$.)* \[F:SOLIDREGION\] We now explain how to use $u$ to construct a good vectorfield frame, given in equation , that is useful for studying the solution. We note that the frame is closely related to a $g$-null frame (see Def. \[D:NULLFRAME\]); however, it is not literally a $g$-null frame (and the difference is not important for the main ideas of the discussion). To proceed, we associate the following gradient vectorfield to the eikonal function: $$\begin{aligned} \label{E:LGEOEQUATION} {L_{(Geo)}}^{\nu} & := - (g^{-1})^{\nu \alpha} \partial_{\alpha} u.\end{aligned}$$ From , we deduce that ${L_{(Geo)}}$ is future-directed and $g$-null: $$\begin{aligned} \label{E:LGEOISNULL} g({L_{(Geo)}},{L_{(Geo)}}) := g_{\alpha \beta} {L_{(Geo)}}^{\alpha} {L_{(Geo)}}^{\beta} = 0.\end{aligned}$$ Moreover, we can differentiate the eikonal equation with ${\mathscr{D}}^{\nu} := (g^{-1})^{\nu \alpha} {\mathscr{D}}_{\alpha}$, where ${\mathscr{D}}$ is the Levi-Civita connection of $g$, and use the torsion-free property of ${\mathscr{D}}$ to deduce that $ 0 = (g^{-1})^{\alpha \beta} {\mathscr{D}}_{\alpha} u {\mathscr{D}}_{\beta} {\mathscr{D}}^{\nu} u = - {\mathscr{D}}^{\alpha} u {\mathscr{D}}_{\alpha} {L_{(Geo)}}^{\nu} = {L_{(Geo)}}^{\alpha} {\mathscr{D}}_{\alpha} {L_{(Geo)}}^{\nu} $.
That is, ${L_{(Geo)}}$ is geodesic: $$\begin{aligned} \label{E:LGEOISGEODESIC} {\mathscr{D}}_{{L_{(Geo)}}} {L_{(Geo)}}& = 0.\end{aligned}$$ In addition, since ${L_{(Geo)}}$ is proportional to the $g-$dual of the one-form $d u$, which is co-normal to the level sets $\mathcal{P}_u$ of the eikonal function, it follows that ${L_{(Geo)}}$ is $g-$orthogonal to $\mathcal{P}_u$. Hence, the $\mathcal{P}_u$ have null normals. For this reason, such hypersurfaces are known as *null hypersurfaces*. We sometimes refer to them as “characteristics” or, in the context of the compressible Euler equations, as “acoustic characteristics.” As we mentioned earlier, the most important quantity in connection with shock formation is the inverse foliation density. \[D:UPMUDEF\] Let ${L_{(Geo)}}^0$ be the $0$ Cartesian component of the vectorfield ${L_{(Geo)}}$ defined in . We define the inverse foliation density $\upmu$ as follows: $$\begin{aligned} \label{E:UPMUDEF} \upmu & := \frac{-1}{(g^{-1})^{\alpha \beta} \partial_{\alpha} t \partial_{\beta} u} = \frac{-1}{(g^{-1})^{0 \alpha} \partial_{\alpha} u} = \frac{1}{{L_{(Geo)}}^0}.\end{aligned}$$ $1/\upmu$ is a measure of the density of the characteristics $\mathcal{P}_u$ relative to the constant-time hypersurfaces $\Sigma_t$. When $\upmu$ becomes $0$, the density becomes infinite and the level sets of $u$ intersect. The idea to study this quantity in the context of shock formation goes back at least to [@fJ1974], in which John proved a blowup result for solutions to a large class of hyperbolic systems in one spatial dimension. It is easy to show that under the assumptions of Subsubsect. \[SSS:ASSUMPTIONSONNONLINEARITIES\] and , we have $$\begin{aligned} \label{E:UPMUINITIAL} \upmu|_{\Sigma_0} & = 1 + \mathcal{O}(\Psi).\end{aligned}$$ In particular, when $|\Psi|$ is initially small, $\upmu$ is initially near unity. It turns out that the Cartesian components ${L_{(Geo)}}^{\nu}$ blow up when $\upmu$ vanishes (that is, when the shock forms).
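To sketch why the initial condition $\upmu|_{\Sigma_0} = 1 + \mathcal{O}(\Psi)$ holds (a two-line computation under the assumptions of Subsubsect. \[SSS:ASSUMPTIONSONNONLINEARITIES\]): along $\Sigma_0$ the data give $\partial_1 u = -1$ and $\partial_2 u = \partial_3 u = 0$, while $(g^{-1})^{00} \equiv -1$ and the remaining inverse metric components differ from their Minkowski values by $\mathcal{O}(\Psi)$. The eikonal equation then determines $\partial_t u$:

```latex
% Solving the eikonal equation for \partial_t u along \Sigma_0:
0 = (g^{-1})^{\alpha \beta} \partial_{\alpha} u \, \partial_{\beta} u
  = - (\partial_t u)^2 + \mathcal{O}(\Psi) \cdot \partial_t u
    + 1 + \mathcal{O}(\Psi)
\quad \Longrightarrow \quad
\partial_t u|_{\Sigma_0} = 1 + \mathcal{O}(\Psi).
% Hence, by the definition of the inverse foliation density,
\upmu|_{\Sigma_0}
  = \frac{-1}{(g^{-1})^{0 \alpha} \partial_{\alpha} u}
  = \frac{1}{\partial_t u + \mathcal{O}(\Psi)}
  = 1 + \mathcal{O}(\Psi).
```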
It also turns out that the products $\upmu {L_{(Geo)}}^{\nu}$ remain regular all the way up to the shock. For this reason, the vectorfield ${L}:= \upmu {L_{(Geo)}}$ is useful for studying the solution. \[D:LUNITDEF\] We define the rescaled null (see ) vectorfield ${L}$ as follows: $$\begin{aligned} \label{E:LUNITDEF} {L}& := \upmu {L_{(Geo)}}. \end{aligned}$$ Note that ${L}t = 1$. We now dynamically construct geometric torus coordinates $\vartheta^1$ and $\vartheta^2$ by setting $\vartheta^1|_{\Sigma_0} = x^2$, ${\vartheta}^2|_{\Sigma_0} = x^3$ (with $x^2$ and $x^3$ being the standard Cartesian coordinates on $\mathbb{T}^2$) and propagating the $\vartheta^A$ to the future via the transport equation $$\begin{aligned} \label{E:TRANSPORTEQUATIONFORTHETA} {L}\vartheta^1 & = {L}\vartheta^2 = 0.\end{aligned}$$ \[D:GEOMETRICCOORDS\] We refer to $(t,u,\vartheta^1,{\vartheta}^2)$ as the geometric coordinates. We denote the corresponding geometric partial derivative vectorfields by $$\begin{aligned} \label{E:GEOCORDSPARTIALDERIVVECTORFIELDS} \left\lbrace \frac{\partial}{\partial t}, \frac{\partial}{\partial u}, \frac{\partial}{\partial \vartheta^1}, {\frac}{{\partial}}{{\partial}{\vartheta}^2} \right\rbrace. \end{aligned}$$ In addition to using the geometric coordinates, we also follow [@dC2007] and introduce a set of geometric vectorfields adapted to them (and to the characteristics). The necessity of using the geometric vectorfields is tied to a “regularity issue” that we suppressed in Subsubsect. \[SSS:SOMEREMARKSONTHEPROOF\] for the simplicity of the exposition: directly commuting the coordinate vectorfields $ \displaystyle \frac{\partial}{\partial u}, $ $ \displaystyle \frac{\partial}{\partial \vartheta^1} $ and $ \displaystyle \frac{\partial}{\partial \vartheta^2} $ through the wave equation leads to error terms that lose a derivative, a difficulty that we do not know how to overcome at the top order.
In contrast, in commuting with the geometric vectorfields, we are able to overcome[^49] the potential derivative loss yet still capture the geometry of the shock singularity. \[frame.def.1\] We define ${X}$ to be the unique $\Sigma_t-$tangent vectorfield that is $g-$orthogonal to $\ell_{t,u}$ and normalized by $$\begin{aligned} \label{E:GLUNITRADUNITISMINUSONE} g({L},{X}) = -1. \end{aligned}$$ Moreover, we define the following $\upmu-$weighted version of ${X}$: $$\begin{aligned} {\breve{X}}& := \upmu {X}. \label{E:RADANDULGOOD} \end{aligned}$$ Finally, we define ${Y}_1$ (respectively ${Y}_2$) to be the $g$-orthogonal projection of ${\partial}_2$ (respectively ${\partial}_3$) onto $\ell_{t,u}$. It is convenient to use the following rescaled frame,[^50] which can be viewed as a replacement of the geometric coordinate partial derivative vectorfields that does not suffer from the regularity problems mentioned above. \[D:RESCALEDFRAME\] We define the rescaled frame to be $$\begin{aligned} \label{E:RESCALEDFRAME} \left\lbrace {L}, {\breve{X}}, {Y}_1, {Y}_2 \right\rbrace.\end{aligned}$$ The rescaled frame is depicted in Figure \[F:FRAME\]. It spans the tangent space of spacetime at each point with $\upmu > 0$. Moreover, by construction, $\{L,{Y}_1,{Y}_2\}$ are *tangential* to the characteristics $\mathcal P_u$, while ${\breve{X}}$ is *transversal*. In addition, we note that the Cartesian components of ${\breve{X}}$ are proportional to $\upmu$ and thus are small in regions where $\upmu$ is small. We now compare the rescaled frame to the geometric coordinate partial derivative vectorfields in . Indeed, one first notes that $$\label{E:LISDDT} L={\frac}{{\partial}}{{\partial}t}.$$ Next, one computes that ${\breve{X}}u=1$ and therefore, since ${\breve{X}}$ is tangent to $\Sigma_t$, we have $ \displaystyle {\breve{X}}= \frac{\partial}{\partial u} - \Xi $, where $\Xi$ is an $\ell_{t,u}-$tangent vectorfield.
Finally, the pair $\{{Y}_1, {Y}_2\}$, like $ \displaystyle \left\lbrace {\frac}{{\partial}}{{\partial}{\vartheta}^1},{\frac}{{\partial}}{{\partial}{\vartheta}^2} \right\rbrace $, are tangential to $\ell_{t,u}$ (and in particular to the characteristics $\mathcal{P}_u$). [Frame.pdf]{} *(Figure: the rescaled frame vectorfields ${L}$, ${\breve{X}}$, ${Y}_1$, depicted both in a region where $\upmu \approx 1$, near $\mathcal{P}_1^t$ and $\mathcal{P}_u^t$, and in a region where $\upmu$ is small, near $\mathcal{P}_0^t$, to the right of which $\Psi \equiv 0$.)* \[F:FRAME\] The reader may have noticed that while the definition of the strong null condition (see Def. \[D:STRONGNULLCONDITION\]) is based on null frames, our current discussion on the geometry does not explicitly feature a null frame.[^51] Nevertheless, one can construct a null frame out of $\{L,{\breve{X}},{Y}_1,{Y}_2\}$ by defining the null vector[^52] $\underline{L}:=L+2\upmu^{-1}{{\breve{X}}}$ and performing the Gram–Schmidt process on $\{{Y}_1,{Y}_2\}$ to obtain an orthonormal frame. As it turns out, the proof of shock formation can be carried out relative to the rescaled frame $\left\lbrace L,{\breve{X}},{Y}_1,{Y}_2 \right\rbrace$, which is sufficiently adapted to the characteristics and has all of the properties needed for capturing the good null structure in the equation. Having introduced the geometric setup, we now provide a more precise version of the evolution equation for $\upmu$ caricatured in . It plays a key role in the ensuing discussion. The proof of the lemma is based on decomposing the “$0$ component” of the geodesic equation ; see [@jSgHjLwW2016] for the short proof. \[L:SCHEMATICEVOLUTIONEQUATIONINVERSEFOLIATIONDENSITY\] The inverse foliation density from Def.
\[D:UPMUDEF\] verifies the following evolution equation, where the first product on the RHS is exactly depicted and the second one is schematically depicted: $$\begin{aligned} \label{E:SCHEMATICEVOLUTIONEQUATIONINVERSEFOLIATIONDENSITY} {L}\upmu & = \frac{1}{2} G_{{L}{L}} {\breve{X}}\Psi + \upmu \mathcal{O}({L}\Psi),\end{aligned}$$ where $G_{{L}{L}} := G_{\alpha \beta} {L}^{\alpha} {L}^{\beta}$ and $G_{\alpha \beta} = G_{\alpha \beta}(\Psi)$ is defined in . ### A quick summary of the proof of shock formation in the irrotational case {#SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION} We now summarize the proof of shock formation for solutions to equation for perturbations of simple outgoing plane symmetric waves (which we described in Footnote \[FN:SIMPLEPLANE\] and Subsubsect. \[SSS:PRELIMINARYDICUSSION\]); see [@jSgHjLwW2016] for the complete proof in the case of two spatial dimensions, which can be extended to the current setting of three spatial dimensions using the techniques established in [@dC2007; @jS2014b]. 1. (**Dynamic geometric tensors, adapted to** $u$) As in the proof of the stability of Minkowski spacetime [@dCsK1993], one constructs various geometric tensors adapted to the eikonal function $u$. In particular, one constructs a set of “commutation vectorfields” $$\begin{aligned} \label{E:COMMSET} {\mathscr{Z}}& := \lbrace {L}, {\breve{X}}, {Y}_1, {Y}_2 \rbrace, \end{aligned}$$ used to differentiate the equations and obtain estimates for the solution’s derivatives. The set ${\mathscr{Z}}$ (whose elements we constructed in Subsubsect. \[SSS:GEOMETRICCONSTRUCTIONS\]) spans the tangent space of spacetime at each point with $\upmu > 0$. To close the proof of shock formation outlined in Subsubsect. \[SSS:SOMEREMARKSONTHEPROOF\], one needs non-degenerate $L^{\infty}$ estimates for the ${\mathscr{Z}}$ derivatives of $\Psi$ and various tensorfields up to a certain order.
However, it turns out that due to the special structure of the equations relative to geometric coordinates, one can obtain sufficient energy estimates by commuting the wave equation only with vectorfields belonging to the commutation subset $$\begin{aligned} \label{E:TANCOMMSET} {\mathscr{P}}& := \lbrace {L}, {Y}_1, {Y}_2 \rbrace, \end{aligned}$$ which spans the tangent space of the characteristics $\mathcal{P}_u$ at each point. One may check that for $Z \in {\mathscr{Z}}$, the Cartesian components $Z^{\alpha}$ depend on the first Cartesian coordinate partial derivatives of $u$. We can schematically denote this by $Z \sim \partial u$. Hence, the regularity of the vectorfields themselves is tied to the regularity of $\Psi$ through the eikonal equation . It turns out that this simple fact introduces enormous technical complications into the derivation of energy estimates; see Step (7). 2. (**The geometric structure of the commuted wave equation**) To close the proof of shock formation, one needs to commute the wave equation many times with the elements of ${\mathscr{P}}$ and then derive energy estimates for $\Psi$ up to top order. More precisely, one commutes the $\upmu-$weighted wave equation $\upmu \square_{g(\Psi)} \Psi = 0$. We stress that *it is important that the weight in the previous equation is precisely $\upmu$*; the weight leads to important cancellations in commutation identities and, at the same time, is compatible with various degenerate error terms that one encounters in the energy estimates, as described below.
The main challenge is to bound the commutator terms, whose basic structure is revealed by the following commutation identity, written in schematic form except for the second product on the RHS (which is written exactly up to the overall sign): $$\begin{aligned} \label{E:COMMUTEDWAVE} \upmu \square_g(P \Psi) & = P(\upmu \square_g \Psi) + \upmu^{-1} \left\lbrace {{^{(P)} \mkern-1mu \pi_{{L}{\breve{X}}}}} + P \upmu \right\rbrace (\upmu \square_g \Psi) \\ & \ \ + \upmu {{^{(P)} \mkern-1mu \pi}} \cdot {\mathscr{D}}^2 \Psi + \upmu {\mathscr{D}}{{^{(P)} \mkern-1mu \pi}} \cdot {\mathscr{D}}\Psi. \notag \end{aligned}$$ In , ${\mathscr{D}}$ is the Levi-Civita connection of $g$, ${{^{(P)} \mkern-1mu \pi_{\alpha \beta}}} := {\mathscr{D}}_{\alpha} P_{\beta} + {\mathscr{D}}_{\beta} P_{\alpha} $ is the deformation tensor of $P$, and ${{^{(P)} \mkern-1mu \pi}} \cdot {\mathscr{D}}^2 \Psi$ and ${\mathscr{D}}{{^{(P)} \mkern-1mu \pi}} \cdot {\mathscr{D}}\Psi$ schematically denote tensorial contractions involving the derivatives of ${{^{(P)} \mkern-1mu \pi}}$ and $\Psi$. There are many important cancellations in the products on RHS  due to the special structure of the elements of ${\mathscr{P}}$. The net effect is that when one decomposes the differentiations on RHS  relative to the rescaled frame , *none of the terms involve any singular factors of* $1/\upmu$. For example, factors such as $(1/\upmu) {\breve{X}}{\breve{X}}\Psi$ do not appear. This is completely consistent with the philosophy that the solution should remain regular relative to the geometric coordinates and is closely tied to our definition of the strong null condition (which, roughly speaking, posits that there are no terms involving two differentiations in the transversal ${\breve{X}}$ direction).
We also note that from this perspective, the operator $\upmu {\mathscr{D}}$ on RHS  should not be viewed as having an “extra” $\upmu$ weight; for by definition , the factor of $\upmu$ gets soaked up into the definition of ${\breve{X}}$ when performing decompositions. In view of these considerations, we note that the second product on RHS  could in principle, after more than one commutation, introduce a crippling factor of $ 1/\upmu $ into the commuted equations, which at the lower derivative levels would obstruct the goal of obtaining non-degenerate estimates. However, the vectorfields are constructed so that the sum ${{^{(P)} \mkern-1mu \pi_{{L}{\breve{X}}}}} + P \upmu$ completely vanishes. Note also that by the above remarks, the factor ${\mathscr{D}}{{^{(P)} \mkern-1mu \pi}}$ on RHS  depends on *three* derivatives of $u$. As we will explain in Step (7), this simple fact is the source of most of the difficulty in the proof. 3. (**Size assumptions on the data**) The main idea of [@jSgHjLwW2016] was to treat a regime in which the initial data have pure $\mathcal{P}_u-$transversal derivatives, such as ${\breve{X}}\Psi$ and ${\breve{X}}{\breve{X}}\Psi$, that are of size $\approx \mathring{\updelta} > 0$, while all other derivatives, such as ${P}{\breve{X}}\Psi$, ${P}\Psi$, and $\Psi$ itself, are of *small* size $\mathring{\upepsilon}$, where the smallness of $\mathring{\upepsilon}$ is allowed to depend on $\mathring{\updelta}$. These size assumptions roughly correspond to perturbations of simple outgoing plane symmetric solutions. A key point is that, due to the special structure of the covariant wave operator $\square_{g(\Psi)}$ when expressed relative to the geometric coordinates (see Def. \[D:GEOMETRICCOORDS\]) and the special structure of the elements of ${\mathscr{Z}}$, it is possible to propagate this hierarchy all the way up to the shock, except at the very high derivative levels. 4. 
($L^{\infty}$ **bootstrap assumptions** for $\Psi$) One assumes that on a bootstrap region $\mathcal{M}_{{T_{(Boot)}};U_0}$ with $U_0 \approx 1$ (see ), on which the shock has not yet formed (but perhaps is about to form), $\Psi$ and sufficiently many[^53] of its ${\mathscr{P}}$ derivatives are bounded in $\| \cdot \|_{L^{\infty}}$ by $\leq \varepsilon$. At the end of the proof, one shows that the logic closes if $\varepsilon = C \mathring{\upepsilon}$ for a sufficiently large constant $C$. 5. ($L^{\infty}$ **and pointwise estimates for many geometric objects**) Using the bootstrap assumptions for $\Psi$ and the smallness of $\mathring{\upepsilon}$, one derives, on the region $\mathcal{M}_{{T_{(Boot)}};U_0}$, non-degenerate $L^{\infty}$ estimates for the ${\mathscr{P}}$ derivatives of the error terms appearing on RHS  up to a certain order. The $L^{\infty}$ estimates can be derived by commuting various transport equations, including the evolution equation for $\upmu$, with the elements of ${\mathscr{P}}$. The vast majority of these terms are shown to be of small size $\mathcal{O}(\varepsilon)$ on $\mathcal{M}_{{T_{(Boot)}};U_0}$. To derive energy estimates, one also needs to derive $L^{\infty}$ estimates for the very low-level ${\breve{X}}$ derivatives of various tensorfields including $\upmu$. These estimates are easy to obtain by studying various transport equations and using the $L^{\infty}$ estimates for the ${\mathscr{P}}$ derivatives, but we will not focus on this issue here. Using these $L^{\infty}$ estimates, one then derives pointwise estimates for the error terms on RHS  and the higher-order ${\mathscr{P}}-$commuted analogs of RHS , all the way up to top order. The pointwise estimates are needed in preparation for the energy estimates, which we describe below. In all of these estimates, it is important to decompose tensors relative to the rescaled frame . 6. 
(**Showing that the shock forms**) The main ideas behind showing that $\upmu$ goes to $0$ in finite time and that a shock forms were already presented in Subsubsect. \[SSS:SOMEREMARKSONTHEPROOF\]. Here, we present them in more detail. Specifically, given the $L^{\infty}$ bootstrap assumptions for $\Psi$ and the $L^{\infty}$ estimates, the proofs that $\upmu$ goes to $0$ in finite time and that a shock forms are straightforward: one easily shows that the first product on RHS  is, relative to the geometric coordinates, approximately constant in time, that is, that $ \displaystyle {L}\left( \frac{1}{2} G_{{L}{L}} {\breve{X}}\Psi \right) = \mathcal{O}(\mathring{\upepsilon}) $. Moreover, one shows that the second product verifies $ \upmu \mathcal{O}({L}\Psi) = \mathcal{O}(\mathring{\upepsilon}) $ and is therefore a small error term. Hence, one obtains $ [{L}\upmu](t,u,\vartheta) = \frac{1}{2} [G_{{L}{L}} {\breve{X}}\Psi](0,u,\vartheta) + \mathcal{O}(\mathring{\upepsilon}) $ and, by integrating in time (see ) and taking into account and the initial $\mathcal{O}(\mathring{\upepsilon})$ smallness of $\Psi$, that[^54] $\upmu(t,u,\vartheta) = 1 + \frac{1}{2} t [G_{{L}{L}} {\breve{X}}\Psi](0,u,\vartheta) + \mathcal{O}(\mathring{\upepsilon}) $. Hence, for data such that the term $ \displaystyle \frac{1}{2} t [G_{{L}{L}} {\breve{X}}\Psi](0,u,\vartheta) $ is sufficiently negative at some value of $(u,\vartheta)$ (in particular, large enough in magnitude to dominate the term $\mathcal{O}(\mathring{\upepsilon})$), one concludes that $\upmu$ vanishes in finite time. Moreover, one shows that in a past neighborhood of any point where $\upmu$ vanishes, we have $|{\breve{X}}\Psi| \gtrsim 1$, or equivalently, that $|{X}\Psi| := |{X}^a \partial_a \Psi| \gtrsim 1/\upmu$. Since ${X}$ is comparable to a Cartesian coordinate partial derivative $\partial$, we conclude that when $\upmu$ vanishes, $|\partial \Psi|$ blows up like $ \displaystyle \frac{1}{\upmu} $. 7. 
(**Energy estimates up to top order**) To improve the $L^{\infty}$ bootstrap assumptions for $\Psi$ (which were used in particular in Step (6) to prove that $\upmu$ vanishes in finite time), the main task is to derive energy estimates for sufficiently many ${\mathscr{P}}$ derivatives of $\Psi$ that *do not degenerate* as $\upmu$ goes to $0$; one can then use Sobolev embedding to obtain, under a suitable smallness assumption on the data, the desired $L^{\infty}$ bounds. We stress again that one does not need to derive energy estimates for the ${\breve{X}}-$commuted wave equation. This is helpful because in the solution regime under study, the energies corresponding to the ${\mathscr{P}}-$commuted equation are all initially of small size $\mathcal{O}(\mathring{\upepsilon}^2)$ (while the energies for the ${\breve{X}}-$commuted wave equation are of the larger size $\mathcal{O}(\mathring{\updelta}^2)$). Note that the energy estimates for $\Psi$ are coupled to those for $u$ in view of the fact that the vectorfield commutators $P \in {\mathscr{P}}$ for the wave equation depend on $\partial u$. To derive energy estimates for $\Psi$ and $u$ up to top order, one uses the pointwise estimates from Step (5) and a geometric energy method. More precisely, with ${\mathscr{P}}^M$ denoting an arbitrary $M^{th}-$order string of vectorfields constructed out of the elements of ${\mathscr{P}}$, one derives energy estimates for ${\mathscr{P}}^M \Psi$ for $1 \leq M \leq N_{Top}$ (with $N_{Top}$ sufficiently large).[^55] One can derive the basic energy identity by using a suitable version of the vectorfield multiplier method, that is, by applying the divergence theorem on regions of the form $\mathcal{M}_{t,u}$ (see Figure \[F:SOLIDREGION\]) to the energy current vectorfield[^56] $J^{\alpha}[{\mathscr{P}}^M \Psi] := {Q}_{\ \beta}^{\alpha}[{\mathscr{P}}^M \Psi] {T}^{\beta}$.
Here, $ {Q}_{\mu \nu}[{\mathscr{P}}^M \Psi] := ({\mathscr{D}}_{\mu} {\mathscr{P}}^M \Psi) {\mathscr{D}}_{\nu} {\mathscr{P}}^M \Psi - \frac{1}{2} g_{\mu \nu} ({\mathscr{D}}^{\alpha} {\mathscr{P}}^M \Psi) ({\mathscr{D}}_{\alpha} {\mathscr{P}}^M \Psi) $ is the energy-momentum tensorfield of ${\mathscr{P}}^M \Psi$ and ${T}:= (1 + 2 \upmu) {L}+ 2 {\breve{X}}$ is a *multiplier vectorfield*, which is timelike[^57] with respect to $g$. The careful placement of the $\upmu$ weights in the definition of ${T}$, both the explicit one and the implicit one inherent in the relation ${\breve{X}}= \upmu {X}$, is essential for generating suitable energies. More precisely, the divergence theorem yields an integral identity involving coercive “energies” $ \displaystyle \mathbb{E}[{\mathscr{P}}^M \Psi](t,u) := \int_{\Sigma_t^u} J_{\alpha}[{\mathscr{P}}^M \Psi] {B}^{\alpha} $ (where the material derivative vectorfield ${B}$ is the future-directed normal to $\Sigma_t$) and also[^58] coercive “null fluxes” $ \displaystyle \mathbb{F}[{\mathscr{P}}^M \Psi](t,u) := \int_{\mathcal{P}_u^t} J_{\alpha}[{\mathscr{P}}^M \Psi] {L}^{\alpha} $, both of which are needed to close the estimates. The integral identity, which forms the starting point for the $L^2-$type analysis, follows from integrating the following divergence identity over regions of the form $\mathcal{M}_{t,u}$ (see ) with respect to a suitable volume form: $$\begin{aligned} \label{E:DIVID} \upmu {\mathscr{D}}_{\alpha} J^{\alpha}[{\mathscr{P}}^M \Psi] = {T}{\mathscr{P}}^M \Psi \cdot (\upmu \square_{g(\Psi)} {\mathscr{P}}^M \Psi) + \frac{1}{2} \upmu {Q}^{\alpha \beta}[\Psi] {{^{({T})} \mkern-1mu \pi_{\alpha \beta}}}, \end{aligned}$$ where ${{^{({T})} \mkern-1mu \pi_{\alpha \beta}}} := {\mathscr{D}}_{\alpha} {{T}}_{\beta} + {\mathscr{D}}_{\beta} {{T}}_{\alpha} $ is the deformation tensor of ${T}$.
The coerciveness of the energies and null fluxes is a consequence of the dominant energy condition, which is the property ${Q}_{\alpha \beta}[{\mathscr{P}}^M \Psi] V^{\alpha} W^{\beta} \geq 0$ whenever $V$ and $W$ are future-directed[^59] causal[^60] vectorfields. In [@jSgHjLwW2016], readers may find a detailed description of how to close the energy estimates and why they are difficult to derive. Here, we highlight some of the main ideas. See also our companion article [@jLjS2016b] for a similar discussion in the context of solutions to the compressible Euler equations in two spatial dimensions with vorticity. The energy estimates are difficult to derive for two main reasons. First, a careful computation reveals that the energies $\mathbb{E}[{\mathscr{P}}^M \Psi](t,u)$ and the null fluxes $\mathbb{F}[{\mathscr{P}}^M \Psi](t,u)$ contain $\upmu$ weights, inherited from the $\upmu$ weights found in the definition of ${T}$. Consequently, $\mathbb{E}[{\mathscr{P}}^M \Psi]$ and $\mathbb{F}[{\mathscr{P}}^M \Psi]$ provide only very weak control over certain directional derivatives near the shock (where $\upmu$ is small). This is a serious difficulty because one encounters “strong” error terms in the energy estimates arising, for example, from error terms on the RHS of the wave equations[^61] $\upmu \square_{g(\Psi)} {\mathscr{P}}^M \Psi = \cdots$ that *do not have $\upmu$ weights*. To control such strong error terms, one needs to exploit various special structures. For example, one relies on the availability of a subtle spacetime integral with a favorable “friction-type” sign, first identified by Christodoulou [@dC2007], that is generated by the term $ \displaystyle \frac{1}{2} \upmu {Q}^{\alpha \beta}[\Psi] {{^{({T})} \mkern-1mu \pi_{\alpha \beta}}} $ on RHS  (and which also appears in the energy identities).
Through a detailed analysis of $\upmu$ and ${L}\upmu$, the spacetime integral can be shown to be strong in regions where $\upmu$ is small, thanks to the negativity[^62] of ${L}\upmu$. In fact, the spacetime integral can be used to absorb many of the “strong” error terms. The second reason that the energy estimates are difficult is that, as we mentioned above, the top-order derivatives of $u$ are hard to estimate. To further explain this difficulty, we will count derivatives; it suffices to explain the difficulty that arises after commuting the wave equation one time, as we did in equation . To proceed, we recall that to control RHS , we must bound *three* derivatives of $u$ in $L^2$. The naive way to achieve this goal is to commute the eikonal equation with three derivatives, schematically denoted by “$\partial^3$”, to obtain the schematic evolution equation $(g^{-1})^{\alpha \beta}(\Psi) \partial_{\alpha} u \partial_{\beta} \partial^3 u = \partial^3 \Psi \cdot \partial u + l.o.t $. The problem with this approach is that the RHS of this evolution equation for $\partial^3 u$ depends on *three* derivatives of $\Psi$, which is inconsistent with the regularity of $\Psi$ obtained from deriving energy estimates for equation (the energy estimates yield control over only two derivatives of $\Psi$ in $L^2$). Clearly this loss of a derivative cannot be overcome by further commuting the equations. To overcome it, one uses strategies employed in [@dCsK1993] and later in [@sKiR2003], including exploiting the fact that the special structure of the vectorfields in ${\mathscr{P}}$ leads to the absence of the worst imaginable top-order derivatives of $u$. That is, the third derivatives of $u$ appearing on RHS  have special tensorial structures. One can bound these terms by exploiting the tensorial structures with the help of modified quantities and by using elliptic estimates[^63] on $\ell_{t,u}$.
By “modified quantities,” we mean that one finds special combinations of the derivatives of $\Psi$ and $u$ that satisfy a good evolution equation allowing one, with the help of the aforementioned elliptic estimates, to avoid the loss of a derivative. To construct the modified quantities and derive elliptic estimates, one needs precise geometric decompositions of the derivatives of $\Psi$ and $u$, adapted to the acoustic characteristics $\mathcal{P}_u$. In carrying out the above scheme, one encounters another serious difficulty: it turns out that introducing the modified quantities, which are essential to avoid losing a derivative at the top order, leads to the presence of a difficult factor of $1/\upmu$ in the top-order energy identities. We may caricature[^64] the effect of this factor in the basic energy inequality as follows, where $\mathbb{E}_{Top}$ denotes the top-order energy along $\Sigma_t$ and $A$ is a universal positive constant, *independent of the number of times that the equations are commuted*: $$\begin{aligned} \label{E:CARICENERGYINEQ} \mathbb{E}_{Top}(t) \leq \mathbb{E}_{Top}(0) + A \int_{s=0}^t \sup_{\Sigma_s} \left| \frac{\frac{\partial}{\partial t} \upmu}{\upmu} \right| \cdot \mathbb{E}_{Top}(s) \, ds + \cdots. \end{aligned}$$ To derive a Gronwall estimate for $\mathbb{E}_{Top}(t)$, we need to bound $ \displaystyle \sup_{\Sigma_s} \left| \frac{\frac{\partial}{\partial t} \upmu}{\upmu} \right| $. The true estimate is lengthy to state, but it can be caricatured as follows: $ \displaystyle \sup_{\Sigma_s} \left| \frac{\frac{\partial}{\partial t} \upmu}{\upmu} \right| \leq \frac{{\mathring{\updelta}_*}}{1 - {\mathring{\updelta}_*}s} $, where $ {\mathring{\updelta}_*}> 0 $ is a constant depending on the data.
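Accepting these caricatures, the integration that produces the degenerate top-order estimate can be sketched in two lines (our computation, carried out at the caricatured level only):

```latex
% Integrating the caricatured bound on |(\partial/\partial t) \upmu / \upmu|:
\int_{s=0}^{t} \sup_{\Sigma_s}
    \left| \frac{\frac{\partial}{\partial t} \upmu}{\upmu} \right| ds
  \leq \int_{s=0}^{t}
       \frac{{\mathring{\updelta}_*}}{1 - {\mathring{\updelta}_*} s} \, ds
  = \ln \frac{1}{1 - {\mathring{\updelta}_*} t}.
% Gronwall's inequality applied to the caricatured energy inequality then gives
\mathbb{E}_{Top}(t)
  \leq \mathbb{E}_{Top}(0)
       \exp \left( A \ln \frac{1}{1 - {\mathring{\updelta}_*} t} \right) + \cdots
  = \mathbb{E}_{Top}(0) \cdot (1 - {\mathring{\updelta}_*} t)^{-A} + \cdots.
```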
Moreover, with $$\upmu_{\star}(s) := \min_{\Sigma_s} \upmu,$$ one can prove an estimate that can be caricatured as $$\upmu_{\star}(s) \sim 1 - {\mathring{\updelta}_*}s.$$ Hence, from Gronwall’s inequality[^65] one obtains an a priori estimate that can be caricatured as follows: $$\mathbb{E}_{Top}(t) \leq \mathbb{E}_{Top}(0) \cdot \upmu_{\star}^{-A}(t) + \cdots.$$ In particular, $\mathbb{E}_{Top}(t)$ can blow up as $\upmu$ vanishes. As we have noted, the argument sketched above relies on having non-degenerate $L^{\infty}$ estimates at the lower derivative levels, which are needed to control various error terms. Thus, the only hope of validating the above estimate for $\mathbb{E}_{Top}(t)$ and thus closing the problem is to derive *less degenerate* energy estimates below top order. For if the above degenerate energy estimate were the best one we could prove at all derivative levels, then we would not be able to recover the $L^{\infty}$ bootstrap assumptions from Step (4); such degenerate energy estimates, when combined with Sobolev embedding, would yield only that the $L^{\infty}$ norms of the low-order derivatives of $\Psi$ can also blow up as $\upmu_{\star}$ vanishes, which would completely obstruct our efforts to justify the non-degenerate estimates at the lower derivative levels. To overcome this difficulty, one exploits the fact that below top order, it is permissible to allow the aforementioned loss of one derivative in the difficult wave equation error terms that depend on the eikonal function. In allowing the loss, one can avoid using modified quantities and thus avoid introducing the factor of $1/\upmu$ into the below-top-order energy identities. The price one pays is that this approach couples the below-top-order energy identities to the degenerate top-order ones. Nonetheless, this approach allows one to derive less degenerate estimates below top order.
More precisely, via an energy estimate “descent scheme,” based on successively reducing the strength of the singularity via the estimate[^66] $ \displaystyle \int_{s=0}^t \upmu_{\star}^{-B}(s) \, ds \lesssim \upmu_{\star}^{1-B}(t) $, one can show that the below-top-order energies satisfy a hierarchy of successively less degenerate estimates of the form $$\mathbb{E}_{Top-1}(t) \leq \mbox{\upshape data} \cdot \upmu_{\star}^{-(A-2)}(t),$$ $$\mathbb{E}_{Top-2}(t) \leq \mbox{\upshape data} \cdot \upmu_{\star}^{-(A-4)}(t),$$ $$\cdots,$$ until one reaches a “middle level,” below which all energies are bounded: $$\mathbb{E}_{Mid}(t), \mathbb{E}_{Mid-1}(t), \mathbb{E}_{Mid-2}(t), \cdots, \mathbb{E}_1(t) \leq \mbox{\upshape data}.$$ From these non-degenerate estimates, Sobolev embedding, and a small data assumption, one can finally improve the $L^{\infty}$ bootstrap assumptions for $\Psi$, which closes the whole process. The large number of derivatives needed[^67] to close the proof is due to the large number of times that one needs to descend below top order in order to reach the non-degenerate energies $\mathbb{E}_{Mid}(t)$.

A preview of the proof of shock formation for solutions to the compressible Euler equations in the presence of vorticity {#SS:PREVIEWONSHOCKS}
------------------------------------------------------------------------------------------------------------------------

In Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\], we overviewed a framework, based on techniques introduced by Christodoulou, for proving shock formation in solutions to quasilinear wave equations with suitable nonlinearities, a special case of which is the irrotational compressible Euler equations. We now overview, without giving proofs, how the new geometric and analytic structures provided by Theorems \[T:GEOMETRICWAVETRANSPORTSYSTEM\] and \[T:STRONGNULL\] fit in with the above framework and allow one to prove shock formation in the presence of vorticity. 
We restrict our attention to discussing solutions that are close to simple outgoing plane symmetric solutions (see Subsect. \[SSS:PRELIMINARYDICUSSION\]) in three spatial dimensions with spatial topology[^68] $\Sigma_t = \mathbb{R} \times \mathbb{T}^2$. This is the solution regime that we analyze in detail in our forthcoming works [@jLjS2016b; @jLjS2017] (where the spatial topology is $\Sigma_t = \mathbb{R} \times \mathbb{T}$ in the case of two spatial dimensions treated in [@jLjS2016b]).

### The regime under consideration: solutions close to simple outgoing plane symmetric waves {#SSS:SOLUTIONREGIME}

We expect that our framework for proving finite-time shock formation can be applied to various kinds of initial data, including small compactly supported nearly spherically symmetric data. However, we restrict our attention here to the simplest vorticity-containing solutions to which our framework applies: (non-symmetric) perturbations of simple outgoing[^69] plane symmetric solutions. Note that simple outgoing plane symmetric solutions themselves are irrotational, but perturbations of them generally have non-zero vorticity. We have already discussed simple outgoing plane symmetric solutions in Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\], in the context of the scalar wave equation . However, in preparation for the subsequent discussion, we now give a slightly different description of such solutions in the context of the compressible Euler equations. To this end, we will rely on Riemann’s famous method [@bR1860] of Riemann invariants. Our discussion applies to any equation of state except for the one $ \displaystyle p = C_0 - \frac{C_1}{\rho} = C_0 - C_1 \exp(-{\uprho}) $ corresponding to a Chaplygin gas[^70] (where $C_0$ and $C_1$ are arbitrary constants); see also Footnote \[FN:NOSHOCKSFORCHAPLYGIN\] for a description of why our arguments do not apply to the Chaplygin gas. 
By a “plane symmetric” solution, we mean that relative to Cartesian coordinates, we have ${\uprho}= {\uprho}(t,x^1)$, $v^1 = v^1(t,x^1)$, and $v^2 = v^3 \equiv 0$. In plane symmetry, the compressible Euler equations are equivalent to the system $${\underline{L}}\mathcal{R}_- = 0, \qquad {L}\mathcal{R}_+ = 0,$$ where $\mathcal{R}_{\pm} := v^1 \pm F({\uprho})$, $F$ is defined by $F'({\uprho}) = c_s({\uprho})$ with $F(0) = 0$, where the latter is a convenient normalization condition. Moreover, we have the explicit formulas $${\underline{L}}= \partial_t + (v^1 - {c_s}) \partial_1, \qquad {L}= \partial_t + (v^1 + {c_s}) \partial_1.$$ We will study simple plane symmetric solutions such that ${\uprho}$ and $v^1$ (undifferentiated) are near $0$. As we explained below equation , the background solution $({\uprho},v^1) \equiv (0,0)$ corresponds to a constant state with non-zero density $\bar{\rho} > 0$, which is an analog of the global background solution $\Psi \equiv 0$ from Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\]. In terms of the Riemann invariants, the background solution takes the form $\mathcal{R}_- = \mathcal{R}_+ \equiv 0$. By a “simple” plane symmetric solution in the present context, we mean that one of the Riemann invariants, say $\mathcal{R}_-$, completely vanishes. Roughly, “simple” means that there is only (say) a right-moving (that is, outgoing) wave rather than a combination of left-moving and right-moving waves. We now discuss the formation of shocks in simple plane symmetric solutions in which $\mathcal{R}_-$ is identically zero. Applying Riemann’s methods to the evolution equation ${L}\mathcal{R}_+ = 0$, we easily deduce by differentiating the equation with $\partial_1$ that for suitable smooth initial conditions, $\partial_1 \mathcal{R}_+$ experiences a Riccati-type blowup along a characteristic[^71] while $\mathcal{R}_+$ remains bounded. 
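Both steps of this computation can be spot-checked symbolically. The sketch below treats the spatial derivatives $\partial_1 {\uprho}$ and $\partial_1 v^1$ as free symbols, substitutes the time derivatives dictated by the plane-symmetric equations ${B}{\uprho}= - \partial_1 v^1$ and ${B}v^1 = - c_s^2 \partial_1 {\uprho}$, and verifies ${L}\mathcal{R}_+ = 0$. The constant-coefficient Riccati ODE in the second half is only a model for the genuine blowup mechanism (the true coefficient depends on the solution), an assumption made for illustration.

```python
import sympy as sp

# Plane-symmetric compressible Euler in terms of the log-density rho and v := v^1:
#   B rho = -d_x v,   B v = -c_s(rho)^2 d_x rho,   with B = d_t + v d_x.
rho_x, v_x, v, cs = sp.symbols('rho_x v_x v c_s')
rho_t = -v * rho_x - v_x            # time derivatives dictated by the equations
v_t = -v * v_x - cs**2 * rho_x

# L R_+ = (d_t + (v + c_s) d_x)(v + F(rho)) with F'(rho) = c_s(rho); the chain
# rule turns each derivative of F(rho) into c_s times the derivative of rho.
L_R_plus = (v_t + cs * rho_t) + (v + cs) * (v_x + cs * rho_x)
assert sp.simplify(L_R_plus) == 0   # R_+ is indeed transported along L

# Model Riccati ODE for w ~ d_x R_+ along a characteristic: w' = -a w^2, a > 0.
# Its solution w(t) = w0/(1 + a w0 t) blows up at t = -1/(a w0) when w0 < 0,
# while R_+ itself stays bounded (it is constant along the characteristic).
t, a, w0 = sp.symbols('t a w_0')
w = w0 / (1 + a * w0 * t)
assert sp.simplify(sp.diff(w, t) + a * w**2) == 0
```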
That is, a shock forms[^72] through a mechanism similar to the one that drives singularity formation in solutions to Burgers’ equation. In the rest of Subsect. \[SS:PREVIEWONSHOCKS\], we also assume that $\| \mathcal{R}_+ \|_{L^{\infty}(\Sigma_0)}$ is small, a condition that is propagated by the flow of the equations. Then perturbations (away from plane symmetry) of the corresponding shock-forming solution will be $L^{\infty}-$close (at least initially) to the constant state solution described in the previous paragraph; this $L^{\infty}-$closeness assumption is convenient but could most likely be relaxed. ### Elements of the proof and the size of the data {#SSS:FEATURESINCOMMONWITHIRROTATIONALCASE} We now describe how to prove shock formation for perturbations of the simple outgoing plane symmetric solutions described in the previous subsubsection, where the perturbations belong to a suitable Sobolev space (without symmetry assumptions). Although the method of Riemann invariants is convenient for generating a family of shock-forming solutions, it is not applicable to perturbations away from plane symmetry. Hence, to study the perturbed solutions, we will use the formulation of the equations provided by Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\]. A large part of the proof consists of the same steps described in Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\]. The reason is that $\lbrace v^i \rbrace_{i=1,2,3}$ and ${\uprho}$ solve the covariant wave equations -, which are similar to the wave equations discussed in Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\]. In particular, it is straightforward to show that for perturbations of the simple outgoing plane symmetric solutions described above, the initial data for the “wave variables” $v^i$ and ${\uprho}$ verify $\mathring{\upepsilon}-\mathring{\updelta}$ size assumptions that are similar to the ones described in Step (3) of Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\]. 
The new feature in the analysis here is the presence of inhomogeneous terms in the wave equations. That is, if not for the inhomogeneous terms on RHSs -, the proof outlined in Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\] would go through without significant changes. Our main goal in Subsect. \[SS:PREVIEWONSHOCKS\] is to explain why, under a suitable $\mathring{\upepsilon}-\mathring{\updelta}$ smallness-largeness hierarchy similar to the one described in the irrotational case in Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\], a shock forms in the solution. Our assumptions on the data of the specific vorticity are that ${\upomega}$ and *all* of its derivatives up to top order are initially of small size $\mathcal{O}(\mathring{\upepsilon})$ (as measured by appropriate norms). We aim to propagate the smallness of ${\upomega}$ and to show that it *does not interfere with the shock formation processes* described in Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\]. That is, the dynamics are not significantly distorted by the presence of small amounts of vorticity. One of course expects that, like the high-order energies in the irrotational case, the $L^2$ norms of the high-order derivatives of ${\upomega}$ can blow up as $\upmu \to 0$, and that controlling their blowup-rates is at the heart of closing the proof. We also note that it is straightforward to construct data such that the solution has non-vanishing vorticity at the shock. To explain this, we first note that it is easy to perturb the data of the simple plane symmetric solutions described above so that ${\upomega}|_{\Sigma_0}$ is everywhere non-zero. Then using the evolution equation , it is straightforward to show that ${\upomega}$ remains strictly non-zero,[^73] all the way up to the shock as desired. ### Geometric vectorfields and their interaction with the transport operator {#SSS:GEOMETRICVECINTERACTWITHTRANSPORT} As we described in Subsubsect. 
\[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\], to close the proof of shock formation, it is critically important to construct, with the help of an eikonal function corresponding to the acoustical metric $g$ (defined in ), a set of commutation vectorfields that are adapted to the acoustic characteristics; we need such vectorfields to control solutions to the wave equations -. A key observation of the present work is that the elements of ${\mathscr{Z}}$ exhibit good commutation properties with $\upmu \partial_{\alpha}$, where $\partial_{\alpha}$, ($\alpha = 0,1,2,3$), is any Cartesian coordinate partial derivative vectorfield. By this, we mean that for $Z \in {\mathscr{Z}}$ and scalar functions $f$, we can bound[^74] $[Z, \upmu \partial_{\alpha}] f$ in terms of the first-order ${\mathscr{Z}}$ derivatives of $f$ *without any dangerous factors of $1/\upmu$ appearing*. Schematically, we may express this by $[{\mathscr{Z}}, \upmu \partial_{\alpha}] \sim {\mathscr{Z}}$. To explain this in more detail, we first define $\gamma_{AB}:=g({Y}_A, {Y}_B)=g_{ab} {Y}_A^a {Y}_B^b$ for $A,B=1,2$. We also define $\gamma^{-1}_{AB}$ to be the inverse of the $(2 \times 2)$ matrix $\gamma_{AB}$. Then we have the following identity for vectorfields: $$\begin{aligned} \label{E:CARTESIANINTERMSOFGEOMETRIC} \upmu \partial_i = (g_{ai} {X}^a) {\breve{X}}+ \upmu \sum_{A,B=1}^2\left( \gamma^{-1}_{AB} g_{ai} {Y}_A^a \right) {Y}_B. 
\end{aligned}$$ The reason that the commutator $[Z,\upmu \partial_i]$ is controllable is that the elements of ${\mathscr{Z}}$ are designed to have good commutators with each other while the ${\mathscr{Z}}-$derivatives of the scalar functions $g_{ai} {X}^a$ and $ \displaystyle \upmu \left( \gamma^{-1}_{AB} g_{ai} {Y}_A^a \right) $ on RHS  are simple error terms; the ${\mathscr{Z}}-$derivatives of the Cartesian components $g_{ab}$ can be controlled in terms of the ${\mathscr{Z}}$-derivatives of ${\uprho}$ and the Cartesian components $v^a$ (as is evident from the formula ), while the ${\mathscr{Z}}$-derivatives of $\upmu$ and the Cartesian components ${X}^a$ and ${Y}_A^a$ can be estimated by analyzing solutions to transport equations (in the spirit of ). Notice that this is in contrast to the commutators $[Z, \partial_{\alpha}]$; these commutators involve the ${\mathscr{Z}}-$derivatives of $\upmu^{-1}$, which are not uniformly bounded up to the shock. It is because the commutators $[Z, \upmu \partial_{\alpha}]$ are good that we can successfully commute ($\upmu-$weighted versions of) the *first-order* equations , and obtain control of the specific vorticity (see also Subsubsect. \[sss.inho.vor\]) all the way up to the shock. More precisely, consistent with our above remarks, we view the scalar function $({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i$ as the unknown in equation and hence we commute *only through the outer operator* $\upmu {B}$ in the $\upmu-$weighted version of equation . 
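The frame decomposition behind the displayed identity can be illustrated numerically in a flat model. In the sketch below, a random positive-definite matrix stands in for the restriction of $g$ to $\Sigma_t$, the rows of `Y` stand in for ${Y}_1, {Y}_2$, and ${X}$ is constructed $g$-orthogonal to them with $g({X},{X}) = 1$ and ${\breve{X}}:= \upmu {X}$; these normalizations are assumptions modeled on the construction used in this framework, not a computation with the actual acoustical metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 Riemannian metric g (random SPD matrix) and positive weight mu;
# stand-ins for the acoustical metric on Sigma_t and the inverse foliation density.
M = rng.standard_normal((3, 3))
g = M @ M.T + 3 * np.eye(3)
mu = 0.37

# Frame: Y_1, Y_2 span the "torus" directions; X is g-orthogonal to both with
# g(X, X) = 1, mimicking the normalization of X described in the main text.
Y = rng.standard_normal((2, 3))
_, _, Vt = np.linalg.svd(Y @ g)      # X spans the kernel of the 2x3 matrix (Y g)
X = Vt[-1]
X = X / np.sqrt(X @ g @ X)           # enforce g(X, X) = 1
Xbreve = mu * X                      # breve{X} := mu * X

gamma = np.einsum('ai,bj,ij->ab', Y, Y, g)   # gamma_AB = g(Y_A, Y_B)
gamma_inv = np.linalg.inv(gamma)

# Check: mu * partial_i = (g_{ai} X^a) breve{X} + mu * gamma^{AB} (g_{ai} Y_A^a) Y_B
for i in range(3):
    e_i = np.eye(3)[i]
    rhs = (g[:, i] @ X) * Xbreve \
        + mu * np.einsum('AB,A,Bj->j', gamma_inv, Y @ g[:, i], Y)
    assert np.allclose(mu * e_i, rhs)
```

The design point illustrated here is that the coefficients multiplying ${\breve{X}}$ and ${Y}_B$ are scalar functions whose derivatives are controllable, so that commuting through $\upmu \partial_i$ never produces an explicit factor of $1/\upmu$.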
We stress that the strategy described above applies *only to commutations through first-order operators*; although the vectorfields $Z \in {\mathscr{Z}}$ also (by design) commute well through $\upmu \square_g$ (see and the remarks below it), their commutator with a typical $\upmu-$weighted second-order differential operator, such as $\upmu \partial_{\alpha} \partial_{\beta}$, produces error terms of the schematic form $(1/\upmu) {\mathscr{Z}}{\mathscr{Z}}$, which are large near the shock; such error terms would prevent us from deriving non-degenerate estimates at the low derivative levels, which are essential for closing the problem. ### Inhomogeneous terms in the specific vorticity equation {#sss.inho.vor} The equation for the specific vorticity has the inhomogeneous term ${\upomega}^a{\partial}_a v^i$. In order to bound ${\upomega}^i$ uniformly in $L^{\infty}$, we need to control the integral of ${\partial}_a v^i$ along the integral curves of the transport operator ${B}$. Even in the irrotational setting, the quantity ${\partial}_a v^i$ can blow up like $\upmu^{-1}$ near the shock, as is suggested by the schematic[^75] relation $\partial_a v^i \sim \upmu^{-1} {\breve{X}}v^i + \cdots$. Nevertheless, by exploiting the *transversality*[^76] of the integral curves of ${B}$ with the acoustic characteristics, it can be shown[^77] that the integral in question is uniformly bounded up to the shock, independent of $\upmu$. As a consequence, one can show that unlike an arbitrary Cartesian coordinate partial derivative of $v^i$, ${\upomega}^i$ remains uniformly bounded up to the shock. Moreover, as we discussed in the previous subsection, since $\upmu {B}$ enjoys good commutation properties with the geometric vectorfields, the lower-order derivatives of ${\upomega}^i$ with respect to the geometric vectorfields are also uniformly bounded in $L^{\infty}$. 
In contrast, one cannot hope to obtain uniform $L^{\infty}$ bounds for a general Cartesian coordinate partial derivative of ${\upomega}^i$ up to the shock. This can easily be inferred from the specific vorticity equation , which states that ${B}{\upomega}^i$ is equal to ${\upomega}^a {\partial}_a v^i$, and we have already noted that ${\partial}_a v^i$ can blow up like $\upmu^{-1}$ near the shock. Nevertheless, one can use equation and an argument similar to the one given in the previous paragraph to show that ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ is uniformly bounded in $L^{\infty}$ up to the shock! That is, ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ behaves better than $\partial_{\alpha} {\upomega}$! This argument crucially relies on the good null structure of the term $\mathscr{P}_{({\upomega})}^i$ revealed by Theorem \[T:STRONGNULL\], which in particular shows that the potentially damaging product $({X}{\upomega}) {X}v$ (which is expected to be of size $\upmu^{-2}$) is not present if one decomposes RHS  relative to the geometric vectorfields. Moreover, the geometric vectorfield derivatives of ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ obey similar good $L^{\infty}$ and energy estimates, except at the top order (as we describe below). These facts are extremely helpful in our approach, since, as we will later discuss, we need to combine these good below-top-order estimates for ${\upomega}$ with appropriate top-order estimates for ${\mbox{\upshape div}\mkern 1mu}{\upomega}$ and ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ in order to obtain, via elliptic estimates, control over the top-order geometric derivatives of $\underline{\partial} {\upomega}$, where $\underline{\partial}$ denotes the spatial gradient with respect to the Cartesian coordinates. Notice also that these estimates are relevant for controlling solutions to the wave equation since ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ appears as a source term on RHS . 
In fact, ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ is the only top-order specific vorticity term featured in the wave equations.

### Easy error terms in the wave equation {#SSS:EASYWAVEEQUATIONERRORTERMS}

Most of the error terms on RHSs - are easy to treat. In particular, Christodoulou’s framework [@dC2007] can easily be extended to treat the error terms on RHS - that verify the strong null condition; as we outlined in Subsubsect. \[SSS:SOMEREMARKSONTHEPROOF\] (see also the discussion in [@jS2014b; @gHsKjSwW2016]), in the solution regime under consideration, such error terms are essentially harmless and do not interfere with the shock formation processes. This means that the error terms described by Theorem \[T:STRONGNULL\] are expected to have only a negligible effect on the dynamics, even near the shock. The term $2 \exp({\uprho}) \epsilon_{iab} ({B}v^a) {\upomega}^b$ on RHS  is also relatively easy to treat. The reason is that ${\upomega}$ is a below-top-order factor that can be bounded by commuting the transport equation with the geometric vectorfields; as we mentioned at the end of Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\], below-top-order terms exhibit less degenerate behavior with respect to $\upmu$ compared to top-order factors.

### The difficult specific vorticity term in the wave equation {#SSS:DIFFICULTVORTICITYTERMS}

In contrast to the terms described in Subsubsect. \[SSS:EASYWAVEEQUATIONERRORTERMS\], the factor $({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i$ on RHS  is a challenging top-order factor that needs to be treated with the elliptic estimates mentioned in Subsubsect. \[sss.inho.vor\]. To explain why this is the case, let us count derivatives. Using only equation , one can conclude only that ${\upomega}$ has the same regularity as $\partial v$ (because of the factor $\partial v$ on RHS ), which suggests that ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ has the same regularity as $\partial \partial v$. 
The key point is that *this regularity is not compatible with the factor $({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i$ being on the right-hand side of the wave equation for $v^i$*; the wave equation energy estimates for $v^i$ yield only that $\partial v$ has the same regularity as ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$. That is, this approach leads to the loss of a derivative. ### Elliptic estimates for the specific vorticity near the shock {#SSS:ELLIPTICESTIMATESFORVORTICITY} In this subsubsection, we will sketch the main ideas of how to bound the top-order derivatives of ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ and thus overcome the loss of a derivative mentioned in the previous subsubsection. At the same time, we will discuss how to bound the blowup-rate of the $L^2$ norm of these top derivatives; understanding the blowup-rates lies at the heart of understanding how to close the energy estimates for the full system of equations. It turns out that we cannot control the top-order derivatives of ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ in isolation; due to the $\underline{\partial} {\upomega}$-dependent terms on RHS , we can control them only by obtaining control of the top-order derivatives of $\underline{\partial} {\upomega}$, which requires elliptic estimates on $\Sigma_t$; we recall that here and throughout, $\underline{\partial}$ denotes the spatial gradient with respect to the Cartesian coordinates. To proceed, we let $N_{Top}$ denote the maximum number[^78] of times that we need to commute the wave equations - to close the estimates and we let ${\mathscr{Z}}^{N_{Top}}$ denote an arbitrary $N_{Top}^{th}-$order differential operator corresponding to repeated differentiation with respect to the elements[^79] of ${\mathscr{Z}}$. Then to close the top-order wave equation energy estimates, we must control the wave equation source term $ {\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$ in $L^2$. 
Examining the right-hand side of the evolution equation[^80] for the scalar function ${\mbox{\upshape curl}\mkern 1mu}{\upomega}^i$, we see that when deriving $L^2$ estimates for ${\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}^i$, we must bound error terms depending on the $L^2$ norms of $ \lbrace \underline{\partial} {\mathscr{Z}}^{N_{Top}} {\upomega}^j \rbrace_{j=1,2,3} $. Hence, to close the estimates, we must use equation to obtain bounds for ${\mbox{\upshape div}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$ and then employ elliptic estimates to bound the $L^2$ norms of $ \lbrace \underline{\partial} {\mathscr{Z}}^{N_{Top}} {\upomega}^j \rbrace_{j=1,2,3} $. The elliptic estimates that we need are similar to the standard Cartesian elliptic estimates along the constant-time hypersurfaces $\Sigma_t$ and can be caricatured as follows: $$\begin{aligned} \label{E:ELLIPTICCARICATURE} \displaystyle \left\| \sqrt{\upmu} \underline{\partial} {\mathscr{Z}}^{N_{Top}} {\upomega}\right\|_{L^2(\Sigma_t)} \lesssim \left\| \sqrt{\upmu} {\mbox{\upshape div}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}\right\|_{L^2(\Sigma_t)} + \left\| \sqrt{\upmu} {\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}\right\|_{L^2(\Sigma_t)} + \cdots. \end{aligned}$$ Above, the $L^2$ norms are defined relative to a measure that, roughly speaking, is equal to $du d \vartheta^1 d \vartheta^2$ where $u$, $\vartheta^1$, and $\vartheta^2$ are the geometric coordinates on $\Sigma_t$; see also Footnote \[FN:NOTWRITINGFORMS\]. In view of the relation[^81] $$\begin{aligned} \label{E:VOLFORMCOMPARISON} d^3 x \approx \upmu \, du d \vartheta^1 d \vartheta^2, \end{aligned}$$ where $d^3 x$ is the standard Euclidean volume form on $\Sigma_t$, it follows that indeed, is essentially equivalent to[^82] the standard Cartesian elliptic estimates along the constant-time hypersurfaces $\Sigma_t$. 
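In the flat, unweighted model (that is, with $\upmu \equiv 1$ and $\Sigma_t$ replaced by a periodic box), the caricatured div-curl estimate is in fact an exact $L^2$ identity, as one sees from Parseval's theorem and the pointwise relation $|k|^2 |\hat{w}|^2 = |k \cdot \hat{w}|^2 + |k \times \hat{w}|^2$. The following spectral spot-check of this flat model is an illustration only; it does not capture the $\upmu$-weights or the geometry of the foliation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
w = rng.standard_normal((3, n, n, n))        # random real periodic vector field on [0, 2*pi)^3

W = np.fft.fftn(w, axes=(1, 2, 3))           # Fourier coefficients W[i, k1, k2, k3]
k1 = np.fft.fftfreq(n) * n                   # integer frequencies
K = np.stack(np.meshgrid(k1, k1, k1, indexing='ij'))   # shape (3, n, n, n)

# Spectral L^2 norms (Parseval; the common normalization is dropped on both sides):
grad_sq = sum(np.sum(np.abs(K[j] * W[i])**2) for i in range(3) for j in range(3))
div_sq = np.sum(np.abs(np.einsum('jxyz,jxyz->xyz', K, W))**2)
curl_sq = np.sum(np.abs(np.cross(K, W, axisa=0, axisb=0, axisc=0))**2)

# ||grad w||^2 = ||div w||^2 + ||curl w||^2 for periodic vector fields:
assert np.isclose(grad_sq, div_sq + curl_sq)
```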
The need for elliptic estimates along $\Sigma_t$ represents a new difficulty not found in[^83] previous works on shock formation in three spatial dimensions. Most importantly, while the elliptic estimates are crucial from the point of view of regularity, they are based on treating *all* spatial derivatives of ${\upomega}$ on equal footing. This clearly clashes with the philosophy of trying to obtain less singular estimates for the derivatives of ${\upomega}$ in directions tangent to the acoustic characteristics and calls into question the usefulness of the good null structure in the equations. The net effect is that top-order estimates for ${\upomega}$ are burdened with a factor of $1/\upmu$, which leads to degenerate top-order bounds for ${\upomega}$. However, by exploiting the non-degenerate nature of an energy for ${\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$ along the acoustic characteristics, we are able to show that the degeneracy created by this factor is not too severe; see the discussion below inequality concerning the small constant $c$. In fact, we are able to show that the degeneracy corresponding to the elliptic estimates for ${\upomega}$ is much less severe than the analogous difficulty that we encountered in the energy estimate in the irrotational case, where the difficult factor of $1/\upmu$ is tied to the difficult top-order regularity properties of the eikonal function. 
The basic strategy for controlling ${\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$ is to integrate the ${\mathscr{Z}}^{N_{Top}}-$commuted evolution equation to bound, via a Gronwall estimate, ${\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$ in terms of simple error terms and more difficult error terms that, by the elliptic estimates and the ${\mathscr{Z}}^{N_{Top}}-$commuted equation for[^84] ${\mbox{\upshape div}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$, can be controlled back in terms of ${\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$. As we mentioned above, a difficult factor of $1/\upmu$ (more precisely, $1/\upmu_{\star}$) is present in the inequality for ${\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$ that one needs to treat with Gronwall’s inequality. In total, the integral inequality satisfied by ${\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$ can be caricatured as follows: $$\begin{aligned} \label{E:TOPORDERCURLCARICATURE} \left\| \sqrt{\upmu} {\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}\right\|_{L^2(\Sigma_t)}^2 & \leq \mbox{\upshape data} + \bar{A} \int_{s=0}^t \frac{1}{\upmu_{\star}(s)} \left\| \sqrt{\upmu} \underline{\partial} {\mathscr{Z}}^{N_{Top}} {\upomega}\right\|_{L^2(\Sigma_s)}^2 \, ds + \cdots \\ & \leq \mbox{\upshape data} + \widetilde{A} \int_{s=0}^t \frac{1}{\upmu_{\star}(s)} \left\| \sqrt{\upmu} {\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}\right\|_{L^2(\Sigma_s)}^2 \, ds + \cdots, \notag \end{aligned}$$ where (as before) $$\upmu_{\star}(s) = \min_{\Sigma_s} \upmu,$$ $\cdots$ denotes easier error terms, and $\widetilde{A} > 0$ is a *small*[^85] constant. 
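For orientation, the following symbolic sketch (with a linear model profile for $\upmu_{\star}$, an assumption made for illustration) shows how a small Gronwall coefficient $\widetilde{A}$ translates into a small blowup exponent in the resulting estimate.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
delta, A_tilde = sp.symbols('delta Atilde', positive=True)

mu_star = 1 - delta * t          # model profile mu_star(t) ~ 1 - delta*t (an assumption)
c = A_tilde / delta              # resulting blowup exponent

# The envelope E(t) = mu_star**(-c) saturates the Gronwall inequality coming
# from the caricature: E' = (A_tilde/mu_star) * E with E(0) = 1.  The exponent
# is proportional to A_tilde, so a small coefficient yields only a mildly
# degenerate top-order bound for the specific vorticity.
E = mu_star**(-c)
assert sp.simplify(sp.diff(E, t) - (A_tilde / mu_star) * E) == 0
assert E.subs(t, 0) == 1
```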
We now note that for reasons similar to the ones given below inequality , the factor $ \displaystyle \frac{1}{\upmu_{\star}} $ on RHS  leads to a Gronwall estimate for $ \displaystyle \left\| \sqrt{\upmu} {\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}\right\|_{L^2(\Sigma_t)}^2 $ that is *at least as degenerate* as $$\begin{aligned} \label{E:ATLEASTASDEGENERATE} \left\| \sqrt{\upmu} {\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}\right\|_{L^2(\Sigma_t)}^2 \lesssim \mbox{\upshape Data} \cdot \upmu_{\star}^{- c}(t) + \cdots, \end{aligned}$$ where $c > 0$ is a small constant that is controlled by the small constant $\widetilde{A}$. Since $c$ is small, this shows that the need to carry out elliptic estimates for the specific vorticity at the top order does not, in itself, lead to drastically degenerate top-order estimates for ${\upomega}$. In fact, the most degenerate terms are hiding in the terms $\cdots$ on RHS . More precisely, included in $\cdots$ are terms depending on the top-order derivatives of the wave variables $v^i$ and ${\uprho}$, whose energies can blow up like a large power of $\upmu_{\star}^{-1}$ for the same reason as in the irrotational case. These terms in fact make the main contribution to the top-order energy blowup-rate for ${\upomega}$. A closely related fact is that the energy blowup-rates for ${\upomega}$ are compatible with the *same* energy blowup-rates for the wave variables that one derives in the irrotational case. Indeed, the high-order energy blowup-rates of the wave variables are not affected by the presence of small amounts of vorticity in the problem. We also note that to close these estimates, we must use the fact that ${\mbox{\upshape curl}\mkern 1mu}{\upomega}$ satisfies, at all lower-order derivatives, better estimates than a general spatial derivative of ${\upomega}$, as we described in Subsubsect. \[SSS:DIFFICULTVORTICITYTERMS\]. 
This fact is needed to control various lower-order error terms on RHS , which we have relegated to the terms $\cdots$. For this purpose, it is crucial that the term $\mathscr{P}_{({\upomega})}^i$ on the right-hand side of verifies the strong null condition (see Theorem \[T:STRONGNULL\]). Finally, we note that below the top derivative level, we can avoid the elliptic estimates for ${\upomega}$. The price one pays is that the error terms in the energy estimates for ${\upomega}$ depend on the derivatives of ${\uprho}$ and $v^i$ at one higher derivative level, which is permissible below top order. The gain is that we can prove less singular estimates (in terms of powers of $1/\upmu_{\star}$) below top order. That is, avoiding elliptic estimates allows us to employ an energy “descent scheme” similar to the one we discussed in Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\], which applies in particular to irrotational solutions of the compressible Euler equations. Eventually, one reaches a level below which all energies, including those for ${\uprho}$, $v^i$, and ${\upomega}$, are bounded, much like in the irrotational case. This is the key technical step in closing the proof of shock formation.

Proof of Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\] {#E:PROOFOFMAINTHEOREM}
===================================================

In this section, we prove Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\]. The theorem is a conglomeration of Lemmas \[L:RENORMALIZEDVORTICITYEVOLUTIONEQUATION\], \[L:WAVEEQUATIONFORLOGDENSITY\], \[L:WAVEEQUATIONFORV\], and \[L:DIVANDCURLEQUATIONS\], in which we separately derive the equations stated in the theorem.

Proof of Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\] {#SS:PROOFOFMAINTHEOREM}
---------------------------------------------------

We start by deriving the well-known evolution equation for ${\upomega}$. 
\[L:RENORMALIZEDVORTICITYEVOLUTIONEQUATION\] The compressible Euler equations - imply the following evolution equation for the modified vorticity vectorfield ${\upomega}$ from Def. \[D:MODIFIEDVARIABLES\]: $$\begin{aligned} \label{E:RESTATEDRENORMALIZEDVORTICITYEVOLUTIONEQUATION} {B}{\upomega}^i & = {\upomega}^a \partial_a v^i.\end{aligned}$$ In view of definition , we commute equation with the operator $ \displaystyle \frac{1} {\exp {\uprho}} {\mbox{\upshape curl}\mkern 1mu}$, note that ${\mbox{\upshape curl}\mkern 1mu}$ completely annihilates RHS  (which can be written as a perfect gradient $\partial_i (\cdots)$), and use equation and the antisymmetry of $\epsilon_{\dots}$ to deduce $$\begin{aligned} \label{E:FIRSTSTEPRENORMALIZEDVORTICITYEVOLUTIONEQUATION} {B}{\upomega}^i & = - \frac{1}{\exp {\uprho}} \epsilon_{iab} (\partial_a v^c) \partial_c v^b - \frac{1}{\exp {\uprho}} ({B}{\uprho}) \omega^i \\ & = - \frac{1}{\exp {\uprho}} \epsilon_{iab} (\partial_a v^c) \partial_c v^b + ({\mbox{\upshape div}\mkern 1mu}v) {\upomega}^i \notag \\ & = - \frac{1}{\exp {\uprho}} \epsilon_{iab} (\partial_a v^c) (\partial_c v^b - \partial_b v^c) + ({\mbox{\upshape div}\mkern 1mu}v) {\upomega}^i \notag \\ & = - \epsilon_{iab} \epsilon_{cbd} {\upomega}^d (\partial_a v^c) + ({\mbox{\upshape div}\mkern 1mu}v) {\upomega}^i \notag \\ & = \epsilon_{iab} \epsilon_{cdb} {\upomega}^d (\partial_a v^c) + ({\mbox{\upshape div}\mkern 1mu}v) {\upomega}^i \notag \\ & = (\delta_{ic} \delta_{ad} - \delta_{id} \delta_{ac}) {\upomega}^d (\partial_a v^c) + ({\mbox{\upshape div}\mkern 1mu}v) {\upomega}^i. \notag \end{aligned}$$ Clearly, we have $\mbox{{\upshape RHS}~\eqref{E:FIRSTSTEPRENORMALIZEDVORTICITYEVOLUTIONEQUATION}} = \mbox{{\upshape RHS}~\eqref{E:RESTATEDRENORMALIZEDVORTICITYEVOLUTIONEQUATION}} $ as desired. Recall that the covariant wave operator $\square_g$ is defined in Def. \[E:WAVEOPERATORARBITRARYCOORDINATES\]. 
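The Levi-Civita identity and the final algebraic step in the proof above can be spot-checked numerically; in the sketch below, the matrix `M` stands in for the velocity gradient $\partial_a v^c$ and the vector `omega` stands in for ${\upomega}$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Levi-Civita symbol epsilon_{abc}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0
    eps[a, c, b] = -1.0

# epsilon_{iab} epsilon_{cdb} = delta_{ic} delta_{ad} - delta_{id} delta_{ac}
d = np.eye(3)
eps_pair = np.einsum('iab,cdb->iacd', eps, eps)
delta_comb = np.einsum('ic,ad->iacd', d, d) - np.einsum('id,ac->iacd', d, d)
assert np.allclose(eps_pair, delta_comb)

# Final step of the proof: with M[a, c] standing in for partial_a v^c,
# (delta_{ic} delta_{ad} - delta_{id} delta_{ac}) omega^d M_{ac} + (div v) omega^i
# equals omega^a partial_a v^i.
omega = rng.standard_normal(3)
M = rng.standard_normal((3, 3))
result = np.einsum('iacd,d,ac->i', delta_comb, omega, M) + np.trace(M) * omega
assert np.allclose(result, np.einsum('a,ai->i', omega, M))
```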
In the next lemma, we provide an explicit expression for $\square_g \phi$ that holds relative to the Cartesian coordinates. \[L:COVARIANTWAVEOPRELATIVETOCARTESIAN\] The covariant wave operator $\square_g$ acts on scalar functions $\phi$ via the following identity, where RHS  is expressed in Cartesian coordinates: $$\begin{aligned} \label{E:COVARIANTWVAEOPERATORINRECTANGULARCOORDINATES} \square_g \phi & = - {B}{B}\phi + {c_s}^2 \delta^{ab} \partial_a \partial_b \phi + 2 {c_s}^{-1} {c_s}' ({B}{\uprho}) {B}\phi - (\partial_a v^a) {B}\phi - {c_s}^{-1} {c_s}' (g^{-1})^{\alpha \beta} (\partial_{\alpha} {\uprho}) \partial_{\beta} \phi.\end{aligned}$$ It is straightforward to compute using equations - that relative to Cartesian coordinates, we have $$\begin{aligned} \label{E:DETG} \mbox{\upshape det} g & = - {c_s}^{-6}\end{aligned}$$ and hence $$\begin{aligned} \label{E:ROOTDETGTIMESGINVERSE} \sqrt{|\mbox{\upshape det} g|} g^{-1} & = - {c_s}^{-3} {B}\otimes {B}+ {c_s}^{-1} \sum_{a=1}^3 \partial_a \otimes \partial_a.\end{aligned}$$ Using , , and , we compute that $$\begin{aligned} \label{E:FIRSTFORMULACOVARIANTWVAEOPERATORINRECTANGULARCOORDINATES} \square_g \phi & = - {c_s}^3 \left( {B}^{\alpha} \partial_{\alpha} ({c_s}^{-3}) \right) {B}^{\beta} \partial_{\beta} \phi - (\partial_{\alpha} {B}^{\alpha}) {B}^{\beta} \partial_{\beta} \phi - ({B}^{\alpha} \partial_{\alpha} {B}^{\beta}) \partial_{\beta} \phi \\ & \ \ - {B}^{\alpha} {B}^{\beta} \partial_{\alpha} \partial_{\beta} \phi + {c_s}^2 \delta^{ab} \partial_a \partial_b \phi - {c_s}{c_s}' \delta^{ab} (\partial_a {\uprho}) \partial_b \phi. \notag\end{aligned}$$ Finally, from , the expression for ${B}$, the expression for $g^{-1}$, and simple calculations, we arrive at . In the next lemma, we derive equation . \[L:WAVEEQUATIONFORLOGDENSITY\] The compressible Euler equations - imply the following covariant wave equation for the logarithmic density variable ${\uprho}$ from Def. 
\[D:MODIFIEDVARIABLES\]: $$\begin{aligned} \label{E:PROOFRENORMALIZEDDENSITYWAVEEQUATION} \square_g {\uprho}& = - 3 {c_s}^{-1} {c_s}' (g^{-1})^{\alpha \beta} \partial_{\alpha} {\uprho}\partial_{\beta} {\uprho}+ 2 \sum_{1 \leq a < b \leq 3} \left\lbrace \partial_a v^a \partial_b v^b - \partial_a v^b \partial_b v^a \right\rbrace.\end{aligned}$$ First, using with $\phi = {\uprho}$ and equation , we compute that $$\begin{aligned} \label{E:FIRSTCOMPUTATIONRENORMALIZEDDENSITYWAVEEQUATION} \square_g {\uprho}& = - {B}{B}{\uprho}+ {c_s}^2 \delta^{ab} \partial_a \partial_b {\uprho}+ 2 {c_s}^{-1} {c_s}' {B}{\uprho}{B}{\uprho}+ (\partial_a v^a)^2 - {c_s}^{-1} {c_s}' (g^{-1})^{\alpha \beta} \partial_{\alpha} {\uprho}\partial_{\beta} {\uprho}.\end{aligned}$$ Next, we use , , and , to compute that $$\begin{aligned} \label{E:TWOTRANSPORTAPPLIEDTORENORMALIZEDDENSITYEXPRESSION} {B}{B}{\uprho}& = - \partial_a ({B}v^a) + (\partial_a v^b) \partial_b v^a \\ & = {c_s}^2 \delta^{ab} \partial_a \partial_b {\uprho}+ \delta^{ab} (\partial_a {c_s}^2) \partial_b {\uprho}+ (\partial_a v^b) \partial_b v^a \notag \\ & = {c_s}^2 \delta^{ab} \partial_a \partial_b {\uprho}+ 2 {c_s}{c_s}' \delta^{ab} \partial_a {\uprho}\partial_b {\uprho}+ (\partial_a v^b) \partial_b v^a. \notag \end{aligned}$$ Finally, using to substitute for the term $- {B}{B}{\uprho}$ on RHS  and using the identities $$\begin{aligned} \label{E:VELOCITYNULLFORMIDENTITY} (\partial_a v^a)^2 - (\partial_a v^b) \partial_b v^a = 2 \sum_{1 \leq a < b \leq 3} \left\lbrace \partial_a v^a \partial_b v^b - \partial_a v^b \partial_b v^a \right\rbrace\end{aligned}$$ and $ {B}{\uprho}{B}{\uprho}- {c_s}^2 \delta^{ab} \partial_a {\uprho}\partial_b {\uprho}= - (g^{-1})^{\alpha \beta} \partial_{\alpha} {\uprho}\partial_{\beta} {\uprho}$ (see ), we arrive at the desired expression . We now establish equation . 
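The "null form" identity for the velocity gradient used in the last step of the proof can also be confirmed symbolically; the following sketch (ours) uses a generic $3 \times 3$ matrix $M_{ab}$ in place of $\partial_a v^b$:

```python
import sympy as sp

# Generic 3x3 matrix M, with M[a, b] standing in for d_a v^b.
M = sp.Matrix(3, 3, lambda a, b: sp.Symbol(f'M{a}{b}'))

# (d_a v^a)^2 - (d_a v^b)(d_b v^a) = (tr M)^2 - tr(M^2).
lhs = M.trace()**2 - (M * M).trace()

# 2 * sum_{a<b} { d_a v^a d_b v^b - d_a v^b d_b v^a }.
rhs = 2 * sum(M[a, a] * M[b, b] - M[a, b] * M[b, a]
              for a in range(3) for b in range(3) if a < b)

assert sp.expand(lhs - rhs) == 0
print("null form identity verified")
```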
\[L:WAVEEQUATIONFORV\] The compressible Euler equations - imply the following covariant wave equation for the scalar-valued function $v^i$, $i=1,2,3$: $$\begin{aligned} \label{E:PROOFVELOCITYWAVEEQUATION} \square_g v^i & = - {c_s}^2 \exp({\uprho}) ({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i + 2 \exp({\uprho}) \epsilon_{iab} ({B}v^a) {\upomega}^b \\ & \ \ - (1+{c_s}^{-1} {c_s}') (g^{-1})^{\alpha \beta} \partial_{\alpha} {\uprho}\partial_{\beta} v^i. \notag\end{aligned}$$ First, we use with $\phi = v^i$ and equation to deduce $$\begin{aligned} \label{E:FIRSTCOMPUTATIONVELOCITYWAVEEQUATION} \square_g v^i & = - {B}{B}v^i + {c_s}^2 \delta^{ab} \partial_a \partial_b v^i - 2 {c_s}{c_s}' {B}{\uprho}\delta^{ia} \partial_a {\uprho}- (\partial_a v^a) {B}v^i - {c_s}^{-1} {c_s}' (g^{-1})^{\alpha \beta} \partial_{\alpha} {\uprho}\partial_{\beta} v^i.\end{aligned}$$ Next, we use , , , and to compute that $$\begin{aligned} \label{E:TWOTRANSPORTAPPLIEDTOVELOCITYEXPRESSION} {B}{B}v^i & = - {c_s}^2 \delta^{ia} {B}\partial_a {\uprho}- 2 {c_s}{c_s}' {B}{\uprho}\delta^{ia} \partial_a {\uprho}\\ & = - {c_s}^2 \delta^{ia} \partial_a ({B}{\uprho}) + {c_s}^2 \delta^{ia} \partial_a v^b \partial_b {\uprho}- 2 {c_s}{c_s}' {B}{\uprho}\delta^{ia} \partial_a {\uprho}\notag \\ & = {c_s}^2 \delta^{ia} \delta_c^b \partial_a (\partial_b v^c) - \delta^{ia} \partial_a v^b {B}v^b - 2 {c_s}{c_s}' {B}{\uprho}\delta^{ia} \partial_a {\uprho}\notag \\ & = {c_s}^2 \delta^{bc} \partial_b \partial_c v^i + {c_s}^2 \delta^{ia} \partial_c (\partial_a v^c - \partial_c v^a) - \delta^{ia} \partial_a v^b {B}v^b - 2 {c_s}{c_s}' {B}{\uprho}\delta^{ia} \partial_a {\uprho}\notag \\ & = {c_s}^2 \delta^{bc} \partial_b \partial_c v^i + {c_s}^2 \delta^{ia} \partial_c (\partial_a v^c - \partial_c v^a) - (\partial_i v^b - \partial_b v^i) {B}v^b - \partial_a v^i {B}v^a - 2 {c_s}{c_s}' {B}{\uprho}\delta^{ia} \partial_a {\uprho}.
\notag\end{aligned}$$ Next, we use the identity (see ) $$\begin{aligned} \label{E:VORTICIITYRENORMALIZEDVORTICITYANDDENSITYRELATION} \omega^i & = {\upomega}^i \exp({\uprho})\end{aligned}$$ and equation to derive the identities $$\begin{aligned} {c_s}^2 \delta^{ia} \partial_c (\partial_a v^c - \partial_c v^a) & = {c_s}^2 \delta^{ia} \epsilon_{acd} \partial_c \omega^d = {c_s}^2 {\mbox{\upshape curl}\mkern 1mu}\omega^i \label{E:FIRSTIDUSEDINDERIVINGVELOCITYWAVEEQN} \\ & = {c_s}^2 \exp({\uprho}) {\mbox{\upshape curl}\mkern 1mu}{\upomega}^i + {c_s}^2 \exp({\uprho}) \epsilon_{icd} {\upomega}^d \partial_c {\uprho}\notag \\ & = {c_s}^2 \exp({\uprho}) {\mbox{\upshape curl}\mkern 1mu}{\upomega}^i - \exp({\uprho}) \epsilon_{iab} ({B}v^a) {\upomega}^b, \notag \\ (\partial_i v^b - \partial_b v^i) {B}v^b & = \exp({\uprho}) \epsilon_{ibc} {\upomega}^c {B}v^b = \exp({\uprho}) ({B}v^a) \epsilon_{iab} {\upomega}^b, \label{E:SECONDIDUSEDINDERIVINGVELOCITYWAVEEQN}\end{aligned}$$ Substituting the RHSs of - for the relevant terms on RHS , we obtain $$\begin{aligned} \label{E:TWOTRANSPORTAPPLIEDTOVELOCITYSECONDEXPRESSION} {B}{B}v^i & = {c_s}^2 \delta^{bc} \partial_b \partial_c v^i + {c_s}^2 \exp({\uprho}) {\mbox{\upshape curl}\mkern 1mu}{\upomega}^i - 2 \exp({\uprho}) \epsilon_{iab} ({B}v^a) {\upomega}^b \\ & \ \ - (\partial_a v^i) {B}v^a - 2 {c_s}{c_s}' {B}{\uprho}\delta^{ia} \partial_a {\uprho}. \notag\end{aligned}$$ Next, substituting $-$RHS  for the term $- {B}{B}v^i$ on RHS , we arrive at $$\begin{aligned} \label{first.thm.prelim.1} \square_g v^i & = - {c_s}^2 \exp({\uprho}) ({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i + 2 \exp({\uprho}) \epsilon_{iab} ({B}v^a) {\upomega}^b \\ & \ \ + \left\lbrace ({B}v^a) \partial_a v^i - (\partial_a v^a) {B}v^i \right\rbrace - {c_s}^{-1} {c_s}' (g^{-1})^{\alpha \beta} \partial_{\alpha} {\uprho}\partial_{\beta} v^i. 
\notag\end{aligned}$$ To handle the terms $\lbrace \cdot \rbrace$ in , we use , , and to obtain $$\label{first.thm.prelim.2} \begin{split} ({B}v^a) \partial_a v^i - (\partial_a v^a) {B}v^i =- {c_s}^2 \delta^{ab} (\partial_b {\uprho}) \partial_a v^i + ({B}{\uprho}) {B}v^i = -(g^{-1})^{\alpha \beta} (\partial_{\alpha} {\uprho}) \partial_{\beta} v^i. \end{split}$$ Finally, substituting into , we conclude the desired equation . We now establish equations -. \[L:DIVANDCURLEQUATIONS\] The compressible Euler equations - imply the following equation for ${\mbox{\upshape div}\mkern 1mu}{\upomega}$ and transport equation for the scalar-valued function $({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i$: $$\begin{aligned} \label{E:PROOFFLATDIVOFRENORMALIZEDVORTICITY} {\mbox{\upshape div}\mkern 1mu}{\upomega}& = - {\upomega}^a \partial_a {\uprho}, \\ {B}({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i & = (\exp {\uprho}) {\upomega}^a \partial_a {\upomega}^i - (\exp {\uprho}) {\upomega}^i {\mbox{\upshape div}\mkern 1mu}{\upomega}+ \epsilon_{iab} (\partial_a {\upomega}^c) \partial_c v^b - \epsilon_{iab} (\partial_a v^c) \partial_c {\upomega}^b. \label{E:PROOFEVOLUTIONEQUATIONFLATCURLRENORMALIZEDVORTICITY}\end{aligned}$$ Since $\omega = {\mbox{\upshape curl}\mkern 1mu}v$, it follows that ${\mbox{\upshape div}\mkern 1mu}\omega = 0$. Equation follows from this identity, , and simple calculations. We now derive . 
Commuting the already established equation with the Euclidean curl operator and using equations , , and , we obtain the desired equation as follows: $$\begin{aligned} \label{E:FIRSTSTEPEVOLUTIONEQUATIONFLATCURLRENORMALIZEDVORTICITY} {B}({\mbox{\upshape curl}\mkern 1mu}{\upomega})^i & = {\upomega}^a \partial_a \omega^i + \epsilon_{iab} (\partial_a {\upomega}^c) \partial_c v^b - \epsilon_{iab} (\partial_a v^c) \partial_c {\upomega}^b \\ & = (\exp {\uprho}) {\upomega}^a \partial_a {\upomega}^i + (\exp {\uprho}) {\upomega}^i {\upomega}^a \partial_a {\uprho}+ \epsilon_{iab} (\partial_a {\upomega}^c) \partial_c v^b - \epsilon_{iab} (\partial_a v^c) \partial_c {\upomega}^b \notag \\ & = (\exp {\uprho}) {\upomega}^a \partial_a {\upomega}^i - (\exp {\uprho}) {\upomega}^i {\mbox{\upshape div}\mkern 1mu}{\upomega}+ \epsilon_{iab} (\partial_a {\upomega}^c) \partial_c v^b - \epsilon_{iab} (\partial_a v^c) \partial_c {\upomega}^b. \notag\end{aligned}$$ We have therefore established , which completes the proof of Theorem \[T:GEOMETRICWAVETRANSPORTSYSTEM\]. [^1]: $^{\dagger}$JS gratefully acknowledges support from NSF grant \# DMS-1162211, from NSF CAREER grant \# DMS-1454419, from a Sloan Research Fellowship provided by the Alfred P. Sloan foundation, and from a Solomon Buchsbaum grant administered by the Massachusetts Institute of Technology. [^2]: $^{*}$Stanford University, Palo Alto, CA, USA. `jluk@stanford.edu` [^3]: $^{**}$Massachusetts Institute of Technology, Cambridge, MA, USA. `jspeck@math.mit.edu` [^4]: See Subsect. \[SS:NOTATION\] regarding our conventions for indices and implied summation. [^5]: For example, when studying solutions that are perturbations of non-vacuum constant states, one may choose $\bar{\rho}$ so that in terms of the variable ${\uprho}$ from , the constant state corresponds to ${\uprho}\equiv 0$. [^6]: Throughout this article, we avoid discussing the dynamics in regions with vanishing density. 
The reason is that the compressible Euler equations become degenerate along fluid-vacuum boundaries and not much is known about compressible fluid flow in this context; see, for example, [@dCsS2012] for more information. [^7]: Throughout, if $V$ is a vectorfield and $f$ is a function, then $Vf := V^{\alpha} \partial_{\alpha} f$ denotes the derivative of $f$ in the direction $V$. \[F:VECTORFIELDSACTONFUNCTIONS\] [^8]: The fluid potential $\Phi$ is defined such that ${\partial}_i\Phi=-v^i$. [^9]: Roughly, a variation of $\Phi$ is the derivative of $\Phi$ with respect to some first-order differential operator. [^10]: Roughly, the maximal classical development is the largest possible classical solution that is uniquely determined by the data; see, for example, [@jSb2016; @wW2013] for further discussion. [^11]: We note, however, that shock formation results seem to be significantly less stable than small-data global existence proofs under perturbations of the *equations*; we explore this in detail in Subsect. \[SS:STRONGNULLVSOTHERNULL\]. [^12]: Since we will be considering solutions on spatial domains $\mathbb R \times \mathbb T$ and $\mathbb R\times \mathbb T^2$ in two and three spatial dimensions respectively, the result in two spatial dimensions is strictly weaker than the one in three spatial dimensions. However, since the two-spatial-dimensional case contains substantial new ideas compared to the irrotational case but is technically simpler than the three-spatial-dimensional case, we have treated it separately in [@jLjS2016b]; we hope that the two-space-dimensional result will serve as a useful starting point for readers interested in the case of three spatial dimensions. [^13]: A Chaplygin gas has the equation of state $p = p(\rho)=C_0-{\frac}{C_1}{\rho}$, where $C_0\in \mathbb R$ and $C_1>0$.
[^14]: This problem was recently solved in the spherically symmetric relativistic case [@dCaL2016] starting from the state of a spherically symmetric irrotational solution at the end of its classical lifespan, which was obtained by Christodoulou in [@dC2007] as a special case. Away from spherical symmetry, the shock development problem remains open and is expected to be quite difficult. [^15]: In three spatial dimensions, it remains unclear whether our methods can be extended to yield a sharp description of the boundary of the maximal development. The main difficulty is technical in nature and is tied to our reliance on elliptic estimates on constant-$t$ hypersurfaces to control the top-order derivatives of the specific vorticity. [^16]: Strictly speaking, the solutions in [@dC2007] contained vorticity. However, the initial conditions studied in [@dC2007] led to the vorticity being confined to a region far away from the shock. Hence, Christodoulou did not have to confront the difficult problem of having to control the vorticity at the shock itself. [^17]: The results of [@jSgHjLwW2016] apply to a large class of wave equations, of which the irrotational compressible Euler equations are a special example. [^18]: Actually, in the context of [@dC2007], in which the physical spacetime was Minkowski spacetime, it would be more accurate to refer to these coordinates as “Minkowski rectangular coordinates.” [^19]: The same difficulty is found in Alinhac’s approach [@sA1999a; @sA1999b; @sA2001b; @sA2002] to proving shock formation. [^20]: Note that this is a different question than whether or not the shock forms. [^21]: Here, by first-order differential operator, we mean one equal to a regular function times a Cartesian coordinate partial derivative. [^22]: A vectorfield $V$ is future directed if $V^0 > 0$, where $V^0$ is the $0$ Cartesian component. \[FN:FUTUREDIRECTED\] [^23]: Throughout we use the notation $g(V,W) := g_{\alpha \beta}V^{\alpha} W^{\beta}$.
[^24]: We also note the equation $ \displaystyle \partial_t \Phi- \frac{1}{2}(\partial_1 \Phi)^2=h $, where $h$ is the enthalpy, defined such that $dh = {c_s}^2 \,d {\uprho}$. [^25]: It can also be directly verified using - that in the absence of vorticity, is equivalent to $\square_{\tilde g} v^i=0$, where $\tilde{g} : =\exp({\uprho}) {c_s}g$ is a metric conformal to $g$. [^26]: Theorem \[T:STRONGNULL\] is valid only for solutions in the sense that the proof relies on using for algebraic substitution in order to exhibit the desired structure for the term $\mathscr{P}_{({\upomega})}^i$ from ; see Remark \[R:DELICATEVORTICITYQUDRATICTERM\]. \[FN:THEOREMISVALIDONLYFORSOLUTIONS\] [^27]: The topology of the spacetime manifold is not relevant for our discussion here. [^28]: Here and below, we use the Einstein summation convention, where uppercase Latin indices such as $A$ and $B$ vary over $1,2,3,4$, lowercase Latin “spatial” indices such as $a$ and $b$ vary over $1,2,3$, uppercase Greek indices such as $\Theta$ and $\Gamma$ vary over $0,1,\dots,6$, and lowercase Greek “spacetime” indices such as $\alpha$ and $\beta$ vary over $0,1,2,3$. \[FN:INDEXCONVENTIONS\] [^29]: The quasilinear terms, on the other hand, necessarily violate Klainerman’s null condition in view of shock formation. [^30]: Such a null frame is adapted to flat Minkowski light cones with vertices at the (spatial) origin. [^31]: This is perhaps not surprising in view of the fact that small-data global existence proofs are typically closable because there is a margin of error in the estimates. [^32]: There is precisely one exceptional equation of state for the irrotational special relativistic Euler equations for which the shock-formation results do not hold. The exceptional equation of state corresponds to the Lagrangian $\mathscr{L} = 1 - \sqrt{1 + (m^{-1})^{\alpha \beta} \partial_{\alpha} \Phi \partial_{\beta} \Phi}$, where $m$ is the Minkowski metric.
It is exceptional because it is the only Lagrangian for relativistic fluid mechanics such that Klainerman’s null condition is satisfied for perturbations near the non-vacuum constant states. Due to the null condition, small-data global existence holds [@hL2004]. A similar statement holds for the non-relativistic compressible Euler equations; see [@dCsM2014]\*[Sect. 2.2]{} for more information. \[FN:EXCEPTIONALLAGRANGIANS\] [^33]: More precisely, Christodoulou had to rescale $g$ by a conformal factor in order to bring the equation into the form , but that detail is not important for our discussion. [^34]: We may caricature the negligible effect by the Riccati-type ODE $\dot{y} = y^2 + \epsilon y$, where the $y^2$ term caricatures the shock-producing quadratic term obtained from expanding the covariant wave operator relative to Cartesian coordinates and $\epsilon y$ caricatures the $\mathscr{Q}_{\nu}$. For $\epsilon$ small relative to the data $y(0) > 0$, the $\epsilon y$ term does not interfere with the Riccati-type blowup. [^35]: The “smallness” in [@jS2014b] was stated in terms of a Sobolev norm of the data for $\Phi$ in equation , for the data of $\vec{\Psi}$ in the system , and for the data of $\Psi$ in equation . In contrast, in his study of equation in [@dC2007], Christodoulou assumed that the data of $\Phi - k t$ was small, where $k$ is a non-zero constant and $k t$ is a global background solution corresponding to a global non-vacuum fluid state. The fact that Christodoulou studied perturbations of the solution $kt$ rather than the solution $0$ is a minor detail that has no important bearing on the analysis; see [@gHsKjSwW2016] for more details. \[FN:SMALLNESS\] [^36]: Roughly, a simple wave $\Psi$ in one spatial dimension is such that $\Psi(u,v)$ is *independent of $u$*, where $(u,v)$ form a coordinate system of eikonal functions (that is, the level sets of $u$ and $v$ are null hypersurfaces). 
Put differently, $u$ and $v$ are coordinate functions that solve the eikonal equation . \[FN:SIMPLEPLANE\] [^37]: Roughly, outgoing means right-moving. This choice was made for convenience; the left-moving case can be treated with the same arguments. [^38]: In [@jSgHjLwW2016], the characteristics were a family of null hyperplanes adapted to the approximate plane symmetry of the problem. They are analogous to the acoustic characteristics that we encounter in our study of the compressible Euler equations with vorticity. [^39]: As was discussed in [@jSgHjLwW2016], the results of [@jSgHjLwW2016] can be generalized to the case of three spatial dimensions using established techniques. [^40]: Note that in one spatial dimension, the linear wave equation can be written as ${L}_{(Flat)} \partial_1 \Psi = {L}_{(Flat)} \partial_t \Psi$ and thus ${L}_{(Flat)} \partial_1 \Psi = {\frac}12 {L}_{(Flat)} {L}_{(Flat)} \Psi$. Hence, in , we have included ${L}_{(Flat)} {L}_{(Flat)} \Psi$, which vanishes for simple outgoing waves, in the term $\mbox{\upshape Error}$. [^41]: In Lagrangian coordinates $(t,u)$, where $t$ is the Cartesian time coordinate and $u$ is defined to be constant along integral curves of $\partial_t + \Psi \partial_x$, Burgers’ equation reads $ \displaystyle \frac{\partial}{\partial t} \Psi = 0 $, where $ \displaystyle \frac{\partial}{\partial t} $ denotes partial differentiation with respect to $t$ at fixed $u$. We note that $ \displaystyle \frac{\partial}{\partial t} = \partial_t + \Psi \partial_x $, where $\partial_t$ and $\partial_x$ are the usual Cartesian coordinate partial derivative vectorfields. Hence, relative to the Lagrangian coordinates, solutions remain constant in time, but are such that the Cartesian coordinate partial derivative $\partial_x \Psi$ blows up. \[FN:BURGER\] [^42]: Let us note that $ \displaystyle {\frac}{{\partial}}{{\partial}t} $ here is defined with respect to the $(t,u,\vartheta^1,{\vartheta}^2)$ coordinate system, and it is not equal to the Cartesian vectorfield ${\partial}_t$.
$ \displaystyle {\frac}{{\partial}}{{\partial}t} $ can be viewed as a dynamically constructed analog of ${L}_{(Flat)}$, which takes into account the quasilinear geometry. [^43]: We note, however, that $\mbox{\upshape Error}$ contains, among other terms, terms involving $ \displaystyle \frac{\partial^2 \Psi}{\partial (\vartheta^1)^2} $ and $ \displaystyle \frac{\partial^2 \Psi}{\partial (\vartheta^2)^2} $. In other words, while these terms can be viewed as error terms for heuristic considerations near the shock, they are “main terms” from the point of view of the top-order energy estimates. [^44]: Throughout we use the notation $A \sim B$ to imprecisely indicate that $A$ is well-approximated by $B$. [^45]: Actually, strictly speaking, is not a null frame in the sense of Def. \[D:NULLFRAME\] because the non-zero normalization conditions for the frame are different; this is a minor issue that we ignore here. \[E:SORTOFANULLFRAME\] [^46]: In fact, while it is most convenient to explain the necessity of a null condition using a null frame, in deriving estimates in [@jLjS2016b; @jLjS2017], we use a slightly different frame which still captures the “good” and “bad” directions, but is more convenient from the point of view of commutations; see the discussion preceding Def. \[frame.def.1\]. [^47]: As a consequence, $e_1$ and $e_2$ are indeed orthogonal to ${L}$ and ${\breve{\underline{L}}}$. [^48]: Actually, in putting the metric into this form, we introduce a semilinear term proportional to $(g^{-1})^{\alpha \beta}(\Psi) \partial_{\alpha} \Psi \partial_{\beta} \Psi$ in the covariant wave equation corresponding to the rescaled metric. However, this term verifies the strong null condition of Def. \[D:STRONGNULLCONDITION\] and therefore has a negligible impact on the dynamics. We therefore ignore it in the exposition. \[FN:GINVERSE00ISMINUSONE\] [^49]: After great effort; see Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\].
[^50]: We refer to it as a rescaled frame since ${\breve{X}}$ is “rescaled” by a factor of $\upmu$. [^51]: In particular, $\{L,{\breve{X}},{Y}_1,{Y}_2\}$ is *not* a $g$–null frame. [^52]: Note also that $\underline{L}$ as defined does *not* remain regular with respect to the geometric coordinates as the shock forms. That is, ${\underline{L}}$ contains a component proportional to $ \displaystyle \frac{1}{\upmu} \frac{\partial}{\partial u} $. In contrast, the rescaled vectorfield $\breve{\underline{L}}:=\upmu \underline{L}$ *does* remain regular. This also explains the form of the frame from Subsubsect. \[SSS:SOMEREMARKSONTHEPROOF\]. [^53]: Roughly speaking, if $N_{Top}$ denotes the maximum number of times that we need to commute the wave equation when deriving energy estimates, then we make the $L^{\infty}$ bootstrap assumptions for the up-to-order $N_{Top}/2$ derivatives of $\Psi$. [^54]: In the proof, one can allow the implicit constants in $\mathcal{O}(\mathring{\upepsilon})$ to depend on $t$. The reason is that it is possible to make a good guess about the (data-dependent) time of blow up. Therefore, one needs only to propagate estimates for an amount of time that can be estimated to high accuracy in advance. [^55]: For reasons described below, the proofs require many derivatives. In [@jSgHjLwW2016], the authors showed that the proof closes if, at time $0$, $\Psi \in H^{19}$ and $\partial_t \Psi \in H^{18}$ (with suitable tensorial smallness assumptions enforcing the $\mathring{\upepsilon}-\mathring{\updelta}$ hierarchy described above). [^56]: Here we raise and lower indices with $g^{-1}$ and $g$. [^57]: That is, $g({T},{T}) = - 4 \upmu(1+\upmu) < 0$. [^58]: In the interest of brevity, we have avoided discussing the volume forms corresponding to the integrals $ \int_{\Sigma_t^u} \cdots $ and $ \int_{\mathcal{P}_u^t} \cdots $.
Let us simply note that the implicit forms are non-degenerate in the sense that relative to the geometric coordinates, they remain uniformly bounded from above and below (strictly away from zero), all the way up to the shock. \[FN:NOTWRITINGFORMS\] [^59]: Recall that $V$ being future-directed simply means that $V^0 > 0$, where $V^0$ is the Cartesian time component of $V$. [^60]: $V$ being causal means that $g(V,V) \leq 0$. [^61]: As we indicated in , it is important to commute the $\upmu-$weighted wave equation. [^62]: That is, a key part of the proof involves showing that when $\upmu$ is small, ${L}\upmu$ must be quantitatively negative. [^63]: Note that in two spatial dimensions, the $\ell_{t,u}$ are one-dimensional, and it turns out that the energy estimates close without elliptic estimates. [^64]: In equation , we have ignored the presence of other difficult terms that lead to related but distinct difficulties. [^65]: We stress that since the factor $1/\upmu$ is present in the energy identities, one must derive detailed information about the way that $\upmu_{\star}$ vanishes in order to close the energy estimates. The reason is that the vanishing rate of $\upmu_{\star}$ is tied to the blowup-rate of the high-order energies. In particular, it is crucially important that $\upmu_{\star}$ goes to $0$ *linearly*, as is captured by our caricature estimate. [^66]: This estimate is just a quasilinear version of the bound $\int_{s=0}^t s^{-B} \, ds \lesssim t^{1 - B}$, where $s=0$ represents the “vanishing” of $\upmu$. In the proof of the estimate, it is again critically important that $\upmu_{\star}$ vanishes linearly. [^67]: In [@jSgHjLwW2016], the authors commuted the wave equations $18$ times in order to close the estimates. [^68]: The factor $\mathbb{T}^2$ corresponds to perturbations away from plane symmetry. [^69]: We recall that, roughly speaking, outgoing means right-moving, as is depicted in Figure \[F:FRAME\].
[^70]: The Chaplygin gas equation of state corresponds to the exceptional Lagrangian mentioned in Footnote \[FN:EXCEPTIONALLAGRANGIANS\]. In plane symmetry, the Riemann invariants $\mathcal{R}_{\pm}$ for the Chaplygin gas solve a *totally linearly degenerate* system of PDEs, which is not expected to exhibit shock formation; see [@aM1984] for more discussion on totally linearly degenerate systems. [^71]: Here, in the setting of plane symmetry, a characteristic is simply an integral curve of ${L}$. [^72]: The shock-formation argument does not work for the Chaplygin gas. The reason is that for this equation of state, we have ${L}= \partial_t + (\mathcal{R}_- + C) \partial_1$ (where $C$ is a constant), while $\mathcal{R}_- \equiv 0$ by assumption. That is, the evolution equation ${L}\mathcal{R}_+ = 0$ is effectively semilinear in this case. \[FN:NOSHOCKSFORCHAPLYGIN\] [^73]: To prove this, one considers $\upmu \times \mbox{\eqref{E:RENORMALIZEDVORTICTITYTRANSPORTEQUATION}}$. It is easy to show that $\upmu {B}= \frac{d}{du}$ along the integral curves of $\upmu {B}$ and that the factors $\upmu \partial_a v^i$ on the RHS remain uniformly bounded all the way up to the shock. We can therefore view $\upmu \times \mbox{\eqref{E:RENORMALIZEDVORTICTITYTRANSPORTEQUATION}}$ as a linear ODE in ${\upomega}$ with regular coefficients, and by the uniqueness of the $0$ solution, ${\upomega}$ never vanishes (provided that it is initially non-vanishing). [^74]: Here $[P,Q]$ denotes the commutator of the differential operators $P$ and $Q$. [^75]: See for the precise formula for $\partial_a$ in terms of the geometric vectorfields. [^76]: The transversality of ${B}$ is critically important for this argument. For example, if one integrates the same quantity ${\partial}_a v^i$ along the integral curve of the null vectorfield ${L}$ (which is *tangent* to the acoustic characteristics), then one can at best show that the integral is bounded by $\ln \upmu^{-1}$.
[^77]: In proving this, one relies on the fact that the differential operator $\upmu {B}$ can be viewed as $ \displaystyle \frac{d}{du} $ along the integral curves of $\upmu {B}$. [^78]: At the end of this subsubsection, we will explain why $N_{Top}$ is the same as in the irrotational case. [^79]: Recall that in Subsubsect. \[SSS:QUICKSUMMARYOFPROOFOFSHOCKFORMATION\], when deriving energy estimates, we were able to close the estimates by commuting the equations only with the elements $P$ of the $\mathcal{P}_u-$tangential set ${\mathscr{P}}$. Here, to keep the discussion simple, we ignore this detail and allow for commutations with all elements of ${\mathscr{Z}}$. [^80]: In reality, to close the estimates, one must study the $\upmu-$weighted version of this equation, but we ignore this detail here. [^81]: The estimate can be derived by analyzing the change of variables map between geometric and rectangular coordinates. That is, follows from proving precise versions of the following heuristic statements, which are suggested by Figure \[F:FRAME\] (which corresponds to the nearly plane symmetric solutions under consideration): at a fixed $t$, we have $dx^1 \sim - \upmu du$, $d x^2 \sim d \vartheta^1$, $d x^3 \sim d \vartheta^2$. [^82]: In fact, it is perhaps preferable to derive the estimate relative to the Cartesian spatial coordinates and volume form $d^3 x$. [^83]: As we have mentioned, this difficulty is also absent in the case of two spatial dimensions, even in the presence of vorticity. \[FN:TWOSPACENOELLITPIC\] [^84]: Unlike the estimates for ${\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$, the estimates for ${\mbox{\upshape div}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$ are easy to derive. [^85]: In deriving , one can apply, to the main error integral involving the top order derivatives of ${\upomega}$, Young’s inequality in the form $ab \lesssim a^2 + b^2$ to ensure that the constant $\widetilde{A}$ is small. 
This procedure also generates an error integral with a large constant, but it can be suitably controlled with a non-degenerate energy for ${\mbox{\upshape curl}\mkern 1mu}{\mathscr{Z}}^{N_{Top}} {\upomega}$ on the null hypersurface $\mathcal{P}_u^t$, which we have suppressed from the inequality .
[**NONCOMMUTATIVE DYNAMICS**]{}\ [*Dedicated to L.C. Biedenharn*]{} Jakub Rembieliński University of [Ł]{}ódź\ Department of Theoretical Physics\ ul. Pomorska 149/153\ 90–236 [Ł]{}ódź, Poland [**INTRODUCTION**]{} The first step in noncommutative dynamics was undertaken by L.C. Biedenharn,$^1$ who considered the quantum noncommutative harmonic oscillator. Recently Aref’eva and Volovich$^2$ published a paper devoted to a nonrelativistic dynamical system in a noncommutative phase-space framework. The noncommutative analog of the Galilean particle, as described by Aref’eva and Volovich$^2$, has two main features: – Consistency of the formalism demands noncommutativity of the inertial mass. This phenomenon also appears in Rembieliński$^3$ in the relativistic case. – There is no unitary time development of the system on the quantum level. In this paper we formulate unitary noncommutative $q$-dynamics on the quantum level. To do this, let us notice that a possible deformation of standard quantum mechanics lies in a change of the algebra of observables, with consequences at the level of dynamics. This is pictured in Fig. 1. [Fig. 1: a scheme showing the possible changes in the structure of QM, indicating which elements can possibly be changed, which remain unchanged, and which cannot be changed.] The main observation is the well-known statement that the probabilistic interpretation of quantum mechanics forces a unitary time evolution of the physical system irrespective of the choice of the algebra of observables (standard or $q$-deformed). As a consequence, the Heisenberg equations of motion hold in each case (in the Heisenberg picture). In the following we restrict ourselves to systems with one degree of freedom. [**ALGEBRA OF OBSERVABLES—STANDARD QM CASE**]{} The construction of quantum spaces by Manin$^4$ as a quotient of a free algebra by a two-sided ideal can also be applied to the Heisenberg algebra case.
In fact the Heisenberg algebra can be introduced as the quotient algebra $$\label{1} {\cal H}=A(I,x,p)/J(I,x,p)$$ where $A(I,x,p)$ is a unital associative algebra freely generated by $I$, $x$ and $p$, while $J(I,x,p)$ is a two-sided ideal in $A$ defined by the Heisenberg rule $$\label{2} xp=px+\i\hbar I.$$ There is an antilinear anti-involution (star operation) in $A$ defined on generators as below $$\label{3} x^*=x,\quad p^*=p.$$ From the above construction it follows that this anti-involution induces in $\cal H$ a $^*$-anti-automorphism defined again by the eqs. (\[3\]). Now, according to the result of Aref’eva & Volovich$^2$, confirmed in Rembieliński$^3$ for the relativistic case, some parameters of the considered dynamics, like the inertial mass, do not commute with the generators $x$ and $p$. This means that these parameters should be treated themselves as generators of the algebra. To be more concrete, let us consider a conservative system described by the Hamiltonian $$\label{4} H=p^2\kappa^2+V(x,\kappa,\lambda).$$ Here $\kappa$ and $\lambda$ are assumed to be additional hermitean generators of the extended algebra $\cal H'$ satisfying the following re-ordering rules $$\begin{aligned} xp&=&px+\i\hbar\lambda^2\nonumber\\ x\lambda&=&\lambda x\nonumber\\ p\lambda&=&\lambda p\label{5}\\ x\kappa&=&\kappa x\nonumber\\ p\kappa&=&\kappa p\nonumber\\ \kappa\lambda&=&\lambda\kappa.\nonumber\end{aligned}$$ We observe that the generators $\kappa$ and $\lambda$ belong to the center of $\cal H'$. Thus the irreducibility condition on the representation level implies that $\lambda$ and $\kappa$ are multiples of the identity $I$. Consequently they can be chosen as follows $$\begin{aligned} \lambda&=&I\nonumber\\ \kappa&=&\frac{1}{\sqrt{2\mu}}I\label{6}\end{aligned}$$ so the extended algebra $\cal H'$ reduces, via the corresponding homomorphism, to the Heisenberg algebra $\cal H$ defined by (\[1\]) and (\[2\]).
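The Heisenberg rule (2) and the resulting Heisenberg-picture equations of motion can be checked with a small word-rewriting computation; the following is an illustrative sketch of ours (function names are not from the paper), with $\hbar=\mu=1$ and the rule implemented as the rewrite $xp \to px + \i\hbar$:

```python
HBAR = 1.0
MU = 1.0

def normal_order(op):
    """Normal-order a linear combination of words in {x, p} using the
    Heisenberg rule (2), x p = p x + i*hbar, moving all p's to the left."""
    result, work = {}, dict(op)
    while work:
        word, coeff = work.popitem()
        for k in range(len(word) - 1):
            if word[k] == 'x' and word[k + 1] == 'p':
                swapped = word[:k] + ('p', 'x') + word[k + 2:]
                reduced = word[:k] + word[k + 2:]
                work[swapped] = work.get(swapped, 0) + coeff
                work[reduced] = work.get(reduced, 0) + coeff * 1j * HBAR
                break
        else:  # word is already normal-ordered
            result[word] = result.get(word, 0) + coeff
    return {w: c for w, c in result.items() if abs(c) > 1e-12}

def mul(a, b):
    """Product of two linear combinations of words (free concatenation)."""
    out = {}
    for wa, ca in a.items():
        for wb, cb in b.items():
            out[wa + wb] = out.get(wa + wb, 0) + ca * cb
    return out

def commutator(a, b):
    ab, ba = mul(a, b), mul(b, a)
    diff = {w: ab.get(w, 0) - ba.get(w, 0) for w in set(ab) | set(ba)}
    return normal_order(diff)

X = {('x',): 1.0}
P = {('p',): 1.0}

# [x, p] = i*hbar, i.e. the rule (2).
print(commutator(X, P))           # {(): 1j}

# x-dot = (i/hbar)[H, x] with H = p^2/(2*mu) gives p/mu, as in (8).
H = {('p', 'p'): 1.0 / (2 * MU)}
xdot = {w: (1j / HBAR) * c for w, c in commutator(H, X).items()}
print(xdot)                       # {('p',): (1+0j)}

# For V(x) = x^2/2: p-dot = (i/hbar)[V, p] = -x = -V'(x), as in (8).
V = {('x', 'x'): 0.5}
pdot = {w: (1j / HBAR) * c for w, c in commutator(V, P).items()}
print(pdot)                       # {('x',): (-1+0j)}
```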
Notice that $\cal H'$ can be interpreted as a quotient of a free unital, associative and involutive algebra $A(I,x,p,\kappa,\lambda)$ by the two-sided ideal $J(I,x,p,\kappa,\lambda)$ defined by eqs. (\[5\]), i.e. $$\label{7} {\cal H'}=A(I,x,p,\kappa,\lambda)/J(I,x,p,\kappa,\lambda)$$ It is remarkable that eqs. (\[5\]) are nothing but the Bethe Ansatz for $\cal H'$. Finally, the dynamics defined by the Hamiltonian $H$ and the Heisenberg equations leads to the Hamilton form of the equations of motion: $$\begin{aligned} \dot{\lambda}&=&0\nonumber\\ \dot{\kappa}&=&0\nonumber\\ \dot{x}&=&\frac{1}{\mu}p\label{8}\\ \dot{p}&=&-V'(x).\nonumber\end{aligned}$$ [**ALGEBRA OF OBSERVABLES—$q$-QM CASE**]{} Now, the formulation of standard quantum mechanics by means of the algebra $\cal H'$ suggests a natural $q$-deformation of the algebra of observables; namely, the $q$-deformed algebra ${\cal H}_q$ is the quotient algebra $$\label{9} {\cal H}_q=A(I,x,p,K,{\mit\Lambda})/J(I,x,p,K,{\mit\Lambda})$$ where the two-sided ideal $J$ is defined now by the following Bethe Ansatz re-ordering rules $$\begin{aligned} xp&=&q^2px+\i\hbar q{\mit\Lambda}^2\nonumber\\ x{\mit\Lambda}&=&\xi{\mit\Lambda}x\nonumber\\ p{\mit\Lambda}&=&\xi^{-1}{\mit\Lambda}p\label{10}\\ xK&=&\tau^2Kx\nonumber\\ pK&=&\varepsilon^2Kp\nonumber\\ {\mit\Lambda}K&=&\tau\varepsilon K{\mit\Lambda}\nonumber\end{aligned}$$ where $K$ and $\mit\Lambda$ are assumed to be invertible and $$\label{11} x^*=x,\quad p^*=p,\quad K^*=K,\quad {\mit\Lambda}^*={\mit\Lambda}.$$ Consistency of the system (\[10\]) requires $$\label{12} |q|=|\xi|=|\tau|=|\varepsilon|=1.$$ The corresponding conservative Hamiltonian has the form $$\label{13} H=p^2K^2+V(x,K,{\mit\Lambda}).$$ Now, similarly to the standard case, $\mit\Lambda$ and $K$ are assumed constant in time: $$\begin{aligned} \dot{\mit\Lambda}&=&\frac{\i}{\hbar}[H,{\mit\Lambda}]\equiv0\label{14}\\ \dot{K}&=&\frac{\i}{\hbar}[H,K]\equiv0\label{15}\end{aligned}$$ which implies, under the
assumption of the proper classical limit (\[5\]), $$\begin{aligned} \varepsilon&=&1\label{16}\\ \tau&=&\xi^{-1}\nonumber\end{aligned}$$ and by means of eqs. (\[16\]) $$\begin{aligned} V(x,K,{\mit\Lambda})&=&V(\xi x,\xi K,{\mit\Lambda})\label{17}\\ V(x,K,{\mit\Lambda})&=&V(\xi^2x,K,\xi{\mit\Lambda})\nonumber\end{aligned}$$ Furthermore, taking into account (\[16\]), $$\label{18} \dot{x}=\frac{\i}{\hbar}[H,x]=K^2[\frac{\i}{\hbar}{\textstyle (1-(\frac{q}{\xi})^4)p^2x+q\xi^{-4}((\frac{q}{\xi})^2+1){\mit\Lambda}^2p}],$$ and $$\begin{aligned} \lefteqn{\dot{p}=\frac{\i}{\hbar}[H,p]= -\frac{\i}{\hbar}p[V(x,K,{\mit\Lambda})-V(q^2x,K,\xi{\mit\Lambda})]+} \nonumber\\ && -\frac{q}{\textstyle(\frac{q}{\xi})^2-1}\frac{1}{x} [{\textstyle V((\frac{q}{\xi})^2x,\xi^{-2}K,\xi{\mit\Lambda})-V(x,\xi^{-2}K, \xi{\mit\Lambda})}]{\mit\Lambda}^2.\label{19}\end{aligned}$$ Notice that the last term is the quantum (Gauss-Jackson) gradient of $V(x,\xi^{-2}K,\xi{\mit\Lambda}){\mit\Lambda}^2$. Now, consistency of the Hamilton form of the equations of motion (\[14\]), (\[15\]), (\[18\]) and (\[19\]) with the algebra (\[10\]) and with the Leibniz rule confirms (\[16\])–(\[17\]) and implies $$V(x,K,{\mit\Lambda})=V(({\textstyle\frac{q}{\xi}})^2x,K,{\mit\Lambda}) \label{20}$$ Furthermore, eqs. (\[17\]) and (\[20\]) imply that in formula (\[19\]) the term linear in $p$ vanishes. Consequently $$\dot{p}=-q{\textstyle\partial^{(q/\xi)^2}_x} V(x,\xi^{-2}K,\xi{\mit\Lambda}){\mit\Lambda}^2 \label{21}$$ where $\partial^{(q/\xi)^2}_x$ is the Gauss-Jackson derivative as defined in eq. (\[19\]). Moreover, under the assumption of the proper classical limit, eq. (\[20\]) implies that $$\xi=q \label{22}$$ and $V$ depends only on the variable $xK^{-1}{\mit\Lambda}^{-2}$, or $V$ does not depend on $x$, in which case taking into account (\[17\]) we obtain $$V=0. \label{23}$$ Therefore we have two cases.
[**Case I**]{} $$H=p^2K^2$$ $$\label{24} \dot{x}=\left[\frac{\i}{\hbar}\left(\xi^4-q^4\right)p^2x+ q\left(\xi^2+q^2\right)p{\mit\Lambda}^2\right]K^2$$ $$\dot{p}=0$$ and $$\begin{aligned} xp&=&q^2px+\i\hbar q{\mit\Lambda}^2\nonumber\\ x{\mit\Lambda}&=&\xi{\mit\Lambda}x\nonumber\\ p{\mit\Lambda}&=&\xi^{-1}{\mit\Lambda}p\label{25}\\ xK&=&\xi^{-2}Kx\nonumber\\ pK&=&Kp\nonumber\\ {\mit\Lambda}K&=&\xi^{-1}K{\mit\Lambda}.\nonumber\end{aligned}$$ [**Case II**]{} $$H=p^2K^2+V\left((2m)^{-1/2}q^{-1}xK^{-1}{\mit\Lambda}^{-2}\right)$$ $$\dot{x}=2({\mit\Lambda}K)^{2}p \label{26}$$ $$\dot{p}=-q(\partial_xV){\mit\Lambda}^2.$$ and the algebra (\[25\]) holds under the condition (\[22\]), $\xi=q$. The meaning of the normalisation factor $\sqrt{2m}$, $m>0$, will become evident later. Notice that from eqs. (\[26\]) we can identify the inertial mass $M$ as $$M={\textstyle\frac{1}{2}}q(K{\mit\Lambda})^{-2}, \label{27}$$ so $$\begin{aligned} xM&=&q^2Mx\nonumber\\ pM&=&q^2Mp\label{28}\\ {\mit\Lambda}M&=&q^2M{\mit\Lambda}.\nonumber\end{aligned}$$ Now, let us consider the dynamical models of Aref’eva & Volovich$^2$. [**Free particle**]{} We choose the potential $V=0$, so $H=p^2K^2$ and consequently $$\dot{x}=q^{-1}M^{-1}p\label{29}$$ $$\dot{p}=0.$$ Notice that eqs. (\[29\]) do not contain $\mit\Lambda$. The equations (\[29\]) and the algebra (\[28\]) are the same as in Aref’eva & Volovich$^2$. However, it is impossible to fulfil the unitarity condition without the operator $\mit\Lambda$ (the rest of the algebra is defined by eqs. (\[25\]) and (\[26\])). Therefore the lack of unitarity in Aref’eva & Volovich$^2$ is caused by the choice ${\mit\Lambda}=I$, which contradicts the reordering rules (\[25\]). [**Harmonic oscillator**]{} We start with the Hamiltonian: $$H=p^2K^2+\frac{\omega^2}{2}(q^{-1}xK^{-1}{\mit\Lambda}^{-2})^2. \label{30}$$ Consequently $$\begin{aligned} \dot{x}&=&q^{-1}M^{-1}p\label{31}\\ \dot{p}&=&-\frac{\omega^2}{2}xM.\nonumber\end{aligned}$$ Eqs.
(\[31\]) still do not contain $\mit\Lambda$. The reason for the lack of unitarity in Aref’eva & Volovich$^2$ is the same as in the free-particle case. [**REPARAMETRISATION**]{} The dependence of the potential $V$ on the element $q^{-1}(2m)^{-1/2}xK^{-1}{\mit\Lambda}^{-2}$ and the form of the kinetic term in the Hamiltonian (\[26\]) suggest the following non-canonical reparametrisation of the $q$-QM dynamics in Case II: $$\begin{aligned} X&:=&q^{-1}(2m)^{-1/2}xK^{-1}{\mit\Lambda}^{-2}\label{32}\\ P&:=&(2m)^{1/2}pK.\nonumber\end{aligned}$$ By means of eqs. (\[32\]), (\[25\]) and (\[22\]) we obtain the following form of the reordering rules (in terms of $X$, $P$, $K$, and $\mit\Lambda$) $$\begin{aligned} XP&=&PX+\i\hbar I\nonumber\\ K{\mit\Lambda}&=&q{\mit\Lambda}K\label{33}\\ \null[{\mit\Lambda},X]=[{\mit\Lambda},P]&=&[K,X]=[K,P]=0.\nonumber\end{aligned}$$ Therefore $${\cal H}_q={\cal H}\oplus{\cal M}^2_q \label{34}$$ i.e. ${\cal H}_q$ is the direct sum of the Heisenberg algebra generated by $X$, $P$ and the real Manin plane ${\cal M}^2_q$ (generated by $K$ and ${\mit\Lambda}$). Moreover, the Hamilton equations take the standard form $$\begin{aligned} \dot{X}&=&\frac{1}{m}P\label{35}\\ \dot{P}&=&-V'(X)\nonumber\end{aligned}$$ with $$H=p^2K^2+V(q^{-1}(2m)^{-1/2}xK^{-1}{\mit\Lambda}^{-2})=\frac{P^2}{2m}+V(X). \label{36}$$ It is evident that the energy spectra of both dynamics (defined by $x$ and $p$ or by $X$ and $P$) are the same. However, the two theories are unitarily nonequivalent, so their physical content (identification of observables) is rather different. In Case I with $\xi=q$ an analogous reparametrisation is impossible. It is remarkable that a similar analysis, given in Brzeziński et al.$^5$ for a quantum particle on a $q$-circle, leads to quite analogous conclusions.
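As a consistency check, the canonical commutation rule in (\[33\]) can be verified directly from the definitions (\[32\]) and the rules (\[25\]) with $\xi=q$; the intermediate identities ${\mit\Lambda}^{-2}p=q^{-2}p{\mit\Lambda}^{-2}$, $K^{-1}p=pK^{-1}$, ${\mit\Lambda}^{-2}K=q^{2}K{\mit\Lambda}^{-2}$ and $Kx=q^{2}xK$ follow from (\[25\]) and the invertibility of $K$ and $\mit\Lambda$ (this short derivation is ours, not part of the original text): $$\begin{aligned} XP&=&q^{-1}xK^{-1}{\mit\Lambda}^{-2}pK=q^{-3}xp\,K^{-1}{\mit\Lambda}^{-2}K=q^{-1}xp\,{\mit\Lambda}^{-2}\nonumber\\ &=&q^{-1}(q^2px+\i\hbar q{\mit\Lambda}^2){\mit\Lambda}^{-2}=q\,px\,{\mit\Lambda}^{-2}+\i\hbar I,\nonumber\\ PX&=&q^{-1}pKxK^{-1}{\mit\Lambda}^{-2}=q\,px\,{\mit\Lambda}^{-2},\nonumber\end{aligned}$$ so indeed $XP-PX=\i\hbar I$, while $K$ and $\mit\Lambda$ commute with both $X$ and $P$ by a similar cancellation of the $q$-factors.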
[**DIFFERENTIAL CALCULUS**]{} Now, we observe that the Hamiltonian equations of motion (\[8\]) in standard quantum mechanics can be written as $$\begin{aligned} \d x&\equiv&\dot{x}\d t=\frac{1}{\mu}p\,\d t\label{37}\\ \d p&\equiv&\dot{p}\d t=-V'(x)\,\d t.\nonumber\end{aligned}$$ By means of the Heisenberg reordering rule (\[2\]) it is easy to calculate that $$\begin{aligned} x\,\d x&=&\d x\,(x+\i\hbar p^{-1})\nonumber\\ p\,\d x&=&\d x\,p\label{38}\\ x\,\d p&=&\d p\,x\nonumber\\ p\,\d p&=&\d p\,(p-\i\hbar V''(x)/V'(x))\nonumber\end{aligned}$$ or in a more symmetric form $$\begin{aligned} px\,\d x&=&\d x\,xp\nonumber\\ p\,\d x&=&\d x\,p\label{39}\\ x\,\d p&=&\d p\,x\nonumber\\ V'(x)p\,\d p&=&\d p\,pV'(x).\nonumber\end{aligned}$$ Assuming that $\d x$ and $\d p$ are obtained from $x$ and $p$, respectively, by application of the exterior differential $\d$ satisfying the usual conditions (linearity, nilpotency and the graded Leibniz rule), we can complete the differential algebra with a two-form sector. It is a matter of simple calculation to show that $$\begin{aligned} \d x\,\d p&=&-\d p\,\d x\nonumber\\ p^2(\d x)^2&=&(px-\i\hbar/2)\d x\,\d p\label{40}\\ (\d p)^2&=&-\frac{\i\hbar}{2}\d x\,\d p\,{\rm D}_x\left(\frac{V''(x)}{V'(x)}\right), \nonumber\end{aligned}$$ where ${\rm D}_x$ is the partial $\hbar$-derivative with respect to $x$, defined via $\d f(x,p)=\d x\,{\rm D}_xf+\d p\,{\rm D}_pf$. As a consequence $$(\d x)^3=(\d p)^3=\d x\,(\d p)^2=(\d x)^2\,\d p=0. \label{41}$$ Therefore we have defined a $Z_2$-graded $\cal H$-bimodule of dimension $1+2+1=4$, a quantum analogue of the de Rham complex. Now, the above quantum de Rham complex can be $q$-deformed according to the deformation of the Heisenberg algebra $\cal H$.
The resulting first order differential calculus reads $$\begin{aligned} px\,\d x&=&q^{-4}\d x\,xp\nonumber\\ x\,\d p&=&q^2\d p\,x\nonumber\\ \d x\,p&=&q^2p\,\d x\label{42}\\ \partial_xV(X)p\,\d p&=&q^{-4}\d p\,p\partial_xV(X)\nonumber\\ \d K&=&\d{\mit\Lambda}=0,\nonumber\end{aligned}$$ where $X$ is given in (\[32\]) while the derivative $\partial_x$ is with respect to $x$. It can be verified that the Hamilton equations (\[26\]) can be reconstructed from (\[42\]) by means of eqs. (\[22\]) and (\[25\]) under the substitution $$\begin{aligned} \d x&=&\dot{x}(x,p,K,{\mit\Lambda})\d t\label{43}\\ \d p&=&\dot{p}(x,p,K,{\mit\Lambda})\d t.\nonumber\end{aligned}$$ Therefore the quantum de Rham complex contains all the information about the algebra of observables and the dynamics of the theory. Recently Dimakis et al.$^6$ also applied differential geometric methods to the Heisenberg algebra, but from another point of view. [**ACKNOWLEDGMENTS**]{} I am grateful to Prof. H.D. Doebner, Prof. W. Tybor, Mr. T. Brzeziński and Mr. K. Smoliński for interesting discussions. This work is supported under Grant KBN 2 0218 91 01. [**REFERENCES**]{} 1\. L.C. Biedenharn, [*J. Phys. A: Math. Gen.*]{} 22:L873 (1989).\ 2. I.Ya. Aref’eva, I.V. Volovich, Quantum group particles and non-Archimedean geometry,\ CERN–TH–6137/91 (1991).\ 3. J. Rembieliński, [*Phys. Lett. B*]{} 287:145 (1992).\ 4. Yu.I. Manin, “Quantum Groups and Non-Commutative Geometry”, publication CRM, Montreal\ (1988).\ 5. T. Brzeziński, J. Rembieliński, K.A. Smoliński, Quantum particle on a quantum circle,\ KFT U[Ł]{} 92–10 (1992).\ 6. A. Dimakis, F. Müller-Hoissen, Quantum mechanics as a non-commutative symplectic geometry\ (1992).
--- abstract: 'One classical measure of the quality of an interpolating function is its Lipschitz constant. In this paper we consider interpolants with additional smoothness requirements, in particular that their derivatives be Lipschitz. We show that such a measure of quality can be easily computed, giving two algorithms, one optimal in the dimension of the data, the other optimal in the number of points to be interpolated.' author: - | Matthew J. Hirn[^1]\ Yale University\ Department of Mathematics\ P.O. Box 208282\ New Haven, Connecticut 06520-8283\ `matthew.hirn@yale.edu` bibliography: - '/Users/matthewhirn/Dropbox/Mathematics/Bibliography/MainBib.bib' title: Algorithms for computing the optimal Lipschitz constant of interpolants with Lipschitz derivative --- Introduction ============ For an arbitrary function $g: \R^d \rightarrow \R^n$, recall that the *Lipschitz constant* of $g$ is defined as: $${\mathrm{Lip}}(g) \triangleq \sup_{\substack{x,y \in \R^d \\ x \neq y}} \frac{|g(x) - g(y)|}{|x-y|},$$ where $|\cdot|$ is taken to be the standard Euclidean norm. Additionally, set $\nabla g: \R^d \rightarrow \R^d$ to be the [*gradient*]{} of $g$, where $\nabla g \triangleq (\frac{\partial g}{\partial x_1}, \ldots, \frac{\partial g}{\partial x_d})$. Given a finite set $E \subset \R^d$ with $\#(E) = N$ and a function $f: E \rightarrow \R$, it is well known that the function $f$ can be extended to a function $F: \R^d \rightarrow \R$ such that ${\mathrm{Lip}}(F) = {\mathrm{Lip}}(f)$ (see the work of Whitney [@whitney:analyticExtensions1934] and McShane [@mcshane:extensionRangeFcns1934] for the original result). Such a function $F$ is a [*minimal Lipschitz extension*]{} of $f$, since the Lipschitz constant of $F$ cannot be lowered while still interpolating the function $f$. Thus to compute ${\mathrm{Lip}}(F)$, we must compute ${\mathrm{Lip}}(f)$. This can clearly be accomplished in $O(N^2)$ operations.
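As a quick illustration, the $O(N^2)$ computation of ${\mathrm{Lip}}(f)$ is a double loop over unordered pairs (a plain-Python sketch; the sample points and values are invented for illustration):

```python
import math

def lipschitz_constant(E, f):
    """Brute-force Lip(f) for f given on a finite set E: the maximum of
    |f(x) - f(y)| / |x - y| over all O(N^2) unordered pairs."""
    return max(abs(f[x] - f[y]) / math.dist(x, y)
               for i, x in enumerate(E) for y in E[i + 1:])

E = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]   # sample points in R^2
f = {E[0]: 0.0, E[1]: 3.0, E[2]: 2.0}      # sample values
print(lipschitz_constant(E, f))            # 3.0, attained by the pair (0,0), (1,0)
```

Any minimal Lipschitz extension $F$ of this $f$ then satisfies ${\mathrm{Lip}}(F) = 3$.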
However, using the well separated pairs decomposition [@callahan:wspd1995], one can compute a near approximation of ${\mathrm{Lip}}(f)$ in only $O(N\log N)$ operations. In this paper, we address a related problem. We assume that along with the function values, we are also given information about the derivatives at each point in $E$. We wish to efficiently compute the minimal value of ${\mathrm{Lip}}(\nabla F)$, where $F: \R^d \rightarrow \R$ is a differentiable function with Lipschitz derivative that additionally interpolates the given functional and derivative information. Let $C^{1,1}(\R^d)$ denote the space of functions mapping $\R^d$ to $\R$ whose derivatives are Lipschitz: $$C^{1,1}(\R^d) \triangleq \{ g: \R^d \rightarrow \R : {\mathrm{Lip}}(\nabla g) < \infty \}.$$ Let $\PP$ denote the set of first order polynomials (i.e., affine functions) mapping $\R^d$ to $\R$. For $F \in C^{1,1}(\R^d)$, let $J_xF \in \PP$ denote the first order *jet* of $F$ centered at $x$, i.e., $J_xF(z) \triangleq F(x) + \nabla F(x) \cdot (z-x)$. A *Whitney 1-field* $\PP_E \triangleq \{P_x \in \PP: x \in E\}$ is a set of polynomials in $\PP$ indexed by the set $E \subset \R^d$. In this paper we address some of the computational aspects of the following problem: <span style="font-variant:small-caps;">Jet Interpolation Problem</span>: Suppose we are given a finite set $E \subset \R^d$ and a 1-field $\PP_E = \{P_x \in \PP : x \in E\}$. Compute a function $F \in C^{1,1}(\R^d)$ such that 1. $J_xF = P_x$ for all $x \in E$. 2. ${\mathrm{Lip}}(\nabla F)$ is minimal. There are two theoretical problems tied into the <span style="font-variant:small-caps;">Jet Interpolation Problem</span>. The first of these involves determining the optimal value of the semi-norm ${\mathrm{Lip}}(\nabla F)$.
It is, by definition, given by: $$\LL(\PP_E) \triangleq \inf \{ {\mathrm{Lip}}(\nabla F) : F \in C^{1,1}(\R^d) \text{ \& } J_xF = P_x \enspace \forall \, x \in E \}.$$ The second problem is to construct a function $F \in C^{1,1}(\R^d)$ that interpolates the 1-field $\PP_E$ such that ${\mathrm{Lip}}(\nabla F) = \LL(\PP_E)$. Remarkably, there are solutions to both of these problems. In [@legruyer:minLipschitzExt2009], Le Gruyer gives a closed formula for $\LL(\PP_E)$, while in [@wells:diffFuncLipDeriv1973] Wells gives a construction for the interpolant $F$. The two theoretical problems lead to two corresponding computational problems: (1) efficiently computing $\LL(\PP_E)$ and (2) efficiently computing the interpolant $F$. The theoretical results of Le Gruyer and Wells give a roadmap by which to accomplish these tasks. The main result of this paper is to give an algorithm that efficiently computes a number $M$ with the same order of magnitude as $\LL(\PP_E)$. In a follow-up paper, we shall address the problem of efficiently computing an interpolant $F \in C^{1,1}(\R^d)$ for $\PP_E$ such that ${\mathrm{Lip}}(\nabla F) = M$. Two numbers $X, Y$ that are dependent upon $E, \PP_E,$ and $d$ are said to have the same *order of magnitude* if there exist universal constants $c$ and $C$ such that $cY \leq X \leq CY$. By *compute* we mean develop an algorithm that can run on an idealized computer with standard von Neumann architecture, able to work with exact real numbers. We ignore roundoff, overflow, and underflow errors, and suppose that an exact real number can be stored at each memory address. Additionally, we suppose that it takes one machine operation to add, subtract, multiply, or divide two real numbers $x$ and $y$, or to compare them (i.e., decide whether $x < y$, $x > y$, or $x = y$). The *work* of an algorithm is the number of machine operations needed to carry it out, and the *storage* of an algorithm is the number of random access memory addresses required.
Throughout, we shall set $\#(E) = N$ to be the number of points in $E$. Some related work on the computation of interpolants in $C^m(\R^d)$ is given in [@fefferman:fittingDataI; @fefferman:fittingDataII; @fefferman:interLinProg2011; @fefferman:nearOptC2R2I]. In particular, this work is most closely related to [@fefferman:fittingDataI; @fefferman:fittingDataII], but by working in $C^{1,1}(\R^d)$, and using the semi-norm ${\mathrm{Lip}}(\nabla F)$ as opposed to some $C^m$ norm, we are able to achieve order of magnitude constants that do not depend on the dimension. Computing $\LL(\PP_E)$ ====================== In this section we present two algorithms for computing $\LL(\PP_E)$. One is an exact computation that is simply a corollary of the results found in [@legruyer:minLipschitzExt2009]; it runs in $O(d N^2)$ time and requires $O(dN)$ storage. The second, which requires more effort to develop, computes the order of magnitude of $\LL(\PP_E)$ in $O(d^{d/2}N\log N)$ time and requires $O(d^{d/2}N)$ storage. Closed formula for $\LL(\PP_E)$ and an efficient algorithm in the dimension $d$ ------------------------------------------------------------------------------- In [@legruyer:minLipschitzExt2009], Le Gruyer gives a closed formula for $\LL(\PP_E)$, which is immensely useful for its computation. We summarize the results in this section. For the 1-field $\PP_E = \{P_x \in \PP : x \in E\}$, define two functionals $A: E \times E \rightarrow [0,\infty]$ and $B: E \times E \rightarrow [0,\infty]$, $$A(x,y) \triangleq \frac{|P_x(x) - P_y(x) + P_x(y) - P_y(y)|}{|x-y|^2}, \qquad B(x,y) \triangleq \frac{|\nabla P_x - \nabla P_y|}{|x-y|}.$$ Note that $A$ was originally formulated differently in [@legruyer:minLipschitzExt2009]; we have simply rewritten it in a form more useful for our purposes. Additionally, recall that $\PP$ is the set of first order polynomials, so for any $P \in \PP$, $\nabla P$ is a constant vector in $\R^d$.
Using $A$ and $B$, define $\Gamma$ as: $$\Gamma(\PP_E) \triangleq \max_{\substack{x,y \in E \\ x \neq y}} \sqrt{A(x,y)^2 + B(x,y)^2} + A(x,y).$$ We then have the following theorem: \[thm: le gruyer\] For any finite $E \subset \R^d$ and any 1-field $\PP_E$, $$\LL(\PP_E) = \Gamma(\PP_E).$$ Thus the functional $\Gamma(\PP_E)$ is the closed form of $\LL(\PP_E)$. If the number of data points $N$ is reasonable, then it yields an obvious algorithm for computing $\LL(\PP_E)$ by simply evaluating $A(x,y)$ and $B(x,y)$ for all unique pairs $x,y \in E$ and computing $\Gamma(\PP_E)$. We state this as a corollary. \[cor: le gruyer alg\] There is an algorithm, whose inputs are the set $E$ and the 1-field $\PP_E$, that computes $\LL(\PP_E)$ exactly. It requires $O(dN^2)$ work and $O(dN)$ storage. The obvious benefit of this algorithm is that it computes $\LL(\PP_E)$ exactly. Additionally, the storage is asymptotically optimal both in $d$ and in $N$, and the work is asymptotically optimal in $d$. On the other hand, if the number of points $N$ is large, then the $O(dN^2)$ work is at best impractical, and at worst impossible. In order to handle this situation, we turn to the well separated pairs decomposition. When we say that we input $\PP_E$ into the computer, what we mean is that we input $P_x(x) \in \R$ and $\nabla P_x \in \R^d$ for each $x \in E$. In fact Theorem \[thm: le gruyer\] holds not only for $\R^d$, but for any Hilbert space with real valued inner product. Consequently, Corollary \[cor: le gruyer alg\] can be applied to work in any Hilbert space (replacing the Euclidean norm with the Hilbert space norm), including infinite dimensional Hilbert spaces, so long as one has a method (or “black box”) by which to compute inner products. This is often the case when the set $E \subset \R^d$ but one utilizes a kernel function $k: E \times E \rightarrow \R$ such that $k(x,y)$ is the inner product in a Hilbert space $\HH$ after some implicit mapping $\varphi: E \rightarrow \HH$. 
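Concretely, the $O(dN^2)$ algorithm of Corollary \[cor: le gruyer alg\] is a double loop applying the closed formula pairwise; a sketch in plain Python (the container conventions are ours: each jet is stored as the pair $(P_x(x), \nabla P_x)$):

```python
import math

def eval_jet(jet, base, z):
    """Evaluate P(z) = f + g.(z - base) for the jet (f, g) centered at base."""
    f, g = jet
    return f + sum(gi * (zi - bi) for gi, zi, bi in zip(g, z, base))

def le_gruyer(E, jets):
    """Gamma(P_E) = max over pairs of sqrt(A^2 + B^2) + A; O(d N^2) work."""
    best = 0.0
    for i, x in enumerate(E):
        for y in E[i + 1:]:
            d = math.dist(x, y)
            A = abs(jets[x][0] - eval_jet(jets[y], y, x)
                    + eval_jet(jets[x], x, y) - jets[y][0]) / d**2
            B = math.dist(jets[x][1], jets[y][1]) / d
            best = max(best, math.hypot(A, B) + A)
    return best

# Sanity check: the jets of F(z) = |z|^2 / 2 satisfy Lip(grad F) = 1, and
# every pair contributes A = 0, B = 1, so le_gruyer returns exactly 1.
E = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
jets = {x: ((x[0]**2 + x[1]**2) / 2, x) for x in E}
print(le_gruyer(E, jets))  # 1.0
```

By Theorem \[thm: le gruyer\], the returned value is exactly $\LL(\PP_E)$.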
Well separated pairs decomposition {#sec: wspd} ---------------------------------- The well separated pairs decomposition was first introduced by Callahan and Kosaraju in [@callahan:wspd1995]; we shall make use of a modified version that was described in detail in [@fefferman:fittingDataI]. First, recall the standard definitions of the *diameter* of a set and the *distance* between two sets. Let $S, T \subset \R^d$, $${\mathrm{diam}}(S) \triangleq \sup_{\substack{x,y \in S \\ x \neq y}} |x-y|, \qquad {\mathrm{dist}}(S,T) \triangleq \inf_{\substack{x \in S \\ y \in T}} |x-y|.$$ Let $\varepsilon > 0$; two sets $S, T \subset \R^d$ are $\varepsilon$*-separated* if $$\max \{{\mathrm{diam}}(S), {\mathrm{diam}}(T)\} < \varepsilon {\mathrm{dist}}(S,T).$$ We follow the construction detailed by Fefferman and Klartag in [@fefferman:fittingDataI]. Let ${\mathcal{T}}$ be a collection of subsets of $E$. For any $\Lambda \subset {\mathcal{T}}$, set $$\cup \Lambda \triangleq \bigcup_{S \in \Lambda} S = \{x : x \in S \text{ for some } S \in \Lambda \}.$$ Let ${\mathcal{W}}$ be a set of pairs $(\Lambda_1, \Lambda_2)$ where $\Lambda_1, \Lambda_2 \subset {\mathcal{T}}$. For any $\varepsilon > 0$, the pair $({\mathcal{T}}, {\mathcal{W}})$ is an *$\varepsilon$-well separated pairs decomposition* or *$\varepsilon$-WSPD* for short if the following properties hold: 1. \[item: F-K 1\] $\bigcup_{(\Lambda_1,\Lambda_2) \in {\mathcal{W}}} \cup \Lambda_1 \times \cup \Lambda_2 = \{ (x,y) \in E \times E : x \neq y \}$. 2. \[item: F-K 2\] If $(\Lambda_1, \Lambda_2), (\Lambda_1', \Lambda_2') \in {\mathcal{W}}$ are distinct pairs, then $(\cup \Lambda_1 \times \cup \Lambda_2) \cap (\cup \Lambda_1' \times \cup \Lambda_2') = \emptyset$. 3. $\cup \Lambda_1$ and $\cup \Lambda_2$ are $\varepsilon$-separated for any $(\Lambda_1, \Lambda_2) \in {\mathcal{W}}$. 4. $\#({\mathcal{T}}) < C(\varepsilon, d) N$ and $\#({\mathcal{W}}) < C(\varepsilon, d) N$. 
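For a single pair of sets, the separation condition can be checked directly from the definitions of ${\mathrm{diam}}$ and ${\mathrm{dist}}$ (a brute-force plain-Python sketch, illustrative only; building the full decomposition efficiently is the subject of Theorem \[thm: F-K WSPD\] below):

```python
import math
from itertools import combinations

def diam(S):
    """diam(S): the largest pairwise distance (0 for singletons)."""
    return max((math.dist(x, y) for x, y in combinations(S, 2)), default=0.0)

def set_dist(S, T):
    """dist(S, T): the smallest distance between the two sets."""
    return min(math.dist(x, y) for x in S for y in T)

def eps_separated(S, T, eps):
    """Check  max{diam(S), diam(T)} < eps * dist(S, T)."""
    return max(diam(S), diam(T)) < eps * set_dist(S, T)

S = [(0.0, 0.0), (0.1, 0.0)]
T = [(5.0, 0.0), (5.0, 0.1)]
print(eps_separated(S, T, 0.5))   # True: diameters 0.1, distance 4.9
```

Intuitively, every point of $\cup\Lambda_1$ is then within a small relative error of the representative $x_{\Lambda_1}$, which is what the approximation algorithm below exploits.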
As shown in [@fefferman:fittingDataI], there is a data structure representing $({\mathcal{T}},{\mathcal{W}})$ that satisfies the following additional properties as well: 1. The amount of storage to hold the data structure is $O((\sqrt{d}/\varepsilon)^dN)$. 2. \[item: F-K 6\] The following tasks require at most $O((\sqrt{d}/\varepsilon)^dN\log N)$ work and $O((\sqrt{d}/\varepsilon)^dN)$ storage: 1. Go over all $S \in {\mathcal{T}}$, and for each $S$ produce a list of elements in $S$. 2. Go over all $(\Lambda_1,\Lambda_2) \in {\mathcal{W}}$, and for each $(\Lambda_1, \Lambda_2)$ produce the elements (in ${\mathcal{T}}$) of $\Lambda_1$ and $\Lambda_2$. 3. Go over all $S \in {\mathcal{T}}$, and for each $S$ produce the list of all $(\Lambda_1, \Lambda_2) \in {\mathcal{W}}$ such that $S \in \Lambda_1$. 4. Go over all $x \in E$, and for each $x \in E$ produce a list of $S \in {\mathcal{T}}$ such that $x \in S$. As a result of property \[item: F-K 6\], it follows that the following properties also hold: 1. \[item: F-K 7\] For $C(\varepsilon,d) = O((\sqrt{d}/\varepsilon)^d)$, 1. $\sum_{(\Lambda_1, \Lambda_2) \in {\mathcal{W}}} (\#(\Lambda_1) + \#(\Lambda_2)) < C(\varepsilon,d) N\log N$. 2. $\sum_{S \in {\mathcal{T}}} \#(S) < C(\varepsilon,d) N\log N$. \[thm: F-K WSPD\] There is an algorithm, whose inputs are the parameter $\varepsilon > 0$ and a subset $E \subset \R^d$ with $\#(E) = N$, that outputs an $\varepsilon$-WSPD $({\mathcal{T}},{\mathcal{W}})$ of $E$ such that properties \[item: F-K 1\],$\ldots$,\[item: F-K 7\] hold. The algorithm requires $O((\sqrt{d}/\varepsilon)^dN\log N)$ work and $O((\sqrt{d}/\varepsilon)^dN)$ storage. The algorithm presented in [@fefferman:fittingDataI] is built upon the well separated pairs decomposition algorithm developed by Callahan and Kosaraju in [@callahan:wspd1995]. In fact, ${\mathcal{T}}$ is a completely balanced binary tree based on the in-order relation derived from the fair split tree presented in [@callahan:wspd1995].
In particular, $\#({\mathcal{T}}) < 2N$ and the height of the tree is bounded by $\lceil \log_2 N \rceil + 1$. The list ${\mathcal{W}}$ has a one-to-one correspondence with the well separated pair list presented in [@callahan:wspd1995], hence $\#({\mathcal{W}}) = O((\sqrt{d}/\varepsilon)^dN)$. Efficient computation of $\LL(\PP_E)$ in the number of points $N$ ----------------------------------------------------------------- In this section we prove the following theorem: \[thm: efficient L(PE)\] There is an algorithm, whose inputs are the set $E$ and the 1-field $\PP_E$, that computes the order of magnitude of $\LL(\PP_E)$. It requires $O(d^{d/2}N\log N)$ work and $O(d^{d/2}N)$ storage. The plan for proving Theorem \[thm: efficient L(PE)\] is the following. First we view Le Gruyer’s $\Gamma$ functional from the perspective of the classical Whitney conditions. Once we formalize this concept, we can use the $\varepsilon$-WSPD of Fefferman and Klartag, since they built it to handle interpolants in $C^m(\R^d)$ satisfying Whitney conditions. Concerning the first part, consider the original Whitney conditions for $C^{1,1}(\R^d)$: 1. \[W0\] $|(P_x-P_y)(x)| \leq M|x-y|^2$ for all $x,y \in E$. 2. \[W1\] $|\frac{\partial}{\partial x_i}(P_x - P_y)(x)| \leq M|x-y|$ for all $x,y \in E$, $i = 1, \ldots, d$. Whitney’s extension theorem states that if \[W0\] and \[W1\] hold, then there exists an $F \in C^{1,1}(\R^d)$ that interpolates $\PP_E$ such that ${\mathrm{Lip}}(\nabla F) \leq C(d)M$. The main contribution of [@legruyer:minLipschitzExt2009] is to refine \[W0\] and \[W1\] such that $C(d) = 1$; this is $\Gamma$. Indeed, the functional $A$ corresponds to \[W0\], the functional $B$ corresponds to \[W1\], and $\Gamma$ pieces them together. Note there are some small, but significant differences. In particular, the functional $A$ is essentially a symmetric version of \[W0\]; using one is equivalent to using the other, up to a factor of two.
The functional $B$, though, merges all of the partial derivative information into one condition, unlike \[W1\]. Thus they are equivalent only up to a factor of $d$, the dimension of the Euclidean space we are working in. For the algorithm in this section, we will use the functional $B$ since it is both simpler and more useful than \[W1\], but use \[W0\] instead of $A$. Additionally, we will treat them separately instead of together as in $\Gamma$; Lemma \[lem: first gamma estimate\] contains the details. For the 1-field $\PP_E$, define the functional $\widetilde{A}: E \times E \rightarrow [0,\infty]$ (which is essentially the same as \[W0\]), $$\widetilde{A}(x,y) \triangleq \frac{|P_x(x) - P_y(x)|}{|x-y|^2}.$$ Additionally, set $$\widetilde{\Gamma}(\PP_E) \triangleq \max_{\substack{x,y \in E \\ x \neq y}} \Big\{ \max\{\widetilde{A}(x,y), B(x,y)\} \Big\}.$$ The functional $\widetilde{\Gamma}(\PP_E)$ is more easily approximated via the $\varepsilon$-WSPD than $\Gamma(\PP_E)$. Furthermore, as the following Lemma shows, they have the same order of magnitude. \[lem: first gamma estimate\] For any finite $E \subset \R^d$ and any 1-field $\PP_E$, $$\widetilde{\Gamma}(\PP_E) \leq \Gamma(\PP_E) \leq 2(1+ \sqrt{2}) \widetilde{\Gamma}(\PP_E).$$ To bridge the gap between $\Gamma(\PP_E)$ and $\widetilde{\Gamma}(\PP_E)$, we first consider $$\Gamma'(\PP_E) \triangleq \max_{\substack{x,y \in E \\ x \neq y}} \Big\{ \max \{A(x,y), B(x,y)\} \Big\}.$$ Clearly $\Gamma'(\PP_E) \leq \Gamma(\PP_E)$.
Furthermore, $$\begin{aligned} \Gamma(\PP_E) &= \max_{\substack{x,y \in E \\ x \neq y}} \sqrt{A(x,y)^2 + B(x,y)^2} + A(x,y) \\ &\leq \sqrt{\Gamma'(\PP_E)^2 + \Gamma'(\PP_E)^2} + \Gamma'(\PP_E) \\ &\leq (1 + \sqrt{2})\Gamma'(\PP_E).\end{aligned}$$ Thus $\Gamma(\PP_E)$ and $\Gamma'(\PP_E)$ have the same order of magnitude, and in particular, $$\label{eqn: Gamma and Gamma'} \Gamma'(\PP_E) \leq \Gamma(\PP_E) \leq (1+\sqrt{2})\Gamma'(\PP_E).$$ Now let us consider $\Gamma'(\PP_E)$ and $\widetilde{\Gamma}(\PP_E)$ (which means considering $A(x,y)$ and $\widetilde{A}(x,y)$). First, $$\begin{aligned} |P_x(x) - P_y(x) + P_x(y) - P_y(y)| &\leq |P_x(x) - P_y(x)| + |P_x(y) - P_y(y)| \\ &\leq 2\widetilde{\Gamma}(\PP_E) |x-y|^2,\end{aligned}$$ and so, $\Gamma'(\PP_E) \leq 2\widetilde{\Gamma}(\PP_E)$. For a reverse inequality, we note, $$|P_x(x) - P_y(x) + P_x(y) - P_y(y)| = |2(P_x(x) - P_y(x)) + (\nabla P_y - \nabla P_x) \cdot (x-y)|.$$ Thus, $$\begin{aligned} 2|P_x(x) - P_y(x)| &\leq \Gamma'(\PP_E)|x-y|^2 + |(\nabla P_y - \nabla P_x) \cdot (x-y)| \\ &\leq 2\Gamma'(\PP_E) |x-y|^2,\end{aligned}$$ which yields $\widetilde{\Gamma}(\PP_E) \leq \Gamma'(\PP_E)$. Combining the two inequalities, $$\label{eqn: Gamma' and Gammatilde} \widetilde{\Gamma}(\PP_E) \leq \Gamma'(\PP_E) \leq 2\widetilde{\Gamma}(\PP_E).$$ Putting (\[eqn: Gamma and Gamma'\]) and (\[eqn: Gamma' and Gammatilde\]) together completes the proof. We will also need the following simple Lemmas. \[lem: wspd estimates\] Let $({\mathcal{T}},{\mathcal{W}})$ be a $\varepsilon$-WSPD, $(\Lambda_1, \Lambda_2) \in {\mathcal{W}}$, $x,x',x'' \in \cup \Lambda_1$, and $y,y' \in \cup \Lambda_2$. Then, $$\begin{aligned} &|x'-x''| \leq \varepsilon |x-y| \\ &|x'-y'| \leq (1+2\varepsilon) |x-y|.\end{aligned}$$ Use the definition of $\varepsilon$-separated.
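Both quantities in Lemma \[lem: first gamma estimate\] are computable by the same pairwise sweep, so the inequalities can be spot-checked numerically; in the plain-Python sketch below (random illustrative data; conventions ours, with jets stored as $(P_x(x), \nabla P_x)$), note that $\widetilde{A}$, unlike $A$ and $B$, is not symmetric, so both orderings of each pair are inspected:

```python
import math
import random

def pair_quantities(x, jx, y, jy):
    """Return (A, Atilde, B) for one unordered pair, with Atilde
    symmetrized as max{Atilde(x,y), Atilde(y,x)}."""
    (fx, gx), (fy, gy) = jx, jy
    d = math.dist(x, y)
    Py_x = fy + sum(g * (a - b) for g, a, b in zip(gy, x, y))  # P_y(x)
    Px_y = fx + sum(g * (b - a) for g, a, b in zip(gx, x, y))  # P_x(y)
    A = abs(fx - Py_x + Px_y - fy) / d**2
    At = max(abs(fx - Py_x), abs(fy - Px_y)) / d**2
    B = math.dist(gx, gy) / d
    return A, At, B

random.seed(0)
E = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(25)]
jets = {x: (random.uniform(-1, 1),
            tuple(random.uniform(-1, 1) for _ in range(3))) for x in E}

quantities = [pair_quantities(x, jets[x], y, jets[y])
              for i, x in enumerate(E) for y in E[i + 1:]]
Gamma = max(math.hypot(A, B) + A for A, At, B in quantities)
Gamma_tilde = max(max(At, B) for A, At, B in quantities)
assert Gamma_tilde <= Gamma <= 2 * (1 + math.sqrt(2)) * Gamma_tilde
```

The assertion holds for any 1-field, since the Lemma's inequalities are in fact valid pair by pair.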
\[lem: polynomial shift\] Suppose that $P \in \PP$, $x \in \R^d$, $\delta > 0$, and $M > 0$ satisfy $$\begin{aligned} &|P(x)| \leq M \delta^2 \\ &|\nabla P| \leq M \delta.\end{aligned}$$ Then, for any $y \in \R^d$, $$|P(y)| \leq M(\delta + |x-y|)^2.$$ Using Taylor’s Theorem, $$\begin{aligned} |P(y)| &= |P(x) + \nabla P \cdot (y-x)| \\ &\leq |P(x)| + |\nabla P| |x-y| \\ &\leq M\delta^2 + M\delta |x-y| \\ &\leq M(\delta + |x-y|)^2.\end{aligned}$$ In order to simplify notation, let $\widetilde{\Gamma}(x,y)$ denote the quantity maximized in the definition of $\widetilde{\Gamma}(\PP_E)$, i.e., $$\widetilde{\Gamma}(x,y) = \max\{\widetilde{A}(x,y), B(x,y)\}.$$ Additionally, set $$\widetilde{A}(\PP_E) \triangleq \max_{\substack{x,y \in E \\ x \neq y}} \widetilde{A}(x,y), \qquad B(\PP_E) \triangleq \max_{\substack{x,y \in E \\ x \neq y}} B(x,y).$$ Our algorithm works as follows. For now, let $\varepsilon > 0$ be arbitrary and invoke the algorithm from Theorem \[thm: F-K WSPD\]. This gives us an $\varepsilon$-WSPD $({\mathcal{T}},{\mathcal{W}})$ in $O((\sqrt{d}/\varepsilon)^dN\log N)$ work and using $O((\sqrt{d}/\varepsilon)^dN)$ storage. For each $(\Lambda_1, \Lambda_2) \in {\mathcal{W}}$, pick at random a representative $(x_{\Lambda_1}, x_{\Lambda_2}) \in \cup \Lambda_1 \times \cup \Lambda_2$. Additionally, for each $S \in {\mathcal{T}}$, pick at random a representative $x_S \in S$.
Now compute the following: $$\begin{aligned} \widetilde{\Gamma}_1 &\triangleq \max_{(\Lambda_1, \Lambda_2) \in {\mathcal{W}}} \widetilde{\Gamma}(x_{\Lambda_1}, x_{\Lambda_2}) \\ \widetilde{\Gamma}_2 &\triangleq \max_{(\Lambda_1, \Lambda_2) \in {\mathcal{W}}} \, \max_{i=1,2} \, \max_{S \in \Lambda_i} \widetilde{\Gamma}(x_{\Lambda_i}, x_S) \\ \widetilde{\Gamma}_3 &\triangleq \max_{S \in {\mathcal{T}}} \max_{x \in S} \widetilde{\Gamma}(x,x_S) \\ \widetilde{\Gamma}(\PP_E,{\mathcal{T}},{\mathcal{W}}) &\triangleq \max\{\widetilde{\Gamma}_1, \widetilde{\Gamma}_2, \widetilde{\Gamma}_3\}.\end{aligned}$$ Define $\widetilde{A}(\PP_E, {\mathcal{T}}, {\mathcal{W}})$ and $B(\PP_E, {\mathcal{T}}, {\mathcal{W}})$ analogously. Using properties \[item: F-K 6\] and \[item: F-K 7\] from Section \[sec: wspd\], we see that computing $\widetilde{\Gamma}(\PP_E,{\mathcal{T}},{\mathcal{W}})$ requires $O((\sqrt{d}/\varepsilon)^dN\log N)$ work and $O((\sqrt{d}/\varepsilon)^dN)$ storage. Now we show that $\widetilde{\Gamma}(\PP_E,{\mathcal{T}},{\mathcal{W}})$ has the same order of magnitude as $\widetilde{\Gamma}(\PP_E)$. Clearly, $\widetilde{\Gamma}(\PP_E, {\mathcal{T}}, {\mathcal{W}}) \leq \widetilde{\Gamma}(\PP_E)$. For the other inequality, we break $\widetilde{\Gamma}$ into its two parts, noting that $\widetilde{\Gamma}(\PP_E) = \max\{\widetilde{A}(\PP_E), B(\PP_E)\}$ and $\widetilde{\Gamma}(\PP_E, {\mathcal{T}}, {\mathcal{W}}) = \max\{\widetilde{A}(\PP_E, {\mathcal{T}}, {\mathcal{W}}), B(\PP_E, {\mathcal{T}}, {\mathcal{W}})\}$. Thus we can work with $\widetilde{A}$ and $B$ separately. The functional $B$ is simply the Lipschitz constant of the mapping $x \mapsto \nabla P_x$. It is known that $$\label{eqn: B approx} B(\PP_E) \leq (1+C\varepsilon) B(\PP_E, {\mathcal{T}}, {\mathcal{W}}).$$ See for example Proposition 2 of [@fefferman:smthIntEffAlg2013]. Using the particular construction in this proof, we can take $C = 6$. We now turn to $\widetilde{A}$. Let $x,y \in E$, $x \neq y$. 
By properties \[item: F-K 1\] and \[item: F-K 2\] of Section \[sec: wspd\], there is a unique pair $(\Lambda_1, \Lambda_2) \in {\mathcal{W}}$ such that $(x,y) \in \cup \Lambda_1 \times \cup \Lambda_2$. Additionally, by the definition of $({\mathcal{T}},{\mathcal{W}})$, there exists a set $S \in \Lambda_1$ such that $x \in S$ and a set $T \in \Lambda_2$ such that $y \in T$. Let $M = \widetilde{\Gamma}(\PP_E,{\mathcal{T}},{\mathcal{W}})$. We then have, using the triangle inequality, the definition of $\widetilde{\Gamma}_3$, and Lemma \[lem: wspd estimates\], $$\begin{aligned} |P_x(x) - P_y(x)| &\leq |P_x(x) - P_{x_S}(x)| + |P_{x_S}(x) - P_y(x)| \nonumber \\ &\leq \widetilde{\Gamma}_3|x-x_S|^2 + |P_{x_S}(x) - P_y(x)| \nonumber \\ &\leq \varepsilon M |x-y|^2 + |P_{x_S}(x) - P_y(x)|. \label{eqn: A estimate 1}\end{aligned}$$ Continuing with the second term of the right hand side of (\[eqn: A estimate 1\]), we use the triangle inequality, Lemma \[lem: polynomial shift\], the definition of $\widetilde{\Gamma}_2$, and Lemma \[lem: wspd estimates\], $$\begin{aligned} |P_{x_S}(x) - P_y(x)| &\leq |P_{x_S}(x) - P_{x_{\Lambda_1}}(x)| + |P_{x_{\Lambda_1}}(x) - P_y(x)| \nonumber \\ &\leq \widetilde{\Gamma}_2 (|x_S - x_{\Lambda_1}| + |x - x_{\Lambda_1}|)^2 + |P_{x_{\Lambda_1}}(x) - P_y(x)| \nonumber \\ &\leq 4\varepsilon^2 M |x-y|^2 + |P_{x_{\Lambda_1}}(x) - P_y(x)|. \label{eqn: A estimate 2}\end{aligned}$$ Continuing with the second term of the right hand side of (\[eqn: A estimate 2\]), we use the triangle inequality, Lemma \[lem: polynomial shift\], the definition of $\widetilde{\Gamma}_3$, and Lemma \[lem: wspd estimates\], $$\begin{aligned} |P_{x_{\Lambda_1}}(x) - P_y(x)| &\leq |P_y(x) - P_{x_T}(x)| + |P_{x_T}(x) - P_{x_{\Lambda_1}}(x)| \nonumber \\ &\leq \widetilde{\Gamma}_3 (|y-x_T| + |x-y|)^2 + |P_{x_T}(x) - P_{x_{\Lambda_1}}(x)| \nonumber \\ &\leq (1+\varepsilon)^2 M |x-y|^2 + |P_{x_T}(x) - P_{x_{\Lambda_1}}(x)|.
\label{eqn: A estimate 3}\end{aligned}$$ Continuing with the second term of the right hand side of (\[eqn: A estimate 3\]), we use the triangle inequality, Lemma \[lem: polynomial shift\], the definitions of $\widetilde{\Gamma}_1$ and $\widetilde{\Gamma}_2$, as well as Lemma \[lem: wspd estimates\], $$\begin{aligned} |P_{x_T}(x) - P_{x_{\Lambda_1}}(x)| &\leq |P_{x_T}(x) - P_{x_{\Lambda_2}}(x)| + |P_{x_{\Lambda_2}}(x) - P_{x_{\Lambda_1}}(x)| \nonumber \\ &\leq \widetilde{\Gamma}_2(|x_T-x_{\Lambda_2}| + |x-x_{\Lambda_2}|)^2 + \widetilde{\Gamma}_1(|x_{\Lambda_2}-x_{\Lambda_1}| + |x-x_{\Lambda_1}|)^2 \nonumber \\ &\leq 2(1+3\varepsilon)^2 M |x-y|^2. \label{eqn: A estimate 4}\end{aligned}$$ Putting (\[eqn: A estimate 1\]), (\[eqn: A estimate 2\]), (\[eqn: A estimate 3\]), and (\[eqn: A estimate 4\]) together, we get: $$\label{eqn: A estimate 5} |P_x(x) - P_y(x)| \leq 3M |x-y|^2 + 23\varepsilon M |x-y|^2.$$ Taking $\varepsilon = 1/2$ gives the desired bounds on the work and storage, and in addition yields $$\widetilde{\Gamma}(\PP_E) \leq C \widetilde{\Gamma}(\PP_E, {\mathcal{T}}, {\mathcal{W}}).$$ The proof is completed by applying Lemma \[lem: first gamma estimate\]. Examining (\[eqn: B approx\]) and (\[eqn: A estimate 5\]), we see that $\widetilde{\Gamma}(\PP_E)$ and $\widetilde{\Gamma}(\PP_E,{\mathcal{T}},{\mathcal{W}})$ have the same order of magnitude with constants $c=1$ and $C = C(\varepsilon) = 3 + 23\varepsilon$. Thus, $$\widetilde{\Gamma}(\PP_E,{\mathcal{T}},{\mathcal{W}}) \leq \widetilde{\Gamma}(\PP_E) \leq C(\varepsilon) \widetilde{\Gamma}(\PP_E,{\mathcal{T}},{\mathcal{W}}).$$ Recalling Lemma \[lem: first gamma estimate\], we then have $$\widetilde{\Gamma}(\PP_E,{\mathcal{T}},{\mathcal{W}}) \leq \Gamma(\PP_E) \leq C'(\varepsilon) \widetilde{\Gamma}(\PP_E,{\mathcal{T}},{\mathcal{W}}),$$ where $C'(\varepsilon) = 2(1+\sqrt{2})C(\varepsilon) = 2(1+\sqrt{2})(3+23\varepsilon)$. Therefore, as $\varepsilon \rightarrow 0$, $C'(\varepsilon) \rightarrow 6(1+\sqrt{2})$.
Acknowledgements ================ The author would like to thank Charles Fefferman for introducing him to the problem and Hariharan Narayanan for numerous insightful conversations. [^1]: www.math.yale.edu/$\sim$mh644
--- abstract: 'We have constructed a quantum field theory in a finite box, with periodic boundary conditions, using the hypothesis that particles living in a finite box are created and/or annihilated by the creation and/or annihilation operators, respectively, of a quantum harmonic oscillator on a circle. An expression for the effective coupling constant is obtained showing explicitly its dependence on the dimension of the box.' author: - | V. B. Bezerra\ Universidade Federal da Paraiba,\ Depto. de Física\ Caixa Postal 5008, 58051-970 - João Pessoa, PB, Brazil\ e-mail: valdir@fisica.ufpb.br\ and\ M. A. Rego-Monteiro\ Centro Brasileiro de Pesquisas Físicas,\ Rua Xavier Sigaud 150, 22290-180 - Rio de Janeiro, RJ, Brazil\ e-mail: regomont@cbpf.br title: | [**Some boundary effects in quantum\ field theory [^1]**]{} --- [**Keywords:**]{} Deformed Heisenberg algebra; quantum harmonic oscillator on a circle;\ field theory in a compact space; variation of the coupling constant.\ Introduction ============ The fact that the energy eigenvalues of the quantum harmonic oscillator are given by $E_n = (n+1/2) \hbar w$ allows us to interpret the successive energy levels as being obtained by the creation of a quantum particle of frequency $w$. This interpretation of the energy spectrum of the quantum harmonic oscillator was successfully used in the second quantization formalism. In short, one could say that Planck’s hypothesis is realized in the second quantization formalism by the use of creation and annihilation operators of the quantum harmonic oscillator system $^{\cite{tdlee}}$. This realization is obtained for the quantum harmonic oscillator defined on an infinite line. Let us consider a situation in which we want to describe the interaction of quantum particles living in a finite box with boundary conditions, for example, using the second quantization formalism.
In this context, it seems natural to assume the statement concerning the connection between Planck’s hypothesis and the energy levels of a quantum harmonic oscillator in this finite space and therefore analyze the consequences of this assumption in the construction of a quantum field theory (QFT) in a compact manifold. In [@circulo] a discussion of a quantum harmonic oscillator in a circle and its associated Heisenberg algebra was presented. It was found that Mathieu’s equation can satisfactorily describe the system and that the creation and annihilation operators of the system satisfy a sort of deformed Heisenberg algebra. In [@qft1] a construction of a deformed scalar QFT based on $q$-oscillator [@qosc], which is a deformed Heisenberg algebra, was presented and in [@qft2] a procedure to perform perturbative computation up to second order in the coupling constant was implemented. Subsequently, it was shown [@qft3] that this deformed scalar quantum model is renormalizable up to second order in the coupling constant. In this paper, we use the same procedure developed in Refs. [@qft1] and [@qft2] to perform a perturbative computation for a QFT in a box. To do this, we use the hypothesis, already mentioned, that in a compact space with periodic boundary conditions, particles are created and/or annihilated by the creation and/or annihilation operators of a Heisenberg algebra of the quantum harmonic oscillator defined on a circle. As a result, we find that the effective coupling constant which appears in the perturbation series depends on a dimensionless quantity related to the linear dimension of the box. This approach permits us to construct a field theory that creates at any point of the space-time particles described by a deformed Heisenberg algebra, which in the present case, the deformation parameter is inversely proportional to the dimension of the box. 
In this way we can investigate the interaction of point particles in compact spaces, showing how the boundary affects this interaction. Finally, we have computed the variation of the effective coupling constant for two different values of the size of the box, namely one corresponding to the time of nucleosynthesis of the standard cosmological model and the other to the present epoch. The choice of these values for the sizes of the box was made simply to perform a calculation and to show an example of the effect of the boundary on the effective coupling constant. This does not mean that our model has a connection with the standard cosmological model. This paper is organized as follows: In Section II, we present a discussion of a quantum harmonic oscillator on a circle which is described by Mathieu’s equation. The deformed Heisenberg algebra associated with Mathieu’s equation is presented in Section III. In Section IV, we present a construction of a QFT in a box and perform some perturbative computation. In Section V we present the bound for the variation of the coupling constant. Finally, in Section VI we conclude with some comments. The quantum harmonic oscillator on a circle =========================================== In this Section we are going to discuss an equation defined on a finite interval of length $L$ which reproduces the ordinary quantum harmonic oscillator in the limit $L \rightarrow \infty$. For this purpose it is convenient to describe quantum mechanics on a periodic line and to do this we shall follow Ohnuki-Kitakado’s formalism $^{\cite{ok1}}$. According to this formalism there are inequivalent quantum mechanics on $S^1$ (periodic line) depending on a parameter $\alpha$ ($0 \leq \alpha < 1$).
The momentum operator $G$ on $S^1$ in the coordinate representation is given in this formalism as $^{\cite{ok1}, \cite{ok2}}$ $$G \longrightarrow \frac{1}{i} \frac{d}{d\theta} + \alpha \, , \,\,\,\,\, 0 \leq \alpha < 1 \, \label{eq:mom1}$$ and the coordinate operator is given in terms of the unitary operator $W$ $$W \longrightarrow e^{i \theta} \, . \label{eq:coord1}$$ Let us consider the following equation on $S^1$ [@circulo]: $$G^2 \Psi + K \left[ W+ W^{\dagger} \right] \Psi = \epsilon \Psi \, , \label{eq:hos1}$$ where $G$ and $W$ were already defined [^2]. In order to have the above equation in the coordinate representation we substitute Eqs. (\[eq:mom1\]) and (\[eq:coord1\]) in Eq. (\[eq:hos1\]) for $\alpha = 0$. Thus we obtain $$\frac{d^2 \Psi(\theta)}{d\theta^2} + (\epsilon - 2 K \cos\theta) \Psi(\theta) = 0 \, , \label{eq:mathieu1}$$ with $\Psi(\theta=0)=\Psi(\theta=2 \pi)$. This equation is the well known Mathieu’s equation which first appeared in 1868 in the study of the vibrations of a stretched membrane of elliptic cross-section $^{\cite{mathieu1}}$. Mathieu’s equation is an important equation in physics arising from the study of a variety of physical problems, from ordered crystals with the potential $\cos 2x$ $^{\cite{mathieu2}}$ to the wave equation of scalar fields in the background of a D-brane metric $^{\cite{mathieu3}}$. Note that this is one possible equation on a periodic line since we chose for simplicity $\alpha =0$ in Eq. (\[eq:mathieu1\]). According to Ohnuki-Kitakado’s formalism $^{\cite{ok1}}$ there are inequivalent quantum mechanics on $S^1$ for each value of the parameter $\alpha$ ($0 \leq \alpha < 1$). In order to consider the limit of Eq. (\[eq:mathieu1\]) when the radius of the circle goes to infinity we perform the change of variables $$\theta = \frac{\pi}{L} y + \pi \, , \,\,\, -L \leq y \leq L \, . \label{eq:mudvar}$$ Using Eq. (\[eq:mudvar\]), Eq. 
(\[eq:mathieu1\]) becomes $$\frac{d^2 \Psi}{d y^2} + \left(E + \frac{2\pi^2}{L^2} K \cos\frac{\pi}{L}y \right) \Psi = 0 \, , \label{eq:mathieu3}$$ where $E= \pi^2 \epsilon/L^2$. Then, using a trivial trigonometric identity and calling $\lambda \equiv E+2 \pi^2 K/L^2$ we obtain $$\frac{d^2 \Psi}{d y^2} + \left[ \lambda - \frac{\pi^4}{L^4} K y^2 \left( \frac{\sin\pi y/2L}{\pi y/2L} \right)^2 \right] \Psi = 0 \, . \label{eq:mathieu4}$$ The well known Schrödinger’s equation for the quantum harmonic oscillator $$\frac{d^2 \Psi}{d y^2} + \left( \lambda - y^2 \right) \Psi = 0 \, , \label{eq:schroed}$$ is obtained for $K=L^4/\pi^4$ in Eq. (\[eq:mathieu4\]), if $L \rightarrow \infty$. It is then reasonable to take Eq. (\[eq:mathieu1\]) with $K=L^4/\pi^4$ as the Schrödinger equation for the quantum harmonic oscillator on the circle with energy eigenvalue given by $\lambda \equiv \pi^2 \epsilon/L^2+2 L^2/\pi^2$. Suppose now we consider Mathieu’s equation for $K=L^4/\pi^4$ and $L$ asymptotic. In this case the first levels are concentrated in values $y \ll L$ and thus, according to previous discussion, these energy levels of Mathieu’s equation, which we call $\epsilon^L_n$, will provide the energy levels of the standard quantum harmonic oscillator when $L \rightarrow \infty$. Now, analogously to the definition of a quantum particle through the ordinary quantum harmonic oscillator, we define $n$ quantum particles on the circle of length $L$ as having energy $\epsilon^L_n$. By consistency, $\epsilon^{L\rightarrow\infty}_n- \epsilon^{L\rightarrow\infty}_0=n(\epsilon^{L\rightarrow\infty}_1- \epsilon^{L\rightarrow\infty}_0)$. In fact, there is a solution obtained by Ince and Goldstein $^{\cite{mathieu4}, \cite{mathieu5}, \cite{mathieu1}}$ to Mathieu’s equation, Eq. (\[eq:mathieu1\]), for asymptotic values of $K$. 
Their expansion for $\epsilon$, the characteristic value of the equation, in the present case, i.e., $K=L^4/\pi^4$, provides for $\lambda$ the value $$\lambda_n = \nu_n - \left( \nu_n^2 + 1 \right) \frac{a^2}{2^6} -\left( 5\nu_n^4+34\nu_n^2+9 \right) \frac{a^3}{2^{16}}+ \cdots \, , \label{eq:incegoldstein}$$ where $\nu_n = 2 n +1$ and $a=\pi/L$. For $L \rightarrow \infty$ ($a\rightarrow 0$) we recognize the energy eigenvalues of the quantum harmonic oscillator. Thus we see that the above asymptotic solution $^{\cite{mathieu4}, \cite{mathieu5}, \cite{mathieu1}}$ of the characteristic values of Mathieu’s equation is a deformation of the quantum harmonic oscillator with deformation parameter equal to $a=\pi/L$. The above solution corresponds to the energy levels of Mathieu’s equation when the parameter $K$ appearing in Eq. (\[eq:mathieu1\]) is large, i.e., when $a^4 (2n+1)^2/16$ is small $^{\cite{mathieu4}, \cite{mathieu5}}$. Note that, even if $L$ is large, which leads to a localization of the solution, the solution remains periodic with period $2 L$. Let us now consider dimensionful variables. We define the dimensionless variables $y$ and $L$ by $y \equiv x/x_0$ and $L \equiv Z/x_0$, where $x$, $Z$ are dimensionful and $x_0$ is a dimensionful scale parameter. In this case when the dimensionless variable $y$ varies from $-L$ to $L$ the dimensionful variable $x$ varies as $-Z \leq x \leq Z$. Thus, $Z$ is the dimensionful length of the one-dimensional space. Furthermore, as explained before, the well known Schrödinger’s equation for the harmonic oscillator is obtained for $K=L^4/\pi^4$ in Eq. (\[eq:mathieu4\]), when $L \rightarrow \infty$. In terms of dimensionful quantities this limit is achieved when $Z \gg x_0$. Therefore, we could say that $x_0$ is a scale where deformed properties become relevant.
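This deformation can also be illustrated numerically. The sketch below (assuming SciPy's `mathieu_a`) rewrites Eq. (\[eq:mathieu1\]) in the standard Mathieu form $y'' + (A - 2q\cos 2v)y = 0$ via $\theta = 2v$, so that $A = 4\epsilon$ and $q = 4K$ (this identification is our own bookkeeping, not from the text), and checks that the rescaled ground-state characteristic value approaches the harmonic-oscillator value $\nu_0 = 1$ as $L$ grows:

```python
import numpy as np
from scipy.special import mathieu_a

# Numerical illustration of the L -> infinity limit.  Substituting
# theta = 2v turns Eq. (mathieu1) into the standard Mathieu form
# y'' + (A - 2 q cos 2v) y = 0 with A = 4*eps and q = 4*K (our
# bookkeeping, not from the text).  For K = L^4/pi^4 the rescaled
# ground-state value lambda_0 = pi^2 eps/L^2 + 2 L^2/pi^2 -> 1.
def lambda0(L):
    q = 4.0 * L**4 / np.pi**4          # q = 4K, K = L^4 / pi^4
    eps = mathieu_a(0, q) / 4.0        # lowest pi-periodic (in v) level
    return np.pi**2 * eps / L**2 + 2.0 * L**2 / np.pi**2

lam5, lam10 = lambda0(5.0), lambda0(10.0)
print(lam5, lam10)    # slightly below 1, closer to 1 for the larger box
```

Both values sit slightly below $1$, with the deviation shrinking as $L$ (and hence $K$) grows, in agreement with the negative $O(a^2)$ correction in the expansion above.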
Deformed Heisenberg algebra associated with Mathieu’s equation ============================================================== The purpose of this section is to construct an algebra, like the Heisenberg algebra, for the Mathieu system described in the previous section. Like the standard algebra for the quantum harmonic oscillator, the algebra we are going to construct has creation and annihilation operators as part of its generators. To this end let us consider an algebra generated by $J_{0}$, $A$ and $A^{\dagger}$ described by the relations $^{\cite{jpa}}$ $$\begin{aligned} J_{0} \, A^{\dagger} &=& A^{\dagger} \, f(J_{0}) , \label{eq:alg1} \\ A \, J_{0} &=& f(J_{0}) \, A , \label{eq:alg2} \\ \left[ A, A^{\dagger} \right] &=& f(J_{0})-J_{0} , \label{eq:alg3}\end{aligned}$$ where $^{\dagger}$ is the Hermitian conjugate and, by hypothesis, $J_{0}^{\dagger}=J_{0}$ and $f(J_{0})$ is a general analytic function of $J_{0}$. Using the algebraic relations in Eqs. (\[eq:alg1\])-(\[eq:alg3\]) we see that the operator $$C = A^{\dagger} \, A - J_{0} = A \, A^{\dagger} - f(J_{0}) \label{eq:casimir}$$ satisfies $$\left[ C,J_{0} \right] = \left[ C,A \right] = \left[ C,A^{\dagger} \right] = 0 , \label{eq:comute}$$ being thus a Casimir operator of the algebra. We present now the representations of the algebra when the function $f(J_{0})$ is a general analytic function of $J_{0}$. We assume that we have an $n$-dimensional irreducible representation of the algebra given by Eqs. (\[eq:alg1\])-(\[eq:alg3\]) and also that there is a state $|0\rangle$ with the lowest eigenvalue of the Hermitian operator $J_{0}$ $$J_{0} \, |0\rangle = \alpha_{0} \, |0\rangle . \label{eq:alfa0}$$ For each value of $\alpha_{0}$ we have a different vacuum and therefore a better notation for this state could be $|0\rangle_{\alpha_0}$. However, for simplicity, we shall omit subscript $\alpha_0$. Let $| m \rangle$ be a normalized eigenstate of $J_{0}$, $$J_{0} |m \rangle = \alpha_{m} |m \rangle \, . 
\label{eq:alfam}$$ where $$\alpha_m = f^m(\alpha_0) = f(\alpha_{m-1}) \, , \label{eq:alfam3}$$ and $m$ denotes the number of iterations of $\alpha_{0}$ through $f$. As proved in [@jpa], under the hypothesis stated previously [^3], for a general function $f$ we obtain $$\begin{aligned} J_{0} \, |m\rangle &=& f^{m}(\alpha_0) \, |m\rangle , \; \; \; m = 0,1,2, \cdots \; , \label{eq:b1} \\ A^{\dagger} \, |m-1\rangle &=& N_{m-1} \, |m\rangle , \label{eq:b2} \\ A \, |m\rangle &=& N_{m-1} \, |m-1\rangle , \label{eq:b3}\end{aligned}$$ where $N_{m-1}^2 = f^{m}(\alpha_0)-\alpha_0$. Note that for each function $f(x)$ the representations are constructed by the analysis of the above equations as done in [@jpa] for the linear and quadratic $f(x)$. When the functional $f(J_{0})$ is linear in $J_{0}$, i.e., $f(J_{0}) = q^2 J_0 +s$, it was shown in [@jpa] that the algebra in Eqs. (\[eq:alg1\])-(\[eq:alg3\]) recovers the $q$-oscillator algebra for $\alpha_0 = 0$. Moreover, as shown in [@jpa], where the representation theory was constructed in detail for the linear and quadratic functions $f(x)$, the essential tool to construct representations of the algebra in (\[eq:alg1\])-(\[eq:alg3\]) for a general analytic function $f(x)$ is the analysis of the stability of the fixed points of $f(x)$ and their composed functions. It was shown in [@jpa] and [@comhugo] that there is a class of one-dimensional quantum systems described by these generalized Heisenberg algebras. This class is characterized by those quantum systems having energy eigenvalues given by $$\varepsilon_{n+1} = f(\varepsilon_{n}) \, , \label{eq:class}$$ where $\varepsilon_{n+1}$ and $\varepsilon_{n}$ are successive energy levels and $f(x)$ is a different function for each physical system. This function $f(x)$ is exactly the same function that appears in the construction of the algebra in Eqs. (\[eq:alg1\])-(\[eq:alg3\]). 
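The representation (\[eq:b1\])-(\[eq:b3\]) and the relations (\[eq:alg1\])-(\[eq:alg3\]) can be checked on a finite-dimensional truncation. A minimal numerical sketch (assuming NumPy; the linear choice $f(x) = q^2 x + s$ and all parameter values are illustrative):

```python
import numpy as np

# Truncated matrix check of the representation (b1)-(b3) of the algebra
# (alg1)-(alg3), for the linear case f(x) = q^2 x + s (q-oscillator).
# q2, s, alpha0 and the truncation dimension are illustrative choices.
q2, s, alpha0, dim = 0.9, 1.0, 0.0, 8
f = lambda x: q2 * x + s

alphas = [alpha0]                       # alpha_m = f^m(alpha_0)
for _ in range(dim - 1):
    alphas.append(f(alphas[-1]))
alphas = np.array(alphas)

J0 = np.diag(alphas)
fJ0 = np.diag(f(alphas))                # f(J0), diagonal in this basis
N = np.sqrt(alphas[1:] - alpha0)        # N_{m-1}^2 = f^m(alpha_0) - alpha_0
A = np.diag(N, k=1)                     # A |m>      = N_{m-1} |m-1>
Adag = A.T                              # A^dag |m-1> = N_{m-1} |m>

# J0 A^dag = A^dag f(J0), Eq. (alg1), holds exactly on the truncation:
ladder_ok = np.allclose(J0 @ Adag, Adag @ fJ0)
# [A, A^dag] = f(J0) - J0, Eq. (alg3), holds away from the truncation edge:
comm = np.diag(A @ Adag - Adag @ A)
comm_ok = np.allclose(comm[:-1], (f(alphas) - alphas)[:-1])
# The Casimir (casimir) C = A^dag A - J0 reduces to -alpha_0 times 1:
casimir_ok = np.allclose(Adag @ A - J0, -alpha0 * np.eye(dim))
```

The commutation relation fails only in the last diagonal entry, which is the expected edge effect of cutting off an infinite-dimensional representation.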
In the algebraic description of this class of quantum systems, $J_0$ is the Hamiltonian operator of the system, $A^{\dagger}$ and $A$ are the creation and annihilation operators, respectively. This Hamiltonian and the ladder operators are related by Eq. (\[eq:casimir\]) where $C$ is the Casimir operator of the representation associated to the quantum system under consideration. Now, let us show that the asymptotic solution to Mathieu’s equation we presented in the last section belongs to the class of algebras discussed previously. In other words, we shall construct a Heisenberg-type algebra, an algebra with creation and annihilation operators, for the Ince-Goldstein solution (Eq. (\[eq:incegoldstein\])) to the quantum harmonic oscillator on $S^1$ and we shall find the characteristic function $f(x)$ (see Eqs. (\[eq:alg1\])-(\[eq:alg3\])) for this algebra. Furthermore, we shall also propose a realization, as in the case of the standard quantum harmonic oscillator, of the ladder operators in terms of the physical operators of the system. As described in [@comhugo] and [@campos] the first thing we have to do in order to describe the Heisenberg-type structure of a one-dimensional quantum system is to relate the energy of the system for two arbitrary successive levels (see Eq. (\[eq:class\])). For the energy spectrum given in Eq. (\[eq:incegoldstein\]), i.e, $$\varepsilon_n^L = n + \frac{1}{2} - \left[ (2n+1)^2 + 1 \right] \frac{a^2}{2^7} -\left[5(2n+1)^4+34(2n+1)^2+9 \right]\frac{a^3}{2^{17}}+ \cdots \, , \label{eq:incegoldstein2}$$ we obtain $$\varepsilon_{n+1}^L = \varepsilon_{n}^L + 1 - \left( n+1 \right)\frac{a^2}{2^4} -(n+1)\left[ 10n(n+2)+21 \right]\frac{a^3}{2^{12}}+ \cdots \, . \label{eq:incegoldstein3}$$ Thus, we have to invert Eq. (\[eq:incegoldstein2\]) in order to obtain $n$ in terms of $\varepsilon_n^L$. Taking $n$ from Eq. 
(\[eq:incegoldstein2\]) we get $$\varepsilon_{n+1}^L \equiv f(\varepsilon_n^L) = \varepsilon_{n}^L + 1 - (2\varepsilon_n^L+1)\frac{a^2}{2^5}- (2\varepsilon_n^L+1) \left[ 20\varepsilon_n^L(\varepsilon_n^L+1)+27 \right]\frac{a^3}{2^{14}} + \cdots \, . \label{eq:deff}$$ According to Refs. [@mathieu4] and [@mathieu5], this solution is valid when $a^4 (2n+1)^2/16$ is small. Thus, since $a=\pi/L$ is considered small, $n$ cannot be very large. Now, if we assume that $\varepsilon_n^L$ is the eigenvalue of operator $J_0$ on state $|n\rangle$ we identify $f(x)$ appearing in Eqs. (\[eq:b1\])-(\[eq:b3\]) with the one in Eq. (\[eq:deff\]) for the quantum system under consideration. Then, the algebraic structure describing the quantum system under consideration is obtained by inserting $f(x)$ defined in Eq. (\[eq:deff\]) into Eqs. (\[eq:alg1\])-(\[eq:alg3\]) and can be written as $$\begin{aligned} \left[J_{0}, A^{\dagger}\right] &=& A^{\dagger} - A^{\dagger} \, (2 J_{0}+1)\frac{a^2}{2^5} -A^{\dagger}(2 J_{0}+1)\left[ 20 J_0(J_0+1)+27 \right] \frac{a^3}{2^{14}}+ \cdots , \nonumber \\ \label{eq:defheis1} \\ \left[ J_{0}, A \right] &=& -A +(2 J_{0}+1) \, A \frac{a^2}{2^5} +(2 J_{0}+1)\left[ 20 J_0(J_0+1)+27 \right] \, A \frac{a^3}{2^{14}}+ \cdots ,\nonumber \\ \label{eq:defheis2} \\ \left[ A, A^{\dagger} \right] &=& 1-(2 J_{0}+1) \frac{a^2}{2^5} -(2 J_{0}+1)\left[ 20 J_0(J_0+1)+27 \right] \frac{a^3}{2^{14}}+ \cdots \,\, , \label{eq:defheis3}\end{aligned}$$ where, according to Eqs. (\[eq:b1\])-(\[eq:b3\]), $A$ and $A^{\dagger}$ are the ladder operators for the system under consideration, i.e., $A^{\dagger}$, when applied to the state $|m\rangle$, which has $J_0$ eigenvalue $\varepsilon_m^L$, gives, apart from a multiplicative factor depending on $m$, the state $|m+1\rangle$, which has energy eigenvalue $\varepsilon_{m+1}^L$. A similar role is played by $A$.
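As a quick internal consistency check of Eqs. (\[eq:incegoldstein2\]) and (\[eq:deff\]): feeding the truncated spectrum through the truncated $f$ should reproduce $\varepsilon_{n+1}^L$ with a residual of order $a^4$, one order beyond the terms kept. A plain-Python sketch (the values of $a$ and $n$ are arbitrary illustrative choices):

```python
# Internal consistency of the inversion: feed the truncated spectrum
# (incegoldstein2) through the truncated f of (deff); the residual
# eps_{n+1} - f(eps_n) should be of order a^4, one order beyond the
# terms kept, so halving a should shrink it by about 2^4 = 16.
def eps(n, a):
    nu = 2 * n + 1
    return (n + 0.5 - (nu**2 + 1) * a**2 / 2**7
            - (5 * nu**4 + 34 * nu**2 + 9) * a**3 / 2**17)

def f(e, a):
    return (e + 1 - (2 * e + 1) * a**2 / 2**5
            - (2 * e + 1) * (20 * e * (e + 1) + 27) * a**3 / 2**14)

def residual(a, n=1):
    return abs(eps(n + 1, a) - f(eps(n, a), a))

r1, r2 = residual(0.02), residual(0.01)
print(r1, r2, r1 / r2)    # ratio close to 16
```

The residual scales like $a^4$, confirming that the two truncated series agree through order $a^3$.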
Note that, when $a\rightarrow 0$ ($L\rightarrow \infty$), we recover the well known Heisenberg algebra, as it should be, since we showed in the previous section that Mathieu’s equation, Eq. (\[eq:mathieu1\]), for $K=L^4/\pi^4=a^{-4}$ gives the well known Schrödinger’s equation for the harmonic oscillator, Eq. (\[eq:schroed\]), in this limit. Now, let us realize the operators $A$, $A^{\dagger}$ and $J_0$ in terms of physical operators as in the case of the one-dimensional quantum harmonic oscillator, following what was done in [@comhugo] and in [@campos] for the square-well potential and $q$-oscillators [@qft1]. Let us consider a one-dimensional lattice in a momentum space where the momenta are allowed to take only discrete values, say $p_{0}$, $p_{0}+a$, $p_{0}+2a$, $p_{0}+3a$, etc., with $a>0$. The left and right discrete derivatives are given by $$\begin{aligned} (\partial_{p} \, f) \, (p) & = & \frac{1}{a} \, [f(p+a) - f(p)] \, , \label{eq:partialleft} \\ (\bar{\partial}_{p} \, f) \, (p) & = & \frac{1}{a} \, [f(p) - f(p-a)] \, , \label{eq:partialright}\end{aligned}$$ which are the two possible definitions of derivatives on a lattice. Let us now introduce the momentum shift operators $$\begin{aligned} T & = & 1 + a \, \partial_{p} \label{eq:a} \\ \bar{T} & = & 1 - a \, \bar{\partial}_{p} \, , \label{eq:abarra}\end{aligned}$$ which shift the momentum value by $a$ $$\begin{aligned} (Tf) \, (p) & = & f(p+a) \label{eq:af} \\ (\bar{T}f) \, (p) & = & f(p-a) \label{eq:abarraf}\end{aligned}$$ and satisfy $$T \, \bar{T} = \bar{T} T = \hat{1} \, , \label{eq:aabarra}$$ where $\hat{1}$ means the identity on the algebra of functions of $p$. Introducing the momentum operator $P$$^{\cite{dimakis1}}$ $$(Pf) \, (p) = p \, f(p) \, , \label{eq:momentum}$$ we have $$\begin{aligned} T P & = & (P+a)T \label{eq:ap} \\ \bar{T} P & = & (P-a) \bar{T} \, \, . \label{eq:abarrap}\end{aligned}$$ Now, we go back to the realization of the deformed Heisenberg algebra Eqs.
(\[eq:defheis1\])-(\[eq:defheis3\]) in terms of physical operators. We can associate to the crystalline structure of Mathieu’s equation discussed in the previous section the one dimensional lattice we have just presented. Observe that we can write $J_0$ for the asymptotic Ince-Goldstein solution to Mathieu’s equation, Eq. (\[eq:incegoldstein2\]), as $$J_0 = \frac{P}{a} + \frac{1}{2} - \left[ \left( 2\frac{P}{a}+1 \right)^2 + 1 \right] \frac{a^2}{2^7} - \left[5(2\frac{P}{a}+1)^4+34(2\frac{P}{a}+1)^2+ 9 \right]\frac{a^3}{2^{17}} + \cdots \, \, , \label{eq:defj0}$$ where $P$ is given in Eq. (\[eq:momentum\]) and its application to the vector states $|m\rangle$ appearing in (\[eq:b1\])-(\[eq:b3\]) gives $$P \, |m\rangle = m \, a \, |m\rangle \,\, , m=0,1, \cdots \, \, , \label{eq:aplicmom}$$ and $$\bar{T} \, |m\rangle = |m+1\rangle \,\, , m=0,1, \cdots \, \, , \label{eq:aplictbar}$$ where $\bar{T}$ and $T=\bar{T}^{\dagger}$ are defined in Eqs. (\[eq:a\])-(\[eq:aabarra\]). It is useful to note that from Eq. (\[eq:aplicmom\]) it is possible to define the number operator $N$ as $N \equiv P/a$. With the definition of $J_0$ given in Eq. (\[eq:defj0\]) we see that $\varepsilon_n^L$ given in Eq. (\[eq:incegoldstein3\]) is the $J_0$ eigenvalue of state $|n\rangle$ as desired. Let us now define $$\begin{aligned} A^{\dagger} &=& S(P) \, \bar{T} \,\, , \label{eq:real1} \\ A &=& T \, S(P) \,\, , \label{eq:real2} \end{aligned}$$ where, $$S(P)^2 = \frac{P}{a} - \left[ \left( 2\frac{P}{a}+1 \right)^2 -1 \right] \frac{a^2}{2^7} -\left[5\left( 2\frac{P}{a}+1 \right)^4+34\left( 2 \frac{P}{a}+1\right)^2- 39 \right]\frac{a^3}{2^{17}}+ \cdots , \label{eq:defS}$$ satisfies $S^2(P) = J_0 - \alpha_0$ where $\alpha_0$, defined in Eq. (\[eq:alfa0\]), is $\varepsilon_0^L$. Following Ref. [@circulo] one can show that $A^{\dagger}$, $A$ and $J_0$ given in Eqs. (\[eq:real1\]), (\[eq:real2\]) and (\[eq:defj0\]) respectively, obey the algebra defined in Eqs. 
(\[eq:defheis1\]), (\[eq:defheis2\]) and (\[eq:defheis3\]). Note that the realization we have found in Eqs. (\[eq:real1\]), (\[eq:real2\]) and (\[eq:defj0\]) is qualitatively different from the realization of the standard harmonic oscillator. This is reasonable, since we have two physically different systems. Even if the standard quantum harmonic oscillator defined on $-\infty < x < \infty$ is a limiting case of the periodic one, it is not periodic and in this case there is no lattice associated with it. On the other hand, once $L$ is finite, $-L \leq x \leq L$, the periodic structure is explicitly manifest and the realization in the finite case, given in Eqs. (\[eq:real1\]), (\[eq:real2\]) and (\[eq:defj0\]), shows it clearly. A quantum field theory in a box =============================== In section II we have presented a description of a quantum harmonic oscillator on a circle and in section III, its associated Heisenberg-type algebra, i.e., an algebra having the Hamiltonian and the step operators as generators, corresponding to a quantum harmonic oscillator on a circle. This algebra is a deformed Heisenberg algebra which goes to the standard Heisenberg algebra when the radius of the circle goes to infinity. In this section, using the hypothesis that the successive energy levels of the quantum harmonic oscillator on a circle are still obtained by the creation and/or annihilation of a quantum particle on a periodic structure, we are going to construct a quantum field theory in a compact space. In the momentum space appropriate to the realization of the deformed Heisenberg algebra we discussed, besides the operator $P$ defined in Eq.
(\[eq:momentum\]), one can define two self-adjoint operators as $$\begin{aligned} \chi & \equiv & - i \left( S(P)(1-a \bar{\partial}_p )- (1+a \partial_p )S(P) \right) = - i(A - A^{\dagger}) \, \, , \label{eq:cord1} \\ Q & \equiv & S(P)(1-a \bar{\partial}_p )+ (1+a \partial_p )S(P) = A + A^{\dagger} \, \, , \label{eq:cord2} \end{aligned}$$ where $\partial_p$ and $\bar{\partial}_p$ are the left and right discrete derivatives defined in Eqs. (\[eq:partialleft\]), (\[eq:partialright\]). It can be verified that the operators $P$, $\chi$ and $Q$ generate the following algebra on the momentum lattice: $$\begin{aligned} \left[ \chi,P \right] &=& i a Q , \label{eq:fecho1} \\ \left[ P,Q \right] &=& i a \chi , \label{eq:fecho2} \\ \left[ \chi,Q \right] &=& 2 i S(P) \left( S(P+a)-S(P-a) \right) . \label{eq:fecho3}\end{aligned}$$ This algebra is the analog of the Heisenberg algebra in the deformed case. Since the analog of the Heisenberg algebra for the deformed case has three generators, it is convenient to define three fields which we call $\phi(\vec{r},t)$, $\Pi(\vec{r},t)$ and $\wp(\vec{r},t)$. In terms of Fourier series these fields are given as $$\begin{aligned} \phi(\vec{r},t) &=& \sum_{\vec{k}} \frac{1}{\sqrt{2\Omega\omega(\vec{k})}} Q_{\vec{k}}(t) \, e^{i \vec{k}.\vec{r}} , \label{eq:defcampo1} \\ \Pi(\vec{r},t) &=& \sum_{\vec{k}} \frac{i\omega(\vec{k})}{\sqrt{2\Omega\omega(\vec{k})}} \chi_{-\vec{k}}(t) \, e^{i \vec{k}.\vec{r}} , \label{eq:defcampo2} \end{aligned}$$ where $\omega(\vec{k})= \sqrt{\vec{k}^2+m^2}$, $m$ is a real parameter and $\Omega$ is the volume of a rectangular box and $$\wp(\vec{r},t) = \sum_{\vec{k}} \sqrt{\frac{\omega(\vec{k})} {2\Omega}} \, \, \, S_{\vec{k}} \, e^{i \vec{k}.\vec{r}} \, .
\label{eq:defcampo3}$$ The time-dependent operators in the Hilbert space $Q_{\vec{k}}(t)$, $\chi_{\vec{k}}(t)$ and $S_{\vec{k}}$ will be defined in what follows and the components of $\vec{k}$ are given by $$k_i = \frac{2 \pi l_i}{Z_i} ,\, \,\,\, i=1,2,3 \,\,\,\,\,\,\, , \label{eq:kspace}$$ with $l_i= 0,\pm 1,\pm 2, \cdots $ and $Z_i$ being the lengths of the three sides of a rectangular box $\Omega$. We introduce for each point of this $\vec{k}$-space an independent deformed quantum harmonic oscillator constructed in the last two previous sections such that the deformed operators commute for different three-dimensional lattice points. We also introduce an independent copy of the one-dimensional momentum lattice defined in the previous section for each point of this $\vec{k}$-lattice so that $P_{\vec{k}}^{\dagger} = P_{\vec{k}}$ and $T_{\vec{k}}$, $\bar{T}_{\vec{k}}$ and $S_{\vec{k}}$ are defined by means of the previous definitions, Eqs. (\[eq:a\])-(\[eq:abarra\]) and (\[eq:defS\]), through the substitution $P \rightarrow P_{\vec{k}}$. It is possible to show that $$\begin{aligned} A^{\dagger}_{\vec{k}} & = & S_{\vec{k}} \, \bar{T}_{\vec{k}} \, , \label{eq:j+k} \\ A_{\vec{k}} & = & T_{\vec{k}} \, S_{\vec{k}} \, , \label{eq:j-k} \\ J_0(\vec{k}) &=& \frac{P_{\vec{k}}}{a} + \frac{1}{2} - \left[ \left( 2\frac{P_{\vec{k}}}{a}+1 \right)^2 + 1 \right] \frac{a^2}{2^7} - \nonumber \\ & & \left[5(2\frac{P_{\vec{k}}}{a}+1)^4+34(2\frac{P_{\vec{k}}}{a}+1)^2+ 9 \right]\frac{a^3}{2^{17}} + \cdots \, \, , \label{eq:j0k}\end{aligned}$$ where $$S_{\vec{k}}^2 = \frac{P_{\vec{k}}}{a} - \left[ \left( 2\frac{P_{\vec{k}}}{a}+1 \right)^2 -1 \right] \frac{a^2}{2^7} -\left[5\left( 2\frac{P_{\vec{k}}}{a}+1 \right)^4+34\left( 2 \frac{P_{\vec{k}}}{a}+1\right)^2- 39 \right]\frac{a^3}{2^{17}}+ \cdots , \label{eq:defSk}$$ satisfy the algebra in Eqs. 
(\[eq:defheis1\]), (\[eq:defheis2\]) and (\[eq:defheis3\]) for each point of this $\vec{k}$-lattice and the operators $A^{\dagger}_{\vec{k}}$, $A_{\vec{k}}$ and $J_{0}(\vec{k})$ commute among them for different points of this $\vec{k}$-lattice. Now, we define operators $\chi$ and $Q$ for each point of the three-dimensional lattice as $$\begin{aligned} \chi_{\vec{k}} & \equiv & -i(T_{-\vec{k}} \, S_{-\vec{k}} - S_{\vec{k}} \, \bar{T}_{\vec{k}} ) = - i ( A_{-\vec{k}} - A^{\dagger}_{\vec{k}}) \, \, , \label{eq:cord5} \\ Q_{\vec{k}} & \equiv & T_{\vec{k}} \, S_{\vec{k}} + S_{-\vec{k}} \, \bar{T}_{-\vec{k}} = A_{\vec{k}} + A^{\dagger}_{-\vec{k}} \, \, , \label{eq:cord6} \end{aligned}$$ such that $\chi_{\vec{k}}^{\dagger}= \chi_{-\vec{k}}$ and $Q_{\vec{k}}^{\dagger}= Q_{-\vec{k}}$, exactly as it happens in the construction of a spin-$0$ field for the spin-$0$ quantum field theory $^{\cite{tdlee}}$. These operators appear in the Fourier expansion of the fields given in Eqs. (\[eq:defcampo1\])-(\[eq:defcampo3\]). By a straightforward calculation, one can show that the Hamiltonian $$\begin{aligned} H = \frac{1}{2} \int_{\Omega} d^3 r \left( \Pi(\vec{r},t)^2 +\rho \, | \wp(\vec{r},t)|^2 + \phi(\vec{r},t) (-{\vec{\nabla}}^2+m^2) \phi(\vec{r},t) \right) \,\, , \label{eq:defhamilt}\end{aligned}$$ can be written as $$\begin{aligned} H &=& \frac{1}{2} \sum_{\vec{k}} \omega(\vec{k}) \left( A^{\dagger}_{\vec{k}} A_{\vec{k}}+A_{\vec{k}} A^{\dagger}_{\vec{k}}+\rho \, S(N_{\vec{k}})^2 \right) \nonumber \\ &=&\frac{1}{2}\sum_{\vec{k}} \omega(\vec{k}) \left( S(N_{\vec{k}}+1)^2 + (1+\rho) \, S(N_{\vec{k}})^2 \right) \,\, , \label{eq:resulthamilt}\end{aligned}$$ where $\rho$ is an arbitrary number and $$S(N_{\vec{k}})^2 = N_{\vec{k}} - \left[ \left( 2N_{\vec{k}}+1 \right)^2 -1 \right] \frac{a^2}{2^7} -\left[5\left( 2N_{\vec{k}}+1 \right)^4+34\left( 2 N_{\vec{k}}+1\right)^2- 39 \right]\frac{a^3}{2^{17}}+ \cdots , \, \, . 
\label{eq:defs2}$$ Since the term in the Hamiltonian, (\[eq:defhamilt\]), proportional to $\rho$ is time-independent, it seems that it cannot produce any relevant effect. Thus, for simplicity, we will take $\rho=0$. In order that the energy of the vacuum state becomes zero we replace $H$ in Eq. (\[eq:resulthamilt\]) by $$H = \frac{1}{2}\sum_{\vec{k}} \omega(\vec{k}) \left( S(N_{\vec{k}}+1)^2 + S(N_{\vec{k}})^2 - N_0^2 \right) \,\, \label{eq:resulthamilt1}$$ where $$N_0^2 \equiv f(\alpha_0)-\alpha_0=1-a^2/2^4-21 a^3/2^{12} +\cdots \;. \label{eq:enezero}$$ Note that in the limit $L \rightarrow \infty$, the above Hamiltonian is proportional to the number operator. The eigenvectors of $H$ form a complete set and span the Hilbert space of this system. They are the following $$|0 \rangle, \,\, A^{\dagger}_{\vec{k}} |0 \rangle, \,\, A^{\dagger}_{\vec{k}} A^{\dagger}_{\vec{k}'} |0 \rangle \,\, \mbox{for} \,\, \vec{k}\not= \vec{k}', \,\, (A^{\dagger}_{\vec{k}})^2 |0 \rangle, \,\, \cdots \,\, , \label{eq:hilbert}$$ where the state $|0\rangle$ satisfies as usual $A_{\vec{k}} |0\rangle =0$ (see Eq. (\[eq:alg3\])) for all $\vec{k}$ and $A_{\vec{k}}$, $A^{\dagger}_{\vec{k}}$ for each $\vec{k}$ satisfying the deformed Heisenberg algebra Eqs. (\[eq:defheis1\])-(\[eq:defheis3\]). The time evolution of the fields can be studied by means of Heisenberg’s equation for $A^{\dagger}_{\vec{k}}$, $A_{\vec{k}}$ and $S_{\vec{k}}(\equiv S(N_{\vec{k}}))$. Define $$h(N_{\vec{k}}) \equiv \frac{1}{2} \left( S^2(N_{\vec{k}}+2) - S^2(N_{\vec{k}}) \right) \,\, . \label{eq:defh}$$ Thus, using Eq. (\[eq:resulthamilt\]) or (\[eq:resulthamilt1\]) and $\left[ N,A^{\dagger}\right]=A^{\dagger}$ we obtain $$\left[ H, A^{\dagger}_{\vec{k}} \right] = \omega(\vec{k}) \, A^{\dagger}_{\vec{k}} \,\, h(N_{\vec{k}}) \,\, . 
\label{eq:comutheis1}$$ We can solve Heisenberg’s equation for the deformed case and the result is $$A^{\dagger}_{\vec{k}}(t) = A^{\dagger}_{\vec{k}}(0) \,\, e^{i \omega(\vec{k}) \, h(N_{\vec{k}}) \, t} \,\, . \label{eq:solveheis1}$$ Note that for $L \rightarrow \infty$ we have $h(N_{\vec{k}}) \rightarrow 1$ and Eq. (\[eq:solveheis1\]) gives the correct result for the undeformed case. Furthermore, we easily see that the operators $P_{\vec{k}}$ and $S_{\vec{k}}$ are time-independent. We emphasize that the extra term, $h(N_{\vec{k}})$, in the exponentials depends on the number operator, this being the main difference from the undeformed case. The Fourier transformation of Eq. (\[eq:defcampo1\]) can then be written as $$\phi(\vec{r},t) = \alpha(\vec{r},t) + \alpha(\vec{r},t)^{\dagger} \,\, , \label{eq:defcampotempo}$$ where $$\alpha(\vec{r},t) = \sum_{\vec{k}} \frac{1}{\sqrt{2\Omega\omega(\vec{k})}} \,\, e^{i \vec{k}.\vec{r}-i \omega(\vec{k}) \, h(N_{\vec{k}}) \, t} A_{\vec{k}} \, \,\, , \label{eq:defcampotempo1}$$ with $A_{\vec{k}}$ in Eq. (\[eq:defcampotempo1\]) being time-independent and $\alpha(\vec{r},t)^{\dagger}$ being the Hermitian conjugate of $\alpha(\vec{r},t)$. The Dyson-Wick contraction between [^4] $\phi(x_1)$ and $\phi(x_2)$ can be computed using Eqs. (\[eq:defcampotempo\])-(\[eq:defcampotempo1\]), which results in $$D^{N}_F(x_1,x_2) = \sum_{\vec{k}} \frac{e^{i\vec{k} . \Delta \vec{r}_{12}}}{2\Omega\omega(\vec{k})} \left( S(N_{\vec{k}}+1)^2 \, e^{\mp i \omega(\vec{k})\, h(N_{\vec{k}})\, \Delta t_{12}} - S(N_{\vec{k}})^2 \, e^{\mp i \omega(\vec{k})\, h(N_{\vec{k}}-1)\, \Delta t_{12}} \right) \, , \label{eq:propsum}$$ where $\Delta t_{12}=t_1-t_2$, $\Delta \vec{r}_{12}= \vec{r}_1-\vec{r}_2$. The minus sign in the exponent holds when $t_1 > t_2$ and the positive sign when $t_2 > t_1$. Note that when $L \rightarrow \infty$, $h(N_{\vec{k}}) \rightarrow 1$ and $S(N_{\vec{k}}+1)^2 - S(N_{\vec{k}})^2 \rightarrow 1$, recovering the standard result for the propagator.
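The truncated expansions above lend themselves to a quick numerical consistency check. The following sketch (plain Python; the function names are ours) encodes $S(N)^2$ from Eq. (\[eq:defs2\]) and $h(N)$ from Eq. (\[eq:defh\]), and verifies that $h(0)$ reproduces the expansion quoted in Eq. (\[eq:agazero\]) and that both reduce to their undeformed values as $a \rightarrow 0$.

```python
def S2(N, a):
    # Deformed spectrum S(N)^2 truncated at order a^3, Eq. (defs2)
    return (N - ((2 * N + 1) ** 2 - 1) * a ** 2 / 2 ** 7
              - (5 * (2 * N + 1) ** 4 + 34 * (2 * N + 1) ** 2 - 39) * a ** 3 / 2 ** 17)

def h(N, a):
    # h(N) = (S^2(N+2) - S^2(N)) / 2, Eq. (defh)
    return 0.5 * (S2(N + 2, a) - S2(N, a))

a = 1e-2                                                       # deformation parameter a = pi/L
h0_series = 1 - 3 * a ** 2 / 2 ** 5 - 123 * a ** 3 / 2 ** 13   # Eq. (agazero)

assert S2(0, a) == 0.0                    # vacuum: S(0)^2 = 0 at every order shown
assert abs(h(0, a) - h0_series) < 1e-12   # h(0) matches the quoted expansion
assert abs(h(0, 0.0) - 1.0) < 1e-15       # undeformed limit: h -> 1 as a -> 0
```

The last assertion makes explicit the statement in the text that the deformed time evolution collapses to the ordinary one when $L \rightarrow \infty$.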
We shall now present the result concerning the perturbative computation of the first-order scattering process $1+2 \rightarrow 1^{'}+2^{'}$ for $p_1 \not= p_2 \not= p_1' \not= p_2'$ with the initial state $$|1,2 \rangle \equiv \frac{1}{N_0^2} A^{\dagger}_{p_1} \, A^{\dagger}_{p_2} |0\rangle \,\, , \label{eq:estinic}$$ and the final state $$|1^{'},2^{'} \rangle \equiv \frac{1}{N_0^2} A^{\dagger}_{p_1^{'}} \, A^{\dagger}_{p_2^{'}} |0\rangle \,\, , \label{eq:estfin}$$ where $A_{p_i}$ and $A^{\dagger}_{p_i}$ satisfy the algebraic relations in Eqs. (\[eq:defheis1\])-(\[eq:defheis3\]). These particles are supposed to be described by the Hamiltonian given in Eq. (\[eq:defhamilt\]) with an interaction given by $\lambda \int_{\Omega_t} :\phi(\vec{r},t)^4: d^3r$, where $\Omega_t=\Omega \otimes t$ is the four-volume of integration. To the lowest order in $\lambda$, we have ($\Gamma$ denotes the standard $S$-matrix) $$\langle 1^{'},2^{'} | \Gamma | 1,2 \rangle_1 = -i \lambda \int_{\Omega_t} d^4 x \langle 1^{'},2^{'} | :\phi^4(x): | 1,2 \rangle \,\, . \label{eq:matrixelem1}$$ The first-order computation follows, step by step, the computation of the first-order scattering process given in Ref. [@qft1] and yields the following result $$\begin{aligned} &&\langle 1^{'},2^{'} | \Gamma | 1,2 \rangle_1 = \frac{-6i(2\pi)^4 }{\Omega^2 \sqrt{\omega_{\vec{p}_1}\omega_{\vec{p}_2} \omega_{\vec{p'}_1}\omega_{\vec{p'}_2} }}\frac{\lambda{N_0}^4} {h(0)} \delta^4(P_{1}+P_{2}-P'_{1}-P'_{2}) \,\, , \nonumber \\ \label{eq:finalmatrix1}\end{aligned}$$ where $$\begin{aligned} P_{i} = \left(\vec{p}_i, \omega_{\vec{p}_i} \right), \; P'_{i} = \left(\vec{p'}_i, \omega_{\vec{p'}_i} \right) \nonumber \\ \label{eq:definePs}\end{aligned}$$ and from Eq. (\[eq:defh\]) $$h(0) = 1-3 \frac{a^2}{2^5}- 123 \frac{a^3}{2^{13}} + \ldots \;\; .
\label{eq:agazero}$$ Note that when $L \rightarrow \infty \; (a \rightarrow 0)$ we have $N_0 \rightarrow 1$, $h(0) \rightarrow 1$, the box $\Omega$ becomes an infinite box and Eq. (\[eq:finalmatrix1\]) becomes the standard undeformed result $^{\cite{tdlee}}$. It is convenient to note at this point that we are, by hypothesis, identifying the linear dimensions of the box $\Omega$ where we perform the spatial integration in Eq. (\[eq:matrixelem1\]) with the dimensionful length, $Z$ ($L=Z/x_0$), of the circle where the harmonic oscillator is defined. This identification is not strictly necessary; it comes from our approach that everything happens inside the spatial box $\Omega$. We suppose that in a universe approximated by a finite spatial box $\Omega$, the step operators of the quantum harmonic oscillator defined on a circle of length $Z$, where $Z$ is a linear dimension of the box $\Omega$, create and/or annihilate point particles. Thus, if the spatial box $\Omega$ increases, so does the length of the circle where the harmonic oscillator is defined. To second order in $\lambda$, the scattering process $1+2 \rightarrow 1^{'}+2^{'}$ is given as $$\langle 1^{'},2^{'} | \Gamma | 1,2 \rangle_2 = \frac{(-i)^2}{2} \lambda^2 \int \int_{\Omega_t} d^4 x \, d^4 y \langle 1^{'},2^{'} | T(:\phi^4(x): :\phi^4(y):) | 1,2 \rangle \,\, , \label{eq:matrixelem2}$$ where $T$ denotes the time-ordered product. In order to convert the time-ordered product into a normal product we use Wick’s expansion. The propagator in the present case, see Eq. (\[eq:propsum\]), is not a simple $c$-number since it depends on the number operator $N$, and this fact induces modifications in the standard Wick expansion. This subject was already discussed in [@qft2] where the computation of a scattering process for a deformed QFT to second order in the coupling constant was presented. Following Ref.
[@qft2], we find that the scattering process under consideration, up to second order in the coupling constant, is given by $$\begin{aligned} \langle 1^{'},2^{'} | \Gamma | 1,2 \rangle_2 = \frac{1} {2\Omega^2\sqrt{\omega_{\vec{p}_1}\omega_{\vec{p}_2} \omega_{\vec{p'}_1}\omega_{\vec{p'}_2}}} \left(\frac{\lambda{N_0}^4}{h(0)}\right)^2 \delta^4(P_{1}+P_{2}-P'_{1}-P'_{2}) \nonumber \\ \left( I+I'+I''+I''' \right) \, , \label{eq:resmatrixelem2}\end{aligned}$$ where $$I = -\frac{(2\pi)^2}{4\Omega} \sum_{\vec{k}} \frac{1}{\sqrt{(\vec{k}^2+m^2)\left[(\vec{s}-\vec{k})^2 +m^2\right]}} \, , \label{eq:defI}$$ with $\vec{s}=\vec{p}_1+\vec{p}_2$ and $$\begin{aligned} I' &=& I(\vec{s}\rightarrow -\vec{s}) \, , \\ I'' &=& I(\vec{s}\rightarrow \vec{t}\equiv \vec{p}_1-\vec{p'}_1) \, , \\ I''' &=& I(\vec{s}\rightarrow \vec{u}\equiv \vec{p}_1-\vec{p'}_2) \, . \label{eq:defdifI}\end{aligned}$$ In summary, the scattering process $1+2 \rightarrow 1^{'}+2^{'}$ for $p_1 \not= p_2 \not= p_1' \not= p_2'$ with the initial and final states given in Eqs. (\[eq:estinic\]), (\[eq:estfin\]) respectively, where $A_{p_i}$, $A^{\dagger}_{p_i}$ satisfy the algebraic relations in Eqs. (\[eq:defheis1\])-(\[eq:defheis3\]) and the particles are supposed to be described by the Hamiltonian given in Eq. (\[eq:defhamilt\]) with an interaction given by $\lambda \int_{\Omega_t} :\phi(\vec{r},t)^4: d^3r$, is given up to second order in the coupling constant, $\lambda$, as $$\langle 1^{'},2^{'} | \Gamma | 1,2 \rangle = \frac{\lambda N_0^4}{h(0)} A_1 + \left( \frac{\lambda N_0^4} {h(0)}\right)^2 \left( A_2^s + A_2^t + A_2^u \right) \; , \label{eq:finres}$$ where $A_1$ is obtained from Eq. (\[eq:finalmatrix1\]), $A_2^s$ comes from $I$ and $I'$ in Eq. (\[eq:resmatrixelem2\]), $A_2^t$ and $A_2^u$ come from $I''$ and $I'''$, respectively. Note that when $L \rightarrow \infty \; (a \rightarrow 0)$ we have $N_0 \rightarrow 1$, $h(0) \rightarrow 1$, the box $\Omega$ becomes an infinite box, and Eq.
(\[eq:finres\]) becomes the standard undeformed result with $A_1$, $A_2^s$, $A_2^t$ and $A_2^u$ being the same contributions that we find in the [*standard*]{} $\lambda$ $\phi^4$ (non-deformed) model, corresponding to the tree level and to the $s$, $t$ and $u$ channels at one-loop level, respectively. Also, it is worth noticing that the perturbative expansion shows that the coupling constant which appears in the interacting Hamiltonian is modified as $\lambda \rightarrow \lambda N_0^4/h(0)$. This means that the effective coupling constant, $\lambda_{eff}\equiv \lambda N_0^4/h(0)$, in this framework is modified due to the presence of the deformation parameter $a=\pi/L$. Contribution from the boundary for the variation of the coupling constant ========================================================================= The comments in the last paragraph of the previous section allow us to connect the effective coupling constant appearing in the perturbation expansion, which is given by $\lambda N_0^4/h(0)$, with the size of the box we are considering, i.e., the linear dimension $Z$ of the box $\Omega$. In this section, based on this connection, we compute the variation of the effective coupling constant for two different values of $Z$, namely one corresponding to the time of nucleosynthesis of the standard cosmological model and the other to the present epoch. The choice of these values, as said before, was made just to enable a concrete calculation. With this choice, we are not assuming that the Universe is described by our model. In fact, we merely want to gain an idea of the contribution from the boundary, in a compact space, to the variation of the coupling constant in the framework of our approach.
In order to investigate the variation of the effective coupling constant, let us define $p \equiv N_0^4/h(0)$, which for two different values of $L$, namely $L_{\pm}$, gives $$p_{\pm} = \frac{\left(1-a_{\pm}^2/2^4 - 21 a_{\pm}^3/2^{12} +\cdots \right)^2}{1-3 a_{\pm}^2/2^5- 123 a_{\pm}^3/2^{13} + \ldots} \; , \label{eq:pmaismenos}$$ where we have used Eqs. (\[eq:enezero\]) and (\[eq:agazero\]) with $a_{\pm}=\pi/L_{\pm}$, $L_{\pm}=Z_{\pm}/x_0$ and $\lambda_{eff}^{\pm}= \lambda p_{\pm}$. In what follows, let us compute the dimensionless quantity, $\Delta\alpha/\alpha$, given by $$\frac{\Delta\alpha}{\alpha}= \frac{(\lambda_{eff}^{+})^2- (\lambda_{eff}^{-})^2}{(\lambda_{eff}^{+})^2} = 1-\frac{(\lambda_{eff}^{-})^2}{(\lambda_{eff}^{+})^2} \; , \label{eq:delta1}$$ where $+$ and $-$ refer to the present time and the time of nucleosynthesis, respectively. We have assumed that the creation and/or annihilation operators of the quantum harmonic oscillator on a periodic line create and/or annihilate a quantum particle. Along these lines, we showed in Sections II and III that there is a deformation parameter $a$ which is connected to the linear size of the box where the second quantized formalism is constructed through $a=\pi/L$ with $L$ given by $L=Z/x_0$. As discussed in Section II, $x_0$ is a scale where the deformation starts to become relevant. In what follows, we will assume that the value of $x_0$ is at least equal to the one corresponding to the scale of the electroweak phase transition, just to have a reference size which permits us to perform our calculation. Now, let us compute the dimensionless quantity $\Delta\alpha/\alpha$ for two different values of the dimensions of the box, namely, for $Z_+ \approx 10^{28} cm$ and $Z_- \approx 10^{19} cm$ in accordance with our choice. For these values of the size of the box, the deformation parameters are $a_+=\pi/L_+=\pi x_0/Z_+ \approx 10^{-15}$ and $a_- \approx 10^{-6}$.
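With the values of $a_{\pm}$ just quoted, Eq. (\[eq:delta1\]) can be evaluated directly from the truncated expansions of Eqs. (\[eq:enezero\]) and (\[eq:agazero\]); the sketch below (plain Python; the function name is ours) confirms that the resulting variation falls well below $10^{-12}$.

```python
def p(a):
    # p = N0^4 / h(0), with both factors truncated at order a^3,
    # using Eqs. (enezero) and (agazero)
    n0_sq = 1 - a ** 2 / 2 ** 4 - 21 * a ** 3 / 2 ** 12
    h0 = 1 - 3 * a ** 2 / 2 ** 5 - 123 * a ** 3 / 2 ** 13
    return n0_sq ** 2 / h0

a_plus, a_minus = 1e-15, 1e-6                # a = pi*x0/Z for the two box sizes
delta = 1 - (p(a_minus) / p(a_plus)) ** 2    # Eq. (delta1) with lambda_eff = lambda*p

assert 0 < delta < 1e-12    # tiny, positive variation of the effective coupling
```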
Due to the magnitude of the deformation parameters under consideration, it suffices to take the lowest-order expansion for $p_{\pm}$, which is $$p_{\pm} = 1- \frac{a_{\pm}^2}{2^5}+\cdots \; . \label{eq:pmaismenosexpand}$$ Taking into account this expansion and the estimated value of $a_-$, we obtain for $\Delta\alpha/\alpha$ the following result $$\frac{\Delta\alpha}{\alpha} = \frac{a_{-}^2}{2^5}+\cdots < 10^{-12} \; . \label{eq:estimativa}$$ Final comments ============== In this paper, we have constructed a QFT in a finite box. In order to construct this QFT, we used the hypothesis that particles living in a finite box with periodic boundary conditions are created and/or annihilated through the creation and/or annihilation operators, respectively, of a quantum harmonic oscillator on a circle. The quantum harmonic oscillator we have used is described by Mathieu’s equation and its associated creation and annihilation operators obey a deformed Heisenberg algebra. We have thus followed the treatment given in Refs. [@qft1] and [@qft2], which present the construction of a deformed QFT based on $q$-oscillators, in order to construct the present QFT in a finite box. The perturbative series we have found shows an effective coupling constant given by $\lambda_{eff}= \lambda N_0^4/h(0)$, where $N_0$ and $h(0)$ (see Eqs. (\[eq:enezero\]) and (\[eq:agazero\])) depend on the dimensionless quantity $L=Z/x_0$, in which $Z$ is a linear dimension of the finite box $\Omega$ and $x_0$ is a scale where the modified description of the creation of particles starts to be relevant. Even though our model is not a cosmological model, we estimated a bound on the variation of the effective coupling constant by taking into account two values of the linear dimension $Z$ of the finite box $\Omega$, one corresponding to the time of the nucleosynthesis of the standard cosmological model and the other to the present size of the Universe.
The result obtained, $\Delta\alpha/\alpha < 10^{-12}$, lies within the range of existing constraints on the variation of coupling constants between different epochs of the Universe [@varyingc]; we thus think that it would be interesting to analyze the approach considered in this paper within a cosmological model. [**Acknowledgments:**]{} The authors thank CNPq and PRONEX/CNPq for partial support. [30]{} See for instance: T. D. Lee, “Particle Physics and Introduction to Field Theory", Harwood Academic Publishers, New York, 1981. M. A. Rego-Monteiro, Eur. Phys. J. [**C 21**]{} (2001) 749. V. B. Bezerra, E. M. F. Curado and M. A. Rego-Monteiro, Phys. Rev. [**D 65**]{} (2002) 065020. A. J. Macfarlane, J. Phys. [**A 22**]{} (1989) 4581; L. C. Biedenharn, J. Phys. [**A 22**]{} (1989) L873. V. B. Bezerra, E. M. F. Curado and M. A. Rego-Monteiro, Phys. Rev. [**D 66**]{} (2002) 085013. V. B. Bezerra, E. M. F. Curado and M. A. Rego-Monteiro, Phys. Rev. [**D 69**]{} (2004) 085003. Y. Ohnuki and S. Kitakado, J. Math. Phys. [**34**]{} (1993) 2827. S. Tanimura, Prog. Theor. Phys. [**90**]{} (1993) 271; K. Takenaga, Phys. Rev. [**D 62**]{} (2000) 065001. R. Campbell, “Théorie Générale de L’ Équation de Mathieu", Masson et Cie. éditeurs, Paris, 1955. E. H. Lieb and D. C. Mattis, “Mathematical Physics in One Dimension", Academic, New York, 1966. S. S. Gubser and A. Hashimoto, Comm. Math. Phys. [**203**]{} (1999) 325, hep-th/9805140; M. Cvetič, H. Lü, C. N. Pope and T. A. Tran, Phys. Rev. [**D 59**]{} (1999) 126002, hep-th/9901002. E. L. Ince, Proc. Roy. Soc. Edin. [**46**]{} (1925) 20; S. Goldstein, Camb. Phil. Soc. Trans. [**23**]{} (1927) 303. “Tables Relating to Mathieu Functions", National Bureau of Standards, Columbia Univ. Press, New York, 1951. [**See Eq. (2.35) on page XVIII of this reference for the asymptotic expansion, Eq. (\[eq:incegoldstein\]) of this paper**]{}. E. M. F. Curado and M. A. Rego-Monteiro, J. Phys. [**A 34**]{} (2001) 3253. E. M. F. Curado, M. A. Rego-Monteiro and H.
N. Nazareno, Phys. Rev. [**A 64**]{} (2001) 12105; hep-th/0012244. M. A. Rego-Monteiro and E. M. F. Curado, Int. J. Mod. Phys. [**A 17**]{} (2002) 661. A. Dimakis and F. Muller-Hoissen, Phys. Let. [**B 295**]{} (1992) 242. J.-P. Uzan, Rev. of Mod. Phys [**75**]{} (2003) 403. [^1]: To appear in Physical Review D 2004 [^2]: It would be also possible to define an equation with quadratic powers of $W$ and $W^{\dagger}$, but the above equation is the simplest one. [^3]: $J_0$ is Hermitian and there exists a vacuum state. [^4]: $x_i \equiv (\vec{r_i},t_i)$
--- abstract: '**' author: - - '[Asma Afzal, Syed Ali Raza Zaidi, Des McLernon and Mounir Ghogho]{}[^1]' bibliography: - 'D2Dref.bib' title: 'Information-Centric Offloading in Cellular Networks with Coordinated Device-to-Device Communication' --- Introduction ============ Devices such as smart phones and tablets have fueled the demand for data-intensive applications, including ultra high-definition video streaming, social networking and e-gaming. This puts significant pressure on the traditional cellular networks, which are not designed to support such high data rates and reliability requirements. As a consequence, current research on fifth generation (5G) wireless networks is geared towards developing intelligent ways of data dissemination by deviating from the traditional host-centric network architecture to a more versatile information-centric architecture. Caching in the IP networks has already proved to be a promising way to reduce the overhead of backhaul communication. Borrowing from these principles, caching at the edge of the cellular networks potentially reduces the backhaul access cost in terms of capacity, latency and energy consumption by turning memory into bandwidth [@bastug2014living]. Recent observations have indicated that the data traffic contains many duplications of multimedia content requested by various users in the same vicinity [@woo2013comparison]. Therefore, users can leverage this trend to their advantage by accessing information pre-downloaded by their neighboring users using direct device-to-device (D2D) communication. D2D communication is a promising technique to improve the coverage and throughput of cellular networks [@lin2014overview]. Mobile users in close physical proximity can exchange popular files without the intervention of the base station (BS). This not only offloads the burden of duplicate transmissions from the BS, but it also provides higher rates due to short-range D2D communication [@malandrino2014toward].
Several techniques have been proposed to materialize the concept of integrating D2D communication with cellular networks. The major design questions are: Should D2D communication operate in the cellular uplink or downlink, licensed spectrum or unlicensed spectrum, and in the licensed spectrum should it be underlay or overlay, coordinated by the BS or uncoordinated? In this paper, we focus on coordinated in-band overlay D2D communication in the cellular downlink, where a macro base station (MBS) establishes, manages and arbitrates a D2D connection [@fodor2012design]. The reader is referred to a detailed discussion of the other D2D implementation techniques in [@asadi2014survey] and the references therein. The novelty of this paper is as follows. We propose a new information-centric offloading mechanism, whereby the MBS maintains a record of the contents previously downloaded by the users in its long-term coverage region. Based on this information, the MBS schedules a D2D link between a user and one of its $k$ neighboring D2D helpers subject to the content availability and helper selection scheme. These D2D helper devices can be considered as users that are not receiving any data from the MBS in the current radio frame and can transmit their data. We consider two different helper selection schemes, namely, 1) nearest helper selection (NS): where the MBS selects the D2D helper closest to the user possessing the requested content and 2) uniform selection (US): where the MBS uniformly selects a D2D helper first and checks for content availability later. The estimation of a user’s location is now much more accurate thanks to the built-in GPS and location apps in smart phones. Fig. \[fig:SystemModel\] displays a simple example of the scenario under consideration. The MBS examines its records for the arrived content requests and schedules possible D2D transmissions.
Here, User\#1 is served by its second nearest D2D helper, while User\#2 is served by the MBS as none of its $k$ neighboring helpers have the content. We employ stochastic geometry to quantify the performance improvement compared to conventional and cache-enabled single-tier cellular networks. Stochastic geometry has recently emerged as a powerful tool to accurately analyze the performance of large-scale cellular networks [@haenggi2012stochastic]. We make use of the Poisson point process (PPP) assumption in modeling the locations of the MBSs and D2D helpers to derive tractable expressions for various performance metrics. The main contributions of this article are summarized as follows. - For the information-centric offloading paradigm, we consider that both the D2D helpers and BSs are equipped with caches and an arbitrary user requests a certain content based on its popularity. It is important to note that in this work, we focus on devising efficient data dissemination techniques for a given content placement strategy. Based on the content placement strategy, helper selection schemes and other caching parameters, we derive expressions for the probability that an arbitrary user is served in D2D mode for both the NS and US schemes. We obtain bounds on this probability and study its behavior as the number of candidate helpers $k$ grows. - With the help of our stochastic geometry framework, we derive the distribution of distance between an arbitrary user and its $i$th nearest D2D helper within the cell using a disk approximation for a Voronoi cell. We show that this approximation is fairly accurate for various values of $i$ and compare it with the distribution of distance between the requesting user and its $i$th nearest D2D helper not necessarily present inside the cell. We investigate the conditions in which our derived distribution reduces to the unconstrained case.
- We characterize the coverage probability for individual D2D links and the probability that an arbitrary user is in coverage when it requests a particular content and operates in D2D mode with the NS and US schemes. We also validate our results with network simulations. - We explore two important performance metrics: the overall coverage probability and the average rate experienced by an arbitrary user requesting a particular content $c$. We show that there exists an optimal number of candidate D2D helpers $k$ which maximizes the overall coverage and the average rate. The optimal $k$ maximizing the coverage probability is independent of caching parameters or the requested content and only depends on the network parameters. However, that is not the case for the optimal $k$ maximizing the average rate. We also show that high performance gains can be harnessed compared to conventional cellular communication when the most popular contents are requested. ![Illustration of cache-enabled coordinated D2D network. The MBS pairs the requesting users with one of their $k$ neighbors depending on the content availability and helper selection scheme. If none of the $k$ neighbors have the content, the MBS serves the user itself. \[fig:SystemModel\]](SystemLevelDiagram) Related Work ------------ Characterization of the performance of cache-enabled cellular networks has been widely studied, e.g., in [@ji2014fundamental; @ji2016wireless; @golrezaei2014base]. However, all these works make use of the simplistic protocol model, where outage occurs if the intended receiver is at a distance greater than a fixed distance from the transmitter or there is another interfering transmitter present within the range of the receiver. The other approach makes use of the physical model, where outage occurs on the basis of the received signal-to-interference-and-noise ratio (SINR).
Stochastic geometry has been widely applied to analyze the physical layer metrics of large-scale wireless networks. For the case of cache-enabled cellular networks, the dynamics of content popularity, propagation conditions and spatial locations are employed in [@PerabathiniBKDC15; @sarzaidi2015; @tamoor2015modeling; @bastug2014cache; @afshang2015fundamentals; @chen2016cooperative; @afshang2015modeling; @AltieriPVG14] to quantify the performance gains. The analysis of rate and energy efficiency for single-tier cache-enabled cellular networks is carried out in [@bastug2014cache] and [@chen2016cooperative]. An optimal content placement strategy is devised in [@chen2016cooperative], which maximizes the rate coverage and the energy efficiency of the single-tier cellular networks. The authors in [@afshang2015modeling] and [@AltieriPVG14] consider clustered D2D networks, which operate in isolation from the underlying cellular network. In [@afshang2015modeling], the authors consider clustering to mimic spatial content locality without explicitly considering content popularity and storage. It is assumed that for a user in a given cluster, there will always be a device in that cluster with the requested content, i.e., a D2D wireless link is always established. In contrast, the authors in [@AltieriPVG14] consider D2D devices with caching and a Zipf-type content popularity distribution. Here, clustering is considered so that there is a finite number of transmissions within the cluster, multiplexed in time as in TDMA, with one link active at a given time. In [@afshang2015fundamentals], the analysis is carried out with different D2D transmitter selection schemes, but this work makes use of the same clustered-user model as in [@afshang2015modeling]. None of these works takes into account the coexistence of D2D communication with cellular networks or the fact that, if D2D communication is infeasible, the users can communicate with the MBS.
For the case of multi-tier analysis with caching, the authors in [@sarzaidi2015] and [@tamoor2015modeling] consider distributed caching, where a user can access data from the caches of multiple small base stations (SBSs) inside a cell. However, if the content is available in any one of the SBSs within the cell, the user is always served by its nearest SBS, assuming that the content transfer takes place among the SBSs. This is different from our case as we cannot expect such a level of cooperation between D2D helpers and need explicit characterization of the distances of the individual D2D helpers from the arbitrary user. The selection of cellular and D2D modes is studied for the uplink in [@lin2014spectrum] and [@elsawy2014analytical]. In [@lin2014spectrum], the decision to transmit in D2D mode is based on the distance to the receiver uniformly placed around the D2D transmitter, while in [@elsawy2014analytical], it also depends on the distance from the BS. Both of these approaches ignore the aspects of content availability, popularity and storage. Various content replacement policies and storage techniques are studied in [@niesen2014coded; @dabirmoghaddam2014understanding; @bacstuug2015transfer]. [@dabirmoghaddam2014understanding] shows how updating a cache by evicting the least recently used (LRU) content could provide performance gains. It is shown in [@fricker2012impact] that the least frequently used (LFU) policy outperforms LRU. [@bacstuug2015transfer] explores how caches could be updated by exploiting social ties between users using a transfer learning approach. [@niesen2014coded] proposes coded caching for delay-sensitive applications. In [@blaszczyszyn2015optimal] and [@avrachenkov2016optimization], the authors explore the effect of geometric placement of caches to devise optimal content placement strategies.
A simpler, fixed-caching approach is adopted in [@bastug2014cache; @LiuY15; @AltieriPVG14] and [@tamoor2015modeling], where the cache is not updated and the stored files are simply considered to follow the popularity distribution. The remainder of this paper is organized as follows: Section \[sec:System-Model\] describes the spatial setup, signal propagation, content popularity and caching models, and the information-centric offloading paradigm for both the NS and US schemes. Section \[sec:Distance-to-the\] provides the derivation of the distance between an arbitrary user and its $i$th nearest D2D helper within the cell. The distribution of this distance is then used to characterize the overall coverage and the average rate for the NS and US schemes in Section \[sec:Link-Spectral-Efficiency\]. Section \[sec:Results-and-Performance\] discusses the results and validates our analysis with network simulations. Section \[sec:Conclusion\] concludes the paper. System Model\[sec:System-Model\] ================================ We consider a cellular downlink (DL) scenario of MBSs overlaid with D2D helpers. The MBS schedules a requesting user with one of its neighboring D2D helpers inside the cell if the helper has the requested file. The network description and the key assumptions now follow. Spatial Model ------------- ![Spatial model of the network. MBSs are depicted by blue, filled diamonds; D2D helpers by red, filled triangles; and the requesting user by a filled magenta square. 
The $k$ candidate D2D helpers for D2D communication are marked with black asterisks (here, $k=4$) and $\lambda_{d}=10\lambda_{m}.$ \[fig:SpatialModel\]](SystemModel) According to the theory of homogeneous Poisson point processes (HPPPs), the distribution of $\Phi\left(\mathcal{A}\right)$, where $\mathcal{A}$ is a bounded Borel set in $\mathbb{R}^{2}$, is given as $$\mathbb{P}[\Phi\left(\mathcal{A}\right)=n]=\frac{\left(\lambda\mu(\mathcal{A})\right)^{n}}{n!}\text{exp}\left(-\lambda\mu(\mathcal{A})\right),$$ where $\lambda$ is the intensity of the HPPP and $\mu\left(\mathcal{A}\right)=\intop_{\mathcal{A}}dx$ is the Lebesgue measure on $\mathbb{R}^{2}$. For a disc of radius $r$ in $\mathbb{R}^{2}$, $\mu\left(\mathcal{A}\right)=\pi r^{2}.$ We consider that the MBSs, D2D helpers and the requesting users are distributed according to independent HPPPs $\Phi_{m}$, $\Phi_{d}$, and $\Phi_{u}$ with intensities $\lambda_{m}$, $\lambda_{d}$ and $\lambda_{u}$, respectively, where $\lambda_{u},\lambda_{d}\gg\lambda_{m}$. The requesting users are associated with the nearest MBS and the user association region is defined as $$\begin{aligned} \mathcal{S}_{i} & \overset{def}{=} & x\in\mathbb{R}^{2}\::\,\Vert y{}_{i}-x\Vert<\Vert y{}_{j}-x\Vert,\nonumber \\ & & \forall\,y{}_{i},y{}_{j}\in\Phi_{m},i\neq j,\label{eq:vcell}\end{aligned}$$ where $\mathcal{S}_{i}$ represents a Voronoi cell of the MBS located at $y_{i}\in\Phi_{m}$[^2]. Without any loss of generality, we measure performance at the requesting user located at the origin. This follows from the Palm distribution of HPPPs and Slivnyak’s theorem [@stoyan1987stochastic]. The MBS selects one of the $k$ closest D2D helpers and establishes a D2D link (the selection process is described in detail in the next section). A realization of the spatial setup is shown in Fig. \[fig:SpatialModel\].
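The counting distribution above also yields the standard recipe for simulating an HPPP on a bounded region: draw the number of points from a Poisson law of mean $\lambda\mu(\mathcal{A})$ and scatter them uniformly over $\mathcal{A}$. A minimal sketch for a disk of radius $R$ (plain Python; the function name is ours):

```python
import math
import random

def sample_hppp_disk(lam, R, rng):
    """Sample a homogeneous PPP of intensity lam on a disk of radius R:
    N ~ Poisson(lam * pi * R^2) points, each placed uniformly on the disk."""
    mean = lam * math.pi * R ** 2
    # Knuth's Poisson sampler (adequate for moderate means)
    threshold, n, prod = math.exp(-mean), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            break
        n += 1
    points = []
    for _ in range(n):
        r = R * math.sqrt(rng.random())      # inverse-CDF radius for uniformity
        theta = 2 * math.pi * rng.random()
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Averaging the point count over many realizations should recover $\lambda\pi R^{2}$, a simple sanity check on any such simulator.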
Propagation Model and Spectrum Access ------------------------------------- We assume that both the cellular and D2D links experience channel impairments including path loss and small-scale Rayleigh fading. The power received at the origin from the MBS/D2D helper located at $y\in\Phi_{n},n=\{m,d\}$ is given as $P_{n}h\,\Vert y\Vert^{-\alpha}\text{,}$ where $P_{m}$ and $P_{d}$ are the transmit powers of the MBS and D2D helper respectively, $\alpha$ represents the path loss exponent ranging between 2 and 5, and $h$ is the channel power, modeled as a unit-mean exponential RV representing the squared envelope of Rayleigh fading. We consider an in-band overlay spectrum access strategy, where fixed bandwidths $W_{m}$ and $W_{d}$ are allocated for cellular and D2D communication respectively. We assume universal frequency reuse across the network; however, the number of resource blocks exceeds the number of users within the cell, and hence there is no intra-cell interference. Content Popularity and Caching Model \[subsec:Content-Popularity-and\] ---------------------------------------------------------------------- The performance of caching is crucially determined by the content popularity distribution. It has been observed that the popularity of data follows a Zipf distribution, where the popularity of the $c$th content is proportional to the inverse of $c^{\zeta}$ for a real, positive skewness parameter $\zeta$. It is mathematically represented as $$pop(c)=\rho c^{-\zeta}\;\;1\leq c\leq L,\label{eq:zipf}$$ where $\rho=\left(\sum_{l=1}^{L}l^{-\zeta}\right)^{-1}$ is the distribution normalizing factor and $L$ is the file library size. $\zeta=0$ corresponds to uniform popularity, while a higher value of $\zeta$ results in a more skewed distribution.
Empirical evidence shows that the value of $\zeta$ lies between 0.6 and 0.8 for different content types including web, file sharing, user generated content (UGC) and video on demand (VoD) [@fricker2012impact]. We consider that the MBS and the D2D helpers are equipped with caches of sizes $C_{m}$ and $C_{d}$ respectively. All files are considered to have a unit size. Our analysis can easily be extended to variable file sizes, as each memory slot will then contain a chunk of a file. We further assume that user requests follow the independent reference model (IRM) as introduced in [@fricker2012impact]. According to the IRM, the user requests for a file in the library are independently generated following the popularity distribution and there is no spatio-temporal locality, i.e. identical contents have the same popularity in space and time [@sarzaidi2015]. ### Content Placement We consider that the MBS stores the $C_{m}$ most popular files in its cache, which coincides with the least frequently used (LFU) content placement strategy. Because the content popularity does not change rapidly in time, LFU placement is well suited for the MBS. The cellular hit rate for content $c$, which is the probability that content $c$ is present in the MBS’s cache, is given as $$h_{m}(c)=\mathbb{I}_{c\leq C_{m}},$$ where $\mathbb{I}_{c\leq C_{m}}$ is an indicator variable taking the value 1 when $c\in\{1,\ldots,C_{m}\}$ and 0 otherwise. When there is a set of candidate D2D helpers which can serve a single user, LFU placement at all D2D helpers is not optimal. Such a scenario requires a collaborative content placement strategy which takes into account the number and the locations of the D2D helpers [@blaszczyszyn2015optimal; @avrachenkov2016optimization]. Investigating the optimal content placement strategy for D2D helpers in this network setup is a research issue in itself and is left for future work.
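As a concrete illustration, the Zipf popularity profile and the LFU hit rate at the MBS can be computed in a few lines; the numeric values of $L$, $\zeta$ and $C_{m}$ below are the illustrative ones used later in Table \[tab:List\].

```python
import numpy as np

L, zeta, C_m = 10**4, 0.8, 500        # library size, skewness, MBS cache size (assumed)
c = np.arange(1, L + 1, dtype=float)
rho = 1.0 / np.sum(c**-zeta)          # Zipf normalizing factor rho = (sum l^{-zeta})^{-1}
pop = rho * c**-zeta                  # pop(c) = rho * c^{-zeta}

def h_m(content):
    """LFU hit rate at the MBS: 1 iff the content ranks among the C_m most popular."""
    return 1.0 if content <= C_m else 0.0
```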
We consider a sub-optimal but tractable content placement strategy for the D2D helpers to quantify the advantage of employing content-centric offloading in D2D mode. We consider that each D2D helper stores content $c$ in each memory slot independently according to the popularity distribution $pop(c)$. The D2D hit rate for content $c$ is then given as $$\begin{aligned} h_{d}(c) & = & 1-\mathbb{P}[\textnormal{Content }c\text{ not present in }C_{d}\text{ slots}]\nonumber \\ & = & 1-[1-\rho c^{-\zeta}]^{C_{d}}.\label{eq:hitid}\end{aligned}$$ Information-Centric Offloading ============================== We assume that for a typical user requesting content $c$, the MBS will examine the contents of up to $k$ neighboring D2D helpers within the cell. The selection of the D2D helper depends on the helper selection scheme and the popularity of the requested content itself. The following proposition gives the probability of selection of a particular D2D helper. The probability that an arbitrary user requesting content $c$ is served by the $i$th nearest D2D helper within the cell under the NS and US schemes is given by $$p_{d,NS}^{(k)}(i,c)=\frac{343}{30}\sqrt{\frac{14}{\pi}}\frac{\textnormal{\ensuremath{\Gamma}}(i+3.5)\,\eta_{d}^{i}\,\tensor[_{2}]{F}{_{1}}\left(1,i+3.5;i+1;\frac{\eta_{d}}{(\eta_{d}+3.5)}\right)}{i!\,(\eta_{d}+3.5)^{i+3.5}}\left(1-h_{d}(c)\right)^{i-1}h_{d}(c),\label{eq:pd2diNS}$$ and $$\begin{aligned} p_{d,US}^{(k)}(i,c) & = & h_{d}(c)\biggl[\frac{343}{30}\sqrt{\frac{14}{\pi}}\frac{\textnormal{\ensuremath{\Gamma}}(k+4.5)\,\eta_{d}^{k+1}\,\tensor[_{2}]{F}{_{1}}\left(1,k+4.5;k+2;\frac{\eta_{d}}{(\eta_{d}+3.5)}\right)}{k(k+1)!\,(\eta_{d}+3.5)^{k+4.5}}+\label{eq:pd2diUS}\\ & & \sum_{j=0}^{k-i}\frac{3.5^{3.5}\textnormal{\ensuremath{\Gamma}}(i+j+3.5)\eta_{d}^{i+j}}{\textnormal{\ensuremath{\Gamma}}(3.5)\,(i+j)\,(i+j)!\,(\eta_{d}+3.5)^{i+j+3.5}}\biggr]\nonumber \end{aligned}$$ respectively, where $\eta_{d}=\lambda_{d}/\lambda_{m}$, $i\in\{1,\ldots,k\},$
$\textnormal{\ensuremath{\Gamma}}(a)$ is the complete Gamma function and $\tensor[_{2}]{F}{_{1}}\left(a,b;c;x\right)$ is the generalized hypergeometric function. For the user to be served by the $i$th nearest D2D helper, there must be at least $i$ D2D helpers inside the cell. In the NS scheme, the user is served by the $i$th helper if no closer helper has the requested content. This implies $$\begin{aligned} p_{d,NS}^{(k)}(i,c) & = & \mathbb{P}\left[N_{d}\geqslant i\right]\left(1-h_{d}(c)\right)^{i-1}h_{d}(c),\label{eq:pd2di}\end{aligned}$$ where $N_{d}$ is the number of D2D helpers in a Voronoi cell whose probability mass function is given as [@yu2013downlink] $$\mathbb{P}\left[N_{d}=j\right]=\frac{3.5^{3.5}\textnormal{\ensuremath{\Gamma}}(j+3.5)(\lambda_{d}/\lambda_{m})^{j}}{\textnormal{\ensuremath{\Gamma}}(3.5)\,j!\,(\lambda_{d}/\lambda_{m}+3.5)^{j+3.5}}.$$ This implies $\mathbb{P}\left[N_{d}\geqslant i\right]=1-\sum_{j=0}^{i-1}\mathbb{P}\left[N_{d}=j\right]$. Substituting this expression in (\[eq:pd2di\]) gives (\[eq:pd2diNS\]). For the US scheme, the user is served by the $i$th helper if it is uniformly selected and has the requested content. This implies $$p_{d,US}^{(k)}(i,c)=h_{d}(c)\left[\frac{1}{k}\mathbb{P}\left[N_{d}>k\right]+\sum_{j=0}^{k-i}\frac{1}{i+j}\mathbb{P}\left[N_{d}=i+j\right]\right].\label{eq:pdUSi}$$ Substituting the expressions for $\mathbb{P}\left[N_{d}>k\right]$ and $\mathbb{P}\left[N_{d}=i+j\right]$ completes the proof. The probability that an arbitrary user is served in D2D mode under the NS and US schemes is a straightforward summation of $p_{d,NS}^{(k)}(i,c)$ and $p_{d,US}^{(k)}(i,c)$ over $i\in\{1,2,\ldots,k\}.$ This gives $$p_{d,NS}^{(k)}(c)=\sum_{i=1}^{k}p_{d,NS}^{(k)}(i,c),\label{eq:pd2dNSc}$$ and $$p_{d,US}^{(k)}(c)=p_{d,US}^{(1)}(c)=h_{d}(c)\left[1-(1+3.5^{-1}\eta_{d})^{-3.5}\right].\label{eq:pd2dUSc}$$ It is interesting to note that in the case of the US scheme, $p_{d,US}^{(k)}(c)$ is independent of $k$.
This is because the probability of the D2D mode depends on the contents of only one helper selected at random. \[cor:hitSimple\] The bounds on $p_{d,\varPi}^{(k)}(i,c),\varPi=\{NS,US\}$ with respect to $\eta_{d}$ are given as $$p_{d,NS}^{(k)}(i,c)\leq\left(1-h_{d}(c)\right)^{i-1}h_{d}(c)\label{eq:pdiNSsimple}$$ and $$p_{d,US}^{(k)}(i,c)\leq\frac{1}{k}h_{d}(c)\label{eq:pdiUSsimple}$$ where the equalities hold when $\lambda_{d}\gg\lambda_{m}$ and $\eta_{d}\rightarrow\infty$. As $\eta_{d}\rightarrow\infty$, $\mathbb{P}\left[N_{d}\geqslant k\right]\rightarrow1$, i.e. there are almost surely at least $k$ D2D helpers within the cell, and (\[eq:pd2di\]) reduces to $\left(1-h_{d}(c)\right)^{i-1}h_{d}(c)$. It can be easily seen from (\[eq:pdUSi\]) that as $\mathbb{P}\left[N_{d}\geqslant k\right]\rightarrow1$, $\mathbb{P}\left[N_{d}=i+j\right]\rightarrow0,$ where $i+j<k$. Hence, $p_{d,US}^{(k)}(c)$ reduces to $h_{d}(c)$. Before moving on to further analysis, we explore the behavior of the D2D mode probability in Figs. \[fig:Mode-Selection-Probability\] and \[fig:Effect-of-\]. The values of the simulation parameters used in plotting the results are listed further on in Table \[tab:List\] unless stated otherwise. We can see that $p_{d,NS}^{(k)}(c)$ increases rapidly with $k$ at first, but diminishing gains are observed as $k$ increases further. A sharp decrease in $p_{d,NS}^{(k)}(c)$ is observed as the requested content becomes less popular or the skewness parameter $\zeta$ decreases. As established in (\[eq:pd2dUSc\]), we see that there is no effect of increasing $k$ on $p_{d,US}^{(k)}(c).$ In Fig. \[fig:D2Dapprox\], we plot the D2D mode probabilities $p_{d,\varPi}^{(k)}(c)=\sum_{i=1}^{k}p_{d,\varPi}^{(k)}(i,c),\varPi=\{NS,US\}$ using the upper bounds from Corollary \[cor:hitSimple\] and compare them with the actual values of $p_{d,\varPi}^{(k)}(c)$ in (\[eq:pd2dNSc\]) and (\[eq:pd2dUSc\]).
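The comparison between the exact NS selection probability and its bound can be reproduced numerically. The sketch below evaluates the pmf of $N_{d}$ in log-space so that large arguments of the Gamma function do not overflow; the hit rate $h_{d}(c)=0.3$ and $\eta_{d}=10$ are illustrative assumptions.

```python
import math

def pmf_Nd(j, eta):
    """P[N_d = j]: negative-binomial-type pmf for the number of helpers per cell,
    evaluated in log-space so large j does not overflow Gamma(j + 3.5)."""
    logp = (math.lgamma(j + 3.5) - math.lgamma(3.5) - math.lgamma(j + 1)
            + 3.5 * math.log(3.5) + j * math.log(eta) - (j + 3.5) * math.log(eta + 3.5))
    return math.exp(logp)

def P_Nd_geq(i, eta):
    return 1.0 - sum(pmf_Nd(j, eta) for j in range(i))

def p_d_NS(i, hd, eta):
    """Exact probability that the i-th nearest in-cell helper serves the user (NS)."""
    return P_Nd_geq(i, eta) * (1 - hd)**(i - 1) * hd

def p_d_NS_bound(i, hd):
    """Upper bound of the corollary, exact in the limit eta_d -> infinity."""
    return (1 - hd)**(i - 1) * hd
```

The gap between the exact value and the bound shrinks as $\eta_{d}$ grows, matching the observation that the bounds are tight for $\eta_{d}\geq10$.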
We see that the deviation for the NS scheme becomes large as the value of $k$ increases, but convergence is fast and the bounds are fairly tight for $\eta_{d}\geq10$. ![Effect of helper density on D2D mode probability.\[fig:D2Dapprox\]](pd2dKapprox) Distance to the $i$th nearest D2D helper within a Macrocell\[sec:Distance-to-the\] ================================================================================== One of the main contributions of this paper is to characterize the distribution of the distance between the typical user and the $i$th nearest D2D helper within the macro cell. It is a well-known fact that the distance between the nearest neighbors for a 2-D Poisson process is Rayleigh distributed and this has been widely adopted for the stochastic geometry analysis of cellular networks [@lin2014spectrum; @elsawy2014analytical], [@andrews2011tractable; @bastug2014cache]. In our case, however, the MBS only keeps a record of the files stored in the memory of D2D helpers within its coverage region. Therefore, it can only connect the requesting user with the helpers within its cell. Fig. \[fig:SpatialModel\] illustrates that in our spatial setup, the $i$th nearest D2D helper is not always within the macrocell. Hence, this adds a layer of complexity to our model as the distance is no longer independent of the geometrical attributes of the cell, including its shape and size. The distribution of the exact shape and size of a typical Voronoi cell in a 2-D Poisson Voronoi tessellation is still unknown. In their analysis of bivariate Poisson processes in [@foss1996voronoi], Foss and Zuyev make use of the maximal disk approximation for the Voronoi cell. The maximal disk, $B_{max}$, is the largest disk centered at the MBS inscribing the Voronoi cell. 
The distribution of the radius $X$ of $B_{max}$ is straightforward to characterize: $X\geq x$ if and only if there is no other MBS within a distance $2x$ of the tagged MBS, so $\mathbb{P}[X\geq x]=\text{exp}\left(-4\lambda_{m}\pi x^{2}\right).$ This implies $$f_{X}(x)=8\lambda_{m}\pi x\text{ exp}(-4\lambda_{m}\pi x^{2}).\label{eq:rhomax}$$ In this work, we utilize the maximal disk approximation for the Voronoi cell to derive the distribution of the distance between the typical user and its $i$th nearest D2D helper[^3]. The following lemmas provide some useful preliminary results needed for the characterization of the distance distribution. The probability that the typical user lies inside $B_{max}$ is the probability that the radius of $B_{max}$ is greater than the distance between the MBS and the typical user. It is a constant and is equal to $$p_{in}=\mathbb{P}\left[X\geq Y\right]=1/5,\label{eq:pin}$$ where $f_{Y}(y)=2\lambda_{m}\pi y\,\textnormal{exp}(-\lambda_{m}\pi y^{2}).$ The distance between the typical user and its tagged MBS is Rayleigh distributed [@andrews2011tractable]. This implies $p_{in}=\int_{0}^{\infty}\left[1-F_{X}(y)\right]f_{Y}(y)\,dy$, where $F_{X}(y)=\int_{0}^{y}f_{X}(x)\,dx$. Solving the integrals, we get (\[eq:pin\]).
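The value $p_{in}=1/5$ can be confirmed by direct numerical integration. Note that the integrand $\mathbb{P}[X\geq y]f_{Y}(y)=2\lambda_{m}\pi y\,e^{-5\lambda_{m}\pi y^{2}}$ makes $p_{in}$ independent of the MBS density, so any $\lambda_{m}$ works in the sketch below (the truncation point of the integral is an assumption of the sketch).

```python
import math
import numpy as np

lam_m = 1.0                                  # any density works; p_in is density-free
y = np.linspace(0.0, 5.0, 200001)
# integrand: P[X >= y] * f_Y(y) = exp(-4 lam pi y^2) * 2 lam pi y exp(-lam pi y^2)
g = np.exp(-4*lam_m*math.pi*y**2) * 2*lam_m*math.pi*y*np.exp(-lam_m*math.pi*y**2)
p_in = float(np.sum((g[1:] + g[:-1]) * 0.5 * (y[1] - y[0])))   # trapezoidal rule
```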
The probability that there are at least $i$ D2D helpers inside $B_{max}$ is given as $$p_{N_{d}}^{(i)}=5(1+4\eta_{d}^{-1})^{-i}+\frac{10}{3}(1+6\eta_{d}^{-1})^{-i}-8(1+5\eta_{d}^{-1})^{-i}.\label{eq:pNdi}$$ Given a disk $B_{max}$ with radius $X=x$, the probability that there are at least $i$ D2D helpers inside the disk is given as $$\begin{aligned} \mathbb{P}\left[N_{d}\geq i|X=x\right] & = & 1-\sum_{j=0}^{i-1}\frac{\left(\lambda_{d}\pi x^{2}\right)^{j}}{j!}\textnormal{exp}\left(-\lambda_{d}\pi x^{2}\right)=1-\frac{\textnormal{\ensuremath{\Gamma}}(i,\lambda_{d}\pi x^{2})}{\textnormal{\ensuremath{\Gamma}}(i)}.\end{aligned}$$ Integrating over $X=x$ while ensuring $X>Y$, we obtain $$p_{N_{d}}^{(i)}=\intop_{0}^{\infty}\biggl[\intop_{y}^{\infty}\left[1-\frac{\textnormal{\ensuremath{\Gamma}}(i,\lambda_{d}\pi x^{2})}{\textnormal{\ensuremath{\Gamma}}(i)}\right]\,f_{X}(x)\,F_{Y}(x)\,dx\biggr]\,f_{Y}(y)\,dy,$$ where $F_{Y}(x)=\int_{0}^{x}f_{Y}(y)\,dy$. Simplification after evaluating the integrals and normalizing with $p_{in}$ yields (\[eq:pNdi\]). We now give the distance distribution in the following theorem.
\[prop:dist\]The distribution of the distance between the typical user and its $i$th nearest D2D helper within the cell can be well approximated using the inscribed disk approximation for a Voronoi cell and is given as $$\begin{gathered} f_{R_{i}}(r)=\biggl[\intop_{0}^{\infty}f_{Y}(y)\intop_{a_{1}}^{a_{2}}f_{i,1}(r,y,x)\,f_{X}(x)\,F_{Y}(x)dx\,dy+f_{i,2}(r)\kappa(r)\biggr]\frac{1}{p_{in}\,p_{N_{d}}^{(i)}},\label{eq:fr}\end{gathered}$$ where $$f_{i,1}(r,y,x)=\frac{\lambda_{d}^{i}}{\textnormal{\ensuremath{\Gamma}}(i)}\nabla(r,y,x)^{i-1}\nabla^{'}(r,y,x)\,\textnormal{exp}(-\lambda_{d}\nabla(r,y,x)),\label{eq:f1r}$$ $$f_{i,2}(r)=2\frac{\left(\lambda_{d}\pi\right)^{i}}{\textnormal{\ensuremath{\Gamma}}(i)}r^{2i-1}\textnormal{exp}(-\lambda_{d}\pi r^{2}),\label{eq:f2r}$$ $$\begin{aligned} \kappa(r) & = & \frac{\textnormal{exp}\left(-4b\right)}{15}+b\sqrt{\pi}\biggl[\frac{\sqrt{6}}{9}\textnormal{exp}\left(\frac{b}{6}\right)\textnormal{erfc}\left(\frac{5\sqrt{6b}}{6}\right)-\frac{4\sqrt{5}}{25}\textnormal{exp}\left(-\frac{4b}{5}\right)\textnormal{erfc}\left(\frac{4\sqrt{5b}}{5}\right)\biggr],\nonumber \\ \label{eq:kappaR}\end{aligned}$$ where $b=\lambda_{m}\pi r^{2}$, $\nabla(r,y,x)=r^{2}\arccos\left(\frac{\omega_{1}}{2y\,r}\right)+x^{2}\arccos\left(\frac{\omega_{2}}{2y\,x}\right)-\frac{1}{2}\sqrt{4y^{2}x^{2}-\omega_{2}^{2}}$, $\omega_{1}=r^{2}+y^{2}-x^{2}$, $\omega_{2}=x^{2}+y^{2}-r^{2}$, $\nabla^{'}(r,y,x)$ is the derivative of $\nabla(r,y,x)$ with respect to $r$, $a_{1}=\max(y,r-y)$ and $a_{2}=r+y$. Please refer to Appendix \[sec:Proof1\]. The expression in (\[eq:fr\]) is validated with network simulations in Section \[sec:Results-and-Performance\]. Before further analysis, we develop some insights on the derived distance distribution in (\[eq:fr\]).
We can write $f_{R_{i}}=T_{1}+T_{2}$, where $T_{1}=(p_{in}\,p_{N_{d}}^{(i)})^{-1}\intop_{0}^{\infty}\intop_{a_{1}}^{a_{2}}f_{i,1}(r,y,x)\,f_{X}(x)\,F_{Y}(x)dx\,f_{Y}(y)dy$ and $T_{2}=(p_{in}\,p_{N_{d}}^{(i)})^{-1}f_{i,2}(r)\kappa(r)$. We wish to see how the density of MBSs impacts $T_{1}$ and $T_{2}$ and in turn $f_{R_{i}}$. \[cor:sparseDist\]For sparse networks, i.e. when $\lambda_{m}\rightarrow0(\eta_{d}\rightarrow\infty),$ $f_{R_{i}}(r)$ reduces to the distribution of distance to the unconstrained nearest D2D helper and is given by (\[eq:f2r\]). Referring to Appendix \[sec:Proof1\], we see that when $\lambda_{m}\rightarrow0$, $x\gg r$ and $b(o,r)$ almost surely lies inside $B_{max}$. This in turn means $T_{1}\rightarrow0$. However, as $\lambda_{m}\rightarrow0$, we see from (\[eq:kappaR\]) and (\[eq:pNdi\]) that $\kappa(r)=1/15$ and $p_{N_{d}}^{(i)}=1/3$. As $p_{in}=1/5$ is fixed, $T_{2}$ reduces to $f_{i,2}(r)$. Fig. \[fig:T1T2\] reinforces the result in Cor. \[cor:sparseDist\]. We compare (\[eq:fr\]) with the unconstrained $i$th nearest neighbor distribution [@moltchanov2012survey]. We see that when the network is sparse, the term $T_{2}$ dominates $f_{R_{i}}(r)$ and the distribution of the distance to the $i$th nearest neighbor essentially approaches that of unconstrained case. This conclusion can be intuitively explained as we would expect that for very large cell sizes, the $i$th nearest D2D helper will reside in the same macrocell. However, as the MBS density increases, $T_{1}$ begins to increase and cannot be ignored. That is when (\[eq:fr\]) begins to significantly deviate from (\[eq:f2r\]). Performance Analysis under NS and US schemes\[sec:Link-Spectral-Efficiency\] ============================================================================ To assess the performance of cellular networks enhanced with coordinated D2D communication, we define the following quality-of-service (QoS) parameters. 
Overall coverage probability ----------------------------- The typical user is in coverage in cellular mode when the received signal to interference and noise ratio (SINR) is greater than a certain modulation dependent decoding threshold $\tau_{m}$. This is mathematically characterized as $$\varGamma_{m}=\mathbb{P}\left[SINR_{m}\geq\tau_{m}\right].$$ Similarly, in D2D mode, when the user is served by its $i$th nearest D2D helper, the coverage probability is written as $$\Gamma_{d,i}=\mathbb{P}\left[SINR_{d,i}\geq\tau_{d}\right],$$ where $\tau_{d}$ is the SINR threshold in D2D mode. We define the overall coverage probability $\Gamma_{\varPi}^{(k)}(c)$ of an arbitrary user requesting content $c$ by the following expression $$\Gamma_{\varPi}^{(k)}(c)=(1-p_{d,\varPi}^{(k)}(c))\Gamma_{m}+p_{d,\varPi}^{(k)}(c)\Gamma_{d,\varPi}^{(k)}(c),\label{eq:OverallCov}$$ where $\Gamma_{d,\varPi}^{(k)}(c)$ is the probability of coverage in D2D mode for a given $k$, content request $c$ and D2D helper selection scheme $\varPi=\{NS,US\}$. Here, $p_{d,\varPi}^{(k)}(c)$ and $\left(1-p_{d,\varPi}^{(k)}(c)\right)$ are the probabilities for D2D and cellular modes respectively. Our first step is to obtain the D2D and cellular coverage probabilities of a typical link. When the typical user is operating in cellular mode, we have from [@andrews2011tractable] $$\Gamma_{m}=\pi\lambda_{m}\int_{0}^{\infty}\textnormal{exp}\left(-\pi\lambda_{m}\nu\left(1+\delta_{m}(\tau_{m},\alpha)\right)-\frac{\tau_{m}\sigma^{2}}{P_{m}}\nu^{\alpha/2}\right)d\nu,\label{eq:covC}$$ where $\Gamma_{m}$ is the cellular coverage probability and $\delta_{m}(\tau_{m},\alpha)=\tau_{m}^{2/\alpha}\intop_{\tau_{m}^{-2/\alpha}}^{\infty}\left(1+u^{\alpha/2}\right)^{-1}du$.
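As a sanity check, in the interference-limited regime ($\sigma^{2}=0$) with $\alpha=4$, the coverage integral of [@andrews2011tractable], with $\delta_{m}=\tau^{2/\alpha}\int_{\tau^{-2/\alpha}}^{\infty}(1+u^{\alpha/2})^{-1}du$, collapses to the known closed form $\Gamma_{m}=\left(1+\sqrt{\tau}\arctan\sqrt{\tau}\right)^{-1}$. The sketch below evaluates the integrals numerically (the truncation limits and grid sizes are assumptions of the sketch) and recovers that value.

```python
import math
import numpy as np

def trap(f, x):
    """Trapezoidal rule on a sampled integrand."""
    return float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(x)))

def delta_m(tau, alpha, u_max=400.0, n=200000):
    """tau^{2/alpha} * int_{tau^{-2/alpha}}^{inf} du/(1 + u^{alpha/2}), truncated at u_max."""
    u = np.linspace(tau**(-2.0/alpha), u_max, n)
    return tau**(2.0/alpha) * trap(1.0/(1.0 + u**(alpha/2.0)), u)

def cov_cellular(tau, lam_m=1.0, alpha=4.0, n=200000):
    """Interference-limited (sigma^2 = 0) evaluation of the cellular coverage integral."""
    d = delta_m(tau, alpha)
    v = np.linspace(0.0, 20.0/(math.pi*lam_m*(1.0 + d)), n)
    return math.pi*lam_m*trap(np.exp(-math.pi*lam_m*v*(1.0 + d)), v)
```

As expected from the closed form, the result is independent of $\lambda_{m}$ in this regime.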
![Effect of varying $\lambda_{m}$ on $T_{1}$ and $T_{2}$: $i=1,\lambda_{d}=200/A,A=\pi500^{2}.$ \[fig:T1T2\]](PDF_T1_T2) \[prop:covD\]Given that the typical user is served in D2D mode, the probability of coverage under the NS and US schemes can be expressed as $$\Gamma_{d,\varPi}^{(k)}(c)=\frac{\sum_{i=1}^{k}p_{d,\varPi}^{(k)}(i,c)\Gamma_{d,i}}{\sum_{i=1}^{k}p_{d,\varPi}^{(k)}(i,c)},\;\varPi=\{NS,US\},\label{eq:covDk}$$ where $\Gamma_{d,i}$ is the coverage probability when the typical user is served by the $i$th nearest D2D helper and is given as $$\begin{aligned} \varGamma_{d,i} & \approx & \intop_{r=0}^{\infty}\textnormal{exp}\left(-2\pi\frac{s_{d}\tilde{\lambda}_{m}\delta_{d}(s_{d},\alpha)}{(\alpha-2)}\right)\textnormal{exp}\left(-s_{d}\frac{\sigma^{2}}{P_{d}}\right)f_{R_{i}}(r)\,dr\label{eq:covD2D}\end{aligned}$$ where $s_{d}=\tau_{d}r^{\alpha}$, $\delta_{d}(s_{d},\alpha)=\mathbb{E}_{Q}\left[q^{-(\alpha-2)}\tensor[_{2}]{F}{_{1}}\left(1,\bar{\alpha};1+\bar{\alpha};-s_{d}q^{-\alpha}\right)\right]$, $\bar{\alpha}=1-2/\alpha$ and $f_{Q}(q)=2\pi\tilde{\lambda}_{m}q\,\textnormal{exp}(-\tilde{\lambda}_{m}\pi q^{2}).$ Please refer to Appendix \[sec:Proof2\]. Average Rate ------------ Using a similar exposition as in the previous subsection, we express the average rate $T_{\varPi}^{(k)}(c)$ experienced by an arbitrary user requesting content $c$ under the NS and US schemes as $$T_{\varPi}^{(k)}(c)=\left(1-p_{d,\varPi}^{(k)}(c)\right)\overline{R_{m,\varPi}}(c)+p_{d,\varPi}^{(k)}(c)\overline{R_{d,\varPi}}(c)\text{\;bps},\label{eq:tp}$$ where $p_{d,\varPi}^{(k)}(c)$ and $\left(1-p_{d,\varPi}^{(k)}(c)\right)$ are the probabilities for D2D and cellular communication and $\overline{R_{m,\varPi}}(c)$ and $\overline{R_{d,\varPi}}(c)$ are respectively the average cellular and D2D rates.
\[prop:Rd\]The average rate experienced by an arbitrary user requesting content $c$ in D2D mode under the NS and US schemes is expressed as $$\overline{R_{d,\varPi}^{(k)}}(c)=W_{d}\gamma\left(\eta_{u_{d,\varPi}}\right)\frac{\sum_{i=1}^{k}p_{d,\varPi}^{(k)}(i,c)R_{d,i}}{p_{d,\varPi}^{(k)}(c)},$$ $$R_{d,i}=\mathbb{E}\left[\textnormal{log}_{2}(1+SINR_{d,i})\right],\label{eq:Rdi}$$ where $\varPi=\{NS,US\}$, $\gamma(a)=\left(1-\textnormal{exp}\left(-a\right)\right)/a$, $W_{d}$ is the bandwidth reserved for D2D communication and $\eta_{u_{d,\varPi}}=\lambda_{u_{d,\varPi}}/\lambda_{m}$. Here, $\lambda_{u_{d,\varPi}}=\lambda_{u}\rho\sum_{c=1}^{L}c^{-\zeta}p_{d,\varPi}^{(k)}(c)$ is the average density of users operating in D2D mode. The average D2D rate can be written as $$\overline{R_{d,\varPi}^{(k)}}(c)=\mathbb{E}\left[\frac{W_{d}}{N_{u_{d,\varPi}}}\right]R_{d,\varPi}^{(k)}(c),$$ where $R_{d,\varPi}^{(k)}(c)=\sum_{i=1}^{k}p_{d,\varPi}^{(k)}(i,c)R_{d,i}/p_{d,\varPi}^{(k)}(c)$ is the average spectral efficiency in D2D mode and $\mathbb{E}\left[W_{d}/N_{u_{d,\varPi}}\right]$ is the average bandwidth available for communication on a D2D link, where $N_{u_{d,\varPi}}\sim\textnormal{Poisson}\left(\eta_{u_{d,\varPi}}\right)$ is the number of simultaneously active users in D2D mode inside the cell[^4]. Hence, $$\begin{aligned} \mathbb{E}\left[1/N_{u_{d,\varPi}}\right] & = & \sum_{n=1}^{\infty}\frac{1}{n}\frac{(\eta_{u_{d,\varPi}})^{n-1}}{(n-1)!}\textnormal{exp}\left(-\eta_{u_{d,\varPi}}\right),\nonumber \\ & = & \eta_{u_{d,\varPi}}^{-1}\left(1-\textnormal{exp}\left(-\eta_{u_{d,\varPi}}\right)\right).\end{aligned}$$ where the summation starts from $n=1$ to account for the presence of the user under consideration.
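The bandwidth-sharing factor $\gamma(a)=(1-e^{-a})/a$ derived above is easy to validate by Monte Carlo: draw the number of other active users as Poisson and average the share of the tagged user. The test load $a=3$ and the RNG seed are arbitrary choices for this sketch.

```python
import math
import numpy as np

def gamma_share(a):
    """gamma(a) = (1 - e^{-a})/a, i.e. E[1/(1 + M)] with M ~ Poisson(a) other users."""
    return (1.0 - math.exp(-a)) / a

rng = np.random.default_rng(7)
a = 3.0                                   # arbitrary test load (users per cell)
others = rng.poisson(a, size=10**6)       # other simultaneously active users
mc = float(np.mean(1.0 / (1.0 + others))) # tagged user's average share of the band
```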
\[prop:Rc\]The average rate experienced by an arbitrary user requesting content $c$ in cellular mode under the NS and US schemes is expressed as $$\overline{R_{m,\varPi}^{(k)}}(c)=W_{m}\hat{R}_{m}\gamma\left(\eta_{u_{m,\varPi}}\right)$$ where $\hat{R}_{m}(c)=R_{m}\left(\mathbb{I}_{c\leq C_{m}}+\beta\,\mathbb{I}_{c>C_{m}}\right)$ is the average capacity of cellular links, $\mathbb{I}_{c\leq C_{m}}$ and $\mathbb{I}_{c>C_{m}}$ are respectively the cellular hit and miss indicators, $R_{m}=\mathbb{E}\left[\textnormal{log}_{2}(1+SINR_{m})\right]$, $\beta$ is the backhaul delay coefficient, $W_{m}$ is the cellular bandwidth, and $\eta_{u_{m,\varPi}}=\lambda_{u_{m,\varPi}}/\lambda_{m}$ with $\lambda_{u_{m,\varPi}}=\lambda_{u}(1-\rho\sum_{c=1}^{L}c^{-\zeta}p_{d,\varPi}^{(k)}(c))$. The proof follows along the same lines as Proposition \[prop:Rd\], with the exception that the average density of cellular users is $\lambda_{u}-\lambda_{u_{d,\varPi}}$, i.e. all the users which are not operating in D2D mode are shifted to cellular mode. Furthermore, when the requested content is not present in the MBS cache, the average rate of the cellular link is reduced by a factor $\beta$, which accounts for the delay introduced by fetching the content from the backhaul.
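A minimal sketch of this cellular-mode rate, assuming illustrative values for $R_{m}$, $W_{m}$ and the normalized load $\eta_{u_{m,\varPi}}$: a cache hit is served at the full average link rate, while a miss is scaled by the backhaul delay coefficient $\beta$.

```python
import math

def avg_cellular_rate(c, R_m, W_m, eta_u, C_m=500, beta=0.8):
    """W_m * gamma(eta_u) * R_hat_m(c): a cache hit is served at the full average
    link rate R_m; a miss is scaled by the backhaul delay coefficient beta.
    R_m, W_m and eta_u are assumed inputs, not values derived here."""
    share = (1.0 - math.exp(-eta_u)) / eta_u    # gamma(a) = (1 - e^{-a}) / a
    hat_R = R_m if c <= C_m else beta * R_m     # R_hat_m(c) with hit/miss indicators
    return W_m * share * hat_R
```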
Results and Discussion\[sec:Results-and-Performance\] ===================================================== Parameter Description Value --------------------------------------- ---------------------------------- --------------------------- $\alpha$ Path loss exponent 4 $\lambda_{m},\lambda_{d},\lambda_{u}$ MBS, D2D helper and user density $[10,100,200]/\pi500^{2}$ $\zeta$ Popularity skewness parameter 0.8 $c,L$ Requested content, Library size 1, $10^{4}$ $\beta$ Backhaul delay coefficient 0.8 $C_{d},C_{m}$ D2D and MBS cache sizes 20, 500 $W_{m},W_{d}$ Cellular and D2D bandwidth $[7,3]$ MHz $P_{m},P_{d}$ Cellular and D2D transmit power $[30,23]$ dBm $\tau_{m},\tau_{d}$ Cellular and D2D SINR threshold $[30,30]$ dB $\sigma^{2}$ Noise power -110 dBm : List of simulation parameters\[tab:List\] In this section, we present key results and verify our analysis with Monte Carlo simulations. For our simulation setup, the MBSs and D2D helpers are distributed according to HPPPs with intensities $\lambda_{m}$ and $\lambda_{d}$ respectively and the performance is measured at the origin. We first validate the distribution of the distance to the $i$th nearest D2D helper derived in Theorem \[prop:dist\] for various values of $i$. For the simulations, we ignore the realizations in which the number of D2D helpers in the typical cell is less than $i$. In the case of the disk approximation, the realizations in which the typical user lies outside $B_{max}$, or there are fewer than $i$ D2D helpers inside $B_{max}$, are all ignored. Fig. \[fig:Dist\] shows that the disk approximation is very accurate, while the unconstrained nearest neighbor distribution in (\[eq:f2r\]) does not capture the behavior of the distance distribution, and its deviations from the actual distribution become large as the value of $i$ increases.
![Distribution of the distance to the $i$th nearest D2D helper from the tagged user within the Voronoi cell, where $\lambda_{m}=20/\pi500^{2},$ and $\lambda_{d}=200/\pi500^{2}$.\[fig:Dist\]](PDF_rk1_rk2_rk3_20_200) Fig. \[fig:covD\] validates our analysis of the probability of coverage $\Gamma_{d,i}$ when the typical user is being served by the $i$th nearest D2D helper ((\[eq:covD2D\]) in Theorem \[prop:covD\]). We see that the disk approximation holds fairly accurately for all values of $i$. The slight deviation of the analysis from the simulations is due to the equi-dense HPPP approximation for the D2D interferers. As expected, we see a decrease in $\Gamma_{d,i}$ with the increase in $i$ for a fixed SINR threshold. This is because, as $i$ increases, the distance between the transmitting D2D helper and the typical user increases, thereby aggravating the path loss. For comparison, we also plot the cellular coverage $\Gamma_{m}$ given in (\[eq:covC\]). For a given SINR threshold, we see that small values of $i$ result in much better coverage for a D2D link compared to the cellular link. ![Probability of coverage when a typical user is served by the MBS or the $i$th nearest D2D helper. \[fig:covD\]](SINRd_k2) Fig. \[fig:Overall-D2D-coverage\] illustrates the probability of being in coverage in D2D mode. We also validate the D2D coverage probability $\Gamma_{d,\varPi}^{(k)}(c),\varPi=\{NS,US\}$ derived in Theorem \[prop:covD\] (\[eq:covDk\]). For each simulation trial, the $k$ closest D2D helpers are first checked for content availability. The content availability ($c=1$ in this case) is a Bernoulli event with probability $h_{d}(c)$. Out of the successful D2D helpers (if there are any), the helper is selected either uniformly at random (US scheme) or as the one closest to the origin (NS scheme). We see that the D2D coverage probability for the NS scheme outperforms the US scheme.
This is because in the US scheme, a D2D helper is selected uniformly at random out of the $k$ closest helpers, while the closest helper is given preference in the NS scheme. $$\Gamma_{US}^{(k)}(c)\approx\left[1-\rho c^{-\zeta}\right]^{C_{d}}\Gamma_{m}+\frac{1}{k}\left(1-\left[1-\rho c^{-\zeta}\right]^{C_{d}}\right)\sum_{i=1}^{k}\Gamma_{d,i}\label{eq:CovUSsimp}$$ $$\Gamma_{NS}^{(k)}(c)\approx\left[1-\rho c^{-\zeta}\right]^{kC_{d}}\Gamma_{m}+\left(1-\left[1-\rho c^{-\zeta}\right]^{C_{d}}\right)\sum_{i=1}^{k}\left[1-\rho c^{-\zeta}\right]^{(i-1)C_{d}}\Gamma_{d,i}\label{eq:CovNSsimp}$$ ![\[fig:Overall-D2D-coverage\]Coverage probability in D2D mode under the NS and US schemes. ](SINRd_k_AVGvsUniform_B10D100) The behavior of the D2D coverage probability $\Gamma_{d,\varPi}^{(k)}(c),\varPi=\{NS,US\}$ is further investigated as $k$ varies. We can see from Fig. \[fig:D2D-coverage-for\] that the increase in $k$ adversely affects $\Gamma_{d,US}^{(k)}(c)$. However, the effect on $\Gamma_{d,NS}^{(k)}(c)$ is much less pronounced. ![\[fig:D2D-coverage-for\]D2D coverage for various values of $k$ (increasing to the left). ](SINRd_k_NearestVsUniform_Analysis) ![\[fig:OverallCov\]Effect of increasing $k$ on the overall coverage probability for various content requests.
](AvgCoverage) ![Effect of D2D caching parameters on $k_{NS}^{*}$.\[fig:kOptNS\]](AvgCoverageNSoptK) ![Percentage maximum gain in the overall coverage with coordinated D2D under NS and US schemes.\[fig:Percentage-gain-in\]](CovGain) Performance Evaluation ---------------------- $$T_{US}^{(k)}(c)\approx\left[1-\rho c^{-\zeta}\right]^{C_{d}}W_{m}\gamma\left(\eta_{u_{m,US}}\right)\hat{R}_{m}(c)+\frac{W_{d}}{k}\gamma\left(\eta_{u_{d,US}}\right)\left(1-\left[1-\rho c^{-\zeta}\right]^{C_{d}}\right)\sum_{i=1}^{k}R_{d,i}\label{eq:tpsimpUS}$$ $$\begin{aligned} T_{NS}^{(k)}(c) & \approx & \left[1-\rho c^{-\zeta}\right]^{kC_{d}}W_{m}\gamma\left(\eta_{u_{m,NS}}\right)\hat{R}_{m}(c)\nonumber \\ & & +W_{d}\gamma\left(\eta_{u_{d,NS}}\right)\left(1-\left[1-\rho c^{-\zeta}\right]^{C_{d}}\right)\sum_{i=1}^{k}\left[1-\rho c^{-\zeta}\right]^{(i-1)C_{d}}R_{d,i}\label{eq:tpsimpNS}\end{aligned}$$ We now study the performance metrics defined in Sec. \[sec:Link-Spectral-Efficiency\] with respect to the two key parameters, namely, the number of candidate neighboring D2D helpers $k$ and the requested content $c$. We first consider the overall coverage probability $\Gamma_{\varPi}^{(k)}(c),\varPi=\{NS,US\}$ given in (\[eq:OverallCov\]). The simplified expressions for $\Gamma_{\varPi}^{(k)}(c)$ are presented in (\[eq:CovUSsimp\]) and (\[eq:CovNSsimp\]) using the upper bounds for $p_{\varPi}^{(k)}(i,c)$ from Corollary \[cor:hitSimple\]. In Fig. \[fig:OverallCov\], the overall coverage probability for the US scheme $\Gamma_{US}^{(k)}(c)$ is seen to monotonically decrease with the increase in $k$, starting with a maximum value at $k_{US}^{*}=1$. This is because, from Fig. \[fig:covD\] we see that the D2D link coverage $\Gamma_{d,i}$ decreases as $i$ increases and even gets worse compared to the cellular link coverage $\Gamma_{m}$. As $k$ increases in (\[eq:CovUSsimp\]), the contribution of $\Gamma_{d,1}$ decreases in the second term corresponding to the coverage in D2D mode. 
Intuitively, this means that the MBS does not make an intelligent decision in the selection of the D2D helper and may select a helper farther from the requesting user for D2D communication. As the first term in (\[eq:CovUSsimp\]) is independent of $k$, and $\Gamma_{d,i}$ is a decreasing function in $i$, (\[eq:CovUSsimp\]) is always maximized when $k_{US}^{*}=1$. On the contrary, the plots for $\Gamma_{NS}^{(k)}(c)$ reveal an interesting trade-off in the selection of $k$ to maximize the overall coverage. We see that for a given SINR threshold (and also $\tau_{d}=\tau_{m}$), there exists an optimal value of $k=k_{NS}^{*}$, which maximizes $\Gamma_{NS}^{(k)}(c)$ (shown by the dotted line). This can be explained as follows. As $k$ is increased, the probability of cellular mode decreases, since the term $\left[1-\rho c^{-\zeta}\right]^{kC_{d}}$ premultiplying $\Gamma_{m}$ in (\[eq:CovNSsimp\]) decreases. Initially, $\Gamma_{NS}^{(k)}(c)$ rises with the increase in $k$ and attains a maximum value at $k_{NS}^{*}$; with a further increase in $k$, however, the coverage $\Gamma_{d,i}$ on the activated D2D links is no longer better than the cellular coverage, as already seen from Fig. \[fig:covD\], and hence $\Gamma_{NS}^{(k)}(c)$ begins to drop. We observe that as the requested content $c$ becomes less popular, both $p_{NS}^{(k)}(c)$ and $p_{US}^{(k)}(c)$ drop, and for the least popular requested content $\Gamma_{\varPi}^{(k)}(c)\rightarrow\Gamma_{m}$ as $h_{d}(c)\rightarrow0$, implying that the requesting user can only be served in cellular mode. It can also be seen that varying $c$ does not affect the optimal value $k_{NS}^{*}$. In Fig. \[fig:kOptNS\], we also observe the effect of the other crucial D2D caching parameters, $\zeta$ and $C_{d}$, on $\Gamma_{NS}^{(k)}(c)$.
We see that $k_{NS}^{*}$ is also resilient to changes in $\zeta$ and $C_{d}$, but the maximum value of the overall coverage $\Gamma_{NS}^{(k)}(c)$ increases with the increase in $\zeta$ and $C_{d}$. This is due to the fact that popular content $c$ or higher values of $\zeta$ and $C_{d}$ all translate into a higher probability of being served by the $i$th nearest D2D helper as $p_{\varPi}^{(k)}(i,c)$ increases $\forall i=\{1,..,k\}$, but these parameters do not affect the link quality ($\Gamma_{m}$ and $\Gamma_{d,i}$). Therefore, $k_{NS}^{*}$ is independent of the caching parameters. To better visualize the improvement in coverage compared to the conventional cellular network scenario, we compute the gain in the overall coverage probability. It is the percentage difference between the maximum attainable overall coverage probability under the NS and US schemes $\Gamma_{\varPi}^{(k_{\varPi}^{*})}(c),\varPi=\{NS,US\}$ and the conventional cellular coverage probability. It is given as $$G(c)=\frac{\Gamma_{\varPi}^{(k_{\varPi}^{*})}(c)-\Gamma_{m}}{\Gamma_{m}}\times100\%.$$ Figure \[fig:Percentage-gain-in\] shows that for popular content requests and a skewed popularity distribution, more than 50% better coverage can be obtained with the NS scheme compared to the conventional scenario. The US scheme does not perform as well as the NS scheme, but it still yields appreciable gains (35% at best under the given network setting). ![Effect of increasing $k$ on the average rate experienced by an arbitrary user for various content requests.\[fig:AvgRateC\]](AvgRateVsK_VaryingC) We will now focus on the analysis of the average rate experienced by the user in the US and NS schemes to gain some more useful insights on the design of cellular networks enabled with coordinated D2D communication. The simplified expressions for $T_{US}^{(k)}(c)$ and $T_{NS}^{(k)}(c)$ using Corollary \[cor:hitSimple\] are given in (\[eq:tpsimpUS\]) and (\[eq:tpsimpNS\]), respectively.
For the average rate with the US scheme, we see from (\[eq:tpsimpUS\]) that $T_{US}^{(k)}(c)$ exhibits the same behavior as $\Gamma_{US}^{(k)}(c)$: the increase in $k$ reduces the average rate in D2D mode because D2D helpers located farther from the requesting user are selected with the same probability as the nearest ones. Hence, $T_{US}^{(k)}(c)$ is only maximized when $k=k_{US}^{*}=1$. To visualize when maximum gains can be harnessed with the NS scheme, we plot $T_{NS}^{(k)}(c)$ in Fig. \[fig:AvgRateC\] and compare it with the rates experienced by the user in a cellular-only network. The rate experienced by the user in a cellular network where the MBS is equipped with caching capability is given as $$T_{m}^{(ca)}=(W_{m}+W_{d})\gamma\left(\eta_{u}\right)R_{m}.$$ Notice that all of the bandwidth $W_{m}+W_{d}$ is used for cellular communication and $\eta_{u}=\lambda_{u}/\lambda_{m}$ as all the requesting users in the cell share the cellular link capacity. When there is no caching at the MBS, the rate is reduced by a fraction $\beta$ due to the delay introduced by the backhaul communication and is given as $$T_{m}^{(bh)}=\beta T_{m}^{(ca)}.$$ The following conclusions can be drawn from Fig. \[fig:AvgRateC\]. Coordinated D2D communication with content-centric mode selection greatly enhances data rates compared to cellular-only scenarios when popular contents are requested. This is because, for small $c$, $h_{d}(c)$ (and in turn $p_{NS}^{(k)}(i,c)$) increases, implying that more users can be served by their nearest D2D helper in D2D mode. For the least popular contents, the rate experienced by the requesting user is severely degraded and is even lower than in the cellular-only scenario. This is because $h_{d}(c)\rightarrow0$ and the user is pushed to communicate in the cellular band $W_{m}$, which is less than the total bandwidth $W_{m}+W_{d}$ of the cellular-only scenario.
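As a rough numerical illustration of the two baseline rates above, the sketch below evaluates $T_{m}^{(ca)}$ and $T_{m}^{(bh)}=\beta T_{m}^{(ca)}$; the bandwidths, densities, spectral efficiency, and backhaul penalty $\beta$ are all assumed values, not the paper's simulation settings.

```python
import math

# Back-of-the-envelope comparison of the cache-enabled cellular rate T_m^(ca)
# and the backhaul-limited rate T_m^(bh); every number here is an assumption.
W_m, W_d = 10e6, 10e6        # assumed cellular and D2D bandwidths [Hz]
lam_u, lam_m = 50.0, 5.0     # assumed user and MBS densities (only the ratio matters)
R_m = 2.0                    # assumed cellular link spectral efficiency [bit/s/Hz]
beta = 0.6                   # assumed backhaul delay penalty, 0 < beta < 1

def gamma_factor(eta):
    # gamma(eta) = (1 - exp(-eta)) / eta: fraction of the shared link capacity
    # available to one requesting user when eta users share it on average
    return (1.0 - math.exp(-eta)) / eta

eta_u = lam_u / lam_m                           # average requesting users per MBS
T_ca = (W_m + W_d) * gamma_factor(eta_u) * R_m  # cache at the MBS
T_bh = beta * T_ca                              # no cache, backhaul penalty
```

Note that $\gamma(\eta)\rightarrow 1$ as $\eta\rightarrow 0$ (a lone user gets the whole link) and decreases as the load grows, which is the same load-sharing factor that appears in (\[eq:tpsimpUS\]) and (\[eq:tpsimpNS\]).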
Yet again, there exists a trade-off in the selection of the number of neighboring D2D helpers $k$ which maximizes the average throughput experienced by an arbitrary user. The existence of the optimal value of $k=k_{NS}^{*}$ for $T_{NS}^{(k)}(c)$ follows the same reasoning as for $\Gamma_{NS}^{(k)}(c)$, as $R_{d,i}$ follows the same trend as $\Gamma_{d,i}$ and is decreasing in $i$. However, unlike for $\Gamma_{NS}^{(k)}(c)$, this value of $k=k_{NS}^{*}$ does vary with $c$ and increases as $c$ increases. This is because, as we have from Proposition \[prop:Rd\], $$\begin{aligned} \eta_{u_{d,NS}} & = & \frac{\lambda_{u}}{\lambda_{m}}\left[\rho\sum_{c=1}^{L}c^{-\zeta}p_{d,NS}^{(k)}(c)\right]\\ & = & \frac{\lambda_{u}}{\lambda_{m}}\left[1-\rho\sum_{c=1}^{L}c^{-\zeta}[1-\rho c^{-\zeta}]^{kC_{d}}\right],\end{aligned}$$ where $\gamma\left(\eta_{u_{d,NS}}\right)=\eta_{u_{d,NS}}^{-1}\left(1-\textnormal{exp}\left(-\eta_{u_{d,NS}}\right)\right)$ is a decreasing function in $k$. This means that more users are offloaded from cellular to D2D mode as $k$ increases and the available bandwidth $W_{d}\gamma\left(\eta_{u_{d,NS}}\right)$ for a user in D2D mode decreases. When $k\rightarrow\infty,$ $\gamma\left(\eta_{u_{d,NS}}\right)\rightarrow\gamma\left(\eta_{u}\right)$, implying that all the requesting users are served in D2D mode. We know that as $c$ increases, the term $1-\left[1-\rho c^{-\zeta}\right]^{C_{d}}$ decreases in (\[eq:tpsimpNS\]) and the D2D rate decreases. To effectively utilize the D2D bandwidth $W_{d}$, more users need to be activated in D2D mode and hence $k_{NS}^{*}$ increases.

Conclusion\[sec:Conclusion\]
============================

In this paper, we presented a novel framework for the analysis of cache-enabled cellular networks with coordinated D2D communication.
The arbitrary user requesting a particular content is offloaded to communicate with one of its $k$ neighboring D2D helpers within the cell based on the content availability and helper selection schemes. We derived the distribution of the distance between the user and its $i$th nearest D2D helper within the cell using a disk cell approximation, which is shown to be fairly accurate. We obtained the probabilities of being served in cellular and D2D modes and the coverage and data rates experienced by the user in both these modes. With the help of our analysis, we showed that information-centric offloading with coordinated D2D results in high performance gains. However, to maximize the performance, the number of candidate D2D helpers $k$ has to be carefully tuned.

Proof of Theorem\[sec:Proof1\] \[prop:dist\]
============================================

The probability that the distance between the requesting user and the $i$th nearest D2D helper within the cell is at least $r$ is the probability that there are exactly $i-1$ helpers inside the region $\mathcal{A}$. It can be expressed as $$1-F_{R_{i}|X=x,X>D,N_{d}\geq i}(r)=\frac{\left(\lambda_{d}\mathcal{A}\right)^{i-1}}{(i-1)!}\,\textnormal{exp}\left(-\lambda_{d}\mathcal{A}\right),\label{eq:cdflens}$$ where $\mathcal{A}$ is the area of intersection between $B_{m}$ and $b(o,r).$ As shown in Fig. \[fig:distnearest\], this area can be divided into two regimes given as follows.

- Regime 1 - When $b(o,r)$ partly overlaps $B_{m}$, i.e., $x-y<r<x+y$. The overlapping area $\mathcal{A}$ in this case can be written as [@weisstein2003circle]\ $$\begin{aligned} \nabla(r,y,x) & = & r^{2}\arccos\left(\frac{\omega_{1}}{2y\,r}\right)+x^{2}\arccos\left(\frac{\omega_{2}}{2y\,x}\right)-\frac{1}{2}\sqrt{4y^{2}x^{2}-\omega_{2}^{2}},\label{eq:lens}\end{aligned}$$ where $\omega_{1}=r^{2}+y^{2}-x^{2}$ and $\omega_{2}=x^{2}+y^{2}-r^{2}$.

- Regime 2 - When $b(o,r)$ lies inside $B_{m}$, i.e., $0<r<x-y$.
The overlapping area in this case is straightforward and is given as $\mathcal{A}=\pi r^{2}.$ Differentiating $F_{R_{i}|X=x,X>D,N_{d}\geq i}(r)$ in (\[eq:cdflens\]) with respect to $r$ gives (\[eq:f1r\]) and (\[eq:f2r\]) for regimes 1 and 2, respectively. The unconditional distance distribution $f_{R_{i}}(r)$ in (\[eq:fr\]) is obtained by averaging over $X$ and $Y$, where $X>Y$ and $N_{d}\geq i$.

Proof of Proposition\[sec:Proof2\] \[prop:covD\]
================================================

The probability of coverage for the typical user served by the $i$th nearest D2D helper can be expressed as $$\begin{aligned} \varGamma_{d,i} & = & \mathbb{P}\left\{ \frac{h_{i}\,r^{-\alpha}}{\sigma^{2}/P_{d}+I_{d}}>\tau_{d}\right\} \label{eq:covD}\end{aligned}$$ where $I_{d}=\sum_{z_{j}\in\Phi_{d}^{int}}g_{j}\,z_{j}^{-\alpha}$ is the inter-cell interference from other active D2D helpers, which is the sum of powers from the active D2D helpers constituting $\Phi_{d}^{int}$. Here, $z_{j}$ is the distance of the interfering D2D helper from the typical user and $g_{j}$ is the channel power for the interfering link $j$. Because of the exponentially distributed channel power $h_{i}$ in (\[eq:covD\]), we get $$\varGamma_{d,i}=\mathbb{E}_{R_{i}}\left[\exp\left(-s_{d}\sigma^{2}/P_{d}\right)\mathcal{L}_{I_{d}}\left(s_{d}\right)\right],\label{eq:Covdi}$$ where $s_{d}=\tau_{d}r^{\alpha}$ and $\mathcal{L}_{I_{d}}\left(s_{d}\right)=\mathbb{E}_{I_{d}}\left[\exp\left(-s_{d}I_{d}\right)\right]$ is the Laplace transform of the D2D interference. Because only one D2D helper can be active on one channel in a given macrocell, we employ a key assumption that $\Phi_{d}^{int}$ is a HPPP with intensity $\tilde{\lambda}_{m}=p_{int}\times\lambda_{m}$ [^5]. Here, $p_{int}=1-\left(1+3.5^{-1}\eta_{d}\right)^{-3.5}$ is the probability that at least one interfering D2D helper is present in a cell.
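The lens area $\nabla(r,y,x)$ from the proof of Theorem \[prop:dist\] above (two circles of radii $r$ and $x$ whose centres are a distance $y$ apart, in the partial-overlap regime) can be sanity-checked numerically. The sketch below compares the closed form against a brute-force midpoint-grid count for an assumed test geometry:

```python
import math

# Closed-form circle-intersection ("lens") area, with
# w1 = r^2 + y^2 - x^2 and w2 = x^2 + y^2 - r^2.
def lens_area(r, y, x):
    w1 = r * r + y * y - x * x
    w2 = x * x + y * y - r * r
    return (r * r * math.acos(w1 / (2.0 * y * r))
            + x * x * math.acos(w2 / (2.0 * y * x))
            - 0.5 * math.sqrt(4.0 * y * y * x * x - w2 * w2))

def lens_area_grid(r, y, x, n=1000):
    # brute force: midpoint grid over the bounding box of the radius-r circle
    # (the lens is contained in it); circle 2 has radius x, centre at (y, 0)
    step = 2.0 * r / n
    count = 0
    for i in range(n):
        px = -r + (i + 0.5) * step
        for j in range(n):
            py = -r + (j + 0.5) * step
            if px * px + py * py <= r * r and (px - y) ** 2 + py * py <= x * x:
                count += 1
    return count * step * step

exact = lens_area(1.0, 0.8, 0.5)   # assumed geometry, satisfies x - y < r < x + y
```

The grid estimate agrees with the closed form to well under a percent for this geometry, which is a quick way to catch sign or exponent slips in $\omega_{1}$ and $\omega_{2}$.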
$\mathcal{L}_{I_{d}}\left(s_{d}\right)$ can then be written as $$\begin{aligned} \mathcal{L}_{I_{d}}\left(s_{d}\right) & = & \mathbb{E}\left[\text{exp}\left(-s_{d}\sum_{\textbf{z}_{j}\in\Phi_{d}^{int}}g_{j}\,z_{j}^{-\alpha}\right)\right]\nonumber \\ & \overset{(a)}{=} & \mathbb{E}_{Q}\biggl[\exp\biggl(-2\pi\tilde{\lambda}_{m}\intop_{q}^{\infty}\frac{\nu}{1+s_{d}^{-1}\nu^{\alpha}}\,d\nu\biggr)\biggr]\label{eq:laplaceD2D}\end{aligned}$$ where $(a)$ follows from the probability generating functional of a HPPP and the exponential distribution of the channel power $g$. The lower limit of the integral in (\[eq:laplaceD2D\]) represents the guard zone. Notice that the lower limit $q$ in this case is governed by the nearest active D2D interferer, where $f_{Q}(q)=2\pi\tilde{\lambda}_{m}q\,\text{exp}(-\tilde{\lambda}_{m}\pi q^{2})$ because of the equi-dense HPPP approximation. As $\tilde{\lambda}_{m}$ is quite small, we can apply Jensen’s inequality to obtain a tight bound for (\[eq:laplaceD2D\]): $$\mathcal{L}_{I_{d}}\left(s_{d}\right)\thickapprox\exp\left(-2\pi\tilde{\lambda}_{m}\mathbb{E}_{Q}\left[\intop_{q}^{\infty}\frac{\nu}{1+s_{d}^{-1}\nu^{\alpha}}\,d\nu\right]\right).\label{eq:lapDJen}$$ Substituting (\[eq:lapDJen\]) into (\[eq:Covdi\]) gives (\[eq:covD2D\]). The overall D2D coverage probability for the NS and US schemes and a particular content request in (\[eq:covDk\]) is obtained by taking the expectation over $i$ and by conditioning on the probability of D2D mode. [^1]: [The authors are with the School of Electronic and Electrical Engineering, University of Leeds, United Kingdom.]{}\ [M. Ghogho is also affiliated with the University of Rabat, Morocco.]{}\ [Email: {elaaf, s.a.zaidi, d.c.mclernon, m.ghogho}@leeds.ac.uk]{} [^2]: We use the same notation to denote the node itself and its distance to the origin. [^3]: In our previous work in [@aafzal2016], we show that the maximal disk approximation is not accurate when the user is placed at a fixed distance from the MBS.
In fact, a disk with the same area as the Voronoi cell better approximates the distance. However, in this case, we will show that when the distance $Y$ between the requesting user and the MBS is random, the inscribed disk approximation accurately approximates the distance. [^4]: To simplify the analysis, we assume that the number of users in D2D mode is the same as the number of active D2D helpers. This might not be the case in reality, as one helper can serve multiple users in its vicinity if they request the same file. In our analysis, such a situation translates into the transmission of the same file by the same D2D helper but on a separate portion of the spectrum. [^5]: The equi-dense HPPP assumption ignores the correlations due to the positions of helpers inside a cell, but is more tractable [@elsawy2014analytical].
---
abstract: |
  We study the generalized Chaplygin gas model (GCGM) using Gamma-ray bursts as cosmological probes. In order to avoid the so-called circularity problem we use a cosmology-independent data set and Bayesian statistics to impose constraints on the model parameters. We observe that a negative value for the parameter $\alpha$ is favoured in a flat Universe and the estimated value of the parameter $H_{0}$ is lower than that found in the literature.\
  \
  PACS number: 98.80.Es, 98.70.Rz
---

[**Constraints on the Generalized Chaplygin Gas Model from Gamma-Ray Bursts**]{}

[ R. C. Freitas$^{a,}$[^1], S. V. B. Gonçalves$^{a,}$[^2] and H. E. S. Velten$^{a,b,}$[^3], *a Grupo de Gravitação e Cosmologia, Departamento de Física,\ Universidade Federal do Espírito Santo, 29075-910, Vitória, Espírito Santo, Brazil*]{}

Introduction
============

One of the most important problems of Modern Cosmology is the determination of the matter content of the Universe. The rotation curves of spiral galaxies [@rotation], the dynamics of galaxy clusters [@dynamics] and structure formation [@struc] indicate that there is about ten times more pressureless matter in the Universe than can be afforded by the baryonic matter. The nature of this dark matter component remains unknown. Moreover, the Type Ia supernovae (SNe Ia) data indicate that the Universe is accelerating [@super]. Models considering a matter content dominated by an exotic fluid whose pressure is negative [@press], modified gravity theories such as $f(R)$ [@mod] and the evolution of an inhomogeneous Universe model described in terms of spatially averaged scalar variables with matter and backreaction source terms [@back] are some of the proposals to explain this current phase of the Universe. At the same time, the position of the first acoustic peak in the spectrum of CMB anisotropies, as obtained by WMAP, favours a spatially flat Universe [@WM5].
Combining all these data, and considering the matter content of the Universe to be dominated by a fluid with negative pressure, we have a scenario with proportions of $\Omega_{m} \sim 0.27$ and $\Omega_{de} \sim 0.73$, with respect to the critical density, for the fractions of the pressureless matter and dark energy, respectively. This scenario is usually called the concordance cosmological model. The question is then what the nature of the dark matter and dark energy components is. For dark matter many candidates have been suggested, such as axions, an as-yet undetected particle which would be a relic of a phase where the grand unified theory was valid [@axion], the lightest supersymmetric particle (LSP) like neutralinos [@salati1] and the Kaluza-Klein particles [@salati2] that are stable viable Weakly Interacting Massive Particles (WIMPs) and arise in two frameworks: in Universal Extra Dimensions [@UED] and in some warped geometries like Randall-Sundrum [@RS]. For the dark energy, in the hydrodynamical representations of matter, the most natural candidate is a cosmological constant, but there is a discrepancy of some $120$ orders of magnitude between its theoretical and observed values [@cc]. For this reason, other candidates have been suggested, like quintessence models that involve canonical kinetic terms of the self-interacting scalar field with the sound speed $c_s^2 = 1$ [@quinte] and k-essence models that employ rather exotic scalar fields with non-canonical (non-linear) kinetic terms which typically lead to a negative pressure [@kesse]. More recently, a string-inspired fluid has been invoked: the Chaplygin gas [@chapl], which appears as a promising candidate for the dark sector of the Universe. The Chaplygin gas is represented by the equation of state $$p_c = - \frac{A}{\rho_c} \quad ,$$ where $p_c$ represents the pressure, $\rho_c$ the fluid density and $A$ is a parameter connected with the sound speed.
This equation of state is suggested by a brane configuration in the context of string theories [@string]. However, a more general equation of state has been suggested [@gcg]: $$\label{chapp} p_c = - \frac{A}{\rho^{\alpha}_c} \quad ,$$ where again $p_c$ and $\rho_c$ stand for the generalized Chaplygin gas component and $\alpha$ is a new parameter, which takes the value $1$ for the traditional Chaplygin gas, but values larger than $1$, or even negative ones, may be considered. This is the so-called generalized Chaplygin gas. Much observational data has been used for comparison with theoretical cosmological models like the generalized Chaplygin gas model (GCGM). The spectra of anisotropy of the cosmic microwave background radiation [@berto1], baryonic acoustic oscillations [@BAOcha], the integrated Sachs-Wolfe effect [@SW], the matter power spectrum [@mass], gravitational lenses [@lens], X-ray data [@raiox] and age estimates of high-$z$ objects [@ages] have been used in this sense. Also, constraints from combined data sources have been obtained in [@combcha]. Another tool used to make this comparison is the Hubble diagram, the plot of redshift $z$ versus luminosity distance $d_L = \sqrt{\mathcal{L}/4\pi\mathcal{F}}$, where $\mathcal{L}$ is the luminosity (the energy per time produced by the source in its rest frame) and $\mathcal{F}$ is the measured flux, i.e., the energy per time per area measured by a detector. Normally, the SNe Ia data are considered good standard candles and are used to construct the Hubble diagram, because their luminosities are well known [@super; @SN2]. In particular, constraints on the generalized Chaplygin gas have been studied in [@SNe]. These assumptions rest on a foundation of photometric and spectroscopic similarities between high- and low-redshift SNe Ia. But this discussion is not yet finished [@SN3]. The other problem comes from the fact that SNe Ia data with $z > 1.8$ do not yet exist.
To know the properties and behavior of dark energy at high values of $z$ we will have to wait for new SNe Ia data or find other distance indicators. In this sense, to extend the comparison between observational data and theoretical models to very high redshift we propose to use Gamma-ray bursts (GRBs), due to the fact that they occur at high $z$, beyond the range of the SNe data available today [@Bromm]. The GRBs are jets that release $\sim 10^{51} - 10^{53}$ ergs or more for a few seconds, becoming, in this brief period of time, the brightest objects in the Universe. They were discovered in the sixties by the Vela satellites, which monitored nuclear explosions in space in the context of the “Outer Space Treaty" [@hist]. Observations by the Burst and Transient Source Experiment on the Compton Gamma-Ray Observatory (BATSE on the Compton GRO), launched in 1991 [@Costa], concluded that the angular distribution of the GRBs on the sky is isotropic within statistical limits. This study ruled out the idea that the GRBs are galactic objects, but is consistent with the bursts being extra-galactic sources at cosmological distances. More recently, the SWIFT mission (launched in 2004) has provided the most accurate GRB data, available in the Swift BAT Catalog. The search for a self-consistent method to use the GRBs in cosmological problems is intense and promising. But the possibility of using GRBs as standard candles is not a simple question. GRBs are known to have several light-curve and spectral properties from which the luminosity of the burst can be calculated once calibrated, and these can turn GRBs into standard candles. Just as with SNe Ia, the idea is to measure the luminosity indicators, deduce the source luminosity, measure the observed flux and then use the inverse-square law to derive the luminosity distance. The difficulty arises when these indicators are a priori established through some cosmological model like the concordance one.
This means that the parameters of the calibrated luminosity/energy relations are still coupled to the cosmological parameters derived from a given cosmological model. This is the so-called circularity problem. This problem appears in several works that have made use of these GRB luminosity indicators as standard candles at very high redshift [@circular]. It is possible to treat the circularity problem with a statistical approach [@statist]. On the other hand, many papers have dealt with the use of the so-called Amati relation, or the Ghirlanda relation, for this purpose [@firmani]. However, as argued recently in [@petrosian], this procedure involves many unjustified assumptions which, if not true, could invalidate the results. In particular, many evolutionary effects can affect the final outcome. However, recently Liang [*et al.*]{} [@liang; @liang2010; @liang11] made a study considering SNe Ia as first-order standard candles for calibrating GRBs, the second-order standard candles. The sample in reference [@liang] was calibrated from the 192 supernovae obtained in [@davies]. The updated sample used in [@liang2010; @liang11] has been obtained and calibrated cosmology-independently from the Union2 (557 data points) compilation [@Union2] released by the Supernova Cosmology Project Collaboration. In these articles the authors found relevant constraints on the Cardassian and Chaplygin gas models by adding to the GRB data the SNe Ia (Union2), the shift parameter of the Cosmic Microwave Background radiation from the seven-year Wilkinson Microwave Anisotropy Probe and the baryonic acoustic oscillation from the spectroscopic Sloan Digital Sky Survey Data Release galaxy sample. The sample obtained in [@liang2010] will be used in our analysis. These authors obtain the distance moduli $\mu$ of GRBs in the redshift range of SNe Ia and extend this result to very high redshift GRBs ($z > 1.4$) in a way completely independent of the cosmological model.
This approach has also been studied in [@calibration]. Some analyses have been made with the GCGM and GRBs as distance markers [@GRB]. In reference [@bertolami] the authors build a specific distribution of GRBs to probe the flat GCGM and the XCDM model. While the GCGM has an equation of state given by expression (\[chapp\]), the XCDM model is considered in terms of a constant equation of state $\omega = p/\rho < 0$. The main conclusion of this article is that the use of GRBs as a dark energy probe is more limited when compared to SNe Ia. We anticipate that we shall arrive at a similar conclusion. Moreover, the XCDM model is better constrained than the GCGM. On the other hand, in [@herman] the GCGM and the $\Lambda$CDM model are compared by using the GRB and SNe Ia data to build the Hubble diagram. These authors show through the statistical analysis that the Chaplygin gas model (they use $\alpha = 1$) has the best fit when compared with the data. Also, they verify that the transition redshift between the decelerated and the accelerated state of the Universe occurs at $z \sim 2.5 - 3.5$ rather than $z \sim 0.5 - 1$ based on the analysis made with the SNe Ia. Here, for our purpose, we will make the plausible assumption that GRBs are standard candles and we will use the data from Liang [*et al.*]{} [@liang2010], calibrated cosmology-independently from the Union2 compilation of SNe Ia, to constrain the cosmological parameters of the GCGM. We want to show how GRB data could constrain different Chaplygin cosmologies. This paper is organized as follows. In the next section, we present a brief review of the GCGM. In section $3$ the luminosity distance $d_L$ is obtained for the GCGM and compared with the observational data. Finally, in section $4$ we present our discussion and conclusions.
The Generalized Chaplygin Gas Model {#sectionCGM}
===================================

We consider here a homogeneous and isotropic Universe described by the Friedmann equation $$\label{be1} \biggl(\frac{\dot{a}}{a}\biggr)^2 + \frac{k}{a^2} = \frac{8\pi G}{3} (\rho_m + \rho_c)\quad,$$ where the density $\rho$ has the subscripts $m$ for the pressureless matter fluid and $c$ for the generalized Chaplygin gas, with equations of state $p_m = 0$ and $p_c=-A/\rho_{c}^{\alpha}$, respectively. A dot means a derivative with respect to the cosmic time $t$. Flat, closed and open spatial sections correspond to $k = 0, 1, -1$, respectively, for the curvature constant. In our case, each fluid separately obeys the energy conservation law. The equations and the respective solutions are given by $$\begin{aligned} \label{mat} \dot\rho_m + 3\frac{\dot a}{a}\rho_m &=& 0\quad\rightarrow\quad\quad\rho_m = \frac{\rho_{m0}}{a^3}\quad,\\ \label{chap} \dot\rho_c + 3\frac{\dot a}{a}\biggl(\rho_c - \frac{A}{\rho^{\alpha}_c}\biggr) &=& 0\quad\rightarrow\quad\quad\rho_c = \rho_{c0}\biggl(\bar{A} + \frac{1 - \bar{A}}{a^{3(1+\alpha)}}\biggr)^{1/(1+\alpha)}\quad,\end{aligned}$$ where $\rho_{m0} = \rho_m(a_0)$ and $\rho_{c0} = \rho_c(a_0) = (A + B)^{1/(1 + \alpha)}$, $B$ being an integration constant, with $a(t_0) = a_0 = 1$ the scale factor today. The new definition of the constant $A$ is given by $\bar{A} = A/\rho_{c0}^{1+\alpha}$ and it is connected to the sound velocity today in the gas by the expression $v_{s_0} = \sqrt{\partial p_c/\partial \rho_c} \Big|_{t_0} = \sqrt{\alpha\bar{A}}$. Initially, the GCGM behaves like a dust fluid, with $\rho\propto a^{-3} $, while at late times the GCGM behaves as a cosmological constant term, $\rho \propto A^{1/(1 + \alpha)}$. Hence, the GCGM interpolates between a matter-dominated phase (where the formation of structure occurs) and a de Sitter phase. At the same time, the pressure is negative while the sound velocity is positive, avoiding instability problems at small scales [@jerome].
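The two limiting regimes of $\rho_c$ just described can be checked directly from the solution of (\[chap\]). In the sketch below the parameter values are illustrative assumptions only:

```python
# Limits of the GCG density: dust-like at early times, constant at late times.
A_bar, alpha, rho_c0 = 0.8, 0.5, 1.0   # assumed illustrative values

def rho_c(a):
    # solution of the GCG conservation equation, Eq. (chap)
    return rho_c0 * (A_bar + (1.0 - A_bar) / a ** (3.0 * (1.0 + alpha))) ** (1.0 / (1.0 + alpha))

# early times (a << 1): rho_c ~ (1 - A_bar)^{1/(1+alpha)} a^{-3}  (dust-like),
# so rho_c * a^3 tends to the constant prefactor
early = rho_c(1e-3) * (1e-3) ** 3
# late times (a >> 1): rho_c -> rho_c0 * A_bar^{1/(1+alpha)}  (Lambda-like)
late = rho_c(100.0)
```
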
In order to proceed with data comparison we need to calculate the luminosity distance in the GCGM. Using the expression for the propagation of light and the Friedmann equation (\[be1\]), we can express the luminosity distance as $$d_L = \frac{a_0^2}{a}r_1 = (1 + z)S[f(z)] \quad ,$$ where $r_1$ is the co-moving coordinate of the source and $$\begin{aligned} S(x) &=& x\quad\mbox{for}\quad (k = 0) \nonumber\quad ,\\ S(x) &=& \sin x\quad\mbox{for}\quad(k = 1) \quad,\nonumber\\ S(x) &=& \sinh x\quad\mbox{for}\quad(k = - 1)\quad.\end{aligned}$$ The function $f(z)$ is given by $$f(z) = \frac{1}{H_0}\int_0^z \frac{dz'}{\{\Omega_{m}(z^{\prime}+ 1)^3 + \Omega_{c}[\bar A + (1 - \bar A)(z^{\prime}+ 1)^{3(1+\alpha)}]^{1/(1+\alpha)} + \Omega_{k}(z^{\prime}+ 1)^2\}^{1/2}} \quad ,$$ with the definitions $$\Omega_{m} = \frac{8\pi G}{3}\frac{\rho_{m0}}{H_0^2} \quad , \quad \Omega_{c} = \frac{8\pi G}{3}\frac{\rho_{c0}}{H_0^2} \quad , \quad \Omega_{k} = - \frac{k}{H_0^2} \quad ,$$ and $\Omega_{m} +\Omega_{c} + \Omega_{k} = 1$. The final equations have also been expressed in terms of the redshift $z = - 1 + \frac{1}{a}$. In our numerical calculations we relax the restriction that the pressureless matter component is entirely given by baryons. We consider the nucleosynthesis results for the baryonic component of the Universe and assume the total pressureless matter density as $\Omega_{m}=\Omega_{b}+\Omega_{dm}$, where $\Omega_{b}h^{2}=0.0223$ and $H_0=100\,h\ {\rm km\,s^{-1}\,Mpc^{-1}}$. Then, in our notation, $\Omega_{dm}$ means the extra dark matter contribution.

The Numerical Results {#sectionNA}
=====================

The observational data set used in this article is composed of 42 GRBs from [@liang2010; @liang11]. As mentioned in the introduction, this sample has been obtained and calibrated cosmology-independently from the Union2 compilation. This fact is of crucial importance to admit GRBs as cosmological probes, since the circularity problem described above is avoided.
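Before turning to the data, the luminosity distance of the previous section can be evaluated numerically in the flat case ($\Omega_k = 0$, $S(x) = x$) with a simple trapezoidal rule. The parameter values below are illustrative assumptions, not best-fit values:

```python
import math

# Numerical distance modulus for a flat GCGM; all parameter values assumed.
H0 = 70.0                      # assumed Hubble constant [km/s/Mpc]
c_light = 2.99792458e5         # speed of light [km/s]
Om_m, A_bar, alpha = 0.04, 0.96, 1.0
Om_c = 1.0 - Om_m              # flatness: Om_m + Om_c = 1

def E(z):
    # dimensionless expansion rate H(z)/H0 from f(z) with Omega_k = 0
    chap = (A_bar + (1.0 - A_bar) * (1.0 + z) ** (3.0 * (1.0 + alpha))) ** (1.0 / (1.0 + alpha))
    return math.sqrt(Om_m * (1.0 + z) ** 3 + Om_c * chap)

def d_L(z, n=2000):
    # d_L = (1 + z) (c / H0) \int_0^z dz'/E(z'), trapezoidal rule [Mpc]
    h = z / n
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z)) + sum(1.0 / E(i * h) for i in range(1, n))
    return (1.0 + z) * (c_light / H0) * s * h

def mu_th(z):
    # distance modulus used later in the chi^2 comparison
    return 5.0 * math.log10(d_L(z)) + 25.0
```

At low redshift this reduces to the Hubble law $d_L \approx cz/H_0$, which serves as a quick consistency check of the integration.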
At the same time, this data set allows us to analyse the free parameters of the GCGM over a redshift range larger than that of the available SNe Ia data, reaching up to $z \approx 6$. It is important to emphasize that with this sample the authors of [@liang2010; @liang11] have obtained strong constraints on the Cardassian and Chaplygin gas models by combining the GRB data with other cosmological probes. If we want to have a reliable sample of GRBs for our analysis, the Hubble diagram for the GRBs should be calibrated from the SNe at $z \leq 1.4$. This allows one to obtain the following luminosity/energy relations: the $\tau_{lag}-L$ relation, the $V-L$ relation, the $L-E_p$ relation, the $E_{\gamma}-E_p$ relation, the $\tau_{RT}-L$ relation, the $E_{iso}-E_p$ relation, and the $E_{iso}-E_p-t_b$ relation. In general these relations can be written as $\mbox{log}(y) = a + b~\mbox{log}(x)$ (two-variable relations) and $\mbox{log}(y) = a + b_1~\mbox{log}(x_1) + b_2~\mbox{log}(x_2)$ (multi-variable relation). In these relations $y$ is the luminosity in units of erg s$^{-1}$ or energy in units of erg and $x$ is the GRB parameter measured in the rest frame; in the latter expression $x_1$ and $x_2$ are $E_p (1 + z)/(300~\mbox{keV})$ and $t_b/(1 + z)/(1~\mbox{day})$ respectively, and $b_1$ and $b_2$ are the slopes of $x_1$ and $x_2$ respectively. The calibration is performed using two methods: linear interpolation (the bisector of the two ordinary least-squares) and cubic interpolation (the multiple variable regression analysis). The variables $a$ and $b_i$ are determined with $1-\sigma$ uncertainties.
With the linear interpolation, the error of the interpolated distance modulus can be calculated by $\sigma_{\mu} = ([(z_{i + 1} - z)/(z_{i + 1} - z_i)]^2\epsilon^2_{\mu , i} + [(z - z_i)/(z_{i + 1} - z_i)]^2\epsilon^2_{\mu , i + 1})^{1/2}$, where $\mu$ is the interpolated distance modulus of a source at redshift $z$, $\epsilon_{\mu , i}$ and $\epsilon_{\mu , i + 1}$ are the errors of the SNe, and $\mu_i$ and $\mu_{i + 1}$ are the distance moduli of the SNe at the nearby redshifts $z_i$ and $z_{i + 1}$, respectively. In the case of the cubic interpolation method the error can be estimated by the expression $\sigma_{\mu} = (A_0^2\epsilon^2_{\mu , i} + A_1^2\epsilon^2_{\mu , i + 1} + A_2^2\epsilon^2_{\mu , i + 2} + A_3^2\epsilon^2_{\mu , i + 3})^{1/2}$, where $\epsilon_{\mu , i + j}$ are the errors of the SNe and $\mu_{i + j}$ are the distance moduli of the SNe at the nearby redshifts $z_{i + j}$ (the index $j$ runs from $0$ to $3$), with: $$\begin{aligned} A_0 &=&\frac{ [(z_{i + 1} - z)(z_{i + 2} - z)(z_{i + 3} - z)]}{[(z_{i + 1} - z_i)(z_{i + 2} - z_i)(z_{i + 3} - z_i)]}\quad;\nonumber\\ A_1 &=& \frac{[(z_{i} - z)(z_{i + 2} - z)(z_{i + 3} - z)]}{[(z_{i} - z_{i + 1})(z_{i + 2} - z_{i + 1})(z_{i + 3} - z_{i + 1})]}\quad;\nonumber\\ A_2 &=& \frac{[(z_{i} - z)(z_{i + 1} - z)(z_{i + 3} - z)]}{[(z_{i} - z_{i + 2})(z_{i + 1} - z_{i + 2})(z_{i + 3} - z_{i + 2})]}\quad;\nonumber\\ A_3 &=& \frac{[(z_{i} - z)(z_{i + 1} - z)(z_{i + 2} - z)]}{[(z_{i} - z_{i + 3})(z_{i + 1} - z_{i + 3})(z_{i + 2} - z_{i + 3})]}\quad.\nonumber\\\end{aligned}$$ The results obtained by the cubic interpolation method are very similar to those obtained by the linear interpolation method. It is important to emphasize again that the calibration results are completely independent of the cosmological models used (for further discussion, see [@liang]).
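The linear-interpolation step and its error propagation can be sketched as follows; the redshifts, moduli and errors in the example call are made-up illustrations, not entries of the actual SN sample:

```python
import math

# Linear interpolation of the distance modulus of a GRB at redshift z
# between two SNe at z_i < z < z_{i+1}, with the propagated error above.
def interp_mu(z, z_i, z_ip1, mu_i, mu_ip1, eps_i, eps_ip1):
    w_i = (z_ip1 - z) / (z_ip1 - z_i)      # weight of the lower-redshift SN
    w_ip1 = (z - z_i) / (z_ip1 - z_i)      # weight of the upper-redshift SN
    mu = w_i * mu_i + w_ip1 * mu_ip1
    sigma = math.sqrt(w_i ** 2 * eps_i ** 2 + w_ip1 ** 2 * eps_ip1 ** 2)
    return mu, sigma

# illustrative (made-up) values: a GRB halfway between two SNe
mu, sigma = interp_mu(0.5, 0.4, 0.6, 42.0, 43.0, 0.2, 0.2)
```

Note that at $z = z_i$ the weights reduce to $(1, 0)$ and the interpolation returns the SN's own modulus and error, as it should.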
In order to compare the GCGM with the observational data, the first step is to compute the theoretical distance modulus $\mu$, $$\label{dl} \mu^{th}=5\log \left( \frac{d_{L}}{M\!pc}\right) +25\quad ,$$ with the relations for the GCGM described above. Here, as in [@liang2010; @liang11], using only linear interpolation, we have the 27 GRBs at $z \leq 1.4$ from the Union SNe Ia data and the 42 GRBs at $z > 1.4$ obtained with the five relations $(\tau_{lag}-L, V-L, L-E_p, E_{\gamma}-E_p, \tau_{RT}-L)$ calibrated with the sample at $z \leq 1.4$, which also uses the linear interpolation method. It is assumed that the GRB luminosity relations do not evolve with redshift, so we could get the luminosity $(L)$ or energy $(E_{\gamma})$ of each burst at high redshift $(z > 1.4)$. The weighted average distance modulus from the five relations for each GRB is $\mu = (\sum_i \mu_i /\sigma^2_{\mu_i} )/(\sum_i \sigma^{-2}_{\mu_i})$, with its uncertainty $\sigma_{\mu} = (\sum_i \sigma^{-2}_{\mu_i})^{-1/2}$, where the summations run from $1$ to $5$ over the five relations described above. Considering a set of free parameters $\left\{{\bf p}\right\}$, the agreement between theory and observation is measured by minimizing the quantity $$\chi^{2}\left({\bf p}\right)=\sum_{i=1}^{42}\frac{\left[\mu^{th}_{i}({\bf p}) - \mu^{obs}_{i}\right]^{2}}{\sigma_{i}^{2}},$$ where $\mu^{th}$ and $\mu^{obs}$ are the theoretical and the observed values of the distance modulus for our model, respectively, and $\sigma$ means the error for each data point. We use Bayesian analysis to obtain the parameter estimations through the probability distribution function (PDF) $$P = \mathcal{B} \,e^{-\frac{\chi^{2}(\bf{p})}{2}},$$ where $\mathcal{B}$ is a normalization constant. A full Bayesian analysis is made by considering all free parameters of the model. However, we will study some particular Chaplygin configurations before a detailed analysis with 5 free parameters.
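The $\chi^{2}$/Bayesian machinery just described can be sketched on fabricated mock data (not the 42-GRB sample) with a deliberately oversimplified toy distance law, just to illustrate the grid evaluation of the unnormalized PDF $e^{-\chi^{2}/2}$ and the location of its maximum:

```python
import math

# Mock data and a toy model -- every number and the distance law itself are
# fabricated for illustration; only the chi^2 / PDF machinery is the point.
z_obs  = [0.5, 1.0, 2.0, 3.0]        # mock redshifts
mu_obs = [42.8, 45.1, 47.3, 49.0]    # mock distance moduli
sig    = [0.5, 0.5, 0.6, 0.6]        # mock 1-sigma errors
c_light = 2.99792458e5               # km/s

def mu_toy(z, h):
    # toy law d_L = (1+z) z c / H0 with H0 = 100 h km/s/Mpc (illustration only)
    return 5.0 * math.log10((1.0 + z) * z * c_light / (100.0 * h)) + 25.0

def chi2(h):
    return sum((mu_toy(z, h) - m) ** 2 / s ** 2
               for z, m, s in zip(z_obs, mu_obs, sig))

hs = [0.30 + 0.005 * i for i in range(101)]    # grid on h in [0.30, 0.80]
pdf = [math.exp(-0.5 * chi2(h)) for h in hs]   # unnormalized posterior
h_best = hs[pdf.index(max(pdf))]               # maximum of the 1D PDF
```

In the actual analysis the same grid evaluation is carried out over the full parameter set $\{\bar A, \alpha, \Omega_{dm}, H_0, \Omega_k\}$, with one-dimensional PDFs obtained by integrating (marginalizing) the likelihood over the remaining parameters.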
With this strategy we hope to gain some intuition about the GRB data from the partial outcomes. Below, we describe the different Chaplygin-based cosmologies investigated in the present work. We show our results in Tables 1-2 and in Figures 1-7. Our first step is to study the Chaplygin gas ($\alpha=1$). We recall that this equation of state, as cited above, has also raised interest in particle physics thanks to its connection with string theory and its supersymmetric extension [@string]. We shall consider the prior information: $0 \leq \bar{A} \leq 1$, $0 \leq \Omega_{dm} \leq 0.957$ and $0 \leq H_{0} \leq 100$. The curves in Fig. \[GRBc1\] represent the 99.73$\%$, 95.45$\%$ and 68.27$\%$ contours of maximum likelihood after the first marginalization, i.e., integration of the likelihood function over the remaining parameter. However, the most robust parameter estimation is the central value of the parameter obtained at the maximum of the one-dimensional PDF, as in Fig. \[GRBc2\]. From Fig. \[GRBc2\] we can obtain the final parameter estimation. We find $\Omega_{dm}=0.04^{+0.59}_{-0.04}$, $\bar{A}=0.96^{+0.04}_{-0.61}$ and $H_{0}=51.3^{+9.5}_{-5.8}$ at the 1$\sigma$ level. However, the dispersion in the GRB data is quite high. We compute these same estimates using the Supernovae Constitution sample [@constitution] in order to compare the dispersion of the two samples. For the SNe we find $\Omega_{dm}=0.00^{+0.40}_{-0.00}$, $\bar{A}=0.99^{+0.01}_{-0.41}$ and $H_{0}=59.7^{+2.1}_{-1.5}$ at the 1$\sigma$ level. Some constraints on the generalized Chaplygin gas have been placed using the Constitution data set [@Xu]. This allows a comparison between some of our results and the ones from supernovae. In general, GRBs recover the results from SNe but with a high dispersion. In our next analysis we relax the prior information about the Hubble parameter and leave it free to vary. We show the two-dimensional PDFs in Fig. (\[GRB3\]). In Fig. 
(\[GRB4\]) the solid lines are the corresponding one-dimensional probabilities. The above choice of priors on the parameter $\alpha$ is conservative. With this choice we avoid superluminal propagation in the sound speed formula. However, as argued in [@staro], the formula $v^{2}_{s}=\alpha \bar{A}$ represents the group sound velocity. Actually, in order to violate causality the wavefront velocity should exceed $1$ [@bril]. Considering this possibility, we now assume $\alpha\geq0$ and compute the one-dimensional PDF, as shown by the dashed lines in Fig. \[GRB4\]. Until now we have considered a flat Universe in our analysis. In order to have a more general statistical analysis we allow a non-vanishing curvature in our model. The complete five-dimensional analysis is computationally hard but still feasible. We assume as prior information that our Universe deviates only slightly from the flat model, allowing $\Omega_{k}$ to vary in the interval \[-0.6,0.6\]. For this case, we show the results in Fig. \[5pm\].

  Case                                                       $\alpha$                $\bar{A}$                $\Omega_{dm0}$           $H_0$                   $\Omega_{k0}$
  ---------------------------------------------------------- ----------------------- ------------------------ ------------------------ ----------------------- -------------------------
  CGM $(\alpha=1)\rightarrow$ Fig. 2                         $1$                     $0.96^{+0.04}_{-0.61}$   $0.04^{+0.59}_{-0.04}$   $51.3^{+9.5}_{-5.8}$    $0$
  GCGM $(h=0.72)\rightarrow$ Fig. 4                          $\ll 0$                 $0.98^{+0.02}_{-0.59}$   $0.01^{+0.56}_{-0.01}$   $72.0$                  $0$
  GCGM $(0\leq\alpha\leq1)\rightarrow$ Fig. 6                $-4.3^{+4.8}_{-15.2}$   $0.88^{+0.12}_{-0.54}$   $0.10^{+0.52}_{-0.10}$   $51.9^{+9.8}_{-5.6}$    $0$
  GCGM $(\alpha\geq0)\rightarrow$ Fig. 6                     $-4.3^{+4.8}_{-15.2}$   $1.00^{+0.00}_{-0.34}$   $0.00^{+0.61}_{-0.00}$   $48.2^{+9.2}_{-5.3}$    $0$
  GCGM $\Omega_k \neq0~(0\leq\alpha<1)\rightarrow$ Fig. 7    $1.2^{+5.9}_{-7.4}$     $0.64^{+0.24}_{-0.25}$   $0.31^{+0.44}_{-0.20}$   $56.2^{+10.1}_{-6.5}$   $-0.26^{+0.25}_{-0.26}$
  GCGM $\Omega_k\neq0~(\alpha\geq0)\rightarrow$ Fig. 7       $1.2^{+5.6}_{-7.3}$     $1.00^{+0.00}_{-0.61}$   $0.00^{+0.51}_{-0.00}$   $52.3^{+8.9}_{-6.0}$    $-0.53^{+0.29}_{-0.28}$

  : For the different Chaplygin-based cosmologies in the first column we show the final 1D estimates for the free parameters. The errors are computed at the $1\sigma$ level.

  Figure   $\Omega_{dm} \times H_0$   $\bar{A} \times H_0$   $\bar{A} \times \Omega_{dm0}$   $\bar{A} \times \alpha$   $\Omega_{dm} \times \alpha$   $H_0 \times \alpha$
  -------- -------------------------- ---------------------- ------------------------------- ------------------------- ----------------------------- ---------------------
  1        (1.0, 47.9)                (0, 48.1)              (0.86, 0.27)                    -                         -                             -
  3        -                          -                      (0.86, 0.27)                    (0.86, 0.25)              (0.23, -20)                   -
  5        (49.8, 1.0)                (0, 48.1)              (0.86, 0.26)                    (0.86, 0.2)               (0.23, -12.0)                 (50, +10.0)

  : Best-fit values for the 2D PDFs.

![Two-dimensional probability distribution function (PDF) for the free parameters in the CGM. The curves represent 99.73$\%$, 95.45$\%$ and 68.27$\%$ contours of maximum likelihood. The darker the region, the smaller the probability.[]{data-label="GRBc1"}](OmegaxH0CGM "fig:"){width="32.00000%"} ![Two-dimensional probability distribution function (PDF) for the free parameters in the CGM. The curves represent 99.73$\%$, 95.45$\%$ and 68.27$\%$ contours of maximum likelihood. The darker the region, the smaller the probability.[]{data-label="GRBc1"}](AxH0CGM "fig:"){width="32.00000%"} ![Two-dimensional probability distribution function (PDF) for the free parameters in the CGM. The curves represent 99.73$\%$, 95.45$\%$ and 68.27$\%$ contours of maximum likelihood. 
The darker the region, the smaller the probability.[]{data-label="GRBc1"}](AxOmegaCGM "fig:"){width="32.00000%"} ![One-dimensional PDFs for the three free parameters of the CGM.[]{data-label="GRBc2"}](OmegaCGM "fig:"){width="32.00000%"} ![One-dimensional PDFs for the three free parameters of the CGM.[]{data-label="GRBc2"}](H0CGM "fig:"){width="32.00000%"} ![One-dimensional PDFs for the three free parameters of the CGM.[]{data-label="GRBc2"}](ACGM "fig:"){width="32.00000%"} ![Two-dimensional PDFs for the GCGM fixing $H_{0}=72~km~s^{-1} ~Mpc^{-1}$. The curves represent 99.73$\%$, 95.45$\%$ and 68.27$\%$ contours of maximum likelihood. The darker the region, the smaller the probability.[]{data-label="GRB1"}](Axalpha "fig:"){width="32.00000%"} ![Two-dimensional PDFs for the GCGM fixing $H_{0}=72~km~s^{-1} ~Mpc^{-1}$. The curves represent 99.73$\%$, 95.45$\%$ and 68.27$\%$ contours of maximum likelihood. The darker the region, the smaller the probability.[]{data-label="GRB1"}](AxOmega "fig:"){width="32.00000%"} ![Two-dimensional PDFs for the GCGM fixing $H_{0}=72~km~s^{-1} ~Mpc^{-1}$. The curves represent 99.73$\%$, 95.45$\%$ and 68.27$\%$ contours of maximum likelihood. The darker the region, the smaller the probability.[]{data-label="GRB1"}](Omegaxalpha "fig:"){width="32.00000%"} ![One-dimensional PDFs for the three free parameters of the GCGM when $H_{0}=72~km~s^{-1} ~Mpc^{-1}$.[]{data-label="GRB2"}](alpha3 "fig:"){width="32.00000%"} ![One-dimensional PDFs for the three free parameters of the GCGM when $H_{0}=72~km~s^{-1} ~Mpc^{-1}$.[]{data-label="GRB2"}](A3 "fig:"){width="32.00000%"} ![One-dimensional PDFs for the three free parameters of the GCGM when $H_{0}=72~km~s^{-1} ~Mpc^{-1}$.[]{data-label="GRB2"}](Omega3 "fig:"){width="32.00000%"} ![The same as Fig. (1) but considering four free parameters for the GCGM and the prior $0\leq\alpha\leq1$.[]{data-label="GRB3"}](Axalpha4 "fig:"){width="32.00000%"} ![The same as Fig. 
(1) but considering four free parameters for the GCGM and the prior $0\leq\alpha\leq1$.[]{data-label="GRB3"}](AxOmega4 "fig:"){width="32.00000%"} ![The same as Fig. (1) but considering four free parameters for the GCGM and the prior $0\leq\alpha\leq1$.[]{data-label="GRB3"}](Omegaxalpha4 "fig:"){width="32.00000%"} ![The same as Fig. (1) but considering four free parameters for the GCGM and the prior $0\leq\alpha\leq1$.[]{data-label="GRB3"}](H0xOmega4 "fig:"){width="32.00000%"} ![The same as Fig. (1) but considering four free parameters for the GCGM and the prior $0\leq\alpha\leq1$.[]{data-label="GRB3"}](AxH04 "fig:"){width="32.00000%"} ![The same as Fig. (1) but considering four free parameters for the GCGM and the prior $0\leq\alpha\leq1$.[]{data-label="GRB3"}](alphaxH04 "fig:"){width="32.00000%"} ![One-dimensional PDF for the GCGM free parameters when $H_0$ is free to vary. The solid lines correspond to the prior choice $0\leq\alpha\leq1$ while dashed lines correspond to the prior $\alpha\geq 0$. The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="GRB4"}](H0 "fig:"){width="24.00000%"} ![One-dimensional PDF for the GCGM free parameters when $H_0$ is free to vary. The solid lines correspond to the prior choice $0\leq\alpha\leq1$ while dashed lines correspond to the prior $\alpha\geq 0$. The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="GRB4"}](A "fig:"){width="24.00000%"} ![One-dimensional PDF for the GCGM free parameters when $H_0$ is free to vary. The solid lines correspond to the prior choice $0\leq\alpha\leq1$ while dashed lines correspond to the prior $\alpha\geq 0$. The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="GRB4"}](Omega "fig:"){width="24.00000%"} ![One-dimensional PDF for the GCGM free parameters when $H_0$ is free to vary. 
The solid lines correspond to the prior choice $0\leq\alpha\leq1$ while dashed lines correspond to the prior $\alpha\geq 0$. The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="GRB4"}](alpha4 "fig:"){width="24.00000%"} ![One-dimensional PDF for the GCGM free parameters when we allow a non-vanishing curvature. The solid lines correspond to the prior choice $0\leq\alpha<1$ while dashed lines correspond to the prior $\alpha\geq0$. The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="5pm"}](H05pm "fig:"){width="25.00000%"} ![One-dimensional PDF for the GCGM free parameters when we allow a non-vanishing curvature. The solid lines correspond to the prior choice $0\leq\alpha<1$ while dashed lines correspond to the prior $\alpha\geq0$. The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="5pm"}](A5pm "fig:"){width="25.00000%"} ![One-dimensional PDF for the GCGM free parameters when we allow a non-vanishing curvature. The solid lines correspond to the prior choice $0\leq\alpha<1$ while dashed lines correspond to the prior $\alpha\geq0$. The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="5pm"}](Omega5pm "fig:"){width="25.00000%"} ![One-dimensional PDF for the GCGM free parameters when we allow a non-vanishing curvature. The solid lines correspond to the prior choice $0\leq\alpha<1$ while dashed lines correspond to the prior $\alpha\geq0$. The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="5pm"}](alpha5pm "fig:"){width="25.00000%"} ![One-dimensional PDF for the GCGM free parameters when we allow a non-vanishing curvature. The solid lines correspond to the prior choice $0\leq\alpha<1$ while dashed lines correspond to the prior $\alpha\geq0$. 
The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="5pm"}](OmegaK5pm "fig:"){width="25.00000%"} ![One-dimensional PDF for the GCGM free parameters when we allow a non-vanishing curvature. The solid lines correspond to the prior choice $0\leq\alpha<1$ while dashed lines correspond to the prior $\alpha\geq0$. The final estimation for the parameter $\alpha$ does not depend on its prior information.[]{data-label="5pm"}](OmegaK5pm1 "fig:"){width="25.00000%"} Discussion and Conclusions ========================== In this study we have analyzed the Chaplygin gas model with a sample of 42 GRBs. Although the use of GRBs as a cosmological tool is a promising way to probe cosmology at high redshifts, we have verified that the available data are still insufficient to impose precise constraints on cosmological models. As observed in our analysis, the dispersion is still high when compared with other observational data sets. However, we hope that with future data from the final Swift BAT Catalog we will be able to place strong constraints on the dark energy/matter properties. In our analysis, the unification scenario was not imposed from the beginning. This means that we allow an extra dark matter contribution ($\Omega_{dm}$) in our calculations in order to probe whether the unification scenario is favoured. In our first analysis the free parameters ($\bar{A},\Omega_{dm}$ and $H_{0}$) of the Chaplygin gas ($\alpha=1$) were well constrained. Our results are in agreement with the Supernova results [@fabris03]. The only difference is that we find a lower value for the Hubble parameter, $H_{0}=51.3^{+9.2}_{-5.7}$ (1$\sigma$). However, it is possible to find in the literature similar results for the parameter $H_{0}$ [@H0]. In our second analysis, in order to check the behaviour of the model when $H_{0}=72~km~s^{-1}~Mpc^{-1}$, we leave $\alpha$ free, i.e., the so-called generalized Chaplygin gas model. From Figs. 
\[GRB1\] and \[GRB2\] the unification scenario is again favoured. However, the uncertainties are still high. The parameter $\alpha$ assumes a large negative value. There is no peak in the parameter $\alpha$ distribution and the probability remains constant for negative values. For the background dynamics, the region ($\alpha<-1$) represents a behavior different from the matter-dominated phase when structures start to form. On the other hand, negative values for $\alpha$ imply an imaginary sound velocity, leading to small-scale instabilities at the perturbative level. Rigorously, the general situation is more complex: such instabilities for fluids with negative pressure may disappear if the hydrodynamical approach is replaced by a more fundamental description using, e.g., scalar fields. However, this is not true for the Chaplygin gas: even in a fundamental approach, using for example the Born-Infeld action, the squared sound speed may be negative if $\alpha<0$. Perhaps the restriction $\alpha\geq0$ must be imposed for all observational tests. We also work with a set of four free parameters. Varying all four parameters, the preceding results are confirmed. Leaving the parameter $H_{0}$ free to vary, we confirm that the hypersurface $H_{0}=72~km~s^{-1}~Mpc^{-1}$ does not represent the maximum probability in the 4D parameter space. Chaplygin gas models favour values lower than $H_{0}=72~km~s^{-1}~Mpc^{-1}$ [@fabris03]. We also observe that there is a significant difference in the final parameter estimation when we consider the prior $\alpha\geq0$ instead of $0\leq\alpha<1$. For instance, the unification scenario ($\Omega_{dm}= 0$) is favoured only with the choice $\alpha\geq0$. Moreover, there is now a peak in the $\alpha$ distribution at $\alpha=-4.3^{+4.8}_{-15.2}$, but with a high dispersion. Again, negative values for $\alpha$ are favoured, although the two-dimensional PDF ($\alpha \times H_{0}$) in Fig. \[GRB3\] indicates a high probability for $\alpha>6$. 
Such a contradiction seems to be an artifact of the marginalization process, as can also be seen in the two-dimensional PDFs ($\Omega_{dm} \times H_{0}$) and ($\bar{A} \times H_{0}$) in Fig. \[GRB3\]. These plots confirm that the final 1D estimation can be very different from the partial 2D ones. This difference is due to the integration of the probability function over the adopted prior ranges of the remaining parameters. The analysis with five free parameters confirms some of the previous results. Negative curvature is preferred, as is also the case for the SN data [@fabris03]. Also, the parameter $\alpha$ is now estimated with a positive value, in contrast with the previous results. The Chaplygin gas parameters have been estimated in many papers, considering different analyses and several observational data sets. Constraints critically depend on whether one treats the Chaplygin gas as true quartessence (replacing both dark matter and dark energy) or if one allows it to coexist with a normal dark matter component. The former situation is widely considered in the literature. As we leave the density parameter $\Omega_{dm}$ free to vary in all the cases analysed here, it is not possible to directly compare our results with unified Chaplygin cosmologies unless we assume the prior $\Omega_{dm}=0$. This case has been studied using GRBs and other probes in reference [@liang11]. For a comparison with this reference, Fig. \[UnifGRB\] shows the two-dimensional probability for the free parameters of the unified ($\Omega_{dm}=0$) GCG model. The best fit occurs at ($\alpha=0.15$, $\bar{A}=0.75$). This result agrees (at 1$\sigma$) with the joint analysis shown in [@liang11]. ![Constraints on the free parameters of the unified ($\Omega_{dm}=0$) GCG model.[]{data-label="UnifGRB"}](unificationGCG){width="32.00000%"} The analysis of section 3 can be compared with [@fabris03], where the influence of a free $\Omega_{dm}$ parameter on the final estimations was taken into account. 
Our results are in good agreement with those obtained in [@fabris03]. Finally, we remark that perturbative analyses of Chaplygin models reveal a large positive value ($\alpha \gg 200$) for the parameter $\alpha$ [@fabris04], while kinematic tests favour negative values or values close to zero. At the background level, combining different data sets (including, for example, SNe, CMB, BAO, $H(z)$ data and the galaxy cluster mass fraction) will provide a more accurate scenario for each Chaplygin-based cosmology studied in this work. We leave this analysis, including the perturbative study, for future work. Acknowledgements {#acknowledgements .unnumbered} ================ R.C.F., S.V.B.G. and H.E.S.V. thank DAAD (Germany) and CNPq, CAPES and FAPES (Brazil) for partial financial support. S.V.B.G. thanks the Laboratoire d’Annecy-le-Vieux de Physique Theorique (France) and H.E.S.V. thanks the Fakultät für Physik, Universität Bielefeld (Germany) for kind hospitality during part of the elaboration of this work. We thank Winfried Zimdahl and Christian Byrnes for the comments and suggestions. [99]{} M. Persic, P. Salucci and F. Stel, Mon. Not. Roy. Astron. Soc. [**281**]{}, 27 (1996); A. Borriello and P. Salucci, Mon. Not. Roy. Astron. Soc. [**323**]{}, 285 (2001). C. S. Frenk, A. E. Evrard, S. D. M. White and F. J. Summers, Astrophys. J. [**472**]{}, 460 (1996). , J. Primack, Proceedings of the Princeton 250th Anniversary conference, June 1996, Critical Dialogues in Cosmology, ed. N. Turok (World Scientific), \[arXiv:astro-ph/9610078\]. A. G. Riess [*et al.*]{}, Astron. J. [**116**]{}, 1009 (1998); S. Perlmutter [*et al.*]{}, Nature [**391**]{}, 51 (1998); J. L. Tonry [*et al.*]{}, Astrophys. J. [**594**]{}, 1 (2003). T. Padmanabhan, Gen. Rel. Grav. [**40**]{}, 529 (2008). S. Capozziello, S. Nojiri, S. D. Odintsov and A. Troisi, Phys. Lett. [**B 639**]{}, 135 (2006), \[arXiv:astro-ph/0604431\]; S. Nojiri and S. D. Odintsov, Phys. Rev. 
[**D 74**]{}, 086005 (2006), \[arXiv:hep-th/0608008\]; S. Nojiri and S. D. Odintsov, Int. J. Geom. Meth. Mod. Phys. [**4**]{}, 115 (2007), \[arXiv:hep-th/0601213\]; L. Amendola, R. Gannouji, D. Polarski and Shinji Tsujikawa, Phys. Rev. [**D 75**]{}, 083504 (2007), \[arXiv:gr-qc/0612180\]. T. Buchert, M. Kerscher and C. Sicka, Phys. Rev. [**D62**]{}, 043525 (2000), \[arXiv:astro-ph/9912347\]; S. Rasanen, JCAP [**0402**]{}, 003 (2004), \[arXiv:astro-ph/0311257\]. . E. P. S. Shellard and R. A. Battye, Phys. Rept. [**307**]{}, 227 (1998), \[arXiv:astro-ph/9808220\]. A. Falvard [*et al.*]{}, Astropart. Phys. [**20**]{}, 467 (2004), \[arXiv:astro-ph/0210184\]. A. Bottino, F. Donato, N. Fornengo and P. Salati, Phys. Rev. [**D 72**]{}, 083518 (2005), \[arXiv:hep-ph/0507086\]. T. Appelquist, H. C. Cheng, and B. A. Dobrescu, Phys. Rev. [**D 64**]{}, 035002 (2001), \[arXiv:hep-ph/0012100\]. L. Randall and R. Sundrum, Phys. Rev. Lett. [**83**]{}, 3370 (1999), \[arXiv:hep-ph/9905221\]. S. Weinberg, Rev. Mod. Phys. [**61**]{}, 1 (1989); S. M. Carroll, Living Rev. Rel. [**4**]{}, 1 (2001), \[arXiv:astro-ph/0004075\]; B. Ratra and P. J. E. Peebles, Phys. Rev. [**D 37**]{}, 3406 (1988); R. R. Caldwell, R. Dave, and P. J. Steinhardt, Phys. Rev. Lett. [**80**]{}, 1582 (1998), \[arXiv:astro-ph/9708069\]. C. Armendariz-Picon, V. Mukhanov, and P. J. Steinhardt, Phys. Rev. Lett. [**85**]{}, 4438 (2000), \[arXiv:astro-ph/0004134\]; M. Malquarti, E. J. Copeland, A. R. Liddle, and M. Trodden, Phys. Rev. [**D 67**]{}, 123503 (2003), \[arXiv:astro-ph/0302279\]. A. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. [**B511**]{}, 265 (2001), \[arXiv:gr-qc/0103004\]. M. Bordemann and J. Hoppe, Phys. Lett. [**B317**]{}, 315 (1993), \[arXiv:hep-th/9307036\]; N. Ogawa, Phys. Rev. [**D 62**]{}, 085023 (2000), \[arXiv:hep-th/0003288\]; R. Jackiw and A. P. Polychronakos: Supersymmetric fluid mechanics. Phys. Rev. D 62, 085019 (2000) M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. 
[**D 66**]{}, 043507 (2002), \[arXiv:gr-qc/0202064\]. L. Amendola, F. Finelli, C. Burigana and D. Carturan, JCAP, [**07**]{}, 005 (2003). Puxum Wu and Hongwei Yu, ApJ, [**658**]{}, 663 (2007). T. Giannantonio and A. Melchiorri, Classical and Quantum Gravity, [**23**]{}, 12 (2006). M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. [**D 70**]{}, 083519 (2004), \[astro-ph/0407239\]; N. Bilic, R. J. Lindebaum, G. B. Tupper and Raoul D. Viollier, JCAP [**0411**]{}, 008 (2004), \[astro-ph/0307214\]; J. C. Fabris, S. V. B. Gonçalves and R. de Sá Ribeiro, Gen. Rel. Grav. [**36**]{}, 211 (2004), \[arXiv:astro-ph/0307028\]. P. T. Silva and O. Bertolami, Astrophys. J. [**599**]{}, 829 (2003), \[arXiv:astro-ph/0303353\]. J. V. Cunha, J. S. Alcaniz and J. A. S. Lima, Phys. Rev. [**D 69**]{}, 083501 (2004), \[arXiv:astro-ph/0306319\]. J. S. Alcaniz, D. Jain and A. Dev, Phys. Rev. [**D 67**]{}, 043514 (2003), \[arXiv:astro-ph/0210476\]. S. del Campo and J. Villanueva, IJMPD [**18**]{}, 2007 (2009); Ch-G. Park, Jai-chan Hwang, J. Park and H. Noh, arXiv:0910.4202. A. V. Filippenko, [*White Dwarfs: Comsological and galactic probes*]{} vol. 332, Springer (2005) \[arXiv:astro-ph/0410609\]. J.C. Fabris, S. V. B. Gonçalves and P. E. de Souza \[arXiv:astro-ph/0207430\]; R. Colistete, J. C. Fabris, S.V.B. Gonçalves and P.E. de Souza, Int. J. Mod. Phys. [ **D13**]{}, 669 (2004); P. T. Silva and O. Bertolami, Astrophys. J. [ **599**]{}, 829 (2003). J. Berian James, T. M. Davis, B. P. Schmidt and A. G. Kim, Mon. Not .Roy. Astron. Soc. [**370**]{}, 933 (2006), \[arXiv:astro-ph/0605147\]. D. Q. Lamb and D. E. Reichart, Proceed. Rome Workshop on Gamma-ray Bursts in the Afterglow Era \[arXiv:astro-ph/0108099\]; V. Bromm and A. Loeb, Apj 575, 11 (2002) \[arXiv: astro-ph 0201400\]. R. W. Klebesadel, I. B. Strong and R. A. Olson, Ap. J. Lett. [**182**]{}, L85 (1973). C. A. Meegan [*et al.*]{}, Nature [**355**]{}, 143 (1992); E. Costa [*et al.*]{}, Nature [**387**]{}, 783 (1997). Z. G. Dai, E. W. 
Liang and D. Xu, Astrophys. J. [**612**]{}, L101 (2004), \[arXiv:astro-ph/0407497\]; D. Xu, Z. G. Dai and E. W. Liang, Astrophys. J. [**633**]{}, 603 (2005), \[arXiv:astro-ph/0501458\]. B. Schaefer et. al., Astrophys. J. [ **598**]{}, 102 (2003); C. Firmani, G. Ghisellini, G. Ghirlanda and V. Avila-Reese, Mon. Not. Roy. Astron. Soc. [**360**]{}, L1 (2005); B. E. Schaefer, Astrophys. J. [**660**]{}, 16 (2007); L. Amati et. al., Mon. Not. Roy. Astron. Soc. [**391**]{}, 577 (2008). C. Firmani, G. Ghisellini, G. Ghirlanda and V. Avila-Reese, Mon. Not. Roy. Astron. Soc. [**360**]{}, L1 (2005), \[arXiv:astro-ph/0501395\]. Vahe Petrosian, Aurelien Bouvier, Felix Ryde, \[arXiv:0909.5051\]. N. Liang, W. K. Xiao, Y. Liu and S. N. Zhang, Astrophys. J. [**685**]{}, 354 (2008), \[arXiv:0802.4262\]. N. Liang, P. Wu and Z. H. Zhu, (2010), \[arXiv:1006.1105v1\]. N. Liang, L. Xu and ZH. Zhu, Astronomy & Astrophysics [**527**]{}, A11 (2011), \[arXiv:1009.6059v3\]. T. M. Davies et al. Astrophys. J., [**666**]{}, 716 (2007). L. Amanullah et al, Astrophys. J., [**716**]{}, 712 (2010) \[arXiv:1004.1711\]. S. Capozziello and L. Izzo, Astron. Astrophys. [ **490**]{}, 31 (2008); L. Izzo, S. Capozziello, G. Covone and M. Capaccioli, Astron. Astrophys. [**508**]{}, 63 (2009); V.F. Cardone, S. Capozziello and M.G. Dainotti, Mon. Not. R. Astron. Soc. 400, [**775**]{} (2009); N. Liang, P. Wu, S. N. Zhang, Phys. Rev. D [**81**]{}, 083518 (2010); H. Wei, JCAP [ **1008**]{}, 020 (2010); M. Demianski, E. Piedipalumbo and C. Rubano, (2010)\[arXiv:1010.0855v1\]. F. Y. Wang, Z. G. Dai and S. Qi, Astron. Astrophys. [**507**]{}, 53 (2009); N. Liang, L. Xu, Z. H. Zhu, Astron. Astrophys. [ **527**]{}, (2011). O. Bertolami, P. T. Silva, Mon. Not. Roy. Astron. Soc. [**365**]{}, 1149 (2006), \[arXiv:astro-ph/0507192v1\]. H. J. Mosquera Cuesta, M. H. Dumet, R. Turcati, C. A. Bonilla Quintero, C. Furlanetto and J. Morais, \[arXiv:astro-ph/0610796v1\]. J. C. Fabris and J. Martin, Phys. Rev. 
[**D 55**]{}, 5205 (1997). M. Hicken [*et al*]{}., Astrophys. J. [**700**]{}, 1097 (2009). \[arXiv:astro-ph/0901.4804\]. L. Xu and J. Lu, JCAP 1003, 25 (2010) \[arXiv:1004.3344\]. V. Gorini, A. Y. Kamenshchik, U. Moschella, O. F. Piattella and A. A. Starobinsky, JCAP [**02**]{}, 016 (2008), \[arXiv:0711.4242\]. L. Brillouin, [*Wave Propagation and Group Velocity*]{}, 1960 (Academic Press). R. Colistete Jr. and J. C. Fabris, Class. Quant. Grav [**22**]{}, 2813 (2005), \[arXiv:astro-ph/0501519\]. Verkhodanov, O. V., Parijskij, Yu. N. and Starobinsky, A. A., Bull. Spec. Astrophys. obs., [**58**]{}, 5-15 (2005); Arp, H., Astrophys J. [**571**]{}, 615-618 (2002). J. C. Fabris, S. V. B. Gonçalves, H. E. S. Velten and W. Zimdahl, Phys. Rev. D[**78**]{}, 103523 (2008); O.F. Piattella, JCAP 1003:012 (2010); J. C. Fabris, H. E. S. Velten and W. Zimdahl, Phys.Rev.D81:087303 (2010). [^1]: e-mail: rc\_freitas@terra.com.br [^2]: e-mail: sergio.vitorino@pq.cnpq.br [^3]: e-mail: velten@cce.ufes.br
--- abstract: 'A wide range of mechanisms have been proposed to supply the energy for gamma-ray bursts (GRB) at cosmological distances. It is a common misconception that some of these, notably NS-NS mergers, cannot meet the energy requirements suggested by recent observations. We show here that GRB energies, even at the most distant redshifts detected, are compatible with current binary merger or collapse scenarios involving compact objects. This is especially so if, as expected, there is a moderate amount of beaming, since current observations constrain the energy per solid angle much more strongly and directly than the total energy. All plausible progenitors, ranging from NS-NS mergers to various hypernova-like scenarios, eventually lead to the formation of a black hole with a debris torus around it, so that the extractable energy is of the same order, $10^{54}$ ergs, in all cases. MHD conversion of gravitational into kinetic and radiation energy can significantly increase the probability of observing large photon fluxes, although significant collimation may achieve the same effect with neutrino annihilation in short bursts. The lifetime of the debris torus is dictated by a variety of physical processes, such as viscous accretion and various instabilities; these mechanisms dominate at different stages in the evolution of the torus and provide for a range of gamma-ray burst lifetimes.' author: - 'Mészáros, P.$^1$, Rees, M.J.$^2$ & Wijers, R.A.M.J.$^{2,3}$' title: ' Energetics and Beaming of Gamma Ray Burst Triggers [^1] ' --- $^1$Dpt. of Astronomy & Astrophysics, Pennsylvania State University, University Park, PA 16803\ $^2$Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, U.K.\ $^3$Dpt. 
of Physics & Astronomy, SUNY, Stony Brook, NY 11794-3800\ Date :  [ 9/03/98]{} Introduction ============ The discovery of afterglows in the last year has moved the investigation of gamma-ray bursts (GRB) to a new plane. It not only has opened the field to new wavelengths and extended observations to longer time scales, making the identification of counterparts possible, but also provided confirmation for much of the earlier work on the fireball shock model of GRB, in which the $\gamma$-ray emission arises at radii of $10^{13}-10^{15}$ cm (Rees & Mészáros 1992, 1994, Mészáros & Rees 1993, Paczyński & Xu 1994, Katz 1994, Sari & Piran 1995). In particular, this model led to the prediction of the quantitative nature of the signatures of afterglows, in substantial agreement with subsequent observations (Mészáros & Rees 1997a, Costa et al. 1997, Vietri 1997a, Tavani 1997, Waxman 1997; Reichart 1997, Wijers et al. 1997). More recently, significant interest was aroused by the report of an afterglow for the burst GRB971214 at a redshift $z=3.4$, whose fluence corresponds to a $\gamma$-ray energy of $10^{53.5} (\Omega_\gamma /4\pi)$ erg (Kulkarni et al. 1998). There is also possible evidence that some fraction of the detected afterglows may arise in relatively dense gaseous environments. This is suggested, e.g., by evidence for dust in GRB970508 (Reichart 1998), the absence of an optical afterglow and presence of strong soft X-ray absorption in GRB 970828 (Groot et al. 1997, Murakami et al. 1997), the lack of an optical afterglow in the (radio-detected) afterglow of GRB980329 (Taylor et al. 1998), etc. This has led to the suggestion that “hypernova" models (Paczyński 1998, Fryer & Woosley 1998) may be responsible, since hypernovae are thought to involve the collapse of a massive star or its merger with a compact companion, both of which would occur on time scales short enough to imply a burst within the star forming region. 
By contrast, neutron star - neutron star (NS-NS) or neutron star - black hole (NS-BH) mergers would lead to a similar BH plus debris torus system and roughly the same total energies (a point not generally appreciated), but the mean distance traveled from birth is of order several kpc (Bloom, Sigurdsson & Pols 1998), leading to a burst presumably in a less dense environment. The fits of Wijers & Galama (1998) to the observational data on GRB 970508 and GRB 971214 in fact suggest external densities in the range of 0.04–0.4 cm$^{-3}$, which would be more typical of a tenuous interstellar medium. In any case, while it is at present unclear which, if any, of these progenitors is responsible for the bulk of GRB, or whether perhaps different progenitors represent different subclasses of GRB, there is general agreement that they all would be expected to lead to the generic fireball shock scenario mentioned above. Trigger Mechanisms and Black Hole/Debris Torus Systems ======================================================= The first detailed investigations of the disruption of a NS in a merger with another NS or a BH were carried out by Lattimer & Schramm (1976), and the significance of this work for GRB has only recently started to be appreciated. It has become increasingly apparent in the last few years that [*all*]{} plausible GRB progenitors suggested so far (e.g. NS-NS or NS-BH mergers, Helium core - black hole \[He/BH\] or white dwarf - black hole \[WD-BH\] mergers, and a wide category labeled as hypernova or collapsars including failed supernova Ib \[SNe Ib\], single or binary Wolf-Rayet \[WR\] collapse, etc.) are expected to lead to a BH plus debris torus system. An important point is that the overall energetics from these various progenitors do not differ by more than about one order of magnitude. Two large reservoirs of energy are available in principle: the binding energy of the orbiting debris, and the spin energy of the black hole (Mészáros & Rees 1997b). 
The first can provide up to 42% of the rest mass energy of the disk, for a maximally rotating black hole, while the second can provide up to 29% of the rest mass of the black hole itself. The $\nu\bar\nu \to e^+ e^-$ process (Eichler et al. 1989) can tap the thermal energy of the torus produced by viscous dissipation. For this mechanism to be efficient, the neutrinos must escape before being advected into the hole; on the other hand, the efficiency of conversion into pairs (which scales with the square of the neutrino density) is low if the neutrino production is too gradual. Typical estimates suggest a fireball of $\siml 10^{51}$ erg (Ruffert et al. 1997, Popham, Woosley & Fryer 1998), except perhaps in the “collapsar" or failed SN Ib case where Popham et al. (1998) estimate $10^{52.3}$ ergs for optimum parameters. If the fireball is collimated into a solid angle $\Omj$ then of course the apparent “isotropized" energy would be larger by a factor $(4\pi/\Omj)$ , but unless $\Omj$ is $\siml 10^{-2} -10^{-3}$ this may fail to satisfy the apparent isotropized energy of $10^{53.5}$ ergs implied by a redshift $z=3.4$ for GRB 971214. An alternative way to tap the torus energy is through dissipation of magnetic fields generated by the differential rotation in the torus (Paczyński 1991, Narayan, Paczyński & Piran 1992, Mészáros & Rees 1997b, Katz 1997). Even before the BH forms, a NS-NS merging system might lead to winding up of the fields and dissipation in the last stages before the merger (Mészáros & Rees 1992, Vietri 1997a). The above mechanisms tap the energy available in the debris torus or disk. However, a hole formed from a coalescing compact binary is guaranteed to be rapidly spinning, and, being more massive, could contain more energy than the torus; the energy extractable in principle through MHD coupling to the rotation of the hole by the Blandford & Znajek (1977) effect could then be even larger than that contained in the orbiting debris (Mészáros & Rees 1997b, Paczyński 1998). 
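The 42% figure quoted above follows from the specific orbital energy at the innermost stable circular orbit (ISCO) of a Kerr black hole. A short numerical sketch using the standard Bardeen-Press-Teukolsky expressions (textbook relativity, not code from the paper itself):

```python
import math

def r_isco(a):
    """Prograde ISCO radius in units of GM/c^2 for a Kerr hole of
    dimensionless spin parameter a (Bardeen, Press & Teukolsky 1972)."""
    z1 = 1.0 + (1.0 - a * a) ** (1 / 3) * ((1.0 + a) ** (1 / 3) + (1.0 - a) ** (1 / 3))
    z2 = math.sqrt(3.0 * a * a + z1 * z1)
    return 3.0 + z2 - math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

def accretion_efficiency(a):
    """Fraction of its rest mass that matter releases while spiralling in
    to the ISCO: eta = 1 - E_ISCO, with E_ISCO the specific orbital
    energy there.  The exactly extreme case a = 1 is a degenerate 0/0
    limit of these formulas, so use a slightly below 1."""
    r = r_isco(a)
    e_isco = (r ** 1.5 - 2.0 * r ** 0.5 + a) / (
        r ** 0.75 * math.sqrt(r ** 1.5 - 3.0 * r ** 0.5 + 2.0 * a))
    return 1.0 - e_isco
```

This gives $\eta \simeq 0.057$ for $a=0$ and $\eta \to 1 - 1/\sqrt{3} \simeq 0.42$ as $a \to 1$, i.e. the roughly 6% and 42% of the torus rest mass quoted for non-rotating and maximally rotating holes.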
Collectively, any such MHD outflows have been referred to as Poynting jets. The various progenitors differ only slightly in the mass of the BH and that of the debris torus they produce, but they may differ more markedly in the amount of rotational energy contained in the BH. Strong magnetic fields, of order $10^{15}$ G, are needed to carry away the rotational or gravitational energy in a time scale of tens of seconds (Usov 1994, Thompson 1994). If the magnetic fields do not thread the BH, then a Poynting outflow can at most carry the gravitational binding energy of the torus. For a maximally rotating and for a non-rotating BH this is 0.42 and 0.06 of the torus rest mass, respectively. The torus or disk mass in a NS-NS merger is $M_d\sim 0.1\msun$ (Ruffert & Janka 1998), and for a NS-BH, a He-BH, a WD-BH merger or a binary WR collapse it may be estimated at $M_d \sim 1\msun$ (Paczyński 1998, Fryer & Woosley 1998). In the He-BH and WD-BH mergers and the WR collapse the mass of the disk is uncertain due to lack of calculations on continued accretion from the envelope, so $1\msun$ is just a rough estimate. The largest energy reservoir is therefore, ‘prima facie’, associated with NS-BH, He-BH, WD-BH or binary WR collapse, which have larger disks and fast rotation, the maximum energy being $\sim 8 \times 10^{53} \eps (M_d/\msun)$ ergs; for the failed SNe Ib (which is a slow rotator) it is $\sim 1.2\times 10^{53}\eps (M_d/\msun)$ ergs, and for the (fast rotating) NS-NS merger it is $\sim 0.8\times 10^{53} \eps (M_d/0.1 \msun)$ ergs, where $\eps$ is the efficiency in converting gravitational into MHD jet energy. Conditions for the efficient escape of a high-$\Gamma$ jet may, however, be less propitious if the “engine" is surrounded by an extensive envelope. If the magnetic fields in the torus thread the BH, the rotational energy of the BH can be extracted via the B-Z (Blandford & Znajek 1977) mechanism (Mészáros & Rees 1997b).
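The maximum Poynting-jet energies quoted above follow from multiplying the binding-energy fractions by the disk rest-mass energy. A short numerical sketch (constants rounded, arithmetic ours):

```python
MSUN_C2 = 1.8e54   # erg: rest-mass energy of one solar mass (rounded)

def jet_energy(m_disk_msun, f_bind, eps=1.0):
    """Maximum MHD jet energy (erg) from a disk of mass m_disk_msun (in Msun)
    with binding-energy fraction f_bind and conversion efficiency eps."""
    return f_bind * eps * m_disk_msun * MSUN_C2

# Fast rotator, M_d ~ 1 Msun (NS-BH, He-BH, WD-BH, binary WR collapse):
print(f"{jet_energy(1.0, 0.42):.1e} erg")   # ~8e53 erg, as quoted
# Slow rotator, failed SN Ib (binding fraction 0.06):
print(f"{jet_energy(1.0, 0.06):.1e} erg")   # ~1e53 erg, close to the quoted 1.2e53
# Fast-rotating NS-NS merger, M_d ~ 0.1 Msun:
print(f"{jet_energy(0.1, 0.42):.1e} erg")   # ~0.8e53 erg, as quoted
```

The small mismatch in the slow-rotator case (1.1 versus 1.2 in units of $10^{53}$ erg) simply reflects the rounding of $\msun c^2$.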
The extractable energy is $\eps f(a)\Mbh c^2$, where $\eps$ is the MHD efficiency factor and $a = Jc/G M^2$ is the rotation parameter, which equals 1 for a maximally rotating black hole. $f(a)=1-\sqrt{\frac{1}{2}[1+\sqrt{1-a^2}]}$ is small unless $a$ is close to 1, where it sharply rises to its maximum value $f(1)=0.29$, so the main requirement is a rapidly rotating black hole, $a \simg 0.5$. For a maximally rotating BH, the extractable energy is therefore $0.29 \eps \Mbh c^2 \sim 5\times 10^{53} \eps (\Mbh/\msun)$ ergs. Rapid rotation is essentially guaranteed in a NS-NS merger, since the radius (especially for a soft equation of state) is close to that of a black hole and the final orbital spin period is close to the required maximal spin rotation period. Since the central BH will have a mass of about $2.5 \msun$ (Ruffert & Janka 1998), the NS-NS system can thus power a jet of up to $\sim 1.3 \times 10^{54} \eps (\Mbh/2.5\msun)$ ergs. The scenarios less likely to produce a fast rotating BH are the NS-BH merger (where the rotation parameter could be limited to $a \leq M_{ns}/\Mbh$, unless the BH is already fast-rotating) and the failed SNe Ib (where the last material to fall in would have maximum angular momentum, but the material that was initially close to the hole has less angular momentum). A maximal rotation rate may also be possible in a He-BH merger, depending on what fraction of the He core gets accreted along the rotation axis as opposed to along the equator (Fryer & Woosley 1998), and the same should apply to the binary fast-rotating WR scenario, which probably does not differ much in its final details from the He-BH merger. For a fast rotating BH of $3\msun$ threaded by the magnetic field, the maximal energy carried out by the jet is then $\sim 1.6\times 10^{54} \eps (\Mbh/3\msun)$ ergs. 
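The numbers above can be reproduced directly from $f(a)$; a minimal sketch (ours, with $\msun c^2$ rounded to $1.8\times 10^{54}$ erg):

```python
import math

def f_bz(a):
    """Extractable rotational-energy fraction via Blandford-Znajek:
    f(a) = 1 - sqrt((1 + sqrt(1 - a^2)) / 2), with a the rotation parameter."""
    return 1.0 - math.sqrt(0.5 * (1.0 + math.sqrt(1.0 - a * a)))

MSUN_C2 = 1.8e54   # erg (rounded)

print(round(f_bz(1.0), 2))              # 0.29 for a maximally rotating hole
print(round(f_bz(0.5), 3))              # ~0.03: f(a) stays small until a -> 1
print(f"{0.29 * 2.5 * MSUN_C2:.1e}")    # ~1.3e54 erg for a 2.5 Msun NS-NS remnant
print(f"{0.29 * 3.0 * MSUN_C2:.1e}")    # ~1.6e54 erg for a 3 Msun BH
```

Note how steep $f(a)$ is: even at $a=0.5$ only $\sim 3\%$ of the hole's rest mass is extractable, an order of magnitude below the maximal 29%, which is why rapid rotation is the key requirement.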
Thus in the accretion powered jet case the total energetics between the various models differs at most by a factor 20, whereas in the rotationally (B-Z) powered cases they differ by at most a factor of a few, depending on the rotation parameter. For instance, even allowing for low total efficiency (say 30%), a NS-NS merger whose jet is powered by the torus binding energy would only require a modest beaming of the $\gamma$-rays by a factor $(4\pi/\Omj)\sim 20$, or no beaming if the jet is powered by the B-Z mechanism, to produce the equivalent of an isotropic energy of $10^{53.5}$ ergs. The beaming requirements of BH-NS and some of the other progenitor scenarios are even less constraining.

Intrinsic Time Scales
=====================

A question which has remained largely unanswered so far is what determines the characteristic duration of bursts, which can extend to tens, or even hundreds, of seconds. This is of course very long in comparison with the dynamical or orbital time scale for the “triggers" described in section 2. While bursts lasting hundreds of seconds can easily be derived from a very short, impulsive energy input, such an input is generally unable to account for the large fraction of bursts which show complicated light curves. This hints at the desirability of a “central engine" lasting much longer than a typical dynamical time scale. Observationally (Kouveliotou et al. 1993) the short ($\siml 2$ s) and long ($\simg 2$ s) bursts appear to represent two distinct subclasses, and one early proposal to explain this was that accretion induced collapse (AIC) of a white dwarf (WD) into a NS plus debris might be a candidate for the long bursts, while NS-NS mergers could provide the short ones (Katz & Canel 1996). As indicated by Ruffert et al.
(1997), $\nu\bar\nu$ annihilation will generally tend to produce short bursts $\siml 1$ s in NS-NS systems, requiring collimation by $10^{-1}-10^{-2}$, while Popham, Woosley & Fryer (1998) argued that in collapsars and WD/He-BH systems longer $\nu\bar\nu$ bursts may be possible. Longer bursts however imply lower $e^\pm$ conversion efficiency, so the observed fluxes could then be explained only if the jets were extremely collimated, by at least $10^{-3}-10^{-4}$. We outline here several possible mechanisms, within the context of the basic compact merger or collapse scenario leading to a BH plus debris torus, which can lead to an adequate energy release on such time scales. If the trigger of a long-duration burst involves a black hole, then an acceptable model requires that the surrounding torus should not completely drain into the hole, or be otherwise dispersed, on too short a time scale. There have been some discussions in the literature of possible ‘runaway instabilities’ in relativistic tori (Nishida et al. 1996, Abramowicz, Karas & Lanza 1997, Daigne & Mochkovitch 1997): these are analogous to the runaway Roche lobe overflow predicted, under some conditions, in binary systems. These instabilities can be virulent in a torus where the specific angular momentum is uniform throughout, but are inhibited by a spread in angular momentum. In a torus that was massive and/or thin enough to be self-gravitating, bar-mode gravitational instabilities could lead to further redistribution of angular momentum and/or to energy loss by gravitational radiation within only a few orbits. Whether a torus of given mass is dynamically unstable depends on its thickness and stratification, which in turn depend on internal viscous dissipation and neutrino cooling. The disruption of a neutron star (or any analogous process) is almost certain to lead to a situation where violent instabilities redistribute mass and angular momentum within a few dynamical time scales (i.e.
in much less than a second). A key issue for gamma ray burst models is the nature of the surviving debris after these violent processes are over: what is the maximum mass of a remnant disc/torus which is immune to very violent instabilities, and which can therefore in principle survive for long enough to power the bursts?

Magnetic torques and viscosity
------------------------------

Differential rotation may amplify magnetic fields until magnetic viscosity dominates neutrino viscosity. Moreover, the torques associated with a large scale magnetic field may also extract energy and angular momentum by driving a relativistic outflow. If the trigger is to generate the burst energy, over a period 10–100 sec, via Poynting flux — either through a relativistic wind ‘spun off’ the torus or via the Blandford-Znajek mechanism — the required field is a few times $10^{15}$ G. A weaker field would extract inadequate power; on the other hand, if the large-scale field were even stronger, then the energy would be dumped too fast to account for the longer complex bursts. How plausible are fields of this strength? Kluźniak and Ruderman (1998) point out that, starting with $10^{12}$ G, it only takes of order a second for simple winding to amplify the field to $10^{15}$ G; they argue further that magnetic stresses would then be strong enough for flares to break out. But amplification in a newly-formed torus could well occur more rapidly, for instance via convective instabilities, as in a newly formed neutron star (cf. Duncan & Thompson 1992, Thompson 1994). Such fields can build up on very short time scales, of order a few ms; however, convective overturning motions should stop after the disk has cooled by neutrino emission below a few MeV. The latter is generally estimated to take of order a few seconds (Ruffert et al. 1997). But azimuthal magnetic fields can also be generated via the Balbus-Hawley mechanism.
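The Kluźniak & Ruderman winding estimate is easy to verify under the simplest assumption of linear winding, $B_\phi(t) \approx B_{\rm seed}\,\Omega t$ (the assumption and the trial periods below are ours):

```python
import math

B_seed, B_target = 1e12, 1e15    # gauss: initial and required field strengths

for period_ms in (1.0, 10.0):    # plausible orbital/spin periods near merger
    Omega = 2 * math.pi / (period_ms * 1e-3)      # rad/s
    # Linear winding: B_phi(t) ~ B_seed * Omega * t  =>  t = (B_target/B_seed)/Omega
    t_wind = (B_target / B_seed) / Omega
    print(f"P = {period_ms} ms  ->  t_wind ~ {t_wind:.2f} s")
```

For millisecond periods this gives a few tenths of a second, and of order a second for 10 ms periods, consistent with the "of order a second" quoted in the text.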
The nonlinear evolution and/or reconnection of such fields as they become buoyant can then lead to poloidal components at least of order $\simg 10^{15}$ G. Indeed, it is not obvious why the fields cannot become even higher. Note that the virial limit is $B_v \sim 10^{17}$ G. After magnetic fields have built up to some fraction of the equipartition value with the shear motion, a magnetic viscosity develops. Assuming that $B_rB_\phi\sim B^2$, it can be characterized in the usual way by the parameter $\alpha\sim B^2/(4\pi \rho v_s^2 ) \sim 10^{-1} B_{15}^2 \rho_{13}^{-1} T_9^{-1}$. This viscosity continues operating also after cooling has led to the disappearance of neutrino viscosity. Assuming a value of $\alpha=0.1$, a BH mass 3 $\msun$ and outer disk radius equal to the Roche lobe size, Popham et al. (1998) estimate “viscous" life times of 0.1 s for NS/BH-NS, 10–20 s for a collapsar (failed SN Ib or rotating WR), and 15–150 s for WD-BH and He-BH systems (although fields of $10^{15}$ G may be more difficult to support in He-BH systems). A magnetic field configuration capable of powering the bursts is likely to have a large scale structure. Flares and instabilities occurring on the characteristic (millisecond) dynamical time scale would cause substantial irregularity or intermittency in the overall outflow that would manifest itself in internal shocks (Rees & Mészáros 1994). There is thus no problem in principle in accounting for sporadic large-amplitude variability, on all time scales down to a millisecond, even in the most long-lived bursts. Note also that it only takes a residual cold disk of $10^{-3}\msun$ to confine a field of $10^{15}$ G, which can extract energy from the black hole via the Blandford-Znajek mechanism. Even if the evolution time scale for the bulk of the debris were no more than a second, enough may remain to catalyse the extraction of enough energy from the hole to power a long-lived burst.
Double peaked bursts
--------------------

There are at least two mechanisms which might lead to a delayed “second" burst (or a double humped burst). One possibility is that a merger leads to a central NS, temporarily stabilized by its fast rotation, with a disrupted debris torus around it, which produces a burst powered by the accretion energy and the magnetic fields generated by the shear motions. After the NS has radiated enough of its angular momentum, and accreted enough matter to overcome its centrifugal support, it collapses to a BH, leading to a second burst, and a second cycle of energy extraction (either from the disk or from the BH via B-Z). In both cases, the time scale between bursts should be between a few and a few tens of seconds. The other possibility for a delayed second burst may arise in merging NS of very unequal masses. As the smaller one fills its Roche lobe and loses mass, the larger NS (which may also collapse to a BH) is surrounded by the gas acquired from its companion, producing a burst as above. Eventually the less massive donor comes under the critical mass for deleptonization, and this leads to an explosion (e.g. Eichler et al. 1989). Starting from a configuration with about $0.1\msun$ which loses mass to its companion, Sumiyoshi et al. (1998) (see also Kluźniak & Lee 1998, Portegies Zwart 1998) find that the explosion occurs on a time scale of about 20 s. The importance of this process depends on the poorly known distribution of NS-NS binary mass ratios, and on whether the mass transfer between neutron stars of nearly equal mass can be stable.

Isotropic or Beamed Outflows?
=============================

[*Conversion into relativistic outflow.*]{} Even if the outflow is not narrowly beamed, the energy of a fireball would be channeled preferentially along the rotation axis.
Moreover, we would expect baryon contamination to be lowest near the axis, because angular momentum flings material away from the axis, and any gravitationally-bound material with low angular momentum falls into the hole. In hypernova and SNe Ib cases without a binary companion, however, the envelope is rotating only slowly and thus would not initially have a marked centrifugal funnel; a funnel might however develop after low angular momentum matter falls into the hole along the axis on a free-fall time scale measured from the outer radius of the envelope, $t\sim 10^4-10^5$ s. The dynamics are complex. Computer simulations of compact object mergers and black hole formation can address the fate of the bulk of the matter, but there are some key questions that they cannot yet tackle. In particular, high resolution of the outer layers is needed because even a tiny mass fraction of baryons loading down the outflow severely limits the attainable Lorentz factor — for instance a Poynting flux of $10^{52}$ ergs could not accelerate an outflow to $\Gamma > 100$ if it had to drag more than $\sim 10^{-4}$ solar masses of baryons with it. Further 2D numerical simulations of the merger and collapse scenarios are under way (Fryer & Woosley 1998, Eberl, Ruffert & Janka 1998, MacFadyen & Woosley 1998), largely using Newtonian dynamics, and the numerical difficulties are daunting. There may well be a broad spread of Lorentz factors in the outflow — close to the rotation axis $\Gamma$ may be very high; at larger angles away from the axis, there may be an increasing degree of entrainment, with a corresponding decrease in $\Gamma$. This picture suggests, indeed, that the variety of burst phenomenology could be largely attributable to a standard type of event being viewed from different orientations. As discussed in the last section, a variety of progenitors can lead to a very similar end result, whose energetics are within one order of magnitude of each other.
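The baryon-loading limit quoted above is just energy conservation, $\Gamma_{\max} \approx E/(M_b c^2)$; a quick check with rounded constants (arithmetic ours):

```python
MSUN = 2e33    # g: solar mass (rounded)
C = 3e10       # cm/s: speed of light (rounded)

def gamma_max(E_erg, m_baryons_msun):
    """Terminal bulk Lorentz factor if energy E_erg must accelerate
    m_baryons_msun solar masses of entrained baryons."""
    return E_erg / (m_baryons_msun * MSUN * C * C)

# 1e52 erg of Poynting flux loaded with 1e-4 Msun of baryons:
print(round(gamma_max(1e52, 1e-4)))   # ~56: indeed well below Gamma > 100
```

So a load of $10^{-4}\msun$ caps the flow at $\Gamma \sim 55$, confirming the statement in the text.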
[*Basic spherical afterglow model.*]{} Just as we can interpret supernova remnants even without fully understanding the initiating explosion, so we may hope to understand the afterglows of gamma ray bursts, despite the uncertainties recounted in the previous section. The simplest hypothesis is that the afterglow is due to a relativistic expanding blast wave. The complex time structure of some bursts suggests that the central trigger may continue for up to 100 seconds. However, at much later times all memory of the initial time structure would be lost: essentially all that matters is how much energy and momentum has been injected; the injection can be regarded as instantaneous in the context of the much longer afterglow. The simplest spherical afterglow model has been remarkably successful at explaining the gross features of the GRB 970228, GRB 970508 and other afterglows (e.g. Wijers et al. 1997). This has led to the temptation to take the assumed sphericity for granted. For instance, the lack of a break in the light curve of GRB 970508 prompted Kulkarni et al. (1998a) to infer that all afterglows are essentially isotropic, leading to their very large (isotropic) energy estimate of $10^{53.5}$ ergs in GRB 971214. The multi-wavelength data analysis has in fact advanced to the point where one can use observed light curves at different times and derive, via parametric fitting, physical parameters of the burst and environment, such as the total energy $E$, the magnetic and electron-proton coupling parameters ${\eps}_B$ and ${\eps}_e$ and the external density $n$ (Waxman 1997, Wijers & Galama 1998). However, as emphasized by Wijers & Galama (1998), what these fits constrain is only the energy per unit solid angle $\calE= (E/\Omj)$.
[*Properties of a Jet Outflow.*]{} An argument for sphericity that has been invoked by observers is that, if the blast wave energy were channeled into a solid angle $\Omj$ then, as correctly argued by Rhoads (1997, 1998), one expects a faster decay of $\Gamma$ after it drops below $\Omj^{-1/2}$. A simple calculation using the usual scaling laws leads then to a steepening of the flux power law in time. The lack of such an observed afterglow downturn in the optical has been interpreted as further supporting the sphericity of the entire fireball. There are several important caveats, however. The first one is that the above argument assumes a simple, impulsive energy input (lasting $\siml$ the observed $\gamma$-ray pulse duration), characterized by a single energy and bulk Lorentz factor value. Estimates for the time needed to reach the non-relativistic regime, or $\Gamma < \Omega_j^{-1/2} \siml$ few, could then be under a month (Vietri 1997, Huang, Dai & Lu 1998), especially if an initial radiative regime with $\Gamma\propto r^{-3}$ prevails. It is unclear whether, even when electron radiative time scales are shorter than the expansion time, such a regime applies, as it would require strong electron-proton coupling (Mészáros, Rees & Wijers 1998). Waxman et al. (1998) have also argued on observational grounds that the longer lasting $\Gamma \propto r^{-3/2}$ (adiabatic regime) is more appropriate. Furthermore, even the simplest reasonable departures from a top-hat approximation (e.g. having more energy emitted with lower Lorentz factors at later times, which still do not exceed the gamma-ray pulse duration) would drastically extend the afterglow lifetime in the relativistic regime, by providing a late “energy refreshment" to the blast wave on time scales comparable to the afterglow time scale (Rees & Mészáros 1998).
The transition to the $\Gamma < \Omega_j^{-1/2}$ regime occurring at $\Gamma\sim$ few could then occur as late as six months to more than a year after the outburst, depending on details of the brief energy input. Even in a simple top-hat model, more detailed calculations show that the transition to the non-relativistic regime is very gradual ($\delta t/t \simg 2$) in the light curve. Also, even though the flux from the head-on part of the remnant decreases faster, this is more than compensated by the increased emission measure from sweeping up external matter over a larger angle, and by the fact that the extra radiation, which arises at larger angles, arrives later and re-fills the steeper light curve. The sideways expansion thus actually can slow down the flux decay (Panaitescu & Mészáros 1998), rather than making for a faster decay. As already noted by Katz & Piran (1997), the ratio $L_\gamma/L_{opt}$ (or $L_\gamma / L_x$) can be quite different from burst to burst. The fit of Wijers & Galama (1998) for GRB 970508 indicates an afterglow (X-ray energies or softer) energy per solid angle ${\cal E}_{52} =3.7$, while at $z=0.835$ with $h_{70}=1$ the corresponding $\gamma$-ray ${\cal E}_{52\gamma} =0.63$. On the other hand for GRB 971214, at $z=3.4$, the numbers are ${\cal E}_{52} = 0.68$ and ${\cal E}_{52\gamma}=20$. The bursts themselves require ejecta with $\Gamma > 100$. The gamma-rays we receive come only from material whose motion is directed within one degree of our line of sight. They therefore provide no information about the ejecta in other directions: the outflow could be isotropic, or concentrated in a cone of angle (say) 20 degrees (provided that the line of sight lay inside the cone). At observer times of more than a week, the blast wave would be decelerated to a moderate Lorentz factor, irrespective of the initial value.
The beaming and aberration effects are then less extreme, so we observe afterglow emission not just from material moving almost directly towards us, but from a wider range of angles. The afterglow is thus a probe for the geometry of the ejecta — at late stages, if the outflow is beamed, we expect a spherically-symmetric assumption to be inadequate; the deviations from the predictions of such a model would then tell us about the ejection in directions away from our line of sight. It is quite possible, for instance, that there is relativistic outflow with lower $\Gamma$ (heavier loading of baryons) in other directions (e.g. Wijers, Rees & Mészáros 1997); this slower matter could even carry most of the energy (Paczyński 1997). This hypothesis is, if anything, further reinforced by the fits of Wijers & Galama (1998) mentioned above. [*Observational constraints on beaming.*]{} As discussed above, anisotropy in the burst outflow and emission affects the light curve at the time when the inverse of the bulk Lorentz factor equals the opening angle of the outflow. If the critical Lorentz factor is less than 3 or so (i.e. the opening angle exceeds 20$^\circ$) such a transition might be masked by the transition from ultrarelativistic to mildly relativistic flow, so quite generically it would be difficult to limit the late-time afterglow opening angle in this way if it exceeds 20$^\circ$. Since some afterglows are unbroken power laws for over 100 days (e.g. GRB 970228), if the energy input were indeed just a simple impulsive top-hat the opening angle of the late-time afterglow at long wavelengths is probably greater than 1/3, i.e. $\Omega_{\rm opt}\simg 0.4$. However, even this still means that the energy estimates from the afterglow assuming isotropy could be 30 times too high. The gamma-ray beaming is much harder to constrain directly.
The ratio $\Omega_\gamma /\Omega_x$ has been considered by Grindlay (1998), using data from the Ariel V and HEAO-A1/A2 surveys; he did not find evidence for a significant difference between the deduced gamma-ray and X-ray rates, and concluded that higher sensitivity surveys would be needed to provide significant constraints. More promising for the immediate future, the ratio $\Omega_\gamma/\Omega_{\rm opt}$ can also be investigated observationally (see also Rhoads 1997). The rate of GRB with peak fluxes above 1 ph cm$^{-2}$ s$^{-1}$ as determined by BATSE is about 300/yr, i.e. 0.01/sq. deg/yr. According to Wijers et al. (1998) this flux corresponds to a redshift of 3. If the gamma rays were much more narrowly beamed than the optical afterglow there should be many ‘homeless’ afterglows, i.e. ones without a GRB preceding them. The transient sky at faint magnitudes is poorly known, but there are two major efforts under way to find supernovae down to about $R=23$ (Garnavich et al. 1998, Perlmutter et al. 1998). These searches have by now covered a few tens of square degree years of exposure and would be sensitive to afterglows of the brightness levels thus far observed. It therefore appears that the afterglow rate is not more than a few times 0.1/sq. deg/yr. Since the magnitude limit of these searches allows detection of optical counterparts of GRB brighter than 1 ph cm$^{-2}$ s$^{-1}$, it is fair to conclude that the ratio of homeless afterglows to GRB is at most a few tens, say 20. It then follows that $\Omega_\gamma>0.05\Omega_{\rm opt}$, which combined with our limit to $\Omega_{\rm opt}$ yields $\Omega_\gamma>0.02$. The true rate of events that give rise to GRB is therefore at most 600 times the observed GRB rate, and the opening angle of the ultrarelativistic, gamma-ray emitting material is no less than $5^\circ$. Combined with the most energetic bursts, this begins to pose a problem for the neutrino annihilation type of GRB energy source.
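This chain of estimates can be reproduced step by step; the inputs are the text's and the arithmetic is ours:

```python
import math

SKY_SQ_DEG = 41253.0                       # full sky in square degrees

grb_rate = 300.0 / SKY_SQ_DEG              # BATSE rate above 1 ph/cm^2/s
print(f"GRB rate ~ {grb_rate:.3f}/sq.deg/yr")        # ~0.01 (rounded), as quoted

homeless_ratio = 20.0                      # at most "a few tens" homeless afterglows/GRB
Omega_opt = 0.4                            # sr: lower limit on afterglow solid angle
Omega_gamma = Omega_opt / homeless_ratio   # i.e. Omega_gamma > 0.05 * Omega_opt
print(f"Omega_gamma > {Omega_gamma} sr")             # 0.02 sr

print(f"rate factor < {4 * math.pi / Omega_gamma:.0f}")   # ~600x observed rate

# Opening half-angle from Omega ~ pi * theta^2 (small-angle cone):
theta_deg = math.degrees(math.sqrt(Omega_gamma / math.pi))
print(f"theta > {theta_deg:.1f} deg")                # ~5 degrees
```

The last two lines recover the "at most 600 times the observed rate" and the "$5^\circ$" lower limit on the opening angle quoted in the text.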
Obviously, the above calculation is only sketchy and should be taken as an order of magnitude estimate at present. However, with the current knowledge of afterglows a detailed calculation of the sensitivity of the high-redshift supernova searches to GRB afterglows is feasible, and a precise limit can be set by such a study.

Conclusions and Prospects
=========================

Simple blast wave models seem able to accommodate the present data on afterglows. However, we can at present only infer the energy per solid angle; as yet the constraints on the angle-integrated $\gamma$-ray energy are not strong. We must also remain aware of other possibilities. For instance, we may be wrong in supposing that the central object becomes dormant after the gamma-ray burst itself. It could be that the accretion-induced collapse of a white dwarf, or (for some equations of state) the merger of two neutron stars, could give rise to a rapidly-spinning, temporarily rotationally stabilized pulsar. The afterglow could then, at least in part, be due to a pulsar’s continuing power output. It could also be that mergers of unequal mass neutron stars, or neutron stars with other compact companions, lead to the delayed formation of a black hole. Such events might also lead to repeating episodes of accretion and orbit separation, or to the eventual explosion of a neutron star which has dropped below the critical mass, all of which would provide a longer time scale, episodic energy output. We need to be open minded, yet also not too sanguine, about the possibility of there being more subclasses of classical GRB than just short ones and long ones. For instance, GRB with no high energy pulses (NHE) appear to have a different (but still isotropic) spatial distribution than those with high energy (HE) pulses (Pendleton et al. 1996). Some caution is needed in interpreting this, since selection effects could lead to a bias against detecting HE emission in dim bursts (Norris 1998).
Then, there is the apparent coincidence of GRB 980425 with the SN Ib/Ic 1998bw (Galama et al. 1998). A simple but radical interpretation (Wang & Wheeler 1998) is that all GRB may be associated with SNe Ib/Ic, with differences arising only from different viewing angles relative to a very narrow jet. The difficulties with this are that it would require extreme collimations by factors $10^{-3}-10^{-4}$, and that the statistical association of any subgroup of GRB with SNe Ib/Ic (or any other class of objects, for that matter) is so far not significant (Kippen et al. 1998). If however the GRB 980425/1998bw association is real, as argued by Woosley, Eastman & Schmidt (1998), Iwamoto et al. (1998) and Bloom et al. (1998), then we may be in the presence of a new subclass of GRB with lower energy $E_\gamma \sim 10^{48} (\Omj /4\pi )$ erg, which is only rarely observable even though its comoving volume density could be substantial. In this, more likely, interpretation, the great majority of the observed GRB would have the energies $E_\gamma \sim 10^{54}(\Omj/4\pi)$ ergs as inferred from high redshift observations. Much progress has been made in understanding how gamma-rays can arise in fireballs produced by brief events depositing a large amount of energy in a small volume, and in deriving the generic properties of the long wavelength afterglows that follow from this (Rees 1998). There still remain a number of mysteries, especially concerning the identity of their progenitors, the nature of the triggering mechanism, the transport of the energy and the time scales involved. Nevertheless, even if we do not yet understand the intrinsic gamma-ray burst central engine, GRB may be the most powerful beacons for probing the high redshift ($z > 5$) universe. Even if their total energy is reduced by beaming to a “modest" $\sim 10^{52}-10^{52.5}$ ergs in photons, they are the most extreme phenomena that we know about in high energy astrophysics.
The modeling of the burst itself — the trigger, the formation of the ultrarelativistic outflow, and the radiation processes — is a formidable challenge to theorists and to computational techniques. It is, also, a formidable challenge for observers, in their quest for detecting minute details in extremely faint and distant sources. And if the class of models that we have advocated here turns out to be irrelevant, the explanation of gamma-ray bursts will surely turn out to be even more remarkable and fascinating.

Abramowicz, M.A., Karas, V. & Lanza, A., 1998, A&A, in press (astro-ph/9712245)
Blandford, R.D. & Znajek, R.L., 1977, MNRAS, 179, 433
Bloom, J., Sigurdsson, S. & Pols, O., 1998, MNRAS, in press (astro-ph/9805222)
Bloom, J., et al., 1998, ApJ(Letters), subm. (astro-ph/9807050)
Costa, E., et al., 1997, Nature, 387, 783
Daigne, F. & Mochkovitch, R., 1997, MNRAS, 285, L15
Duncan, R.C. & Thompson, C., 1992, ApJ, 392, L9
Eichler, D., Livio, M., Piran, T. & Schramm, D.N., 1989, Nature, 340, 126
Fryer, C. & Woosley, S., 1998, ApJ(Letters), subm. (astro-ph/9804167)
Galama, T., et al., 1998, Nature, in press (astro-ph/9806175)
Garnavich, P., et al., 1998, ApJ, 493, L53
Grindlay, J., 1998, preprint
Groot, P., et al., 1997, in [*Gamma-Ray Bursts*]{}, Meegan, C., Preece, R. & Koshut, T., eds. (AIP: New York), p. 557
Huang, Y.F., Dai, Z.G. & Lu, T., 1998, A&A, in press (astro-ph/9807061)
Iwamoto, K., et al., 1998, Nature, in press (astro-ph/9806382)
Katz, J., 1994, ApJ, 422, 248
Katz, J. & Canel, L., 1996, ApJ, 471, 915
Katz, J. & Piran, T., 1997, ApJ, 490, 772
Kippen, R.M., et al., 1998, ApJ, subm. (astro-ph/9806364)
Kluźniak, W. & Lee, W., 1998, ApJ(Letters), 494, L53
Kluźniak, W. & Ruderman, M., 1998, preprint
Kouveliotou, C., et al., 1993, ApJ, 413, L101
Kulkarni, S., et al., 1998, Nature, 393, 35
Lattimer, J.M. & Schramm, D.N., 1976, ApJ, 210, 549
Mészáros, P. & Rees, M.J., 1992, ApJ, 397, 570
Mészáros, P. & Rees, M.J., 1993, ApJ, 405, 278
Mészáros, P. & Rees, M.J., 1997a, ApJ, 476, 232
Mészáros, P. & Rees, M.J., 1997b, ApJ, 482, L29, in press (astro-ph/9804119)
Mészáros, P., Rees, M.J. & Wijers, R.A.M.J., 1998, ApJ, 499, 301 (astro-ph/9709273)
Metzger, M., et al., 1997, Nature, 387, 878
Murakami, T., et al., 1997, in [*Gamma-Ray Bursts*]{}, Meegan, C., Preece, R. & Koshut, T., eds. (AIP: New York), p. 435
Narayan, R., Paczyński, B. & Piran, T., 1992, ApJ, 395, L83
Nishida, S., Lanza, A., Eriguchi, Y. & Abramowicz, M.A., 1996, MNRAS, 278, L41
Norris, J., 1998, private comm.
Paczyński, B., 1991, Acta Astron., 41, 257
Paczyński, B. & Xu, G., 1994, ApJ, 427, 708
Paczyński, B., 1997, in [*Gamma-Ray Bursts*]{}, Meegan, C., Preece, R. & Koshut, T., eds. (AIP: New York), p. 783
Paczyński, B., 1998, ApJ, 494, L45
Panaitescu, A. & Mészáros, P., 1998, ApJ, subm. (astro-ph/9806016)
Pendleton, G., et al., 1996, ApJ, 464, 606
Perlmutter, S., et al., 1998, Nature, 391, 51
Popham, R., Woosley, S. & Fryer, C., 1998, ApJ, subm. (astro-ph/9807028)
Portegies Zwart, S., 1998, ApJ(Letters), in press (astro-ph/9804296)
Rees, M.J., 1998, Nucl. Phys. B Suppl., in press
Rees, M.J. & Mészáros, P., 1992, MNRAS, 258, 41P
Rees, M.J. & Mészáros, P., 1994, ApJ, 430, L93
Rees, M.J. & Mészáros, P., 1998, ApJ(Letters), in press (astro-ph/9712252)
Reichart, D., 1997, ApJ, 485, L57
Reichart, D., 1998, ApJ, 495, L99
Rhoads, J.E., 1997, ApJ, 487, L1
Ruffert, M., Janka, H.-T., Takahashi, K. & Schaefer, G., 1997, A&A, 319, 122
Ruffert, M. & Janka, H.-T., 1998, preprint
Sari, R. & Piran, T., 1995, ApJ, 455, L143
Sumiyoshi, K., Yamada, S., Suzuki, H. & Hillebrandt, W., 1998, A&A, 334, 159
Tavani, M., 1997, ApJ, 483, L87
Taylor, G.B., et al., 1997, Nature, 389, 263
Thompson, C., 1994, MNRAS, 270, 480
Thompson, C. & Duncan, R.C., 1993, ApJ, 408, 194
Usov, V.V., 1994, MNRAS, 267, 1035
Vietri, M., 1997a, ApJ, 471, L95
Vietri, M., 1997b, ApJ, 488, L105
Wang, L. & Wheeler, J.C., 1998, ApJ, subm. (astro-ph/9806212)
Waxman, E., 1997, ApJ, 485, L5
Waxman, E., Kulkarni, S. & Frail, D., 1998, ApJ, 497, 288
Wijers, R.A.M.J., Rees, M.J. & Mészáros, P., 1997, MNRAS, 288, L51
Wijers, R.A.M.J. & Galama, T., 1998, ApJ, subm. (astro-ph/9805341)
Wijers, R.A.M.J., Bloom, J.S., Bagla, J.S. & Natarajan, P., 1998, MNRAS, 294, L13
Woosley, S., Eastman, R. & Schmidt, B., 1998, ApJ, subm. (astro-ph/9806299)

[^1]: Manuscript prepared for New Astronomy, David N. Schramm memorial volume
--- abstract: | Choosing an encoding over binary strings for input/output to/by a Turing Machine is usually straightforward and/or inessential for discrete data (like graphs), but delicate — heavily affecting computability and even more computational complexity — already regarding real numbers, not to mention more advanced (e.g. Sobolev) spaces. For a general theory of computational complexity over continuous data we introduce and justify ‘quantitative admissibility’ as a requirement for sensible encodings of arbitrary compact metric spaces, a refinement of qualitative ‘admissibility’ due to \[Kreitz&Weihrauch’85\]: An *admissible* representation of a T$_0$ space $X$ is a (i) *continuous* partial surjective mapping from the Cantor space of infinite binary sequences which is (ii) maximal w.r.t. *continuous* reduction. By the Kreitz-Weihrauch (aka “Main”) Theorem of computability theory for continuous data, a function $f:X\to Y$ with admissible representations of co/domain is *continuous*  iff  it admits a *continuous* code-translating mapping on Cantor space, a so-called *realizer*. We require a *linearly/polynomially* admissible representation of a compact metric space $(X,d)$ to (i) have asymptotically *optimal* modulus of continuity, namely close to the entropy of $X$, and (ii) be maximal w.r.t. reduction having *optimal* modulus of continuity in a similar sense. Careful constructions show the category of such representations to be Cartesian closed, and non-empty: every compact $(X,d)$ admits a linearly-admissible representation. Moreover such representations give rise to a tight quantitative correspondence between the modulus of continuity of a function $f:X\to Y$ on the one hand and that of its realizer on the other: a “Main Theorem” of computational *complexity*. 
This suggests (how) to take into account the entropies of the spaces under consideration when measuring/defining algorithmic cost over continuous data; and to follow \[Kawamura&Cook’12\] considering and classifying *generalized* representations with domains ‘larger’ than Cantor space for (e.g. function) spaces of exponential entropy. author: - | Akitoshi Kawamura$^1$, Donghyun Lim$^2$, Svetlana Selivanova$^2$, Martin Ziegler$^2$\ `kawamura@inf.kyushu-u.ac.jp`, `{klimdhn,sseliv,ziegler}@kaist.ac.kr` bibliography: - 'cca.bib' - 'signdigit.bib' date: '[[**Keywords:**]{} Computational Complexity of Continuous Data]{}' title: | Representation Theory of Compact Metric Spaces\ and Computational Complexity of Continuous Data[^1] --- Table of Contents {#table-of-contents .unnumbered} ================= Motivation, Background, and Summary of Contribution =================================================== Arguably most computational problems in Science and Engineering are concerned with continuous rather than with discrete data [@BC06; @Bra13]. Here the Theory of Computability exhibits new topological — and continuous complexity theory furthermore metric — aspects that trivialize, and are thus invisible, in the discrete realm. In particular input and output require a rather careful choice of the underlying encoding as sequences of bits to be read, processed, and written by a Turing machine. For example, - encoding real numbers via their binary expansion $x=\sum_{n=0}^\infty b_n2^{-n-1}$, and thus operating on the Cantor space of infinite binary sequences $\bar b=(b_0,b_1,\ldots b_n,\ldots)$, renders arithmetic averaging $[0;1]^2\ni(x,y)\mapsto (x+y)/2\in[0;1]$ *dis*continuous and *un*computable [@Turing37],[@Wei00 Exercise 7.2.7]. 
- Encoding real numbers via a sequence of (numerators and denominators, in binary, of) rational approximations up to absolute error $\leq2^{-n}$ does render averaging computable [@Wei00 Theorem 4.3.2], but admits no worst-case bound on computational cost [@Wei00 Examples 7.2.1+7.2.3]. - The *dyadic* representation encodes $x\in[0;1]$ as any integer sequence $a_n\in\{0,\ldots 2^n\}$ (in binary without leading 0) s.t. $|x-a_n/2^n|\leq2^{-n}$; and similarly encode $y\in[0;1]$ as $(b_n)$. Then the/an integer $c_n$ closest to $(a_{n+1}+b_{n+1})/4$ satisfies $\big|(x+y)/2-c_n/2^n\big|\leq2^{-n}$, and is easily computed — but requires first reading/writing $a_m,b_m,c_m$ for all $m<n$: a total of $\Theta(n^2)$ bits. - Encoding $x\in[0;1]$ as *signed* binary expansion $x=\sum_{n\geq0} (2b_n+b_{n+1}-1)\cdot2^{-n-1}$ with $b_n\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$ s.t. $2b_n+b_{n+1}\in\{-1,0,1\}$, and similarly $y$, renders averaging computable in linear time ${\mathcal{O}}(n)$ [@Wei00 Theorem 7.3.1]. The signed binary expansion is thus asymptotically ‘optimal’ up to a constant factor, the dyadic representation is still optimal up to a quadratic polynomial, the rational representation is ‘unbounded’, and the binary expansion is unsuitable. But how to choose and quantitatively assess complexity-theoretically appropriate encodings of spaces $X$ other than $[0;1]$, such as those common in the analysis and solution theory of PDEs [@Triebel]? The present work refines the existing classification of encodings from the computability theory of general continuous data while guided by and generalizing the well-established theory of computational complexity over real numbers. 
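The dyadic-averaging recipe in the third bullet above can be checked numerically. The following Python sketch is our own illustration (the function names are not from the text); it produces dyadic names by rounding, which is one admissible choice among many.

```python
def dyadic_name(x, n):
    """One valid n-th dyadic digit of x: any a_n with |x - a_n/2**n| <= 2**-n.
    Here we simply take the nearest integer."""
    return round(x * 2 ** n)

def average_digit(a_next, b_next):
    """The n-th output digit c_n of (x+y)/2, computed from the (n+1)-st
    input digits a_{n+1}, b_{n+1} as the integer closest to their quarter-sum."""
    return round((a_next + b_next) / 4)

x, y = 0.3, 0.8125
for n in range(20):
    c_n = average_digit(dyadic_name(x, n + 1), dyadic_name(y, n + 1))
    # the claimed error bound |(x+y)/2 - c_n/2**n| <= 2**-n
    assert abs((x + y) / 2 - c_n / 2 ** n) <= 2 ** -n
```

Producing $c_n$ this way indeed requires the inputs only up to precision $n+1$; reading them for every stage $m<n$ is what accounts for the $\Theta(n^2)$ bits mentioned above.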
There, the binary expansion is known to violate the technical condition of *admissibility*; and we introduce and investigate quantitative strengthenings *linear* admissibility (satisfied by the signed binary, but neither by the dyadic nor by the rational representation) and *polynomial* admissibility (satisfied by the signed binary and by the dyadic, but not by the rational representation). Computability over Continuous Data, Complexity in Real Computation ------------------------------------------------------------------ Here we review established notions and properties of computability and complexity theory over the reals, as well as notions and properties of computability theory over more general abstract spaces: as guideline to the sensible complexity theory of more general abstract spaces developed in the sequel. \[d:Type2\] A *Type-2 Machine* ${\mathcal{M}}$ is a Turing machine with dedicated one-way output tape and infinite read-only input tape [[@Wei00 Definitions 2.1.1+2.1.2]]{}. Naturally operating on infinite sequences of bits, ${\mathcal{M}}$ *computes* a partial function $F:\subseteq{\mathcal{C}}\to{\mathcal{C}}$ on the Cantor space ${\mathcal{C}}=\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^\omega$ of infinite binary sequences if, when run with any input $\bar b\in{\operatorname{dom}}(F)$ on its tape, ${\mathcal{M}}$ keeps printing the symbols of $F(\bar b)$ one by one; while its behaviour on other inputs may be arbitrary. ${\mathcal{M}}$ computes $F$ in *time* $t:{\mathbb{N}}\to{\mathbb{N}}$ if it prints the $n$-th symbol of $F(\bar b)$ after at most $t(n)$ steps regardless of $\bar b\in{\operatorname{dom}}(F)$. For a fixed predicate $\varphi:{\mathcal{C}}\to\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$, a Type-2 Machine with *oracle* $\varphi$ can repeatedly query $\varphi(\vec z)\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$ for finite strings $\vec z$ during its computation. 
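As a toy illustration of Definition \[d:Type2\] (ours, not from the text), a stream transformer in Python can play the role of a Type-2 Machine: it keeps printing output symbols one by one, emitting the $n$-th output symbol after reading only $n$ input symbols, i.e. running in time $t(n)={\mathcal{O}}(n)$.

```python
from itertools import cycle, islice

def bitwise_not(bits):
    """Computes F(b_0 b_1 ...) = (1-b_0)(1-b_1)... on Cantor space:
    one output symbol is emitted per input symbol read."""
    for b in bits:
        yield 1 - b

# feed the eventually periodic name 010101... and inspect a finite prefix
prefix = list(islice(bitwise_not(cycle([0, 1])), 8))
assert prefix == [1, 0, 1, 0, 1, 0, 1, 0]
```

Visibly, the first $n$ output bits depend only on the first $n$ input bits, so the time bound doubles as a modulus of continuity of the computed $F$.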
Concerning topological spaces $X$ of continuum cardinality beyond real numbers, the *Type-2 Computability Theory* systematically studies and compares encodings, formalized as follows [@Wei00 §3]: \[d:TTE\] 1. A *representation* of a set $X$ is a partial surjective mapping $\xi:\subseteq{\mathcal{C}}:=\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{{\mathbb{N}}}\twoheadrightarrow X$ on the Cantor space of infinite streams of bits. 2. The *product* of representations $\xi:\subseteq{\mathcal{C}}\twoheadrightarrow X$ and $\upsilon:\subseteq{\mathcal{C}}\twoheadrightarrow Y$ is $\xi\times\upsilon:\subseteq{\mathcal{C}}\ni (b_0,b_1,\ldots b_n,\ldots)\mapsto \big(\xi(b_0,b_2,b_4,\ldots),\upsilon(b_1,b_3,\ldots)\big)\in X\times Y$. 3. For representations $\xi:\subseteq{\mathcal{C}}\twoheadrightarrow X$ and $\upsilon:\subseteq{\mathcal{C}}\twoheadrightarrow Y$, a $(\xi,\upsilon)$-*realizer* of a function $f:X\to Y$ is a partial function $F:{\operatorname{dom}}(\xi)\to{\operatorname{dom}}(\upsilon)\subseteq{\mathcal{C}}$ on Cantor space such that $f\circ\xi=\upsilon\circ F$ holds; see Figure \[f:commdiag\]. 4. $(\xi,\upsilon)$-*computing* $f$ means to compute some $(\xi,\upsilon)$-realizer $F$ of $f$ in the sense of Definition \[d:Type2\]. 5. A *reduction* from representation $\xi\twoheadrightarrow X$ to $\xi'\twoheadrightarrow X$ is a $(\xi,\xi')$-realizer of the identity ${\operatorname{id}}:X\to X$; that is, a partial function $F:{\operatorname{dom}}(\xi)\to{\operatorname{dom}}(\xi')$ on Cantor space such that $\xi=\xi'\circ F$. We write $\xi{\preccurlyeq_{\rm T}}\xi'$ to express that a *continuous* reduction exists, where ${\mathcal{C}}$ is equipped with the Cantor space metric ${d_{{\mathcal{C}}}}(\bar b,\bar a)=2^{-\min\{n:b_n\neq a_n\}}$. 
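The pairing used in Definition \[d:TTE\]b) interleaves two names into the even and odd positions of one stream. A small Python sketch on finite prefixes (our illustration; names ours), together with the Cantor metric $d_{\mathcal{C}}$:

```python
def interleave(a, b):
    """Merge two name prefixes of equal length: a into even positions,
    b into odd positions, as in the product representation."""
    out = []
    for x, y in zip(a, b):
        out += [x, y]
    return out

def cantor_dist(a, b):
    """d_C(a, b) = 2^{-min{n : a_n != b_n}} on (prefixes of) Cantor space."""
    for n, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return 2.0 ** (-n)
    return 0.0  # the compared prefixes agree

c = interleave([0, 0, 0], [1, 1, 1])
assert c == [0, 1, 0, 1, 0, 1]
# the factors are recovered from the even/odd subsequences
assert c[0::2] == [0, 0, 0] and c[1::2] == [1, 1, 1]
assert cantor_dist([0, 1, 1], [0, 1, 0]) == 2.0 ** (-2)
```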
Examples \[x:Binary\], \[x:Rational\], \[x:Dyadic\], and \[x:SignedDigit\] below formalize the above binary, rational, dyadic, and signed encodings of the reals as representations $\beta$, $\rho$, $\delta$, and $\sigma$, respectively. It is well-known that the latter three, but not $\beta$, are pairwise continuously reducible [@Wei00 Theorem 7.2.5] and thus equivalent with respect to the notions of computability they induce on reals; but only $\delta$ and $\sigma$ admit mutual reductions with *polynomial* modulus of continuity. Recall [[@Wei03 §6]]{} that a *modulus[^2] of continuity* of a function $f:X\to Y$ between compact metric spaces $(X,d)$ and $(Y,e)$ is a non-decreasing mapping $\mu:{\mathbb{N}}\to{\mathbb{N}}$ such that $d(x,x')\leq2^{-\mu(n)}$ implies $e\big(f(x),f(x')\big)\leq2^{-n}$.\ Every uniformly continuous function has a (pointwise minimal) modulus of continuity; Lipschitz-continuity corresponds to moduli $\mu(n)=n+{\mathcal{O}}(1)$, and Hölder-continuity to linear moduli $\mu(n)={\mathcal{O}}(n)$: see Fact \[f:Topology\]c). According to the sometimes so-called *Main Theorem* of Computable Analysis, a real function $f$ is continuous iff $f$ is computable by some oracle Type-2 Machine w.r.t. $\rho$ and/or $\delta$ and/or $\sigma$ [@Wei00 Definitions 2.1.1+2.1.2]. For spaces beyond the reals, Kreitz and Weihrauch [@KW85] have identified *admissibility* as a central condition on ‘sensible’ representations in that these make the Main Theorem generalize [@Wei00 Theorem 3.2.11]: \[f:Main\] Let $X$ and $Y$ denote second-countable T$_0$ spaces equipped with *admissible* representations $\xi$ and $\upsilon$, respectively. A function $f:X\to Y$ is continuous  iff  it admits a continuous $(\xi,\upsilon)$-*realizer*. 
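The modulus of continuity recalled above can be probed numerically. The following Python sketch (a finite-grid heuristic of ours, not a proof) checks the defining implication $d(x,x')\leq 2^{-\mu(n)}\Rightarrow |f(x)-f(x')|\leq 2^{-n}$ for functions on $[0;1]$:

```python
import math

def respects_modulus(f, mu, n_max=8, samples=2 ** 12):
    """Check, on a finite grid of [0,1], that |x - x'| <= 2**-mu(n)
    implies |f(x) - f(x')| <= 2**-n, for n = 0..n_max."""
    for n in range(n_max + 1):
        delta = 2.0 ** (-mu(n))
        for k in range(samples):
            x = k / samples
            xp = min(1.0, x + delta)
            if abs(f(x) - f(xp)) > 2.0 ** (-n) + 1e-12:
                return False
    return True

# sqrt is 1/2-Hoelder: |sqrt x - sqrt x'| <= sqrt|x - x'|, so mu(n) = 2n works
assert respects_modulus(math.sqrt, lambda n: 2 * n)
# ...whereas the Lipschitz-type modulus mu(n) = n + 1 fails for sqrt near 0
assert not respects_modulus(math.sqrt, lambda n: n + 1)
```

This matches the classification above: the $\tfrac12$-Hölder square root has a linear modulus $\mu(n)=2n$ but no modulus of the Lipschitz form $n+{\mathcal{O}}(1)$.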
Recall [[@Wei00 Theorem 3.2.9.2]]{} that a representation $\xi:\subseteq{\mathcal{C}}\twoheadrightarrow X$ is *admissible* iff (i) it is continuous and (ii) every continuous representation $\zeta:\subseteq{\mathcal{C}}\twoheadrightarrow X$ satisfies $\zeta{\preccurlyeq_{\rm T}}\xi$. Computability-theoretically ‘sensible’ representations $\xi$ are thus those maximal, among the continuous ones, with respect to continuous reducibility. The present work refines these considerations and notions from qualitative computability to complexity. For representations satisfying our proposed strengthening of *admissibility*, Theorem \[t:Main2\] below asymptotically optimally translates quantitative continuity between functions $f:X\to Y$ and their realizers $F$. Such translations heavily depend on the co/domains $X,Y$ under consideration: \[x:Max\] 1. A function $f:[0;1]\to[0;1]$ has polynomial modulus of continuity  iff  it has a $(\delta,\delta)$-realizer w.r.t. the dyadic and/or signed binary expansion (itself having a polynomial modulus of continuity)  iff  $f$ has a $(\sigma,\sigma)$-realizer of polynomial modulus of continuity; cmp. [[@Ko91 Theorem 2.19],[@Wei00 Exercise 7.1.7],[@DBLP:conf/lics/KawamuraS016 Theorem 14]]{}. 2. For the compact space $[0;1]'_1:={\operatorname{Lip}}_1([0;1],[0;1])$ of non-expansive (=Lipschitz-continuous with constant 1) $f:[0;1]\to[0;1]$ equipped with the supremum norm, the application function*al* $[0;1]'_1\times[0;1]\ni (f,r)\mapsto f(r)\in[0;1]$ is non-expansive, yet admits *no* realizer with *sub-*exponential modulus of continuity for *any* product representation (Definition \[d:TTE\]b) of $[0;1]'_1\times[0;1]$ [[@KMRZ15 Example 6g]]{}. 
![\[f:commdiag\]Realizers of $f:X\to Y$ with respect to different representations.](commdiag){width="50.00000%"} It is naturally desirable that, like in the discrete setting, every computation of a total function $f:X\to Y$ admit some (possibly fast growing, but pointwise finite) worst-case complexity bound $t=t(n)$; however already for the real unit interval $X=[0;1]=Y$ this requires the representation $\xi$ of $X$ to be chosen with care, avoiding both binary $\beta$ and rational $\rho$ [@Wei00 Example 7.2.3]. Specifically, one wants its domain ${\operatorname{dom}}(\xi)\subseteq{\mathcal{C}}$ to be compact [@Sch95; @Wei03; @Sch04]: cmp. Example \[x:Rational\]. \[f:Proper\] 1. Suppose Type-2 Machine ${\mathcal{M}}$ (with/out some fixed oracle) computes a function $F:\subseteq{\mathcal{C}}\to{\mathcal{C}}$ with compact ${\operatorname{dom}}(F)$. Then ${\mathcal{M}}$ admits a bound $t(n)\in{\mathbb{N}}=\{0,1,2,\ldots\}$ on the number of steps it takes to print the first $n$ output symbols of the value $F(\bar b)$ *regardless* of the argument $\bar b\in{\operatorname{dom}}(F)$; see [[@Wei00 Exercise 7.1.2]]{}. 2. If $F:\subseteq{\mathcal{C}}\to{\mathcal{C}}$ is computable (with/out some fixed oracle) in time $t$, then $n\mapsto t(n)$ constitutes a modulus of continuity of $F$. 3. Conversely to every continuous $F:\subseteq{\mathcal{C}}\to{\mathcal{C}}$ with modulus of continuity $\mu$ there exists an oracle[^3] $\varphi$ and Type-2 Machine ${\mathcal{M}}^\varphi$ computing $F$ in time ${\mathcal{O}}\big(n+\mu(n)\big)$; cmp. [[@Wei00 Theorem 2.3.7.2]]{}. Item b) expresses that, on Cantor space, quantitative continuity basically coincides with time-bounded relativized computability. 
Item a) requires a continuous representation $\xi$ to map closed subsets of Cantor space to closed subsets of $X$ — hence one cannot expect $\xi$ to be an *open* mapping, as popularly posited in computability [@Wei00 Lemma 3.2.5.(2+3+5)] and ingredient to (the proof of) the aforementioned Main Theorem; cmp. [@Wei00 Theorem 3.2.9]. Summary of Contribution ----------------------- We establish a quantitative refinement of the Main Theorem for arbitrary compact metric spaces, tightly relating moduli of continuity of functions $f:X\to Y$ to those of their realizers $F$ *relative* to the entropies of co/domains $X$ and $Y$: Recall [[@Kolmogorov],[@Wei03 §6]]{} that the *entropy*[^4] of a compact metric space $(X,d)$ is the mapping $\eta:{\mathbb{N}}\to{\mathbb{N}}$ such that $X$ can by covered by $2^{\eta(n)}$, but not by $2^{\eta(n)-1}$, closed balls of radius $2^{-n}$. The real unit interval $[0;1]$ has entropy $\eta(n)=n-1$; whereas $[0;1]'_1={\operatorname{Lip}}_1([0;1],[0;1])$ has entropy $\eta'_1(n)=\Theta(2^n)$; see Example \[x:Entropy\] for further spaces. \[r:Admissible\] By Example \[x:Entropy\]f), for any modulus of continuity $\kappa$ of a representation $\xi:\subseteq{\mathcal{C}}\twoheadrightarrow X$, the space $X$ has entropy $\eta\leq\kappa$; and we require a *linearly admissible* $\xi$ to (i) have modulus of continuity $\kappa(n)\leq{\mathcal{O}}\Big(\eta\big(n+{\mathcal{O}}(1)\big)\Big)$ almost optimal: permitting asymptotic ‘slack’ a constant factor in value and constant shift in argument. Moreover a linearly admissible $\xi$ must satisfy that, (ii) to every representation $\zeta:\subseteq{\mathcal{C}}\twoheadrightarrow X$ with modulus of continuity $\nu$ there exists a mapping $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi)$ with modulus of continuity $\mu$ such that $\zeta=\xi\circ F$ *and* $\big(\mu\circ\kappa\big)(n)\leq \nu\big({\mathcal{O}}(n)\big)$. 
This new Condition (ii) strengthens previous qualitative continuous reducibility “$\zeta{\preccurlyeq_{\rm T}}\xi$” to what we call *linear* metric reducibility “$\zeta{\preccurlyeq_{\rm O}}\xi$”, requiring a $(\zeta,\xi)$-realizer $F$ with almost optimal modulus of continuity: For functions $\varphi:X\to Y$ and $\psi:Y\to Z$ with respective moduli of continuity $\mu$ and $\kappa$, their composition $\psi\circ\varphi:X\to Z$ is easily seen to have modulus of continuity $\mu\circ\kappa$. Abbreviating ${\operatorname{lin}}(\nu):={\mathcal{O}}\Big(\nu\big({\mathcal{O}}(n)\big)\Big)$ and with the semi-inverse ${{\nu}^{\underline{-1}}}(n):=\min\{ m\::\: \nu(m)\geq n\}$, our results are summarized as follows: 1. Let $(X,d)$ and $(Y,e)$ denote infinite compact metric spaces with entropies $\eta$ and $\theta$ and equipped with *linearly* admissible representations $\xi$ and $\upsilon$. If $f:X\to Y$ has modulus of continuity $\mu$, it admits a realizer $F$ with modulus of continuity ${\operatorname{lin}}(\eta)\circ \mu\circ{\operatorname{lin}}\big({{\theta}^{\underline{-1}}}\big)$. Conversely if $F$ is a realizer of $f$ with modulus $\nu$, then $f$ has modulus ${\operatorname{lin}}\big({{\eta}^{\underline{-1}}}\big)\circ\nu\circ{\operatorname{lin}}(\theta)$. 2. Every compact metric space $(X,d)$ admits a linearly admissible representation $\xi$. For ‘popular’ spaces $X,Y$ having linear/polynomial entropy $\eta,\theta$, the moduli of continuity of functions and their realizers are thus linearly/polynomially related; yet according to Examples \[x:Entropy\]d+e) there exist both spaces of entropy growing arbitrarily slow and arbitrarily fast. Still, estimates (a) are asymptotically tight in a sense explained in Remark \[r:Tight\]. 3. The category of quantitatively admissible representations is Cartesian closed: Given linearly admissible representations for spaces $X_j$ ($j\in{\mathbb{N}}$), we construct one for the product space $\prod_j X_j$ w.r.t. 
the ‘Hilbert Cube’ metric ${d_{{\mathcal{H}}}}=\sup_j d_j/2^j$ (cmp. Example \[x:Entropy\]a) such that the canonical projections $(x_0,\ldots x_j,\ldots)\mapsto x_j$ and embeddings $x_j\mapsto (x_0,\ldots x_j,\ldots)$ admit realizers with optimal modulus of continuity in the sense of (a). For the compact space ${\mathcal{K}}(X)$ of non-empty compact subsets w.r.t. Hausdorff Distance we construct a canonical *polynomially* admissible representation; and for the compact space $X'_\mu:={\mathcal{C}}_\mu(X,[0;1])$ of functions $f:X\to[0;1]$ having modulus of continuity $\mu$, equipped with the supremum norm, one such that the application functional $X'_\mu\times X\ni (f,x)\mapsto f(x)$ has a realizer with optimal modulus of continuity in the sense of (a). See Theorems \[t:Main2\] and \[t:Admissible\] and \[t:Cartesian\] and \[t:Functions\] for the precise statements. Example \[x:SignedDigit\] verifies that the signed digit expansion of the interval $[0;1]$ is linearly admissible; hence our quantitative “Main Theorem” (a) indeed generalizes the real case as well as quantitatively refining the qualitative Fact \[f:Main\]. Note in (a) the typical form of transition maps, similar for instance to change-of-basis in linear algebra or change-of-chart in differential geometry. It thus captures the information-theoretic ‘external’ influence of the co/domain according to Fact \[f:Proper\]b), and allows one to separate that from the ‘inherent’ recursion-theoretic complexity of a computational problem: Informally speaking, an algorithm operating on continuous data is not to be ‘blamed’ for incurring large cost if the underlying domain has large entropy, as in Example \[x:Max\]b): see Remark \[r:SecondOrder\] below. 
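The entropy value $\eta(n)=n-1$ for $[0;1]$ quoted in the Summary can be recomputed directly: $[0;1]$ is covered by $N$ closed balls (intervals) of radius $2^{-n}$ iff $N\cdot 2\cdot 2^{-n}\geq 1$. A minimal Python sketch (ours; valid for $n\geq1$):

```python
import math

def covering_number(n):
    """Minimal number of closed balls (intervals of radius 2**-n,
    hence length 2*2**-n) needed to cover [0, 1]."""
    return math.ceil(1 / (2 * 2.0 ** (-n)))

def entropy_interval(n):
    """eta(n): 2**eta(n) balls of radius 2**-n suffice, 2**(eta(n)-1) do not."""
    return math.ceil(math.log2(covering_number(n)))

assert [entropy_interval(n) for n in range(1, 6)] == [0, 1, 2, 3, 4]  # eta(n) = n - 1
```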
Previous and Related Work, Current Ideas and Methods {#ss:Related} ---------------------------------------------------- Computability theory of real numbers was initiated by Alan Turing (1936), then generalized to real functions by Grzegorczyk (1957), to Euclidean subsets by Kreisel and Lacombe (1957), to Banach Spaces by Pour-El and Richards (1989), to topological T$_0$ spaces by Weihrauch [@Wei00 §3.2], and furthermore to so-called *QCB* spaces by Schröder [@Sch02]: These works had introduced the fundamental notions which in turn have enabled the abundance of investigations that constitute the contemporary computability theory over continuous data. Computational (bit-)complexity theory of real numbers and functions was introduced by Harvey Friedman and Ker-I Ko [@KF82; @Bra05e]. It differs from the discrete setting for instance by measuring computational cost in dependence on the output approximation error $2^{-n}$. Some effort, a careful choice of representation, and the hypothesis of a compact domain are needed to prove that any total computable real function actually admits a finite runtime bound depending only on $n$ [@Sch04]. It took even more effort, as well as guidance from discrete Implicit Complexity Theory [@DBLP:journals/siamcomp/KapronC96; @Lam06], to proceed from this Complexity Theory of real functions [@Ko91] to a suitable definition of computational complexity for real operators [@KC12]. The latter involves a modified model of computation discussed in Subsection \[ss:Hyper\]. Again, only the notions introduced in the above works have enabled the present plethora of investigations and rigorous classifications of common numerical problems, such as [@Ko91; @RW02; @BBY06; @BBY07a; @BGP11; @KMRZ15; @Poisson]. And their sensible further generalization to abstract function (e.g. Sobolev) spaces common in analysis is still under development and debate [@KSZ16; @DBLP:journals/lmcs/Steinberg17]. 
Indeed the real co/domain is special in that it has linear entropy; hence the impact of co/domain on the computational complexity of problems had been hidden before our quantitative “Main Theorem” (a). In [@DBLP:conf/lics/KawamuraS016] we had picked up from [@Wei03] towards a general theory of computational complexity for compact metric spaces $(X,d)$: exhibiting its entropy $\eta$ as a lower bound on the bit-cost of real 1-Lipschitz functions $f:X\to[0;1]$, and constructing a generic representation with modulus of continuity $\kappa(n)\leq{\mathcal{O}}\big(n\cdot\eta(n)\big)$ that allows an appropriate (oracle) Type-2 Machine to compute any fixed such $f$ in time polynomial in $\eta$. The present work generalizes and extends this as follows: - Theorem \[t:Linear\] constructs a generic representation $\xi$ with (i) modulus of continuity $\kappa\leq{\operatorname{lin}}(\eta)$ *linear* in the entropy $\eta$ - and (ii) establishes said $\xi$ *maximal*/complete w.r.t. linear metric reduction “$\zeta{\preccurlyeq_{\rm O}}\xi$” among all uniformly continuous representations $\zeta$ of $X$. - We propose (i) and (ii) as axioms and quantitative strengthening of classical, qualitative admissibility for *complexity-*theoretically sensible representations. - Theorem \[t:Main2\] strengthens the classical, qualitative Main Theorem by *quantitatively* characterizing (moduli of) continuous functions $f:X\to Y$ in terms of (moduli of) their $(\xi,\upsilon)$-realizers and the entropies of co/domain $X,Y$. - Theorem \[t:Admissible\]d) confirms the classical categorical binary Cartesian product of representations [@Wei00 Definition 3.3.3.1] to maintain *linear admissibility* (i) and (ii). - The classical categorical countable Cartesian product of representations [@Wei00 Definition 3.3.3.2] does *not* maintain linear (nor polynomial) admissibility; but our modified construction exhibited in Theorem \[t:Cartesian\] does. In particular (ii) it is *maximal*/complete w.r.t. 
linear metric reductions. - Moreover, as opposed to some linearly admissible representation constructed from ‘scratch’ by invoking Theorem \[t:Linear\], the representation from Theorem \[t:Cartesian\] additionally guarantees the canonical componentwise projections and embeddings to admit continuous realizers with optimal moduli of continuity. - Theorem \[t:Functions\] constructs from any linearly admissible representation $\xi$ of compact metric $(X,d)$ a *polynomially* admissible representation $\xi'_\mu$ of $X'_\mu={\mathcal{C}}_\mu(X,[0;1])$. In particular (ii) it is *maximal*/complete w.r.t. polynomial metric reductions. - Moreover said representation $\xi'_\mu$ guarantees the application functional $X'_\mu\times X\ni (f,x)\mapsto f(x)\in[0;1]$ to have a continuous realizer with optimal modulus of continuity. Revolving around notions like entropy and modulus of continuity, our considerations and methods are mostly information-theoretic: carefully constructing representations and realizers, analyzing the dependence of their value on the argument, and comparing thus obtained bounds on their modulus of continuity to bounds on the entropy of the space under consideration, estimated from above by constructing coverings with ‘few’ balls of given radius $2^{-n}$ as well as bounded from below by constructing subsets of points of pairwise distance $>2^{-n}$. Overview -------- Section \[s:Admissible\] formally introduces our conception of quantitatively (polynomially/linearly) admissible representations. Subsection \[ss:Real\] re-analyzes the above three representations of $[0;1]$ from this new perspective. And Subsection \[ss:Topology\] collects metric properties of other popular spaces; including new ones constructed via binary and countable Cartesian products (w.r.t. Hilbert Cube metric), the hyperspace of non-empty compact subsets w.r.t. Hausdorff metric, and function spaces. 
Section \[s:Standard\] recalls and rephrases our previous results [@DBLP:conf/lics/KawamuraS016 §3] from this new perspective: constructing a generic polynomially-admissible representation for any compact metric space $(X,d)$. And Subsection \[ss:Linear\] improves that to linear admissibility. Section \[s:Category\] finally formally states our complexity-theoretic Main Theorem \[t:Main2\] in quantitative detail; and presents categorical constructions for new quantitatively admissible representations from given ones, paralleling the considerations from Subsection \[ss:Topology\]: binary and countable Cartesian products, Hausdorff hyperspace of non-empty compact subsets, and function spaces. Section \[s:Conclusion\] collects some questions about possible refinements/strengthenings, such as improving/dropping constant factors in our results. Subsection \[ss:Hyper\] finally extends our considerations to generalized representations for higher types in the sense of [@KC12]; and Subsection \[ss:Future\] puts them into the larger context of quantitatively-universal compact metric spaces. Intuition and Definition of Quantitative Admissibility {#s:Admissible} ====================================================== In order to refine computability to a sensible theory of computational complexity we propose in this section two quantitative refinements of qualitative *admissibility* formalized in Definition \[d:Admissible\] below. But first let us briefly illustrate how a reasonable representation can be turned into an unreasonable one, and how that affects the computational complexity of a function: to get an impression of what quantitative admissibility should prohibit. Consider ‘padding’ a given representation $\xi$ with some fixed strictly increasing $\varphi:{\mathbb{N}}\to{\mathbb{N}}$ in order to obtain a new representation $\xi_\varphi$ defined by $\xi_\varphi(\bar b):=\xi\big((b_{\varphi(n)})_{_n}\big)$ for $\bar b=(b_n)_{_n}\in{\mathcal{C}}$. 
Then ${\operatorname{dom}}(\xi_\varphi)$ is compact whenever ${\operatorname{dom}}(\xi)$ is; but computing some $(\xi_\varphi,\upsilon)$-realizer now may require ‘skipping’ over $\varphi(n)$ bits of any given $\xi_\varphi$-name before reaching/collecting the same information as contained in only the first $n$ bits of a given $\xi$-name when computing a $(\xi,\upsilon)$-realizer: possibly increasing the time complexity from $t$ to $t\circ\varphi$, definitely increasing its optimal modulus of continuity. On the other hand computing a $(\xi,\upsilon_\varphi)$-realizer might become easier, as now as many as $\varphi(n)$ bits of the padded output can be produced from only $n$ bits of the unpadded one: possibly decreasing the time complexity from $t$ to $t\circ{{\varphi}^{\overline{-1}}}$, see Definition \[d:Admissible\]c) below. \[d:Discrete\] 1. We abbreviate $\bar x|_{<n}:=(x_0,\ldots x_{n-1})$ and $$(x_0,\ldots x_{n-1})\circ Z^{\mathbb{N}}\;=\; \big\{(x_0,\ldots x_{n-1},z_n,z_{n+1} \ldots):z_n,z_{n+1} \ldots\in Z\big\} \enspace .$$ $$\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^n \ni \vec x = (x_0,\ldots x_{n-1}) \mapsto \langle \vec x\rangle := ({\textup{\texttt{0}}\xspace}\,x_0\,{\textup{\texttt{0}}\xspace}\,x_1\,{\textup{\texttt{0}}\xspace}x_2\ldots\,{\textup{\texttt{0}}\xspace}\,x_{n-1}\,{\textup{\texttt{1}}\xspace})\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{2n+1}$$ denotes a self-delimiting encoding of finite binary strings. Let $\lfloor r\rceil\in{\mathbb{Z}}$ mean the integer closest to $r\in{\mathbb{R}}$ with ties broken towards 0: $\big\lfloor \pm n+\tfrac{1}{2}\big\rceil=\pm n$ for $n\in{\mathbb{N}}$. 2. Let ${\operatorname{Reg}}$ denote the set of all non-decreasing unbounded mappings $\nu:{\mathbb{N}}\to{\mathbb{N}}$. 
The *lower* and *upper semi-inverse* of $\nu\in{\operatorname{Reg}}$ are $${{\nu}^{\underline{-1}}}(n) \;:=\; \min\{ m\::\: \nu(m)\geq n\} , \quad {{\nu}^{\overline{-1}}}(n) \;:=\; \min\{m \::\: \nu(m+1)>n \} \enspace .$$ 3. Extend Landau’s class of asymptotic growth $$\begin{aligned} \nu \leq {\mathcal{O}}(\mu) &\Leftrightarrow& \exists C\in{\mathbb{N}}\; \forall n\in{\mathbb{N}}: \; \nu(n)\leq C\cdot\mu(n)+C \\ \nu \leq {\mathcal{P}}(\mu) &:=& {\mathcal{O}}\big(\mu(n)\big)^{{\mathcal{O}}(1)} \Leftrightarrow\; \exists C\;\forall n:\; \nu(n)\leq \big(C+C\cdot \mu(n)\big)^C \\ \nu \leq {\mathcal{S}}(\mu) &:=& \mu+{\mathcal{O}}(1) \Leftrightarrow\; \exists C\;\forall n:\; \nu(n)\leq \mu(n)+C \\ \nu\leq {\mathcal{\scriptscriptstyle O}}(\mu) &:=& \mu\circ{\mathcal{O}}({\operatorname{id}}) \quad\Leftrightarrow\quad \exists C\;\forall n:\; \nu(n)\leq \mu(C\cdot n+C)\\ \nu \leq {\mathcal{\scriptscriptstyle P}}(\mu) &:=& \mu\circ{\mathcal{P}}({\operatorname{id}}) \quad\Leftrightarrow\quad \exists C\;\forall n:\; \nu(n)\leq \mu(C\cdot n^C+C)\\ \nu \leq {\mathcal{\scriptscriptstyle S}}(\mu) &:=& \mu\circ{\mathcal{S}}({\operatorname{id}}) \quad\Leftrightarrow\quad \exists C\;\forall n:\; \nu(n)\leq \mu(n+C)\\ \nu \leq {\operatorname{lin}}(\mu) &:=& {\mathcal{O}}\big({\mathcal{\scriptscriptstyle O}}(\mu)\big) = {\mathcal{\scriptscriptstyle O}}\big({\mathcal{O}}(\mu)\big) \;\Leftrightarrow\;\; \exists C\;\forall n:\; \nu(n)\leq C+C\cdot\mu(C+C\cdot n) \\ \nu \leq {\operatorname{poly}}(\mu) &:=& {\mathcal{P}}\big({\mathcal{\scriptscriptstyle P}}(\mu)\big) \;\Leftrightarrow\; \exists C\;\forall n:\; \nu(n)\leq \big(n+C+C\cdot \mu(C\cdot n^C+C)\big)^C $$ to denote sequences bounded linearly/polynomially/additively by, and/or after linearly/polynomially/additively growing the argument to, $\mu$. Here ${\operatorname{id}}:{\mathbb{N}}\to{\mathbb{N}}$ is the identity mapping. 
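Since every $\nu\in{\operatorname{Reg}}$ is non-decreasing and unbounded, both semi-inverses of Definition \[d:Discrete\]b) can be realized directly by search. A Python sketch (ours), also checking the equality case of Lemma \[l:seminv\]b) for an injective $\nu$:

```python
def lower_semi_inverse(nu, n):
    """Lower semi-inverse: min{ m : nu(m) >= n }."""
    m = 0
    while nu(m) < n:
        m += 1
    return m

def upper_semi_inverse(nu, n):
    """Upper semi-inverse: min{ m : nu(m+1) > n }."""
    m = 0
    while nu(m + 1) <= n:
        m += 1
    return m

nu = lambda m: 2 * m  # injective, non-decreasing, unbounded
for n in range(10):
    # equality case of the lemma: both semi-inverses invert nu exactly
    assert lower_semi_inverse(nu, nu(n)) == n == upper_semi_inverse(nu, nu(n))
```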
In Item c), classes ${\operatorname{lin}}(\mu)\leq{\operatorname{poly}}(\mu)$ capture ‘relative’ asymptotics in increasing granularity. They are transitive and compositional in the following sense: \[o:Growth\] 1. $\mu\leq{\mathcal{O}}(\nu)$ and $\nu\leq{\mathcal{O}}(\kappa)$ implies $\mu\leq{\mathcal{O}}(\kappa)$.\ $\mu\leq{\mathcal{P}}(\nu)$ and $\nu\leq{\mathcal{P}}(\kappa)$ implies $\mu\leq{\mathcal{P}}(\kappa)$.\ $\mu\leq{\mathcal{S}}(\nu)$ and $\nu\leq{\mathcal{S}}(\kappa)$ implies $\mu\leq{\mathcal{S}}(\kappa)$. 2. $\mu\leq{\mathcal{\scriptscriptstyle O}}(\nu)$ and $\nu\leq{\mathcal{\scriptscriptstyle O}}(\kappa)$ implies $\mu\leq{\mathcal{\scriptscriptstyle O}}(\kappa)$.\ $\mu\leq{\mathcal{\scriptscriptstyle P}}(\nu)$ and $\nu\leq{\mathcal{\scriptscriptstyle P}}(\kappa)$ implies $\mu\leq{\mathcal{\scriptscriptstyle P}}(\kappa)$.\ $\mu\leq{\mathcal{\scriptscriptstyle S}}(\nu)$ and $\nu\leq{\mathcal{\scriptscriptstyle S}}(\kappa)$ implies $\mu\leq{\mathcal{\scriptscriptstyle S}}(\kappa)$. 3. $\mu\leq{\mathcal{O}}(\nu)$ implies $\mu\circ\kappa \leq {\mathcal{O}}(\nu\circ\kappa)$.\ $\mu\leq{\mathcal{P}}(\nu)$ implies $\mu\circ\kappa \leq {\mathcal{P}}(\nu\circ\kappa)$.\ $\mu\leq{\mathcal{S}}(\nu)$ implies $\mu\circ\kappa \leq {\mathcal{S}}(\nu\circ\kappa)$. 4. $\mu\leq{\mathcal{\scriptscriptstyle O}}(\nu)$ implies $\kappa\circ\mu \leq {\mathcal{\scriptscriptstyle O}}(\kappa\circ\nu)$.\ $\mu\leq{\mathcal{\scriptscriptstyle P}}(\nu)$ implies $\kappa\circ\mu \leq {\mathcal{\scriptscriptstyle P}}(\kappa\circ\nu)$.\ $\mu\leq{\mathcal{\scriptscriptstyle S}}(\nu)$ implies $\kappa\circ\mu \leq {\mathcal{\scriptscriptstyle S}}(\kappa\circ\nu)$. 5. 
${\mathcal{\scriptscriptstyle O}}(\mu)\circ{\mathcal{O}}(\nu) \;=\; {\mathcal{\scriptscriptstyle O}}(\mu)\circ\nu \;=\; \mu\circ{\mathcal{O}}(\nu)$.\ ${\mathcal{\scriptscriptstyle P}}(\mu)\circ{\mathcal{P}}(\nu) \;=\; {\mathcal{\scriptscriptstyle P}}(\mu)\circ\nu \;=\; \mu\circ{\mathcal{P}}(\nu)$.\ ${\mathcal{\scriptscriptstyle S}}(\mu)\circ{\mathcal{S}}(\nu) \;=\; {\mathcal{\scriptscriptstyle S}}(\mu)\circ\nu \;=\; \mu\circ{\mathcal{S}}(\nu)$. In particular polynomial ‘absolute’ growth means ${\operatorname{poly}}({\operatorname{id}})={\mathcal{P}}({\operatorname{id}})={\mathcal{\scriptscriptstyle P}}({\operatorname{id}})$; and linear means ${\operatorname{lin}}({\operatorname{id}})={\mathcal{O}}({\operatorname{id}})={\mathcal{\scriptscriptstyle O}}({\operatorname{id}})$. We also collect some properties of the semi-inverses: \[l:seminv\] 1. For $\mu\in{\operatorname{Reg}}$, ${{\mu}^{\underline{-1}}}$ and ${{\mu}^{\overline{-1}}}$ are again in ${\operatorname{Reg}}$ and ${{\mu}^{\underline{-1}}}\leq{{\mu}^{\overline{-1}}}$. 2. Every $\mu\in{\operatorname{Reg}}$ satisfies ${{\mu}^{\underline{-1}}}\circ\mu\;\leq\;{\operatorname{id}}\;\leq\;{{\mu}^{\overline{-1}}}\circ\mu$,\ with equality in case $\mu$ is injective (necessarily growing at least linearly);\ and every $\mu\in{\operatorname{Reg}}$ satisfies $\mu\circ{{\mu}^{\overline{-1}}}\;\leq\;{\operatorname{id}}\;\leq\mu\circ{{\mu}^{\underline{-1}}}$,\ with equality in case $\mu$ is surjective (necessarily growing at most linearly). 3. For $\nu,\kappa\in{\operatorname{Reg}}$ it holds $$\mu\circ\nu\;\leq\;\kappa \;\;\Leftrightarrow\;\; \nu\;\leq\;{{\mu}^{\overline{-1}}}\circ\kappa \;\;\Leftrightarrow\;\; \mu\;\leq\;\kappa\circ{{\nu}^{\underline{-1}}}$$ 4. Suppose $a,b,c,d\in{\mathbb{N}}$ with $b,d\geq1$ and $\mu,\nu\in{\operatorname{Reg}}$ satisfy $\forall n: \;\nu(n)\leq a+b\cdot \mu(c+d\cdot n)$. 
Then it holds $\forall m: \;{{\mu}^{\underline{-1}}}(m)\leq c+d\cdot{{\nu}^{\underline{-1}}}(a+b\cdot m)$ and ${{\mu}^{\overline{-1}}}(m)\leq d\cdot{{\nu}^{\overline{-1}}}(a+b\cdot m)+c+d-1$. In particular,\ ${\mathcal{O}}\big({{\mu}^{\overline{-1}}}\big)={{{\mathcal{\scriptscriptstyle O}}(\mu)}^{\overline{-1}}}$ and ${\mathcal{\scriptscriptstyle O}}\big({{\mu}^{\overline{-1}}}\big)={{{\mathcal{O}}(\mu)}^{\overline{-1}}}$ and ${\operatorname{lin}}\big({{\mu}^{\overline{-1}}}\big)={{{\operatorname{lin}}(\mu)}^{\overline{-1}}}$; similarly ${\mathcal{O}}\big({{\mu}^{\underline{-1}}}\big)={{{\mathcal{\scriptscriptstyle O}}(\mu)}^{\underline{-1}}}$ and ${\mathcal{\scriptscriptstyle O}}\big({{\mu}^{\underline{-1}}}\big)={{{\mathcal{O}}(\mu)}^{\underline{-1}}}$ and ${\operatorname{lin}}\big({{\mu}^{\underline{-1}}}\big)={{{\operatorname{lin}}(\mu)}^{\underline{-1}}}$. 5. There are $\mu,\nu\in{\operatorname{Reg}}$ s.t. $\mu\leq{\operatorname{poly}}(\nu)$ but ${{\nu}^{\underline{-1}}}\not\leq{\operatorname{poly}}\big({{\mu}^{\underline{-1}}}\big)$ and ${{\nu}^{\overline{-1}}}\not\leq{\operatorname{poly}}\big({{\mu}^{\overline{-1}}}\big)$. There are $\mu,\nu\in{\operatorname{Reg}}$ s.t. $\mu\circ{{\mu}^{\underline{-1}}}\not\in{\operatorname{poly}}({\operatorname{id}})$ and ${{\nu}^{\overline{-1}}}\circ\nu\not\in{\operatorname{poly}}({\operatorname{id}})$ and ${{\nu}^{\underline{-1}}}\circ{\mathcal{O}}(\nu)\not\in{\operatorname{poly}}({\operatorname{id}})$ and $\mu\circ{\mathcal{O}}\big({{\mu}^{\overline{-1}}}\big)\not\in{\operatorname{poly}}({\operatorname{id}})$. 6. 
$\max\big\{{{\mu}^{\overline{-1}}},{{\nu}^{\overline{-1}}}\big\} ={{\min\{\mu,\nu\}}^{\overline{-1}}}$ and $\min\big\{{{\mu}^{\overline{-1}}},{{\nu}^{\overline{-1}}}\big\} ={{\max\{\mu,\nu\}}^{\overline{-1}}}$ and $\max\big\{{{\mu}^{\underline{-1}}},{{\nu}^{\underline{-1}}}\big\} ={{\min\{\mu,\nu\}}^{\underline{-1}}}$ and $\min\big\{{{\mu}^{\underline{-1}}},{{\nu}^{\underline{-1}}}\big\} ={{\max\{\mu,\nu\}}^{\underline{-1}}}$. Here finally comes our formal definition of quantitative admissibility: \[d:Admissible\] 1. Consider a compact subset $K$ of a metric space $(X,d)$. Its *relative entropy* is the non-decreasing integer mapping $\eta=\eta_{X,K}:{\mathbb{N}}\to{\mathbb{N}}$ such that some $2^{\eta(n)}$, but no $2^{\eta(n)-1}$, closed balls ${\overline{\operatorname{B}}}(x,r)$ of radius $r:=2^{-n}$ with centers $x\in X$ can cover $K$. If $X$ itself is compact, we write $\eta_X:=\eta_{X,X}$ for its (intrinsic) entropy. 2. Consider a uniformly continuous representation $\xi$ of the compact metric space $(X,d)$ and uniformly continuous mapping $\zeta:\subseteq{\mathcal{C}}\to X$. Refining Definition \[d:TTE\]e), call a reduction $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi)$ *polynomial* (“$\zeta{\preccurlyeq_{\rm P}}\xi$”) if it has a modulus of continuity $\mu$ and $\xi$ has a modulus $\kappa$ satisfying $\mu\circ\kappa\leq{\mathcal{\scriptscriptstyle P}}(\nu)$ for every modulus $\nu$ of $\zeta$.\ $F$ is *linear* (“$\zeta{\preccurlyeq_{\rm O}}\xi$”) if it holds $\mu\circ\kappa\leq{\mathcal{\scriptscriptstyle O}}(\nu)$. 3. A representation $\xi$ of the compact metric space $(X,d)$ is *polynomially admissible* iff (i) it has a modulus of continuity $\kappa\leq{\mathcal{P}}{\mathcal{\scriptscriptstyle O}}(\eta)$, i.e., bounded polynomially in the entropy with linearly transformed argument, and (ii) every uniformly continuous representation $\zeta:\subseteq{\mathcal{C}}\twoheadrightarrow X$ satisfies $\zeta{\preccurlyeq_{\rm P}}\xi$ in the sense of (b). 
4. Representation $\xi$ of $(X,d)$ is *linearly admissible* iff (i) it has a modulus of continuity $\kappa\leq{\mathcal{O}}{\mathcal{\scriptscriptstyle S}}(\eta)$, i.e., not exceeding the entropy by more than a constant factor in value and a constant shift in argument, and (ii) every uniformly continuous representation $\zeta:\subseteq{\mathcal{C}}\twoheadrightarrow X$ satisfies $\zeta{\preccurlyeq_{\rm O}}\xi$. According to Example \[x:Entropy\]f) below, any representation’s modulus of continuity satisfies $\kappa\geq\eta$, i.e., is bounded from below by the entropy; and Condition (i) in Definition \[d:Admissible\]c+d) requires a complexity-theoretically appropriate representation to be close to that optimum — which itself can be arbitrarily small/large according to Example \[x:Entropy\]d+e). The converse Condition (ii) in Definition \[d:Admissible\]c+d) similarly requires that $\mu\circ\kappa$, a modulus of continuity of $\xi\circ F=\zeta$, be ‘close’ to that of $\zeta$. Note that linearly admissible representations may (i) exceed the entropy by a constant factor in value and by an additive constant in the argument while (ii) a linear reduction only allows for the latter: because (i) is what we can achieve in Theorem \[t:Linear\] while (ii) guarantees transitivity; similarly for the polynomial case. \[r:Transitive\] We record that relations “${\preccurlyeq_{\rm P}}$” and “${\preccurlyeq_{\rm O}}$” are transitive: Fix $\alpha$ with modulus of continuity $\lambda$, $\beta$ with modulus $\mu$, and $\gamma$ with $\nu$; and linear reduction $F:{\operatorname{dom}}(\alpha)\to{\operatorname{dom}}(\beta)$ with modulus of continuity $\iota$ such that $\alpha=\beta\circ F$ and $\iota\circ\mu\leq{\mathcal{\scriptscriptstyle O}}(\lambda)$; as well as linear reduction $G:{\operatorname{dom}}(\beta)\to{\operatorname{dom}}(\gamma)$ with modulus of continuity $\kappa$ such that $\beta=\gamma\circ G$ and $\kappa\circ\nu\leq{\mathcal{\scriptscriptstyle O}}(\mu)$.
Then $\alpha=\gamma\circ G\circ F$, where reduction $G\circ F:{\operatorname{dom}}(\alpha)\to{\operatorname{dom}}(\gamma)$ has modulus $\iota\circ\kappa$ satisfying $(\iota\circ\kappa)\circ \nu\leq \iota\circ {\mathcal{\scriptscriptstyle O}}(\mu)\leq{\mathcal{\scriptscriptstyle O}}(\lambda)$ by Observation \[o:Growth\]e).\ Also note that, according to Lemma \[l:seminv\]d), condition $\mu\circ\kappa\leq{\mathcal{\scriptscriptstyle O}}(\nu)$ is equivalent to $\mu\leq\nu\circ{\mathcal{O}}\big({{\kappa}^{\underline{-1}}}\big)={\mathcal{\scriptscriptstyle O}}(\nu)\circ{{\kappa}^{\underline{-1}}}$; similarly for $\mu\circ\kappa\leq{\mathcal{\scriptscriptstyle P}}(\nu)$. Definition \[d:Admissible\]c) coincides with [@DBLP:conf/lics/KawamuraS016 Definition 18] and strengthens the computability-theoretically common qualitative notion of *admissibility* from [@Wei00 Definitions 2.1.1+2.1.2]; while Definition \[d:Admissible\]d) in turn strengthens (c) from polynomial to linear. Most claims are immediate, we thus only expand on Item d): 1. First suppose $c=0$ and $d=1$. Then $$\begin{aligned} {{\mu}^{\underline{-1}}}(m) &=& \min\underbrace{\{n:a+b\cdot \mu(n)\geq a+b\cdot m\}}_{ \rotatebox[origin=c]{270}{$\supseteq$}} \\ &\leq& \min\overbrace{\{n:\nu(n)\geq a+b\cdot m\}} \;=\; {{\nu}^{\underline{-1}}}(a+b\cdot m) \quad\text{ and} \\[1ex] {{\mu}^{\overline{-1}}}(m) &=& \min\underbrace{\{n:a+b\cdot \mu(n+1)> a+b\cdot m\}}_{ \rotatebox[origin=c]{270}{$\subseteq$}} \\ &\leq& \min\{n:\nu(n+1)> a+b\cdot m\} \;=\; {{\nu}^{\overline{-1}}}(a+b\cdot m) \enspace . \\[1ex] {{\mu}^{\underline{-1}}}(m) &=& \min\underbrace{\{n:\mu(n)\geq m\}}_{ \rotatebox[origin=c]{270}{$\supseteq$}} \\ &\leq& \min\overbrace{\{c+d\cdot n':\mu(c+d\cdot n')\geq m\}} \;\leq\; \min\{c+d\cdot n':\nu(n')\geq m\} \end{aligned}$$ $=\; c+d\cdot{{\nu}^{\underline{-1}}}(m)$ in case $a=0$ and $b=1$. 
If additionally $d=1$, then $$\begin{aligned} {{\mu}^{\overline{-1}}}(m)&=& \min\big\{n:\mu(n+1)> m\big\} \;\leq\; \min\big\{c+n:\mu(c+n+1)> m\} \\ &\leq& \min\big\{c+n:\nu(n+1)> m\} \;=\; c+{{\nu}^{\overline{-1}}}(m) \end{aligned}$$ Finally in case $a=0=c$ and $b=1\neq d$, $$\begin{aligned} {{\mu}^{\overline{-1}}}(m)&=& \min\big\{n:\mu(n+1)> m\big\} \;\leq\; \min\big\{d\cdot \lceil n/d\rceil :\mu(n+1)> m\big\} \\ &\leq& \min\big\{d\cdot n' + (d-1) :\mu(d\cdot n'+1)> m\big\} \\ &\leq& \min\big\{d\cdot n' + (d-1) :\nu(n'+1)> m\big\} \end{aligned}$$ $=\; d\cdot{{\nu}^{\overline{-1}}}(m)+(d-1)$. Real Examples {#ss:Real} ------------- Here we formally recall, and analyze from the perspective of admissibility, the three representations of the real unit interval mentioned in the introduction: binary, dyadic and signed binary. Let us record that the real unit interval $[0;1]$ has entropy $\eta_{[0;1]}(n)=n-1$ for all integers $n\geq1$. \[x:Binary\] The binary representation of the real unit interval $$\beta: \; {\mathcal{C}}\;\ni\; \bar b \;\mapsto\; \sum\nolimits_n b_n2^{-n-1} \;\in\; [0;1]$$ is surjective and 1-Lipschitz, i.e., has the identity ${\operatorname{id}}:{\mathbb{N}}\ni n\mapsto n\in{\mathbb{N}}$ as modulus of continuity: coinciding with the entropy up to shift 1, i.e., optimal! However it is not (even qualitatively) admissible [[@Wei00 Theorem 4.1.13.6]]{} and does not admit a continuous realizer of, e.g., the continuous mapping $[0;1/3]\ni x\mapsto 3x\in[0;1]$; cmp. [[@Wei00 Example 2.1.4.7]]{}.
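The Lipschitz claim can be checked numerically. A small sketch in exact rational arithmetic (helper names are ours), using the normalization $\beta(\bar b)=\sum_n b_n2^{-n-1}$ so that values land in $[0;1]$: two names agreeing on their first $n$ bits differ by at most the tail mass $\sum_{m\geq n}2^{-m-1}=2^{-n}$.

```python
from fractions import Fraction

def beta_prefix(bits):
    """Value of a finite binary-name prefix under
    beta(b) = sum_n b_n * 2^-(n+1)."""
    return sum(Fraction(b, 2 ** (n + 1)) for n, b in enumerate(bits))

# Two names agreeing on their first 4 bits, then maximally different:
x = [1, 0, 1, 1] + [0] * 16
y = [1, 0, 1, 1] + [1] * 16
diff = abs(beta_prefix(x) - beta_prefix(y))
print(diff <= Fraction(1, 16))  # True: tail bits contribute at most 2^-4
```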
\[x:Rational\] Consider the binary encoding of non-negative integers without leading ${\textup{\texttt{0}}\xspace}$: $$\label{e:Binary} {\mathrm{bin}}\;:\; {\mathbb{N}}\;\ni\; 2^n-1+\sum\nolimits_{0\leq j<n} b_j2^j \;\mapsto\; (b_0,\ldots b_{n-1})\; \in\; \{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^n \enspace .$$ The *rational* representation of $[0;1]$ is the mapping $\rho:\subseteq{\mathcal{C}}\twoheadrightarrow[0;1]$ with $$\begin{gathered} \big( \langle{\mathrm{bin}}(a_0)\rangle\: \langle{\mathrm{bin}}(c_0)\rangle\; \langle{\mathrm{bin}}(a_1)\rangle\: \langle{\mathrm{bin}}(c_1)\rangle\; \ldots \langle{\mathrm{bin}}(a_n)\rangle\: \langle{\mathrm{bin}}(c_n)\rangle\; \ldots \big) \;\mapsto\; \lim\nolimits_j a_j/c_j , \\[0.5ex] {\operatorname{dom}}(\rho) \:=\: \big\{ \big( \ldots\: \langle{\mathrm{bin}}(a_n)\rangle\:\langle{\mathrm{bin}}(c_n)\rangle\:\ldots\big) \::\: \exists x\in[0;1] \: |a_n/c_n-x|\leq 2^{-n}\big\} \enspace .\end{gathered}$$ Representation $\rho$ is continuous, but not uniformly continuous (its domain is not compact) and thus has no modulus of continuity. Consider a $\rho$-name of $r=1/2$ starting with any $a_0\in{\mathbb{N}}$ and $c_0:=2a_0$. Increasingly long $a_0$ thus give rise to a sequence of $\rho$-names of $r$ with no converging subsequence. Moreover $a_0/c_0=1/2$ fixes $r$ up to error $2^{-n}$ only for $n:=0$; but requires ‘knowing’ the first $\mu(0)\geq\log_2(a_0)\to\infty$ bits of its $\rho$-name.
\[x:Dyadic\] The *dyadic representation* of the real unit interval $[0;1]$ $$\begin{gathered} \delta:\subseteq{\mathcal{C}}\;\ni\;\big( \langle{\mathrm{bin}}(a_0)\rangle\: \langle{\mathrm{bin}}(a_1)\rangle\: \langle{\mathrm{bin}}(a_2)\rangle\: \ldots \langle{\mathrm{bin}}(a_n)\rangle\: \ldots \big) \;\mapsto\; \lim\nolimits_j a_j/2^j , \\[0.5ex] {\operatorname{dom}}(\delta) \:=\: \big\{ \big( \ldots\: \langle{\mathrm{bin}}(a_n)\rangle\:\ldots\big) \::\: 2^n \geq a_n\in{\mathbb{N}}, \; |a_n/2^n-a_m/2^m|\leq 2^{-n}+2^{-m}\big\}\end{gathered}$$ 1. has a quadratic modulus of continuity $\kappa(n):=2\cdot(n+1)\cdot(n+2)$ but no sub-quadratic one and in particular is not Hölder-continuous. 2. To every partial function $\zeta:\subseteq{\mathcal{C}}\to [0;1]$ with modulus of continuity $\nu$ there exists a mapping $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\delta)$ with modulus of continuity $\nu$ such that $\zeta=\delta\circ F$ holds.\ In particular $\delta$ is polynomially admissible. 3. To every $m\in{\mathbb{N}}$ and every $r,r'\in[0;1]$ with $|r-r'|\leq2^{-m-1}$, there exist $\delta$-names $\bar y_r$ and $\bar y'_{r'}$ of $r=\delta(\bar y_r)$ and $r'=\delta(\bar y'_{r'})$ with ${d_{{\mathcal{C}}}}(\bar y_r,\bar y'_{r'})\leq2^{-m-1}$. 4. If $(Y,e)$ is a compact metric space and $f:[0;1]\to Y$ such that $f\circ\delta:{\operatorname{dom}}(\delta)\subseteq{\mathcal{C}}\to Y$ has modulus of continuity $\nu$, then $f$ has modulus of continuity $\nu$. Here and as opposed to Definition \[d:Admissible\], (ii) applies also to non-surjective $\zeta$. 1. Record that $0\leq a_n \leq 2^n$ implies $\langle{\mathrm{bin}}(a_n)\rangle\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^*$ to have length between 1 and $2n+1$; and $\langle{\mathrm{bin}}(a_0)\rangle\: \ldots \langle{\mathrm{bin}}(a_{n})\rangle$ has binary length between $n+1$ and $\kappa(n)$. 
Therefore perturbing a $\delta$-name $\bar y$ of some $r\in[0;1]$ to $\bar y'$ with ${d_{{\mathcal{C}}}}(\bar y,\bar y')\leq2^{-\kappa(n)}$ will keep (the binary expansions of) $a_0,\ldots a_{n}$ unmodified; and thus satisfies $|\delta(\bar y)-\delta(\bar y')|\leq 2^{-n}$. Hence $\kappa(n)$ is a modulus of continuity. On the other hand consider the $\delta$-name $\bar y$ of $r:=3/4$ with $a_n:=3\cdot 2^{n-2}$ for $n\geq2$: its $\langle{\mathrm{bin}}(a_n)\rangle$ has length $2n-1$ and starts at bit position $\sum_{m<n} (2m-1)\geq\Omega(n^2)$; yet changing $a'_{m}:\equiv3\cdot 2^{m-2}+2^{m-n}$ for all $m>n$ turns it into a $\delta$-name $\bar y'$ of $r':=r+2^{-n}$ with ${d_{{\mathcal{C}}}}(\bar y,\bar y')\leq2^{-\Omega(n^2)}$. So $\delta$ has no sub-quadratic modulus of continuity. 2. We construct $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\delta)$ as limit $F(\bar x)=\lim_n F_n\big(\bar x|_{<\nu(n+1)}\big)$ of a sequence of partial functions $F_n:\subseteq\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{\nu(n+1)}\to\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{<\kappa(n)}$ which is monotone in that $F_{n+1}\big(\bar x|_{<\nu(n+2)}\big)$ contains $F_{n}\big(\bar x|_{<\nu(n+1)}\big)$ as initial segment. To every $n$ and each (of the finitely many) $\bar x|_{<\nu(n+1)}$ with $\bar x\in{\operatorname{dom}}(\zeta)$, fix some $r_n=r_n\big(\bar x|_{<\nu(n+1)}\big)\in\zeta\big[\bar x|_{<\nu(n+1)}\circ{\mathcal{C}}\big]\subseteq[0;1]$.
Then given $\bar x\in{\operatorname{dom}}(\zeta)$ and iteratively for $n=0,1,\ldots$ let $$F_{n}\big(\bar x|_{<\nu(n+1)}\big) \;:=\; F_{n-1}\big(\bar x|_{<\nu(n)}\big) \:\circ\: \langle{\mathrm{bin}}(a_n)\rangle, \quad a_n:=\lfloor r_n\cdot2^n\rceil\in\{0,\ldots 2^n\} \enspace .$$ Since $\nu$ is a modulus of continuity of $\zeta$, it follows $$\zeta\big[\bar x|_{<\nu(n+1)}\circ{\mathcal{C}}\big] \;\subseteq\; \big[r_n-2^{-n-1};r_n+2^{-n-1}\big] \;\subseteq\; \big[\tfrac{a_n}{2^n}-2^{-n};\tfrac{a_n}{2^n}+2^{-n}\big]$$ as $|r_n-a_n/2^n|\leq2^{-n-1}$. Thus it holds $|a_n/2^n-r|\leq2^{-n}$ for $r:=\zeta(\bar x)$ since $|r-r_n|\leq2^{-n-1}$; $F(\bar x) = \lim\nolimits_n F_n\big(\bar x|_{<\nu(n+1)}\big) = \big(\langle{\mathrm{bin}}(a_0)\rangle\:\ldots\: \langle{\mathrm{bin}}(a_n)\rangle\: \ldots \big)$ is a $\delta$-name of $r$; and fixing the first $\nu(n+1)$ symbols of $\bar x\in{\operatorname{dom}}(\zeta)$ fixes $\vec y:=F_n\big(\bar x|_{<\nu(n+1)}\big)$ as well as $(a_0,\ldots a_n)$ and therefore also (at least) the first $n+1$ symbols of $\bar y:=F(\bar x)$: hence $F$ has modulus of continuity $\nu$.\ As recorded above, $[0;1]$ has entropy $\eta(n)=n-1$; hence (i) and (ii) imply polynomial admissibility according to Definition \[d:Admissible\]c). 3. To $r\in [0;1]$ consider the $\delta$-name $\bar y_r:=\big(\ldots \langle{\mathrm{bin}}(a_n)\rangle\:\ldots \big)\in{\mathcal{C}}$ of $r$ with $a_n:=\lfloor r\cdot2^n\rceil$. For every $m\in{\mathbb{N}}$, its initial segment $\big( \langle{\mathrm{bin}}(a_0)\rangle\: \ldots \langle{\mathrm{bin}}(a_m)\rangle\big)$ has binary length between $m+1$ and $\kappa(m)$; and, for every $r'\in[0;1]$ with $|r-r'|\leq2^{-m-1}$, can be extended to a $\delta$-name $\bar y'_{r'}$ via $a'_{m'}:=\lfloor r'\cdot2^{m'}\rceil$ for all $m'>m$. 4. Applying (iii) to $m:=\nu(n)$, the hypothesis implies $e\big(f(r),f(r')\big)=e\big(f\circ\delta(\bar y_r),f\circ\delta(\bar y'_{r'})\big)\leq 2^{-n}$.
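The construction of a $\delta$-name via $a_n:=\lfloor r\cdot2^n\rceil$ is easy to replay in exact arithmetic. A hedged sketch (helper names are ours; we commit to half-up rounding as one concrete choice of $\lfloor\cdot\rceil$, and the computed prefix length ignores the $\langle\cdot\rangle$-delimiter overhead that $\kappa(n)=2(n+1)(n+2)$ additionally accounts for):

```python
from fractions import Fraction

def dyadic_digits(r, N):
    """a_n = nearest integer to r*2^n (half-up), for n = 0..N."""
    return [int(r * 2 ** n + Fraction(1, 2)) for n in range(N + 1)]

def bin_len(a):
    """Length of bin(a) in the leading-zero-free encoding of Eq. (e:Binary):
    k bits encode the integers 2^k - 1 .. 2^(k+1) - 2."""
    return (a + 1).bit_length() - 1

r = Fraction(3, 7)
digits = dyadic_digits(r, 20)
# approximation guarantee |a_n/2^n - r| <= 2^-(n+1):
assert all(abs(Fraction(a, 2 ** n) - r) <= Fraction(1, 2 ** (n + 1))
           for n, a in enumerate(digits))
# total bit length of bin(a_0)...bin(a_20): grows roughly quadratically,
# since each bin(a_n) contributes about n bits
print(sum(bin_len(a) for a in digits))
```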
The dyadic representation in Example \[x:Dyadic\] has modulus of continuity quadratic (i.e. polynomial), but not linear, in the entropy. This overhead comes from the ‘redundancy’ of the precision-$n$ approximation $a_n/2^n$ of binary length ${\mathcal{O}}(n)$ superseding all previous $a_m/2^m$, $m<n$. The signed binary representation on the other hand achieves precision $2^{-n}$ by appending one ‘signed’ digit $\tilde b_{n-2}\in\{-1,0,1\}$, encoded as two binary digits $(b_{2n-4},b_{2n-3})\in\{{\textup{\texttt{0}}\xspace}{\textup{\texttt{0}}\xspace},{\textup{\texttt{0}}\xspace}{\textup{\texttt{1}}\xspace},{\textup{\texttt{1}}\xspace}{\textup{\texttt{0}}\xspace}\}$ via $\tilde b_{n-2}=2b_{2n-4}+b_{2n-3}-1$, to the previous approximation up to error $2^{-n+1}$, yielding a modulus of continuity linear in the entropy: \[x:SignedDigit\] The signed binary representation, considered as total mapping $$\label{e:SignedDigit} \sigma:\subseteq\{{\textup{\texttt{0}}\xspace}{\textup{\texttt{0}}\xspace},{\textup{\texttt{0}}\xspace}{\textup{\texttt{1}}\xspace},{\textup{\texttt{1}}\xspace}{\textup{\texttt{0}}\xspace}\}^{\mathbb{N}}\subseteq{\mathcal{C}}\;\ni \bar b \mapsto\; \tfrac{1}{2}+\sum\limits_{m=0}^{\infty} (2b_{2m}+b_{2m+1}-1) \cdot 2^{-m-2} \;{\in}\; [0;1]$$ 1. is surjective and has modulus of continuity $\kappa(n)=2n$, i.e., is Hölder-continuous. 2. To every partial function $\zeta:\subseteq{\mathcal{C}}\to [0;1]$ with modulus of continuity $\nu$ there exists a mapping $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\sigma)$ with modulus of continuity $\kappa:2m\mapsto\nu(m+1)$ such that $\zeta=\sigma\circ F$ holds.\ In particular $\sigma$ is linearly admissible. 3. To every $n\in{\mathbb{N}}$ and every $r,r'\in[0;1]$ with $|r-r'|\leq2^{-n}$, there exist $\sigma$-names $\bar y_r$ and $\bar y'_{r'}$ of $r=\sigma(\bar y_r)$ and $r'=\sigma(\bar y'_{r'})$ with ${d_{{\mathcal{C}}}}(\bar y_r,\bar y'_{r'})\leq2^{-2n}$. 4. 
If $(Y,e)$ is a compact metric space and $f:[0;1]\to Y$ such that $f\circ\sigma:{\operatorname{dom}}(\sigma)\subseteq{\mathcal{C}}\to Y$ has modulus of continuity $2\nu$, then $f$ has modulus of continuity $\nu$. <!-- --> 1. Note that appropriate choice of $\tilde b_{0},\tilde b_{1},\ldots\in\{-1,0,1\}$ yields precisely any possible value $[-1/2;+1/2]\ni\sum_{m=2}^\infty \tilde b_{m-2} \cdot 2^{-m}$; hence $\sigma$ is total and surjective. In one worst case, changing $(b_{2m},b_{2m+1})={\textup{\texttt{0}}\xspace}{\textup{\texttt{0}}\xspace}$ (encoding the signed digit $-1$) to $(b'_{2m},b'_{2m+1})={\textup{\texttt{1}}\xspace}{\textup{\texttt{0}}\xspace}$ (encoding $+1$) for all $m\geq n$ changes the real number $r$ from Equation (\[e:SignedDigit\]) to $r'=r+\sum\nolimits_{m\geq n} 2 \cdot 2^{-m-2}=r+2^{-n}$; while $\bar b'$ agrees with $\bar b$ up to position $2n-1$: This asserts $\kappa(n)=2n$ to be a modulus of continuity and, in view of $[0;1]$ having entropy $\eta(n)=n-1$, establishes Condition (i) of Definition \[d:Admissible\]b). 2. Similarly to the proof of Example \[x:Dyadic\], for every $n$ and every $\bar x|_{<\nu(n+1)}$ with $\bar x\in{\operatorname{dom}}(\zeta)$, consider the compact set $\zeta\big[\bar x|_{<\nu(n+1)}\circ{\mathcal{C}}\big]\subseteq[0;1]$: Having diameter $\leq2^{-(n+1)}$ by the definition of $\nu$, it is contained in $[r_n-2^{-n-2};r_n+2^{-n-2}]$ for $r_n:=\big(\min\zeta\big[\bar x|_{<\nu(n+1)}\circ{\mathcal{C}}\big]+\max\zeta\big[\bar x|_{<\nu(n+1)}\circ{\mathcal{C}}\big]\big)/2\in[0;1]$; hence $|r_{n+1}-r_n|\leq2^{-n-2}$.
Now let $r'_1:=\tfrac{1}{2}$ capture the constant term in Equation (\[e:SignedDigit\]) such that $|r_1-r'_1|\leq\tfrac{1}{4}$ with $r_1\in\big(\tfrac{1}{4};\tfrac{3}{4}\big)$; and for $n=2,3,\ldots$ inductively append one additional signed digit $$2b_{2n-4}+b_{2n-3}-1\;=\;\tilde b_{n-2}\;:=\;\lfloor 2^{n}\cdot(\underbrace{r_n-r'_{n-1}}_{\leq\pm3\cdot2^{-n-1}} )\rceil\in\{-1,0,+1\}$$ such that $r'_n\;:=\;\tfrac{1}{2}+\sum\nolimits_{m=2}^{n} \tilde b_{m-2} \cdot 2^{-m}$ again satisfies $|r_n-r'_n|\leq2^{-n-1}$ and $|r_{n+1}-r'_n|\leq|r_{n+1}-r_n|+|r_n-r'_n|\leq3\cdot2^{-n-2}$ and $|r-r'_n|\leq|r-r_n|+|r_n-r'_n|\leq2^{-n}$ for $r=\zeta(\bar x)\in\zeta\big[\bar x|_{<\nu(n+1)}\circ{\mathcal{C}}\big]$: Hence $F(\bar x):=(b_0,\ldots b_{2n-4},b_{2n-3},\ldots)$ is a $\sigma$-name of $r$, and the thus defined function $F$ has modulus of continuity $2n\mapsto\nu(n+1)$. 3. To $r\in [0;1]$ and $n\in{\mathbb{N}}$ consider signed digits $\tilde b_0,\ldots \tilde b_{n-2}\in\{-1,0,1\}$ and $r'_n\;:=\;\tfrac{1}{2}+\sum\nolimits_{m=2}^{n} \tilde b_{m-2} \cdot 2^{-m}$ with $|r-r'_n|\leq2^{-n-1}$ as in (ii). As in (i), appropriate choice of $\tilde b_{n-1},\tilde b_{n},\ldots\in\{-1,0,1\}$ yields any possible value $[-2^{-n};+2^{-n}]\ni\sum_{m=n+1}^\infty \tilde b_{m-2} \cdot 2^{-m}$; hence every $r'\in[0;1]$ with $|r-r'|\leq2^{-n}$ admits a signed binary expansion $r'=\tfrac{1}{2}+\sum_{m=1}^{\infty} \tilde b_{m-2}\cdot2^{-m}$ extending $(\tilde b_0,\ldots \tilde b_{n-2})$, and $\sigma$-name $\bar y_{r'}$ coinciding on the first $2n$ binary symbols. 4. Applying (iii) to $n:=\nu(m)$, the hypothesis implies $e\big(f(r),f(r')\big)=e\big(f\circ\sigma(\bar y_r),f\circ\sigma(\bar y'_{r'})\big)\leq 2^{-m}$.
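The inductive digit extraction can be illustrated in exact arithmetic. The sketch below (helper names are ours) extracts signed digits of a *known* real $r$ directly, i.e., it takes $r_n:=r$ rather than midpoints of $\zeta$-images as the proof does; the maintained invariant $|r-s_m|\leq2^{-m-1}$ mirrors $|r_n-r'_n|\leq2^{-n-1}$ above:

```python
from fractions import Fraction

def signed_digits(r, N):
    """First N signed digits d_m in {-1,0,1} with
    r = 1/2 + sum_m d_m * 2^-(m+2)  (greedy/rounding extraction)."""
    s = Fraction(1, 2)  # the constant term of the expansion
    digits = []
    for m in range(N):
        t = (r - s) * 2 ** (m + 2)      # lies in [-2, 2] by the invariant
        d = max(-1, min(1, round(t)))   # nearest signed digit, clipped
        digits.append(d)
        s += Fraction(d, 2 ** (m + 2))
        assert abs(r - s) <= Fraction(1, 2 ** (m + 2))  # invariant
    return digits, s

digits, approx = signed_digits(Fraction(3, 7), 10)
print(digits)                                            # digits in {-1,0,1}
print(abs(Fraction(3, 7) - approx) <= Fraction(1, 2 ** 11))  # True
```

Since each appended digit halves the error bound, $N$ digits pin the value down to $2^{-N-1}$, matching the linear (in the entropy) modulus of continuity claimed in Example \[x:SignedDigit\].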
The signed binary representation renders real addition computable by a finite-state transducer: ![image](transducer){width="90.00000%"} Starting off in state [`C`]{}, in each round $\#n=0,1,\ldots$ it reads the next signed digits $a_{n},b_{n}\in\{-1,0,1\}$ in the respective expansions of real arguments $x=\sum_n a_n2^{-n}$ and $y=\sum_n b_n2^{-n}$, and follows that edge whose *first* label agrees with $a_n+b_n$ while outputting the *second* label $c_{n-2}\in\{-1,0,1\}$ of said edge such that $x+y=\sum_n c_n2^{-n}$. The transducer works by storing, in each state, the value accumulated from previous inputs but not yet output. The signed-digit expansion’s modulus of continuity leaves a constant-factor gap to the entropy, attained by the binary expansion. One can trade between both, namely permit ${\textup{\texttt{\={1}}}\xspace}$ only at asymptotically fewer positions (i) while incurring possible ‘carry ripples’ between them over asymptotically longer ranges (ii): \[x:Gleb\] Fix a strictly increasing function $\varphi:{\mathbb{N}}\to{\mathbb{N}}$ with $\varphi(0)=0$. Representation $\sigma_\varphi$ ‘interpolates’ between Examples \[x:Binary\] and \[x:SignedDigit\] by considering signed binary expansions $\sum\limits_{m=1}^{\infty} \tilde c_{m-1} \cdot 2^{-m}$ with $\tilde c_{m}\in\{{\textup{\texttt{\={1}}}\xspace},{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$ for every $m\in{\operatorname{range}}\varphi=\varphi[{\mathbb{N}}]$ but $\tilde c_m\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$ for all $m\in{\mathbb{N}}\setminus\varphi[{\mathbb{N}}]$. Each $\tilde c_m$ with $m\in{\mathbb{N}}\setminus\varphi[{\mathbb{N}}]$ is encoded as one bit, each $\tilde c_m$ with $m\in\varphi[{\mathbb{N}}]$ as two bits. Thus $\sigma_{{\operatorname{id}}}=\sigma$ recovers Example \[x:SignedDigit\]. 1. $\sigma_\varphi$ is surjective and has modulus of continuity $n\mapsto n+{{\varphi}^{\overline{-1}}}(n)$. 2.
There exists a mapping $F_\varphi:{\operatorname{dom}}(\sigma)\to{\operatorname{dom}}(\sigma_\varphi)$ with modulus of continuity $2\varphi\circ{{({\operatorname{id}}+\varphi)}^{\overline{-1}}}$ such that $\sigma=\sigma_\varphi\circ F_\varphi$ holds. In particular $\sigma_{n^2}:=\sigma_{n\mapsto n^2}$ has modulus of continuity $n+{\mathcal{O}}(\sqrt{n})$; and to every partial function $\zeta:\subseteq{\mathcal{C}}\to [0;1]$ with modulus of continuity $\nu$ there exists a mapping $F_{n^2}:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\sigma_{n^2})$ with modulus of continuity $n\mapsto\nu(n+1)+{\mathcal{O}}\big(\sqrt{\nu(n+1)}\big)$ such that $\zeta=\sigma_{n^2}\circ F_{n^2}$. Note that $\varphi(n):=2^n$ has ${{({\operatorname{id}}+\varphi)}^{\overline{-1}}}(n)\geq \log_2(n)+1$ infinitely often and therefore $\varphi\circ{{({\operatorname{id}}+\varphi)}^{\overline{-1}}}(n)\geq 2\cdot n$: yielding a linear reduction $\zeta{\preccurlyeq_{\rm O}}\sigma_\varphi$, but no better. 1. Similarly to the proof of Example \[x:SignedDigit\]i), the first $n$ digits $\tilde c_0,\ldots \tilde c_{n-1}$ of an expansion fix the value up to absolute error $<2^{-n}$. Differing from Example \[x:SignedDigit\], this initial segment of the expansion occupies not $2n$ but $n+{{\varphi}^{\overline{-1}}}(n)$ bits since ‘signed’ digits are permitted only at the ${{\varphi}^{\overline{-1}}}(n)$ positions $\varphi[{\mathbb{N}}]\cap\{0,\ldots n-1\}$. 2. We describe a transformation $F_\varphi$ converting a given signed-digit expansion $r=\tfrac{1}{2}+\sum\limits_{m=1}^{\infty} \tilde b_{m-2} \cdot 2^{-m}$ with $\tilde b_m\in\{{\textup{\texttt{\={1}}}\xspace},{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$ to the required form $r=\sum\limits_{m=1}^{\infty} \tilde c_{m-1} \cdot 2^{-m}$ with $\tilde c_m\neq{\textup{\texttt{\={1}}}\xspace}$ except for positions $m\in\varphi[{\mathbb{N}}]$.
Reflecting the constant term in Equation (\[e:SignedDigit\]), initially let $c_0:={\textup{\texttt{1}}\xspace}$, tentatively. Now iteratively for $k=1,2,\ldots$ re-code the signed integer $$\begin{aligned} -2^{\varphi(k)}+1 &=& 0 \:- 2^{\varphi(k)-1} \:- 2^{\varphi(k)-2} \:-\ldots\: -2 \:-1 \\ &\leq& 2^{\varphi(k)}\cdot c_{\varphi(k-1)}\:+\: 2^{\varphi(k)-1}\cdot \tilde b_{\varphi(k-1)} \:+\: 2^{\varphi(k)-2}\cdot \tilde b_{\varphi(k-1)+1} \:+\ldots \\ && \ldots+\: 2 \cdot \tilde b_{\varphi(k)-2} \:+\: \tilde b_{\varphi(k)-1} \\ &=:& 2^{\varphi(k)}\cdot \tilde c_{\varphi(k-1)} \:+\: 2^{\varphi(k)-1}\cdot \tilde c_{\varphi(k-1)+1} \:+\: 2^{\varphi(k)-2}\cdot \tilde c_{\varphi(k-1)+2} \:+\ldots \\ && \ldots+\: 2 \cdot \tilde c_{\varphi(k)-1} \:+\: c_{\varphi(k)}\end{aligned}$$ uniquely with $\tilde c_{\varphi(k-1)}\in\{{\textup{\texttt{\={1}}}\xspace},{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$ and $\tilde c_{\varphi(k-1)+1},\ldots \tilde c_{\varphi(k)-1},c_{\varphi(k)}\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$: the latter again only tentatively. Thus the $\varphi(k)+k$ bits of $(\tilde c_0,\ldots \tilde c_{\varphi(k)-1})$ depend precisely on the $2\varphi(k)$ bits of $(\tilde b_0,\ldots \tilde b_{\varphi(k)-1})$: the transformation $F_\varphi$ on Cantor space thus has modulus of continuity $\varphi(k)+k\mapsto2\varphi(k)$ for all $k\in{\mathbb{N}}$, and $2\varphi\circ{{({\operatorname{id}}+\varphi)}^{\overline{-1}}}$ in general. Abstract Examples {#ss:Topology} ----------------- This subsection collects some properties, relations, and examples of moduli of continuity and entropies of spaces. \[f:Topology\] 1. Every compact metric space $(X,d)$ can be covered by finitely many open balls ${\operatorname{B}}(x,r)$; therefore its entropy $\eta$ is well-defined. If $X$ is infinite then $\eta\in{\operatorname{Reg}}$. 2. 
Every continuous function $f:X\to Y$ between compact metric spaces $(X,d)$ and $(Y,e)$ is uniformly continuous and therefore has a modulus $\mu$ of continuity. 3. Lipschitz-continuous functions have moduli of continuity $\mu(n)=n+{\mathcal{O}}(1)$, and vice versa. Hölder-continuous functions have linear moduli of continuity $\mu(n)={\mathcal{O}}(n)$, and vice versa. 4. Proceeding from metric $d$ on $X$ to a metric $d'$ with $d'\leq2^{-n}$ whenever $d\leq2^{-\nu(n)}$ changes the entropy $\eta$ to $\eta'\leq\eta\circ\nu$. Additionally proceeding from $e$ on $Y$ to $e'$ satisfying $e'\leq2^{-\kappa(n)}\Rightarrow e\leq2^{-n}$ turns a modulus of continuity $\mu$ of $f:X\to Y$ into $\mu'$ with $\mu\leq\nu\circ\mu'\circ\kappa$. \[x:Entropy\] 1. The real unit interval $[0;1]$ has entropy $\eta_{[0;1]}(n)=n-1$ for all integers $n\geq1$. Cantor space has entropy $\eta_{{\mathcal{C}}}={\operatorname{id}}$. The Hilbert Cube ${\mathcal{H}}=\prod_{j\geq0}[0;1]$ with metric ${d_{{\mathcal{H}}}}(\bar x,\bar y)=\sup_j |x_j-y_j|/2^j$ has entropy $\eta_{{\mathcal{H}}}(n)=\Theta(n^2)$. 2. Let compact $(X,d)$ and $(Y,e)$ have entropies $\eta_X$ and $\eta_Y$, respectively. Then the entropy $\eta_{X\times Y}$ of compact $\big(X\times Y,\max\{d,e\}\big)$ satisfies $$\forall n: \quad \eta_X(n)+\eta_Y(n) \;\leq\; \eta_{X\times Y}(n+1)+1 \;\leq\; \eta_X(n+1)+\eta_Y(n+1)+1 \enspace .$$ 3. Let compact $(X_j,d_j)$ all have diameter $\leq1$ and entropy $\eta_j$, $j\in{\mathbb{N}}$. Then $\big(\prod_jX_j,\sup_j d_j/2^j\big)$ is compact and has entropy $\eta$ satisfying $$\forall n: \quad \sum\nolimits_{j\leq n} \eta_j(n-j) \;\leq\; \eta(n+1)+\lceil n/2\rceil \;\leq\; \sum\nolimits_{j\leq n} \eta_j(n+1-j) +\lceil n/2\rceil \enspace .$$ For $X_j\equiv[0;1]$ this recovers the Hilbert Cube $\prod_j[0;2^{-j}]$. 4. Let $X$ be a compact space with metric $d\leq1$ and entropy $\eta$. 
Then $D(x, y) := 1/\big(\log_2 2/d(x,y)\big)\leq1$ constitutes a topologically equivalent metric yet inducing entropy $H(n) = \eta(2^n - 1)$. 5. Fix an arbitrary non-decreasing unbounded $\varphi:{\mathbb{N}}\to{\mathbb{N}}$ and re-consider Cantor space, now equipped with $d_\varphi:(\bar a,\bar b)\mapsto 2^{-\varphi(\min\{m:a_m\neq b_m\})}\in[0;1]$. This constitutes a metric, topologically equivalent to ${d_{{\mathcal{C}}}}=d_{{\operatorname{id}}}$ but with entropy $\eta_\varphi={{\varphi}^{\underline{-1}}}$. 6. For $K$ a closed subset of compact $(X,d)$, it holds $\eta_{X,K}\leq\eta_{X,X}$ but not necessarily $\eta_K\leq\eta_X$. The image $Z:=f[X]\subseteq Y$ has entropy $\eta_{Z}\leq \eta_X\circ\mu$, where $\mu$ denotes a modulus of continuity of $f$. Every connected compact metric space $X$ has entropy at least linear $\eta(n)\geq n+\Omega(1)$. 7. Fix a compact metric space $(X,d)$ with entropy $\eta$. Let ${\mathcal{K}}(X)$ denote the set of non-empty closed subsets of $X$ and equip it with the Hausdorff metric $D(V,W)=\max\big\{\sup\{ d_V(w) :w\in W\},\sup\{d_W(v):v\in V\}\big\}$, where $d_V:X\ni x\mapsto\inf\{ d(x,v) : v\in V\}$ denotes the distance function. Then $\big({\mathcal{K}}(X),D\big)$ constitutes a compact metric space [@Wei00 Exercise 8.1.10]. It has entropy $H\leq2^{\eta}$ with $2^{\eta(n)-1}<H(n+1)$. 8. Fix a connected compact metric space $(X,d)$ with entropy $\eta$, and consider the convex metric space $X':={\mathcal{C}}(X,[0;1])$ of continuous real functions equipped with the supremum norm $|f|=\sup_{x\in X} |f(x)|$. Its subset $X'_1:={\operatorname{Lip}}_1(X,[0;1])$ of non-expansive functions $f:X\to[0;1]$ is compact by Arzelá-Ascoli; it has relative entropy $\eta'_1(n):=\eta_{X',X'_1}(n)=\Theta\big(2^{\eta(n\pm{\mathcal{O}}(1))}\big)$; more precisely: $2^{\eta(n-1)-1} \;<\; \eta'_1(n) \;\leq\;{\mathcal{O}}\big(2^{\eta(n+2)}\big)$. 
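The entropy values quoted in Item a) follow from minimal covering numbers, which are simple enough to tabulate. A hedged numeric sketch (helper name ours):

```python
# Entropy from a minimal covering number N: smallest eta with
# 2^eta >= N, so that 2^eta balls suffice while 2^(eta-1) do not.

def eta_from_cover(N):
    return max(0, (N - 1).bit_length())

# [0;1]: closed balls of radius 2^-n are intervals of length 2^(1-n),
# so the minimal cover uses 2^(n-1) of them (for n >= 1):
print([eta_from_cover(2 ** (n - 1)) for n in range(1, 8)])  # eta(n) = n-1

# Cantor space: a ball of radius 2^-n is a cylinder fixing the first n
# symbols, so 2^n balls are needed:
print([eta_from_cover(2 ** n) for n in range(7)])           # eta(n) = n
```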
Item (d) yields spaces with asymptotically large entropy; and Item (e) does similarly for small entropy, namely of a totally disconnected space in view of Example \[x:Entropy\]f). The connection between modulus of continuity and entropy (Item f) had been observed in [@SteinbergDisse Lemma 3.1.13]. We emphasize that Item h) refers to $X'_1$ as a subset of $X'$, not as a metric space of its own: see also Question \[q:Future\]e) below. According to Item (d) of the following Lemma, analyses of function spaces ${\mathcal{C}}_\mu(X,[0;1])$ may indeed w.l.o.g. suppose $\mu={\operatorname{id}}$, i.e., consider the non-expansive case. \[l:Modulus\] For non-decreasing unbounded $\mu:{\mathbb{N}}\to{\mathbb{N}}$ let $$\omega_\mu \;:\; [0;\infty) \;\ni\; t \;\mapsto\; \inf\Big\{ \sum\limits_{j=1}^J 2^{-n_j} \::\: J,n_1,\ldots,n_J\in{\mathbb{N}}, \: t\leq \sum\limits_{j=1}^J 2^{-\mu(n_j)} \Big\} \;\in\; [0;\infty)$$ 1. It holds $\omega_\mu(0)=0$ and $\omega_\mu(t)>0$ for $t>0$. $\omega_\mu$ is subadditive: $\omega_\mu(s+t)\leq\omega_\mu(s)+\omega_\mu(t)$. $\omega_\mu$ has modulus of continuity $\mu$. 2. If $\mu$ is strictly increasing, then $\mu(n)=\min\big\{ m\in{\mathbb{N}}\::\: \omega_\mu(2^{-m})\leq2^{-n}\big\}$. 3. For a compact *convex* metric space $(X,d)$ and any $x,y\in X$, there exists an isometry $\imath:[0;d(x,y)]\to X$ with $\imath(0)=x$ and $\imath\big(d(x,y)\big)=y$. 4. If $(X,d)$ is compact convex and $\mu$ a modulus of continuity of $f:X\to{\mathbb{R}}$, then $|f(x)-f(x')|\leq \omega_\mu\big(d(x,x')\big)$ for all $x,x'\in X$.\ In particular ${\mathcal{C}}_\mu\big((X,d),{\mathbb{R}}\big)={\operatorname{Lip}}_1\big((X,\omega_\mu\circ d),{\mathbb{R}}\big)$ holds for every strictly increasing $\mu$. Recall that a (not necessarily linear) metric space $X$ is called *convex* if, to any distinct $x,y\in X$, there exists a $z\in X\setminus\{x,y\}$ with $d(x,y)=d(x,z)+d(z,y)$.
Examples include compact convex subsets of Euclidean space with its inherited metric, but also connected compact subsets when equipped with the intrinsic (=shortest-path) distance, while Cantor space is not convex. \[x:Topology\] 1. The function $(0;1]\ni t\mapsto 1/\ln(e/t)\in(0;1]$ extends uniquely continuously to $0$ and has an exponential, but no polynomial, modulus of continuity. 2. Picking up on Example \[x:Entropy\]c), let $\xi_j:\subseteq{\mathcal{C}}\twoheadrightarrow X_j$ have modulus of continuity $\kappa_j$ and fix some injective ‘pairing’ function $${\mathbb{N}}\times{\mathbb{N}}\;\ni\; (n,m) \;\mapsto\;\langle n,m\rangle $$ such as Cantor’s $\langle n,m\rangle\;=\;(n+m)\cdot(n+m+1)/2+n$. Then $$\prod\nolimits_j \xi_j:\subseteq{\mathcal{C}}\;\ni\;\bar b\;\mapsto\; \Big(\xi_j\big(b_{\langle j,0\rangle}, \ldots b_{\langle j,n\rangle},\ldots\big)\Big)_j \;\in\;\prod\nolimits_j X_j$$ has modulus of continuity $n\mapsto\sup_{j<n} \langle j,\kappa_j(n-j)\rangle$. 3. If $\xi$ is a representation of $X$ with modulus of continuity $\mu$, then the following $2^\xi$ is a representation of ${\mathcal{K}}(X)$ with modulus of continuity $m\mapsto 2^{\mu(m)+1}-1$: $(b_0,b_1,\ldots b_m,\ldots)\in{\mathcal{C}}$ is a $2^\xi$-name of $A\in{\mathcal{K}}(X)$ iff, for every $n\in{\mathbb{N}}$ and every $\vec v\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{<\mu(n)}$ it holds: $$\begin{aligned} \label{e:Hausdorff} (\vec v\circ{\mathcal{C}})\cap{\operatorname{dom}}(\xi)\neq\emptyset \;\wedge\; b_{{\mathrm{bin}}(\vec v)}={\textup{\texttt{1}}\xspace}&\Rightarrow& \xi[\vec v\circ{\mathcal{C}}]\cap{\overline{\operatorname{B}}}(A,2^{-n})\neq\emptyset \\ \nonumber (\vec v\circ{\mathcal{C}})\cap{\operatorname{dom}}(\xi)\neq\emptyset \;\wedge\; b_{{\mathrm{bin}}(\vec v)}={\textup{\texttt{0}}\xspace}&\Rightarrow& \xi[\vec v\circ{\mathcal{C}}]\cap{\overline{\operatorname{B}}}(A,2^{-n-1})=\emptyset\end{aligned}$$ where $\vec v\circ{\mathcal{C}}\;:=\; \{\vec v\bar w: \bar
w\in{\mathcal{C}}\} \;\subseteq\; {\mathcal{C}}$ and ${\mathrm{bin}}(v_0,\ldots v_{n-1})=v_0+2v_1+4v_2+\cdots+2^{n-1}v_{n-1}+2^n-1$ and ${\overline{\operatorname{B}}}(A,r):=\bigcup_{a\in A}{\overline{\operatorname{B}}}(a,r)$. Note that $\mu\leq{\mathcal{O}}(\eta)$ implies $2^{\mu}\leq{\mathcal{P}}(2^\eta)$: as reflected in Theorem \[t:Admissible\]d) and Theorem \[t:Functions\] below. According to Example \[x:Entropy\]c), Example \[x:Topology\]b) does *not* preserve linear admissibility already in case of spaces $X_j$ with quadratic entropy: a more sophisticated construction is needed in Theorem \[t:Cartesian\].

Proofs
------

1. $2^\xi$ is a representation, as $A\in{\mathcal{K}}(X)$ can be recovered from any name $\bar b$: On the one hand, for every $\xi$-name $\bar v$ of $a\in A$ and every $n\in{\mathbb{N}}$, $\displaystyle b_{{\mathrm{bin}}(v_0,\ldots v_{\mu(n)-1})}={\textup{\texttt{1}}\xspace}$; on the other hand, for every $\xi$-name $\bar v$ of $a\not\in A$, ${\operatorname{B}}(a,2^{-n})\cap A=\emptyset$ implies $\displaystyle b_{{\mathrm{bin}}(v_0,\ldots v_{\mu(n)-1})}={\textup{\texttt{0}}\xspace}$. Since ${\mathrm{bin}}(v_0,\ldots v_{\mu(n)-1})<2^{\mu(n)+1}-1$, this also establishes $2^{\mu+1}-1$ as modulus of continuity of $2^\xi$. <!-- --> 1.
Regarding subadditivity, $\omega_\mu(s+t)=$ $$\begin{aligned} &=& \inf \Big\{ \sum\limits_{j=1}^J 2^{-n_j} + \sum\limits_{k=1}^K 2^{-m_k} \::\: J,K,n_1,\ldots,n_J,m_1,\ldots,m_K\in{\mathbb{N}}, \\ && \phantom{\inf } \underbrace{\phantom{\Big\{} \qquad\qquad\qquad s+t\leq \sum\limits_{j=1}^J 2^{-\mu(n_j)} + \sum\limits_{k=1}^K 2^{-\mu(m_k)} \Big\}}_{\rotatebox[origin=c]{270}{$\supseteq$}} \\ &\leq& \inf\overbrace{\Big\{ \sum\limits_{j=1}^J 2^{-n_j} +\sum\limits_{k=1}^K 2^{-m_k} \::\: J,K,n_1,\ldots,n_J,m_1,\ldots,m_K\in{\mathbb{N}}, } \\ && \phantom{\inf \Big\{ } \qquad s\leq \sum\nolimits_{j=1}^J 2^{-\mu(n_j)} \;\wedge\; t\leq \sum\nolimits_{k=1}^K 2^{-\mu(m_k)} \Big\} \\ &=& \omega_\mu(s)\;+\; \omega_\mu(t)\end{aligned}$$ By definition ($J:=1$) it holds $0\leq\omega_\mu(t)\leq 2^{-n}$ for $t\leq2^{-\mu(n)}$, and in particular $\omega_\mu(0)=0$. By subadditivity and whenever $0\leq\delta\leq2^{-\mu(n)}$, we have both $\omega_\mu(t)\leq\omega_\mu(t+\delta)\leq\omega_\mu(t)+\omega_\mu(\delta)\leq\omega_\mu(t)+2^{-n}$ and $\omega_\mu(t)-2^{-n}\leq\omega_\mu(t)-\omega_\mu(\delta)=\omega_\mu(t-\delta+\delta)-\omega_\mu(\delta) \leq\omega_\mu(t-\delta)\leq\omega_\mu(t)$. 2. For every $t\leq2^{-\mu(n)}$ it holds $\omega_\mu(t)\leq2^{-n}$ by definition, and hence $\tilde\mu(n):=\min\big\{ m\in{\mathbb{N}}\::\: \omega_\mu(2^{-m})\leq2^{-n}\big\}\leq \mu(n)$. Conversely for $m\leq n_1,\ldots,n_J\in{\mathbb{N}}$, $$\sum\nolimits_j 2^{-\mu(n_j)} \;\leq\; \sum\nolimits_j 2^{-\mu(m)-n_j+m} \;=\; 2^{-\mu(m)}\cdot 2^{m}\cdot\sum\nolimits_j 2^{-n_j}$$ from strict monotonicity $\mu(n_j)=\mu(m+n_j-m)\geq\mu(m)+(n_j-m)$ by induction. So $\sum_j 2^{-\mu(n_j)}\geq 2^{-\mu(m)}$ implies $\sum_j 2^{-n_j}\geq2^{-m}$ and $\omega_\mu\big(2^{-\mu(m)}\big)\geq2^{-m}$ and $\tilde\mu(n)\geq\mu(n)$. 3. Using transfinite induction and completeness, [@Kaplansky Exercise 5.1.17] constructs a $z\in X$ with $d(x,z)=d(z,y)=d(x,y)/2$.
Now iterating with both $(x,z)$ and $(z,y)$ in place of $(x,y)$ yields a sequence of refinements $z_n\in X$, $n=0,\ldots,N=2^k$, with $z_0=x$ and $z_N=y$ and $d(z_n,z_{n+1})=d(x,y)/N$. Again by completeness, $\imath_k(t):=z_{\min\{n\,:\: n\cdot d(x,y)/2^k\geq t\}}$ thus converges uniformly to the claimed isometry. 4. For $t:=d(x,x')$, and to any $J\in{\mathbb{N}}$ and $n_1,\ldots,n_J\in{\mathbb{N}}$ with $t\leq \sum\nolimits_{j=1}^J 2^{-\mu(n_j)}$, c) yields $x=:x_0,x_1,\ldots,x_J=x'\in X$ with $d\big(x_{j-1},x_j\big)\leq2^{-\mu(n_j)}$. Hence $$\big|f(x)-f(x')\big| \;\leq\; \sum\nolimits_{j=1}^J \big|f(x_{j-1})-f(x_j)\big| \;\leq\; \sum\nolimits_{j=1}^J 2^{-n_j}$$ and taking the infimum over all such decompositions yields $\big|f(x)-f(x')\big|\leq\omega_\mu(t)$. \[f:Extension\] Fix a compact metric space $(X,d)$, non-empty $Z\subseteq X$ and $L>0$. For $L$-Lipschitz $f:Z\to{\mathbb{R}}$, the functions $$\label{e:Extension} {{f}_*}:x\;\mapsto\; \sup\nolimits_{z\in Z} f(z)-L\cdot d(z,x), \quad {{f}^*}:x\;\mapsto\;\inf\nolimits_{z\in Z} f(z)+L\cdot d(z,x)$$ extend $f$ to $X$ while preserving $L$-Lipschitz continuity. Moreover every $L$-Lipschitz extension $\tilde f:X\to{\mathbb{R}}$ of $f$ to $X$ satisfies ${{f}_*}\leq\tilde f\leq{{f}^*}$, where ${{f}^*}-{{f}_*}\leq 2L |d_Z|=2L\sup_x d_Z(x)$. The extension operator ${\operatorname{Lip}}_L(Z,{\mathbb{R}})\ni f\mapsto {{f}^*_*}:=({{f}_*}+{{f}^*})/2\in{\operatorname{Lip}}_L(X,{\mathbb{R}})$ is a well-defined isometry of compact metric spaces w.r.t. the supremum norm. $\big({{f}_*},{{f}^*}\big)$ is known as a *McShane-Whitney pair* [@Petrakis]. For the purpose of self-containment, we include a proof: W.l.o.g. $L=1$. For $x\in Z$, choosing $z:=x$ shows ${{f}_*}(x)\geq f(x)$, while $f(z)-d(z,x)\leq f(z)-|f(z)-f(x)|\leq f(x)$ for every $z\in Z$ implies ${{f}_*}(x)\leq f(x)$.
Moreover, for every $z\in Z$, we have $$-\sup\big\{f(z')-d(z',x'):z'\in Z\big\} \;\leq\; -f(z)+d(z,x') \;\leq\; -f(z)+d(z,x)+d(x,x')$$ and hence $${{f}_*}(x)-{{f}_*}(x') \;\leq\; \sup\big\{ f(z)-d(z,x) - f(z)+d(z,x)+d(x,x'): z\in Z\big\} \;=\; d(x,x') .$$ The estimates for ${{f}^*}$ proceed similarly. For $x\in X$ and with $z,z',w,w'$ ranging over $Z$, $\displaystyle \big({{f}_*}(x)+{{f}^*}(x)\big)-\big({{g}_*}(x)+{{g}^*}(x)\big) \;=$ $$\begin{aligned} &=&\sup\limits_z f(z)\!-\!d(z,x) \:-\: \sup\limits_w g(w)\!-\!d(w,x) \:-\: \inf\limits_{w'} g(w')\!+\!d(w',x) \:+\: \inf\limits_{z'} f(z')\!+\!d(z',x) \\ &=& \sup_z f(z)\!-\!d(z,x) \:+ \inf_w -g(w)+d(w,x) \:+ \sup_{w'} -g(w')-d(w',x) \:+ \inf_{z'} f(z')+d(z',x) \\ &\leq& \sup\limits_z f(z)-d(z,x) \:-\: g(z)+d(z,x) \:+\: \sup\limits_{w'} -g(w')-d(w',x) \:+\: f(w')+d(w',x) \\ &=& 2\cdot \sup_z f(z)-g(z)\;\leq\; 2\cdot\sup\nolimits_z |g(z)-f(z)| \enspace . \qed \end{aligned}$$ Let ${\mathcal{H}}_X(n)$ denote the least number of closed balls of radius $2^{-n}$ covering $X$, so that $\eta_X(n)=\lceil\log_2{\mathcal{H}}_X(n)\rceil$. Let ${\mathcal{C}}_X(n)$ denote the largest number of points in $X$ of pairwise distance $>2^{-n}$, also known as *capacity*. (Since $X$ is not an integer function, there is no danger of confusing this notation with that of a space of continuous functions…) Then ${\mathcal{C}}_X(n)\leq{\mathcal{H}}_X(n+1)$: To cover $X$ requires covering the ${\mathcal{C}}_X(n)$ points as above; but any closed ball of radius $2^{-(n+1)}$ can contain at most one of those points having distance $>2^{-n}$. On the other hand ${\mathcal{H}}_X(n)\leq{\mathcal{C}}_X(n)$, since balls of radius $2^{-n}$ whose centers form a maximal set $X_n$ of pairwise distance $>2^{-n}$ cover $X$: if they missed a point, that point would have distance $>2^{-n}$ to all centers in $X_n$ and could thus be added to $X_n$: contradicting its maximality. 1.
Cover $[0;1]$ by $2^{n-1}$ closed intervals $I_{n,j}:=\big[j\cdot2^{-(n-1)};(j+1)\cdot2^{-(n-1)}\big]$, $j=0,\ldots 2^{n-1}-1$, of radius $2^{-n}$ around centers $(2j+1)2^{-n}$: optimally.\ Cover ${\mathcal{C}}$ by $2^n$ closed balls $\vec x\circ{\mathcal{C}}$ of radius $2^{-n}$ around centers $\vec x\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^n$: optimally. Cover ${\mathcal{H}}$ by $2^{n-1}\cdot2^{n-2}\cdots 2\cdot 1=2^{n(n-1)/2}$ closed balls $$I_{n,j_0}\times I_{n-1,j_1} \times \cdots \times I_{1,j_{n-1}} \times \prod\nolimits_{j\geq n} [0;1]$$ of radius $2^{-n}$ with indices ranging as follows: $0\leq j_0<2^{n-1}, \quad 0\leq j_1<2^{n-2}, \quad \cdots \quad j_{n-1}=0$. 2. Obviously ${\mathcal{H}}_{X\times Y}\leq {\mathcal{H}}_X\cdot {\mathcal{H}}_Y$ and ${\mathcal{C}}_{X\times Y}\geq{\mathcal{C}}_X\cdot{\mathcal{C}}_Y$: ${\mathcal{H}}_X(n)\cdot{\mathcal{H}}_Y(n) \leq$ $$\leq\; {\mathcal{C}}_X(n)\cdot{\mathcal{C}}_Y(n) \;\leq\; {\mathcal{C}}_{X\times Y}(n) \;\leq\; {\mathcal{H}}_{X\times Y}(n+1) \;\leq\; {\mathcal{H}}_X(n+1)\cdot{\mathcal{H}}_Y(n+1)$$ Also record $\lceil s\rceil+\lceil t\rceil \geq \lceil s+t\rceil\geq \lceil s\rceil+\lceil t\rceil-1$ for all $s,t>0$. 3. Abbreviating ${\mathcal{H}}_j:={\mathcal{H}}_{X_j}$ and ${\mathcal{C}}_j:={\mathcal{C}}_{X_j}$, we have ${\mathcal{H}}(n)\leq \prod_{j<n} {\mathcal{H}}_j(n-j)$ and ${\mathcal{C}}(n)\geq \prod_{j<n} {\mathcal{C}}_j(n-j)$: note that ${\mathcal{H}}_j(0)=1={\mathcal{C}}_j(0)$ as $X_j$ has diameter $\leq1$. Finally $\sum_{j=0}^{n-1} \lceil t_j\rceil \geq \lceil\sum_{j=0}^{n-1} t_j\rceil \geq \sum_{j=0}^{n-1} \lceil t_j\rceil -\lfloor n/2\rfloor$. 4.
For a counterexample to $\eta_K\leq\eta_X$ consider a circle (or hyper-sphere) with and without its center.\ Regarding the lower bound for connected compact metric spaces, consider $N:=2^{\eta(n)}$ and $x_1,\ldots x_N\in X$ such that balls with centers $x_j$ and radius $2^{-n}$ cover $X$: $X\subseteq\bigcup_{j=1}^N {\overline{\operatorname{B}}}(x_j,2^{-n})$. Consider the finite undirected graph $G_n=(V_n,E_n)$ with vertices $V_n=\{1,\ldots N\}$ and edges $\{i,j\}\in E_n\;:\Leftrightarrow\;{\operatorname{B}}(x_i,2^{-n+1})\cap{\operatorname{B}}(x_j,2^{-n+1})\neq\emptyset$, i.e., whenever the two open balls with centers $x_i,x_j$ and radius $2\cdot2^{-n}$ intersect. This graph is connected: If $I,J\subseteq V_n$ were distinct connected components, then $\bigcup_{i\in I}{\operatorname{B}}(x_i,2^{-n+1})$ and $\bigcup_{i\not\in I}{\operatorname{B}}(x_i,2^{-n+1})$ would be two disjoint non-empty open subsets covering $X$: contradicting connectedness. Therefore any two vertices $i,j\in V_n$ are connected via $\leq N-1$ edges; and for every edge $\{a,b\}$, it holds $d(x_a,x_b)<2^{-n+2}$ by definition of $E_n$: Hence $x_i$ and $x_j$ have metric distance $d(x_i,x_j)$ at most $(N-1)\cdot 2^{-n+2}$; and any two $x,y\in X$ have $d(x,y)\leq N\cdot2^{-n+2}$: requiring $2^{\eta(n)}=N\geq d(x,y)\cdot 2^{n-2}$. 5. Obviously ${\mathcal{H}}_{{\mathcal{K}}(X)}\leq2^{{\mathcal{H}}_X}$ and ${\mathcal{C}}_{{\mathcal{K}}(X)}\geq2^{{\mathcal{C}}_X}$. 6. Fix $n\in{\mathbb{N}}$ and consider a maximal set $X_n\subseteq X$ of $N:={\mathcal{C}}_X(n)$ points of pairwise distance $>2^{-n}$.
There are $2^{{\mathcal{C}}_X(n)}$ different $f:X_n\to\{0,2^{-n}\}$; each is 1-Lipschitz, and extends to ${{f}^*_*}:X\to[0;1]$; and, according to Fact \[f:Extension\], different such $f$ give rise to ${{f}^*_*}$ of mutual supremum distance $\geq2^{-n}$: Hence ${\mathcal{C}}_{X'_1}(n)\geq 2^{{\mathcal{C}}_X(n)}$, and $X'_1\subseteq X'$ has intrinsic entropy $\eta'_1(n)\geq \log_2 {\mathcal{C}}_{X'_1}(n-1) \geq {\mathcal{C}}_X(n-1)\geq {\mathcal{H}}_X(n-1)>2^{\eta(n-1)-1}$.\ Conversely, for any 1-Lipschitz $f:X\to[0;1]$, consider $f'_n:=\lfloor 2^n\cdot f\big|_{X_n}\rceil/2^n$: still $(1+1/2)$-Lipschitz since rounding affects the value by at most $2^{-n-1}$ on arguments of distance $>2^{-n}$. As argued before, maximality of $X_n$ implies that the closed balls around centers $x\in X_n$ of radius $2^{-n}$ cover $X$ (hence $d_{X_n}\leq 2^{-n}$); consequently so do the open balls with radius $2^{-n+1}$. Similarly to the proof of (f), consider the finite undirected and connected graph $G_n=(X_n,E_n)$ with edge $\{x,y\}\in E_n\;:\Leftrightarrow\;{\operatorname{B}}(x,2^{-n+1})\cap{\operatorname{B}}(y,2^{-n+1})\neq\emptyset$. Any vertex $y$ of $G_n$ adjacent to some $x$ has distance $d(x,y)<2^{-n+2}$; and since $f'_n:X_n\to{\mathbb{D}}_n$ is $\tfrac{3}{2}$-Lipschitz, this implies ${\mathbb{D}}_n\ni|f'_n(x)-f'_n(y)|\leq \tfrac{3}{2}\cdot2^{-n+2}$ leaving no more than 13 possible values for $f'_n(x)-f'_n(y)\in\big\{-6\cdot2^{-n},\ldots 0,\ldots +6\cdot2^{-n}\big\}$. Connectedness of $G_n$ with $N$ vertices thus limits the number of different $\tfrac{3}{2}$-Lipschitz $f'_n:X_n\to{\mathbb{D}}_n$ to $\leq(1+2^n)\cdot 13^{N-1}\leq 2^{{\mathcal{O}}({\mathcal{C}}_X(n))}$ in view of (f). And by Fact \[f:Extension\] each such $f'_n$ extends to some $\tfrac{3}{2}$-Lipschitz ${{f'_n}^*_*}:X\to[0;1]$. Moreover $|d_{X_n}|\leq 2^{-n}$ implies $\big|f-{{f_n}^*_*}\big|\leq \big|{{f_n}^*}-{{f_n}_*}\big|/2 \leq \tfrac{3}{2}\cdot 2^{-n}$ for the $\tfrac{3}{2}$-Lipschitz (!) 
extension of the restriction $f_n:=f\big|_{X_n}$. Since $g\mapsto{{g}^*_*}$ is an isometry, this implies $\big|f-{{f'_n}^*_*}\big| \leq \big|f-{{f_n}^*_*}\big|+\big|f_n-f'_n\big| \leq \tfrac{3}{2}\cdot2^{-n}+2^{-n-1}=2^{-n+1}$. The $2^{{\mathcal{O}}({\mathcal{C}}_X(n))}$ closed balls of radius $2^{-n+1}$ around centers ${{f'_n}^*_*}\in{\operatorname{Lip}}_{3/2}(X,[0;1])\subseteq X'$ thus cover ${\operatorname{Lip}}_1(X,[0;1])=X'_1$: $\eta'_1(n-1)=\eta_{X',X'_1}(n-1)\leq {\mathcal{O}}\big({\mathcal{C}}_X(n)\big) \leq {\mathcal{O}}\big({\mathcal{H}}_X(n+1)\big) \leq {\mathcal{O}}\big(2^{\eta(n+1)}\big)$. Concise Standard Representations {#s:Standard} ================================ [@Wei00 Definitions 3.2.2+3.2.7] introduce qualitative admissibility in terms of a *standard* representation which [@Wei00 Lemma 3.2.5] then shows to satisfy properties (i) and (ii) from Fact \[f:Main\]. Here we first recall from [@DBLP:conf/lics/KawamuraS016 Definition 15] the construction of a *concise* standard representation of any fixed compact metric space $(X,d)$ that generalizes Example \[x:Dyadic\]: For each $n$, fix a covering of $X$ by $\leq 2^{\eta(n)}$ balls of radius $2^{-n}$ according to the entropy; assign to each ball a binary string $\vec a_n$ of length $\eta(n)$; then every $x\in X$ can be approximated by the center of some of these balls; finally define a name of $x$ to be such a sequence $(\vec a_n)_{_n}$ of binary strings. Theorem \[t:Polynomial\] establishes that this representation is polynomially admissible, provided the balls’ radius is reduced to $2^{-n-1}$. Subsection \[ss:Linear\] improves the construction to yield a linearly admissible standard representation. \[d:Standard\] Let $(X,d)$ denote a compact metric space with entropy $\eta$. 
For each $n\in{\mathbb{N}}$ fix some partial mapping $\xi_n:\subseteq\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{\eta(n+1)}\to X$ such that $X=\bigcup_{\vec a\in{\operatorname{dom}}(\xi_n)} {\overline{\operatorname{B}}}\big(\xi_n(\vec a),2^{-n\pmb{-1}}\big)$, where ${\overline{\operatorname{B}}}(x,r)=\{x'\in X:d(x,x')\leq r\}$ denotes the closed ball around $x$ of radius $r$. The *standard* representation of $X$ (with respect to the family $\xi_n$ of partial dense enumerations) is the mapping $$\begin{gathered} \label{e:Dyadic} \xi \; :\subseteq {\mathcal{C}}\;\ni\; \big( (\vec a_0) \: (\vec a_1) \: \ldots (\vec a_n) \: \ldots \big) \;\mapsto\; \lim\nolimits_n \xi_n(\vec a_n) \;\in\; X , \\[0.5ex] \nonumber {\operatorname{dom}}(\xi) :=\: \big\{ \big( \:\ldots\: (\vec a_n) \: \ldots\big) \::\: \vec a_n\in{\operatorname{dom}}(\xi_n), \; d\big(\xi_n(\vec a_n),\xi_m(\vec a_m)\big)\:\leq\: 2^{-n}\!+\!2^{-m}\big\}\end{gathered}$$ Fact \[f:Topology\]a) asserts such $\xi_n$ to exist. The real Example \[x:Dyadic\] is[^5] a special case of this definition with $\eta_{[0;1]}(n+1)=n$ according to Example \[x:Entropy\]a) and $$\delta'_n \;:\; \{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^n \;\ni\; \vec a\;\mapsto\; \big(\tfrac{1}{2}+a_0+2a_1+4a_2+\cdots+2^{n-1}a_{n-1}\big)/2^n \enspace .$$ The covering balls’ radius being $2^{-n\pmb{-1}}$ instead of $2^{-n}$ is exploited in the following theorem: \[t:Polynomial\] 1. The standard representation $\xi$ of $(X,d)$ w.r.t. $(\xi_n)$ according to Definition \[d:Standard\] has modulus of continuity $\kappa(n):=\sum_{m=0}^{n} \eta(m+1)$. 2. 
To every partial function $\zeta:\subseteq{\mathcal{C}}\to X$ with modulus of continuity $\nu$ there exists a mapping $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi)$ with modulus of continuity $\mu=\nu\circ\big(1+{{\kappa}^{\underline{-1}}}\big):\kappa(n)\mapsto\nu(n+1)$ such that $\zeta=\xi\circ F$ holds.\ In particular $\xi$ is polynomially admissible, provided that the entropy grows at least with some positive power $\eta(n)\geq\Omega(n^{\epsilon})$, $\epsilon>0$. 3. To every $m\in{\mathbb{N}}$ and every $x,x'\in X$ with $d(x,x')\leq2^{-m-1}$, there exist $\xi$-names $\bar y_x$ and $\bar y'_{x'}$ of $x=\xi(\bar y_x)$ and $x'=\xi(\bar y'_{x'})$ with ${d_{{\mathcal{C}}}}(\bar y_x,\bar y'_{x'})\leq2^{-\kappa(m)}$. 4. If $(Y,e)$ is a compact metric space and $f:X\to Y$ such that $f\circ\xi:{\operatorname{dom}}(\xi)\subseteq{\mathcal{C}}\to Y$ has modulus of continuity $\kappa\circ\nu$, then $f$ has modulus of continuity $\nu+1$. Again (ii) strengthens Definition \[d:Admissible\]c) in applying to not necessarily surjective $\zeta$. In view of Lemma \[l:seminv\]c), (iv) can be rephrased as follows: $f\circ\xi$ with modulus of continuity $\nu$ implies $f$ to have modulus of continuity $1+{{\kappa}^{\underline{-1}}}\circ\nu$. However (ii) is *not* saying that $\zeta$ with modulus of continuity $\nu\circ\mu$ yields $F$ with modulus of continuity $n\mapsto\nu(n+1)$. 1. First observe that $\xi$ is well-defined: as compact metric space, $X$ is complete and the dyadic sequence $\xi_n(\vec a_n)\in X$ therefore converges. Moreover $\xi$ is surjective: To every $x\in X$ and $n\in{\mathbb{N}}$ there exists by hypothesis some $\vec a_n\in{\operatorname{dom}}(\xi_n)$ with $x\in{\overline{\operatorname{B}}}\big(\xi_n(\vec a_n),2^{-n-1}\big)$; hence $d\big(\xi_n(\vec a_n),\xi_m(\vec a_m)\big)\leq 2^{-n}+2^{-m}$ and $\lim_n\xi_n(\vec a_n)=x$. 
Furthermore, $\vec a_n$ has binary length $\eta(n+1)$; hence $\big(\vec a_0 \: \ldots \vec a_{n}\big)$ has length $\kappa(n)$ as above; and fixing this initial segment of a $\xi$-name $\bar x$ implies $d\big(\xi(\bar x),\xi_n(\vec a_n)\big)\leq2^{-n}$ by Equation (\[e:Dyadic\]). 2. To every $n$ and each (of the finitely many) $\bar x|_{<\nu(n+1)}$ with $\bar x\in{\operatorname{dom}}(\zeta)$, fix some $\vec a_n=\vec a_n\big(\bar x|_{<\nu(n+1)}\big)\in{\operatorname{dom}}(\xi_n)$ such that $$\xi_n(\vec a_n)\;\in\;\zeta\big[\bar x|_{<\nu(n+1)}\circ{\mathcal{C}}\big]\;\subseteq\;\bigcup\nolimits_{\vec a\in{\operatorname{dom}}(\xi_n)} {\overline{\operatorname{B}}}\big(\xi_n(\vec a),2^{-n-1}\big) \enspace .$$ Then iteratively for $n=0,1,\ldots$ let, similarly to the proof of Example \[x:Dyadic\], $$F_{n}\big(\bar x|_{<\nu(n+1)}\big) \;:=\; F_{n-1}\big(\bar x|_{<\nu(n)}\big) \:\circ\: (\vec a_n) \;\in\; \{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{\kappa(n-1)+\eta(n+1)=\kappa(n)} \enspace .$$ This makes $F(\bar x):=\lim_n F_n\big(\bar x|_{<\nu(n+1)}\big)\in{\mathcal{C}}$ well-defined with modulus of continuity $\mu:\kappa(n)\mapsto\nu(n+1)$. Moreover it holds $F(\bar x)\in{\operatorname{dom}}(\xi)$ and $\xi\big(F(\bar x)\big)=\zeta(\bar x)$ since $\xi_n(\vec a_n),\zeta(\bar x)\in\zeta\big[\bar x|_{<\nu(n+1)}\circ{\mathcal{C}}\big]\subseteq{\overline{\operatorname{B}}}\big(\xi_n(\vec a_n),2^{-n}\big)$ because $\nu$ is a modulus of continuity of $\zeta$.\ Finally observe $\eta(n+1)\leq\kappa(n)\leq(n+1)\cdot\eta(n+1)\in{\operatorname{poly}}(\eta)$; hence (i) and (ii) of Definition \[d:Admissible\]c) hold. 3. To $x\in X$ consider the $\xi$-name $\bar y_x:=\big( \ldots \: (\vec a_m) \: \ldots \big)$ of $x$ with $d\big(\xi_m(\vec a_m),x\big)\leq2^{-m-1}$.
For $m\in{\mathbb{N}}$ its initial segment $\big( (\vec a_0)\: \ldots \: (\vec a_m)\big)$ has binary length $\kappa(m)$; and, for every $x'\in X$ with $d(x,x')\leq2^{-m-1}$, it can be extended to a $\xi$-name $\bar y'_{x'}$. 4. Applying (iii) to $m:=\nu(n)$, the hypothesis implies $e\big(f(x),f(x')\big)=e\big(f\circ\xi(\bar y_x),f\circ\xi(\bar y'_{x'})\big)\leq 2^{-n}$.

Improvement to Linear Admissibility {#ss:Linear}
-----------------------------------

The generic representation $\xi$ of a compact metric space $(X,d)$ according to Definition \[d:Standard\] being ‘only’ polynomially admissible, this subsection improves the construction to achieve linear admissibility. Note that $\kappa(n)=\sum_{m=0}^{n} \eta(m+1)$ according to Theorem \[t:Polynomial\]a) already is in ${\mathcal{O}}\big(\eta(n+1)\big)$ whenever $\eta(n)\geq 2^{\Omega(n)}$ grows at least exponentially; hence we focus on spaces with sub-exponential entropy. To this end fix some unbounded non-decreasing $\varphi:{\mathbb{N}}\to{\mathbb{N}}$ and define a representation $\xi^\varphi$ of $X$ (with respect to the family $\xi_n$ of partial dense enumerations) based on the *sub*sequence $\xi_{\varphi(n)}$ of $\xi_n$: $$\begin{gathered} \label{e:Dyadic2} \xi^\varphi \; :\subseteq {\mathcal{C}}\;\ni\; \big( \vec a_0 \: \ldots \vec a_n \: \ldots \big) \;\mapsto\; \lim\nolimits_n \xi_{\varphi(n)}\big(\vec a_n\big) \;\in\; X , \qquad {\operatorname{dom}}(\xi^\varphi):= \\[0.5ex] \big\{ \big( \vec a_0\: \ldots\: \vec a_n\:\ldots\big) \::\: \vec a_n\in{\operatorname{dom}}\big(\xi_{\varphi(n)}\big), \; d\big(\xi_{\varphi(n)}(\vec a_n),\xi_{\varphi(m)}(\vec a_m)\big)\leq 2^{-n}\!+\!2^{-m}\big\} \nonumber \end{gathered}$$ Intuitively, proceeding to a subsequence $\xi_{\varphi(n)}$ amounts to ‘skipping’ intermediate precisions/error bounds and ‘jumping’ directly from $2^{-\varphi(n-1)}$ to $2^{-\varphi(n)}$.
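To see numerically why this jumping pays off, consider Cantor space with $\eta={\operatorname{id}}$: summing $\eta(\varphi(m)+1)$ over $m$ up to ${{\varphi}^{\underline{-1}}}(n)$ (read here as $\max\{m:\varphi(m)\leq n\}$, one common convention for the generalized inverse) stays linear in $\eta(n)$ for a geometric $\varphi$, whereas $\varphi={\operatorname{id}}$ reproduces the quadratic sum of Theorem \[t:Polynomial\]a). A toy Python computation of our own:

```python
def kappa_phi(n, eta, phi):
    # kappa^phi(n): sum of eta(phi(m)+1) over all m with phi(m) <= n
    total, m = 0, 0
    while phi(m) <= n:
        total += eta(phi(m) + 1)
        m += 1
    return total

eta = lambda k: k            # entropy of Cantor space
jump = lambda m: 2 ** m      # geometric precision 'jumping'
ident = lambda m: m          # no jumping: plain kappa from Theorem t:Polynomial a)

for n in [4, 16, 64, 256, 1024]:
    assert kappa_phi(n, eta, jump) <= 4 * eta(n)               # linear in eta(n)
    assert kappa_phi(n, eta, ident) == (n + 1) * (n + 2) // 2  # quadratic in n
```

The constant $4$ here is specific to this toy choice of $\varphi$; Theorem \[t:Linear\] below obtains the factor $27/4$ for a suitably constructed $\varphi$ in general.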
It formalizes a strategy implemented for instance by the `iRRAM C++` library for Exact Real Computation [@Mue01] which starts with $\varphi(0)=50$ bits `double` precision and in phases $n=\#1,\#2,\ldots$ increases to $\varphi(n)=\lfloor \tfrac{6}{5}\cdot \varphi(n-1)\rceil+20$. The proof of Theorem \[t:Polynomial\] carries over literally to see: 1. Representation $\xi^\varphi$ has modulus of continuity $$\kappa^\varphi(n)\;:=\;\sum_{m=0}^{{{\varphi}^{\underline{-1}}}(n)} \eta\big(\varphi(m)+1\big)$$ 2. To every partial function $\zeta:\subseteq{\mathcal{C}}\to X$ with modulus of continuity $\nu$ there exists a mapping $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi^\varphi)$ with modulus of continuity $\nu\circ\big(1+{{\kappa^\varphi}^{\underline{-1}}}\big)$ such that $\zeta=\xi^\varphi\circ F$ holds. 3. To every $m\in{\mathbb{N}}$ and every $x,x'\in X$ with $d(x,x')\leq2^{-m-1}$, there exist $\xi^\varphi$-names $\bar y_x$ and $\bar y'_{x'}$ of $x=\xi^\varphi(\bar y_x)$ and $x'=\xi^\varphi(\bar y'_{x'})$ with ${d_{{\mathcal{C}}}}(\bar y_x,\bar y'_{x'})\leq2^{-\kappa^\varphi(m)}$. 4. If $(Y,e)$ is a compact metric space and $f:X\to Y$ such that $f\circ\xi^\varphi:{\operatorname{dom}}(\xi)\subseteq{\mathcal{C}}\to Y$ has modulus of continuity $\kappa^\varphi\circ\nu$, then $f$ has modulus of continuity $\nu+1$. \[t:Linear\] Let $(X,d)$ denote a compact metric space of entropy $\eta$, equipped with partial mappings $\xi_n:\subseteq\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{\eta(n+1)}\to X$ such that $X=\!\!\!\!\!\bigcup\limits_{\vec a\in{\operatorname{dom}}(\xi_n)} \!\!\!\!\!{\overline{\operatorname{B}}}\big(\xi_n(\vec a),2^{-n-1}\big)$. There exists an unbounded non-decreasing $\varphi:{\mathbb{N}}\to{\mathbb{N}}$ such that the representation $\xi^\varphi$ from Equation (\[e:Dyadic2\]) has modulus of continuity $\kappa^\varphi(n)\leq\tfrac{27}{4}\cdot\eta(n+1)$ and $\kappa^\varphi(n)\geq \eta(n+1)$. 
In particular $\xi^\varphi$ is linearly admissible. The proof of Theorem \[t:Linear\] follows immediately from Item d) of the following lemma, applied to $c:=3/2$ with $\eta(n+1)$ in place of $\eta(n)$. \[l:Donghyun\] Let $\eta:{\mathbb{N}}\to{\mathbb{N}}$ be unbounded and non-decreasing and fix $c>1$. Then there exists a strictly increasing mapping $\varphi:{\mathbb{N}}\to{\mathbb{N}}$ such that it holds 1. $\displaystyle\forall m\in{\mathbb{N}}: \quad \eta\big(\varphi(m+1)\big)\;\leq\; c^2\cdot \eta\big(\varphi(m)+1\big)$. 2. $\displaystyle\forall m\in{\mathbb{N}}: \quad c\cdot\eta\big(\varphi(m)\big)\;\leq\; \eta\big(\varphi(m+1)\big)$. 3. $\displaystyle\forall m\in{\mathbb{N}}: \quad c\cdot\eta\big(\varphi(m)+1\big)\;\leq\; \eta\big(\varphi(m+1)+1\big)$. 4. $\displaystyle \sum\nolimits_{m=0}^{{{\varphi}^{\underline{-1}}}(n)} \eta\big(\varphi(m)\big) \;\leq\; \tfrac{c^3}{c-1}\cdot\eta(n)$. Think of an infinite roll of toilet paper whose sheets carry the numbers $\eta(0)$, $\eta(1)$, …. We shall cut this roll into appropriate *runs* from sheet $\#\varphi(m)$ to $\#\varphi(m+1)-1$. Item a) asserts that the integers on sheets within the same run differ by no more than a factor of $c^2$. Items b) and c) formalize that labels on consecutive runs grow at least geometrically. We will construct an infinite subset of ${\mathbb{N}}$ by picking elements one by one. Its elements in increasing order will then constitute the sequence $\varphi$. First, in case there exists $x \in {\mathbb{N}}$ such that $\eta(x) = 0$, pick the largest such $x$. And pick all those $x \in {\mathbb{N}}$ satisfying $$0 < \eta(x) \cdot c \le \eta(x+1).$$ Possibly we have picked only finitely many elements. Let $m$ be the largest number picked so far. Pick $x > m+1$ such that $$c \le \frac{\eta(x)}{\eta(m+1)} < c^2.$$ Such an $x$ is guaranteed to exist, so we can choose one. Now take $m = x$ and repeat this process ad infinitum. One can mechanically check that conditions (b) and (c) are now met.
What remains is to pick more numbers so that (a) is satisfied while maintaining (b) and (c). We will pick some more numbers for each $i \in {\mathbb{N}}$ that fails condition (a). Suppose that $i \in {\mathbb{N}}$ fails (a). Denote for convenience $a := \varphi(i)+1$ and $b := \varphi(i+1)$. There are two cases. **Case i)** Suppose that $\frac{\eta(b)}{\eta(a)} \in [c^{2k}, c^{2k+1})$ for some $k$. Pick $x_1, \ldots, x_k$ such that the following holds for $j=1, \ldots, k$: $$\frac{\eta(x_j)}{\eta(a)} < c^{2j-1} \le \frac{\eta(x_j + 1)}{\eta(a)}.$$ **Case ii)** Suppose that $\frac{\eta(b)}{\eta(a)} \in [c^{2k+1}, c^{2k+2})$ for some $k$. Pick $x_1, \ldots, x_k$ such that the following holds for $j=1, \ldots, k$: $$\frac{\eta(x_j)}{\eta(a)} < c^{2j} \le \frac{\eta(x_j + 1)}{\eta(a)}.$$ It is now mechanical to check that all conditions (a), (b), and (c) are fulfilled.

Quantitative Main Theorem and Categorical Constructions {#s:Category}
=======================================================

We can now establish the quantitative Main Theorem strengthening the classical qualitative one [@Wei00 Theorem 3.2.11]. \[t:Main2\] Let $(X,d)$ be compact with entropy $\eta$ and linearly admissible representation $\xi$ of modulus of continuity $\kappa$. Let $(Y,e)$ be compact with entropy $\theta$ and linearly admissible representation $\upsilon$ of modulus of continuity $\lambda$. 1. If $f:X\to Y$ has modulus of continuity $\mu$, then it admits a $(\xi,\upsilon)$-realizer $F$ with modulus of continuity $$\nu \;=\; \kappa\circ(1+\mu)\circ\big({{\lambda}^{\underline{-1}}}+{\mathcal{O}}(1)\big) \;\in\; {\operatorname{lin}}(\eta)\circ\mu\circ{\operatorname{lin}}\big({{\theta}^{\underline{-1}}}\big)$$ 2.
If $f:X\to Y$ has $(\xi,\upsilon)$-realizer $F$ with modulus of continuity $\nu$, then $f$ has modulus $$\mu \;=\; {{\kappa}^{\underline{-1}}}\circ\nu\circ\lambda\circ(1+{\operatorname{id}})+{\mathcal{O}}(1) \;\in\; {\operatorname{lin}}\big({{\eta}^{\underline{-1}}}\big)\circ\nu\circ{\operatorname{lin}}(\theta)$$ The estimated moduli of continuity are (almost) tight: \[r:Tight\] Applying first (a) and then (b) always recovers $f$ to have modulus of continuity $\mu':n\mapsto\mu\big(n+{\mathcal{O}}(1)\big)+{\mathcal{O}}(1)$ in place of $\mu$, that is, optimal up to a constant shift; recall Lemma \[l:seminv\]c).\ On the other hand applying first (b) and then (a) in general recovers $F$ only to have modulus of continuity $\nu'=\kappa\circ{{\kappa}^{\underline{-1}}}\circ\nu\circ\lambda\circ\big({{\lambda}^{\underline{-1}}}+{\mathcal{O}}(1)\big)+{\mathcal{O}}(1)$: which simplifies to $m\mapsto \nu\big(m+{\mathcal{O}}(1)\big)+{\mathcal{O}}(1)$ under additional hypotheses such as

- Both $\kappa$ and $\lambda$ being surjective (and hence growing at most linearly), or
- $\nu$ being of the form $\kappa\circ\nu'\circ{{\lambda}^{\underline{-1}}}$.

Since the real unit cube $[0;1]^d$ has linear entropy (Example \[x:Entropy\]a+b) and the signed binary representation is linearly admissible (Example \[x:SignedDigit\]), Theorem \[t:Main2\] yields the following strengthening of Example \[x:Max\]a):\ For any fixed $d,e\in{\mathbb{N}}$ and non-decreasing $\mu:{\mathbb{N}}\to{\mathbb{N}}$, a function $f:[0;1]^d\to[0;1]^e$ has modulus of continuity ${\operatorname{lin}}(\mu)$  iff  it admits a $(\sigma,\sigma)$-realizer with modulus of continuity ${\operatorname{lin}}(\mu)$.
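To make the correspondence between moduli of $f$ and of its realizer concrete in the simplest case $d=e=1$: squaring on $[0;1]$ has modulus $\mu(n)=n+1$ (since $|x^2-y^2|\leq2|x-y|$), and a realizer on the dyadic representation $\delta'_n$ of Section \[s:Standard\] may simply read $\mu(n)$ input digits and emit $n$ output digits. The following Python sketch is our own hypothetical illustration (digit conventions follow $\delta'_n$, reading names as midpoints of dyadic intervals; it is not code from any cited library):

```python
def name_of(x, bits):
    # leading binary digits of x in [0,1)
    return [int(x * 2 ** (k + 1)) % 2 for k in range(bits)]

def value_of(name):
    # midpoint of the dyadic interval coded by the digits, as in delta'_n
    n = len(name)
    return sum(d * 2.0 ** -(k + 1) for k, d in enumerate(name)) + 2.0 ** -(n + 1)

def realizer_square(name, n):
    # read mu(n) = n+1 digits of x, output an n-digit name of x**2:
    # |x - approx| <= 2**-(n+2), hence |x**2 - approx**2| <= 2**-(n+1),
    # and truncating approx**2 to n digits loses at most another 2**-(n+1)
    approx = value_of(name[: n + 1])
    return name_of(approx ** 2, n)

for x in [0.0, 0.3, 0.7071, 0.999]:
    for n in [1, 4, 8, 16]:
        out = value_of(realizer_square(name_of(x, n + 1), n))
        assert abs(out - x * x) <= 2.0 ** -n + 1e-9  # overall error <= 2**-n
```

The bookkeeping in the comments mirrors the proof of Theorem \[t:Main2\]a): the realizer's modulus is the input modulus $\kappa$ composed with $1+\mu$, here degenerating to $n\mapsto n+1$ since $\kappa={\operatorname{id}}$ on digit counts.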
Recall (Remark \[r:Admissible\]) that *linear metric* reducibility “$\zeta{\preccurlyeq_{\rm O}}\xi$” refines continuous reducibility “$\zeta{\preccurlyeq_{\rm T}}\xi$” by requiring a reduction $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi)$ with $\zeta=\xi\circ F$ to have modulus of continuity $\nu\circ{\mathcal{O}}\big({{\kappa}^{\underline{-1}}}\big)={\mathcal{\scriptscriptstyle O}}(\nu)\circ{{\kappa}^{\underline{-1}}}$ for every modulus of continuity $\nu$ of $\zeta$ and some $\kappa$ of $\xi$. \[t:Admissible\] 1. Every infinite compact metric space $(X,d)$ of diameter $\leq1$ admits a linearly admissible representation $\xi$ of $X$. Linear metric reducibility is transitive. A representation $\zeta$ is linearly admissible iff (i) it has a modulus of continuity in ${\operatorname{lin}}(\eta)$, and (ii) admits a linear metric reduction $\xi{\preccurlyeq_{\rm O}}\zeta$. 2. If $\xi:\subseteq{\mathcal{C}}\twoheadrightarrow X$ is a linearly admissible representation of the same compact space $X$ with respect to two metrics $d$ and $d'$, then $d\leq_\nu d'\leq_\nu d$ holds for some $\nu\in{\operatorname{lin}}\big({{\eta}^{\underline{-1}}}\big)\circ{\operatorname{lin}}(\eta)\circ{\operatorname{lin}}(\eta)$. 3. Let $\xi$ and $\upsilon$ be linearly admissible representations for compact $(X,d)$ and $(Y,e)$, respectively. Then $\xi\times\upsilon:\subseteq{\mathcal{C}}\ni\bar b\mapsto\big(\xi(b_0,b_2,b_4,\ldots),\upsilon(b_1,b_3,b_5,\ldots)\big)$ is linearly admissible for $\big(X\times Y,\max\{d,e\}\big)$.\ Moreover it satisfies the following universal properties: The projection $\pi_1:X\times Y\ni(x,y)\mapsto x\in X$ has a $\big(\xi\times\upsilon,\xi\big)$-realizer with linear modulus of continuity $n\mapsto2n$; $\pi_2:X\times Y\ni(x,y)\mapsto y\in Y$ has a $\big(\xi\times\upsilon,\upsilon\big)$-realizer with modulus of continuity $n\mapsto2n+1$.
Conversely for every fixed $y\in Y$ the embedding $\imath_{2,y}:X\ni x\mapsto (x,y)\in X\times Y$ has a $\big(\xi,\xi\times\upsilon\big)$-realizer with modulus of continuity $2n\mapsto n$; and for every fixed $x\in X$ the embedding $\imath_{1,x}:Y\ni y\mapsto (x,y)\in X\times Y$ has a $\big(\upsilon,\xi\times\upsilon\big)$-realizer with modulus of continuity $2n+1\mapsto n$. 4. Let $\xi$ be a linearly admissible representation of connected compact $(X,d)$. Then the representation $2^\xi$ of ${\mathcal{K}}(X)$ from Example \[x:Topology\]c) is *polynomially* admissible. Note that linear ‘slack’ in a modulus of continuity of $\xi$ translates to a polynomial one in $2^\xi$. Item (c) justifies [@Wei00 Definition 3.3.3.1] constructing a representation of $X\times Y$ *from* such representations of $X$ and $Y$ in that it (as opposed to one created ‘from scratch’ by invoking (a)) is compatible with the canonical morphisms of $X\times Y$. However [@Wei00 Definition 3.3.3.2] for countable products does *not* preserve linear admissibility; and neither does Example \[x:Topology\]b), already in case of spaces $X_j$ with quadratic entropy according to Example \[x:Entropy\]c). For that purpose a more careful construction is needed: \[t:Cartesian\] Fix compact metric spaces $(X_j,d_j)$ of entropies $\eta_j$ and diameters between $1/2$ and $1$, $j\in{\mathbb{N}}$.
Let $\xi_j:\subseteq{\mathcal{C}}\twoheadrightarrow X_j$ be *uniformly* linearly admissible in that (i) it has modulus of continuity $\kappa_j(n)\leq c+c\cdot\eta_j(c+c\cdot n)$ and (ii) to every representation $\zeta_j:\subseteq{\mathcal{C}}\twoheadrightarrow X_j$ with modulus of continuity $\nu_j$ there exists a mapping $F_j:{\operatorname{dom}}(\zeta_j)\to{\operatorname{dom}}(\xi_j)$ with modulus of continuity $\mu_j$ with $\mu_j\big(\kappa_j(n)\big)\leq\nu_j(c+c\cdot n)$ for some $c\in{\mathbb{N}}$ *in*dependent of $j$.\ Let a name of $(x_0,x_1,\ldots x_j,\ldots)\in\prod_j X_j$ be any infinite binary sequence $$\begin{gathered} \label{e:Cartesian} \bar b^{(0)}|_{\kappa_0(0):\kappa_0(1)}, \qquad \bar b^{(0)}|_{\kappa_0(1):\kappa_0(2)}, \quad \bar b^{(1)}|_{\kappa_1(0):\kappa_1(1)}, \\ \bar b^{(0)}|_{\kappa_0(2):\kappa_0(3)}, \quad \bar b^{(1)}|_{\kappa_1(1):\kappa_1(2)}, \quad \bar b^{(2)}|_{\kappa_2(0):\kappa_2(1)}, \qquad\ldots\ldots \\[1ex] \ldots\ldots\qquad \bar b^{(0)}|_{\kappa_0(n-1):\kappa_0(n)}, \; \bar b^{(1)}|_{\kappa_1(n-2):\kappa_1(n-1)}, \; \ldots \\ \ldots \; \bar b^{(j)}|_{\kappa_j(n-j-1):\kappa_j(n-j)}, \; \ldots \; \bar b^{(n-1)}|_{\kappa_{n-1}(0):\kappa_{n-1}(1)}, \qquad \ldots\ldots\end{gathered}$$ such that $\bar b^{(j)}$ is a $\xi_j$-name of $x_j$. Here $\bar b|_{k:\ell}$ abbreviates the finite segment $\bar b_{k},\ldots b_{\ell-1}$ of $\bar b$. 
The thus defined representation $\xi:=\prod_j\xi_j:\subseteq{\mathcal{C}}\twoheadrightarrow\prod_j X_j=:X$ has modulus of continuity $\kappa:n\mapsto\sum\nolimits_{j<n} \kappa_j(n-j)$ and is linearly admissible for $\big(\prod_j X_j,\sup_j d_j/2^j\big)$.\ Moreover the projection $\pi_j:\prod_j X_j\ni (x_0,\ldots x_j,\ldots)\mapsto x_j\in X_j$ has a $\big(\prod_i\xi_i,\xi_j\big)$-realizer with modulus of continuity $m\mapsto \kappa\big({{\kappa_j}^{\underline{-1}}}(m)+j\big)$; and for every fixed $\bar x\in X$ and $j\in{\mathbb{N}}$, the embedding $\imath_{j,\bar x}:X_j\ni x_j\mapsto (x_0,\ldots x_j,\ldots)\in X$ has a $\big(\xi_j,\prod_i\xi_i\big)$-realizer with modulus of continuity $m\mapsto \kappa_j\big(\max\big\{0,{{\kappa}^{\underline{-1}}}(m)-j\big\}\big)$. Note that the derived moduli of continuity of the canonical morphisms $\pi_j$ and $\imath_{j,\bar x}$ agree linearly with those predicted by Theorem \[t:Main2\], and are thus optimal in the sense of Remark \[r:Tight\]. For Cartesian closure we finally treat function spaces, in view of Lemma \[l:Modulus\]c) w.l.o.g. the 1-Lipschitz case: \[t:Functions\] Fix convex compact metric space $(X,d)$. To any linearly admissible representation $\xi$ of $X$ there exists a polynomially admissible representation $\xi'_1$ of the convex compact space $X'_1={\operatorname{Lip}}_1(X,[0;1])$ of non-expansive functions $f:X\to[0;1]$. It is canonical in that it asserts the application functional $X'_1\times X\ni (f,x)\mapsto f(x)\in[0;1]$ to admit a $(\xi'_1\times\xi,\sigma)$-realizer $F$ with asymptotically optimal modulus of continuity $\leq{\operatorname{lin}}(\eta'_1)$. Recall that Condition (ii) of polynomial admissibility means polynomial metric reducibility “$\zeta{\preccurlyeq_{\rm P}}\xi'_1$” to $\xi'_1$ of any other continuous representation $\zeta$ of $X'_1$. Proofs {#s:Proofs} ------ 1.
First suppose $\xi=\xi^\varphi$ the representation from Theorem \[t:Linear\] with modulus of continuity $\kappa(n)\leq{\mathcal{O}}\big(\eta(n+1)\big)$ and similarly $\upsilon$ with $\lambda(n)\leq{\mathcal{O}}\big(\theta(n+1)\big)$. Applying (ii) to $\zeta:=f\circ\xi:{\operatorname{dom}}(\xi)\subseteq{\mathcal{C}}\to Y$ with modulus of continuity $\kappa\circ\mu$ yields a $(\xi,\upsilon)$-realizer $F$ with modulus $\nu=\kappa\circ\mu\circ\big(1+{{\lambda}^{\underline{-1}}}\big)$.\ Next consider arbitrary linearly admissible $\xi'$ with modulus $\kappa'$ and $\upsilon'$ with $\lambda'$. Applying (ii) to $\xi'$ yields $G:{\operatorname{dom}}(\xi')\to{\operatorname{dom}}(\xi)$ with modulus $\kappa'\circ\big(1+{{\kappa}^{\underline{-1}}}\big)$ such that $\xi'=\xi\circ G$; and applying (ii) of Definition \[d:Admissible\]b) to $\upsilon$ yields $H:{\operatorname{dom}}(\upsilon)\to{\operatorname{dom}}(\upsilon')$ with modulus $\lambda\circ\big({{\lambda'}^{\underline{-1}}}+{\mathcal{O}}(1)\big)$ such that $\upsilon=\upsilon'\circ H$. Together, $F':=H\circ F\circ G$ constitutes a $(\xi',\upsilon')$-realizer of $f$ with modulus of continuity $$\begin{aligned} \nu' &=& \kappa'\circ\big(1+{{\kappa}^{\underline{-1}}}\big)\;\circ\; \kappa\circ\mu\circ\big(1+{{\lambda}^{\underline{-1}}}\big) \;\circ\; \lambda\circ\big({{\lambda'}^{\underline{-1}}}+{\mathcal{O}}(1)\big) \\ &\leq& \kappa' \circ (1+\mu) \circ \big({{\lambda'}^{\underline{-1}}}+{\mathcal{O}}(1)\big) \;\in\; {\operatorname{lin}}(\eta)\circ\mu\circ{\operatorname{lin}}\big({{\theta}^{\underline{-1}}}\big) \end{aligned}$$ by Lemma \[l:seminv\]c+e) since $\eta\leq\kappa'\in{\operatorname{lin}}(\eta)$ and $\theta\leq\lambda'\in{\operatorname{lin}}(\theta)$ according to Definition \[d:Admissible\]bi) and Example \[x:Entropy\]f). 2.
As in (a) first suppose $F$ is a $(\xi,\upsilon)$-realizer of $f$ with modulus $\nu$, for $\xi=\xi^\varphi$ from Theorem \[t:Linear\] with modulus of continuity $\kappa(n)\leq{\mathcal{O}}\big(\eta(n+1)\big)$ and similarly $\upsilon$ with $\lambda(n)\leq{\mathcal{O}}\big(\theta(n+1)\big)$. Then $f\circ\xi=\upsilon\circ F:{\operatorname{dom}}(\xi)\subseteq{\mathcal{C}}\to Y$ has modulus of continuity $\nu\circ\lambda\leq \kappa\circ{{\kappa}^{\underline{-1}}}\circ\nu\circ\lambda$ by Lemma \[l:seminv\]c); and (iv) implies $f$ to have modulus $\mu=1+{{\kappa}^{\underline{-1}}}\circ\nu\circ\lambda$.\ Next consider arbitrary linearly admissible $\xi'$ with modulus $\kappa'$ and $\upsilon'$ with $\lambda'$; and let $F'$ be a $(\xi',\upsilon')$-realizer of $f$ with modulus $\nu'$. (ii) yields $H'$ with $\upsilon'=\upsilon\circ H'$ of modulus $\lambda'\circ\big(1+{{\lambda}^{\underline{-1}}}\big)$; and $G'$ with $\xi=\xi'\circ G'$ of modulus $\kappa\circ\big({{\kappa'}^{\underline{-1}}}+{\mathcal{O}}(1)\big)$ according to Definition \[d:Admissible\]b). Together, $F:=H'\circ F'\circ G'$ constitutes a $(\xi,\upsilon)$-realizer of $f$ with modulus $\nu=\kappa\circ\big({{\kappa'}^{\underline{-1}}}+{\mathcal{O}}(1)\big)\circ \nu'\circ \lambda'\circ\big(1+{{\lambda}^{\underline{-1}}}\big)$. 
So our initial consideration implies $f$ to have modulus $$\begin{aligned} \mu' &=& 1+{{\kappa}^{\underline{-1}}}\;\circ\; \kappa\circ\big({{\kappa'}^{\underline{-1}}}+{\mathcal{O}}(1)\big)\circ \nu'\circ \lambda'\circ\big(1+{{\lambda}^{\underline{-1}}}\big) \;\circ\; \lambda \\ &\leq& {\mathcal{O}}(1)+{{\kappa'}^{\underline{-1}}}\circ\nu'\circ \lambda'(1+{\operatorname{id}}) \; \in \; {\operatorname{lin}}\big({{\eta}^{\underline{-1}}}\big)\circ\nu'\circ{\operatorname{lin}}(\theta) \end{aligned}$$ by Lemma \[l:seminv\]c+e) since $\eta\leq\kappa'\in{\operatorname{lin}}(\eta)$ and $\theta\leq\lambda'\in{\operatorname{lin}}(\theta)$ according to Definition \[d:Admissible\]bi) and Example \[x:Entropy\]f). <!-- --> 1. Theorem \[t:Linear\] asserts the first claim. For the second let $\zeta$ have modulus $\nu$ and $\zeta'$ have modulus $\nu'$ and $\zeta''$ have modulus $\nu''$; let $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\zeta')$ with $\zeta=\zeta'\circ F$ have modulus of continuity $\nu\circ{\mathcal{O}}\big({{\nu'}^{\underline{-1}}}\big)$ and $F':{\operatorname{dom}}(\zeta')\to{\operatorname{dom}}(\zeta'')$ with $\zeta'=\zeta''\circ F'$ have modulus of continuity $\nu'\circ{\mathcal{O}}\big({{\nu''}^{\underline{-1}}}\big)$. Then $\zeta=\zeta''\circ F'\circ F$, where $F'\circ F$ has modulus $$\nu\circ{\mathcal{O}}\big({{\nu'}^{\underline{-1}}}\big) \;\circ\; \nu'\circ{\mathcal{O}}\big({{\nu''}^{\underline{-1}}}\big) \;=\; \nu\circ{\mathcal{O}}\big({{\nu''}^{\underline{-1}}}\big) \enspace .$$ Finally, $\tilde\xi{\preccurlyeq_{\rm O}}\zeta$ is necessary according to Definition \[d:Admissible\]bii); while any other representation $\zeta'{\preccurlyeq_{\rm O}}\tilde\xi{\preccurlyeq_{\rm O}}\zeta$ since $\tilde\xi$ is admissible. 2. First consider $\tilde\xi=\xi^\varphi$ the representation from Theorem \[t:Linear\] with modulus $\tilde\kappa$. 
By (iii), to every $x,x'\in X$ with $d(x,x')\leq2^{-m-1}$, there exist $\tilde\xi$-names $\bar y_x$ and $\bar y'_{x'}$ of $x=\tilde\xi(\bar y_x)$ and $x'=\tilde\xi(\bar y'_{x'})$ with ${d_{{\mathcal{C}}}}(\bar y_x,\bar y'_{x'})\leq2^{-\tilde\kappa(m)}$. Hence $d'(x,x')=d'\big(\tilde\xi(\bar y_x),\tilde\xi(\bar y'_{x'})\big)\leq2^{-n}$ by (i) for $\tilde\kappa'\in{\operatorname{lin}}(\eta)$ a modulus of continuity of $\tilde\xi$ w.r.t. $d'$ and $\tilde\kappa(m)\geq\tilde\kappa'(n)$, that is for $m={{\tilde\kappa}^{\underline{-1}}}\circ\tilde\kappa'(n)$ by Lemma \[l:seminv\]c): $d'\leq_\nu d$ for $\nu:=1+{{\tilde\kappa}^{\underline{-1}}}\circ\tilde\kappa'\in{\operatorname{lin}}\big({{\eta}^{\underline{-1}}}\big)\circ{\operatorname{lin}}(\eta)$.\ Now let $\xi$ be linearly admissible with modulus $\kappa$. Then $\xi=\tilde\xi\circ F$ for some $F:{\operatorname{dom}}(\xi)\to{\operatorname{dom}}(\tilde\xi)$ with modulus $\mu\leq\kappa\circ{\mathcal{O}}\big({{\tilde\kappa}^{\underline{-1}}}\big)$. Hence $\bar z_x:=F(\bar y_x)$ and $\bar z'_{x'}=F(\bar y'_{x'})$ have ${d_{{\mathcal{C}}}}(\bar z_x,\bar z'_{x'})\leq 2^{-m'}$ for $\tilde\kappa(m)\geq\mu(m')$; and again $d'(x,x')\leq2^{-n}$ for $m'\geq\tilde\kappa'(n)$, $\tilde\kappa'$ a modulus of continuity of $\xi$ w.r.t. $d'$: $d'\leq_\nu d$ for $\nu:=1+{{\tilde\kappa}^{\underline{-1}}}\circ\mu\circ\tilde\kappa'\in{\operatorname{lin}}\big({{\eta}^{\underline{-1}}}\big)\circ{\operatorname{lin}}(\eta)\circ{\operatorname{lin}}(\eta)$. 3. Let $X$ and $Y$ have entropies $\eta$ and $\theta$, respectively; $\xi$ and $\upsilon$ moduli of continuity $\mu\leq{\operatorname{lin}}(\eta)$ and $\nu\leq{\operatorname{lin}}(\theta)$. (i) $X\times Y$ has entropy at least $\eta(n-1)+\theta(n-1)-1$ by Example \[x:Entropy\]b); and $\xi\times\upsilon$ has modulus of continuity $2\cdot\max\{\mu,\nu\}\leq{\operatorname{lin}}\big(n\mapsto \eta(n-1)+\theta(n-1)-1\big)$.
(ii) Let $\zeta:\subseteq{\mathcal{C}}\twoheadrightarrow X\times Y$ be any representation with modulus $\kappa$. Then its projection onto the first component $\zeta_1$ constitutes a continuous representation of $X$ with some modulus $\kappa$. Hence there exist by hypothesis $F:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi)$ of modulus $\kappa\circ{\mathcal{O}}\big({{\mu}^{\underline{-1}}}\big)$ such that $\zeta_1=\xi\circ F$; similarly $\zeta_2=\upsilon\circ G$ with $G:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\upsilon)$ of modulus $\kappa\circ{\mathcal{O}}\big({{\nu}^{\underline{-1}}}\big)$. Then $F\times G:=(F_0,G_0,F_1,G_1,F_2,\ldots):{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi\times\upsilon)$ satisfies $\zeta=(\xi\times\upsilon)\circ(F\times G)$ and has modulus $$2\cdot\max\Big\{\kappa\circ{\mathcal{O}}\big({{\mu}^{\underline{-1}}}\big),\kappa\circ{\mathcal{O}}\big({{\nu}^{\underline{-1}}}\big)\Big\} \;=\; 2\cdot\kappa\circ {\mathcal{O}}\big({{\min\{\mu,\nu\}}^{\underline{-1}}}\big)$$ $\leq\; \kappa\circ{\mathcal{O}}\big({{2\cdot\max\{\mu,\nu\}}^{\underline{-1}}}\big)$ by Lemma \[l:seminv\]h).\ Finally, $(b_0,b_1,b_2,b_3,b_4,\ldots)\mapsto (b_0,b_2,b_4,\ldots)$ is a $\big(\xi\times\upsilon,\xi\big)$-realizer of $\pi_1$ with modulus of continuity $n\mapsto2n$; and $(b_0,b_1,b_2,b_3,b_4,\ldots)\mapsto (b_1,b_3,\ldots)$ a $\big(\xi\times\upsilon,\upsilon\big)$-realizer of $\pi_2$ with modulus of continuity $n\mapsto2n+1$. For any fixed $\upsilon$-name $(b_1,b_3,\ldots)$ of $y$, $(b_0,b_2,\ldots)\mapsto (b_0,b_1,b_2,b_3,b_4,\ldots)$ is a realizer of $\imath_{2,y}$ with modulus of continuity $2n\mapsto n$; and any fixed $\xi$-name $(b_0,b_2,\ldots)$ of $x$, $(b_1,b_3,\ldots)\mapsto (b_0,b_1,b_2,b_3,b_4,\ldots)$ is a realizer of $\imath_{1,x}$ with modulus of continuity $2n+1\mapsto n$. 4. Combine Example \[x:Topology\]c) with Example \[x:Entropy\]g). Since $X_j$ has diameter between $1/2$ and $1$, w.l.o.g. 
$\eta_j(0)=0$ and $\eta_j(n)\geq1$ for $n\geq2$ and w.l.o.g. $\kappa_j(0)=0$. $X:=\prod_j X_j$ has entropy $\eta(n)\geq\sum_{j<n}\eta_j(n-1-j)-\lfloor n/2\rfloor$ by Example \[x:Entropy\]c). On the other hand the initial segment $\bar b^{(j)}|_{0:\kappa_j(n-j)}$ of a $\xi_j$-name $\bar b^{(j)}$ determines $x_j=\xi_j\big(\bar b^{(j)}\big)$ up to error $2^{-(n-j)}$ w.r.t. $d:=d_j/2^j$; and is located among the first $\kappa_0(n)+\kappa_1(n-1)+\cdots+\kappa_n(0)=\kappa(n)$ symbols of a $\xi$-name of $(x_j)_{_j}$ according to Equation (\[e:Cartesian\]); recall $\kappa_n(0)=0$. Therefore (i) $\xi$ has modulus of continuity $\kappa(n)$, which is $\leq{\operatorname{lin}}\big(\eta(n)\big)$ since $\kappa_j(n)\leq c+c\cdot\eta_j(c+c\cdot n)$ and $\eta_j(n)\geq1$ for $n\geq2$ ‘covers’ the $\lfloor n/2\rfloor$ term.\ Regarding (ii), let $\zeta:\subseteq{\mathcal{C}}\twoheadrightarrow X$ have modulus of continuity $\nu$. The projection $\pi_j:X\ni (x_0,\ldots x_j,\ldots)\mapsto x_j\in X_j$ has modulus of continuity $n\mapsto n+j$ since $X$ is equipped with metric $d=\sup_j d_j/2^j$. The representation $\zeta_j:=\pi_j\circ\zeta:{\operatorname{dom}}(\zeta)\subseteq{\mathcal{C}}\twoheadrightarrow X_j$ thus has modulus $\nu_j:n\mapsto\nu(n+j)$. By hypothesis (ii) on $\xi_j$, there exists a mapping $F_j:{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi_j)$ whose modulus of continuity $\mu_j$ satisfies $\mu_j\big(\kappa_j(n)\big)\leq\nu_j(c+c\cdot n)$ such that $\pi_j\circ\zeta=\xi_j\circ F_j$ holds.
Now let $F:=$ $$\begin{gathered} F_0|_{\kappa_0(0):\kappa_0(1)}, \qquad F_0|_{\kappa_0(1):\kappa_0(2)}, \quad F_1|_{\kappa_1(0):\kappa_1(1)}, \\ F_0|_{\kappa_0(2):\kappa_0(3)}, \quad F_1|_{\kappa_1(1):\kappa_1(2)}, \quad F_2|_{\kappa_2(0):\kappa_2(1)}, \qquad\ldots\ldots \\[1ex] \ldots\ldots\qquad F_0|_{\kappa_0(n-1):\kappa_0(n)}, \; F_1|_{\kappa_1(n-2):\kappa_1(n-1)}, \; \ldots \\ \ldots \; F_j|_{\kappa_j(n-j-1):\kappa_j(n-j)}, \; \ldots \; F_{n-1}|_{\kappa_{n-1}(0):\kappa_{n-1}(1)}, \qquad \ldots\ldots\end{gathered}$$ so that $\zeta=\xi\circ F$. Moreover $F_j|_{0:\kappa_j(n-j)}$ depends on the first $\mu_j\big(\kappa_j(n-j)\big)$ symbols of its argument; hence $F$ has modulus of continuity $\mu$ with $$\mu\big(\kappa(n)\big) \;=\; \sup\nolimits_{j<n} \mu_j\big(\kappa_j(n-j)\big) \;\leq\; \sup\nolimits_{j<n} \nu_j\big(c+c\cdot(n-j)\big) \;=\; \nu(c+c\cdot n).$$ Mapping an infinite binary sequence according to Equation (\[e:Cartesian\]) to $\bar b^{(j)}$ constitutes a $\big(\prod_i\xi_i,\xi_j\big)$-realizer of $\pi_j$; and for $n\geq j$ the first $\kappa_j(n-j)$ bits of $\bar b^{(j)}$ are determined by the first $\kappa(n)$ bits of the given $\xi$-name: hence $\kappa_j(n-j)\mapsto\kappa(n)$ is a modulus of continuity. Conversely mapping $\xi_j$-name $\bar b^{(j)}$ of $x_j$ to the infinite binary sequence from Equation (\[e:Cartesian\]) constitutes a $\big(\xi_j,\prod_i\xi_i\big)$-realizer of $\imath_{j,\bar x}$ with modulus of continuity $\kappa(n)\mapsto \kappa_j\big(\max\{0,n-j\}\big)$. Let $\eta$ denote the entropy of $X$ and $\kappa\leq{\operatorname{lin}}(\eta)$ a modulus of continuity of $\xi$. Only for notational simplicity, consider the case $\mu={\operatorname{id}}$ of 1-Lipschitz functions $X'_1:={\operatorname{Lip}}_1(X,[0;1])$. We pick up on, and refine, the entropy analysis from the proof of Example \[x:Entropy\]h).
Fix $n\in{\mathbb{N}}$ and, for every $\vec w\in \{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{\kappa(n)}$ with $\xi\big[\vec w\,{\mathcal{C}}\big]\neq\emptyset$, choose[^6] some $x_{\vec w}\in \xi\big[\vec w\,{\mathcal{C}}\big]$. Record that, as centers of closed balls of radius $2^{-n}$, these cover $X$.\ Next choose some subset $W_n\subseteq \{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{\kappa(n)}$ such that any two distinct $\vec w\in W_n$ satisfy $d\big(x_{\vec w},x_{\vec v}\big)>2^{-n}$ while the closed balls ${\overline{\operatorname{B}}}\big(x_{\vec w},2^{-n+1}\big)$ of double radius still cover $X$. Such $W_n$ can be created greedily by repeatedly and in arbitrary order weeding out one of $(\vec w,\vec v)$ whenever $d\big(x_{\vec w},x_{\vec v}\big)\leq2^{-n}$: observe ${\overline{\operatorname{B}}}\big(x_{\vec w},2^{-n+1}\big)\supseteq {\overline{\operatorname{B}}}\big(x_{\vec v},2^{-n}\big)$ and abbreviate $X_n:=\{ x_{\vec w}:\vec w\in W_n\}$.\ We now formalize the idea that a $\xi'_1$-name of $f\in X'_1$ encodes a sequence $f_n:X_n\to{\mathbb{D}}_n$ of $\tfrac{3}{2}$-Lipschitz functions whose $\tfrac{3}{2}$-Lipschitz extensions ${{f_n}^*_*}$ approximate $f$ up to error $2^{-n+1}$: As in the proof of Example \[x:Entropy\]h), $f'_n:=\lfloor 2^n\cdot f\big|_{X_n}\rceil/2^n$ satisfies this condition, hence asserting every $f\in X'_1$ to have a $\xi'_1$-name, i.e. $\xi'_1$ be surjective.\ In order to encode the $f_n$ succinctly, and make $\xi'_1$ have modulus of continuity $\kappa'_1\leq{\operatorname{lin}}(\eta'_1)$ for the entropy $\eta'_1$ of $X'_1$, recall the connected undirected graph $G_n=(X_n,E_n)$ from the proof of Example \[x:Entropy\]f+h) with edge $(x,x')$ present iff the open balls of radius $2^{-n+\pmb{2}}$ around centers $x,x'$ intersect. 
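The greedy thinning that produces $W_n$ above is a standard net construction; the following Python sketch (ours, with abstract points and the metric passed as a function) keeps centers pairwise more than $\varepsilon$ apart, while every discarded center stays within $\varepsilon$ of a kept one, so doubling the radius restores the covering.

```python
def greedy_net(centers, dist, eps):
    """From a list of centers whose eps-balls cover the space, keep a
    subset of centers that are pairwise > eps apart.  Each discarded
    center lies within eps of an earlier kept one, hence balls of
    radius 2*eps around the kept centers still cover the space."""
    kept = []
    for x in centers:
        if all(dist(x, y) > eps for y in kept):
            kept.append(x)
    return kept
```

This sequential variant realizes the same invariants as the ‘weeding out in arbitrary order’ of the proof: separation of the kept centers and covering at doubled radius.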
Choose some directed spanning tree $F_n\subseteq E_n$ of $G_n$ with root $x_{n,0}$ and remaining nodes $x_{n,1},\ldots x_{n,N_n-1}$ in some topological order, where $N_n:={\operatorname{Card}}(W_n)\leq 2^{\kappa(n)}$. As in the proof of Example \[x:Entropy\]h), every $\tfrac{3}{2}$-Lipschitz $f_n:X_n\to{\mathbb{D}}_n$ is uniquely described by $f_n\big(x_{n,0}\big)\in{\mathbb{D}}_n\cap[0;1]$ together with the sequence $$f_n\big(x_{n,m}\big)\;-\;f_n\big(x_{n,m-1}\big) \;\in\; \big\{-6\cdot2^{-n},\ldots 0,\ldots +6\cdot2^{-n}\big\} \quad , 1\leq m<N_n \enspace .$$ The first takes $n+1$ bits to describe, the latter $(N_n-1)\times \log_2(13)$ bits: in view of Example \[x:Entropy\]f) a total of ${\mathcal{O}}\big(2^{\kappa(n)}\big)$ to encode $f_n$ in some $\vec u_n$. So the initial segment $(\vec u_0,\ldots \vec u_{n+1})$ of a thus defined $\xi'_1$-name $\bar u=(\vec u_0,\vec u_1,\ldots)$ of $f$, encoding $f_0,\ldots f_{n+1}$ and thus determining $f$ up to error $2^{-n}$, has length $\kappa'_1(n)={\mathcal{O}}\big(2^{\kappa(0)}\big)+\cdots+{\mathcal{O}}\big(2^{\kappa(n+1)}\big)\leq{\mathcal{O}}\big(2^{\kappa(n+2)}\big)\leq{\operatorname{lin}}(\eta'_1)$ again by Example \[x:Entropy\]f+h): thus establishing Condition (i).\ Regarding the application functional $(f,x)\mapsto f(x)$, consider the following ‘algorithm’ and recall Theorem \[t:Admissible\]c): Given a $(\xi'_1\times\xi)$-name $(u_0,v_0,u_1,v_1,\ldots u_m,v_m,\ldots)$ of $(f,x)$ and $n\in{\mathbb{N}}$, let $\vec v:=\bar v_{<\kappa(n)}$ and ‘find’ some $\vec w\in W_n$ with $d\big(x_{\vec w},x_{\vec v}\big)\leq2^{-n+1}$; then ‘trace’ the path in spanning tree $(W_n,F_n)$ from its root $x_{n,0}$ to $x_{\vec w}=x_{n,M}$: all information contained within the first $\kappa'_1(n)$ bits of $\bar u$ encoding $\tfrac{3}{2}$-Lipschitz $f_n:X_n\to{\mathbb{D}}_n$ whose extension approximates $f$ up to error $2^{-n+1}$, sufficient to recover the value $$y_n \;:=\; f_n\big(x_{\vec w}\big) \;=\; f_n\big(x_{n,0}\big) \;+\; 
\sum\nolimits_{m=1}^M \Big(f_n\big(x_{n,m}\big)-f_n\big(x_{n,m-1}\big)\Big) \;\in\;{\mathbb{D}}_n$$ satisfying $|y_n-f(x)|\leq |y_n-f_n(x)|+|f_n(x)-f(x)|\leq \tfrac{3}{2}\cdot 2^{-n+1}+2^{-n+1}\leq 2^{-n+3}$. The (initial segment of length $n+3$ of) sequence $y_0,\ldots y_n,\ldots$ in turn is easily converted to (an initial segment of length $n$ of) a signed binary expansion of $f(x)$ [@Wei00 Lemma 7.3.5]: yielding a $(\xi'_1\times\xi,\sigma)$-realizer of $(f,x)\mapsto f(x)$ with asymptotically optimal modulus of continuity $\mu(n)=\max\{2\kappa(n+3),2\kappa'_1(n+3)\}\leq{\operatorname{lin}}\big(\eta'_1(n+3)\big)$ by Example \[x:Entropy\]f+h). Conclusion and Perspective {#s:Conclusion} ========================== For an arbitrary compact metric space $(X,d)$ we have constructed a generic representation $\xi$ with optimal modulus of continuity, namely agreeing with the space’s entropy up to a constant factor. And we have shown this representation to exhibit properties similar to the classical *standard* representation of a topological T$_0$ space underlying the definition of qualitative *admissibility*, but now under the quantitative perspective crucial for a generic resource-bounded complexity theory for computing with continuous data: $\xi$ is maximal with respect to optimal metric reduction among all continuous representations. The class of such metrically optimal representations is closed under binary and countable Cartesian products, and gives rise to metrically optimal representations of the space of compact subsets under the Hausdorff metric and of the space of non-expansive real functions. Moreover, with respect to such *linearly* admissible representations $\xi$ and $\upsilon$ of compact metric spaces $X$ and $Y$, optimal moduli of continuity of functions $f:X\to Y$ and their $(\xi,\upsilon)$-realizers $F:{\operatorname{dom}}(\xi)\to{\operatorname{dom}}(\upsilon)$ are linearly related up to composition with (the lower semi-inverse) of the entropies of $X$ and $Y$, respectively.
All our notions (entropy, modulus of continuity) and arguments are information-theoretic: according to Fact \[f:Proper\]b) these precede, and under suitable oracles coincide with, complexity questions. They thus serve as a general guide to investigations over concrete advanced spaces of continuous data, such as of integrable or weakly differentiable functions employed in the theory of Partial Differential Equations [@DBLP:journals/lmcs/Steinberg17]. In order to strengthen our Main Theorem \[t:Main2\], namely to further decrease the gap between (a) and (b) according to Remark \[r:Tight\], we wonder: \[q:Future\] Which infinite compact metric spaces $(X,d)$ with entropy $\eta$ admit 1. a representation with $\eta\big(n+{\mathcal{O}}(1)\big)+{\mathcal{O}}(1)$ as modulus of continuity? 2. an *admissible* representation with modulus of continuity $\eta\big(n+{\mathcal{O}}(1)\big)+{\mathcal{O}}(1)$? 3. a *linearly* admissible representation with modulus $\eta\big(n+{\mathcal{O}}(1)\big)+{\mathcal{O}}(1)$ ? 4. If $\xi$ is a linearly admissible representation of $(X,d)$ and $Z\subseteq X$ closed,\ is the restriction $\xi|^{Z}$ then again linearly admissible? [[@Wei00 Lemma 3.3.2]]{} 5. In view of Example \[x:Entropy\]f), how large can the asymptotic gap between intrinsic $\eta_{K,K}$ and relative entropy $\eta_{X,K}$ be? 6. (How) does Theorem \[t:Functions\] generalize from real $[0;1]$ to other compact metric codomains $Y$? 7. How do the above considerations carry over from the Type-2 setting of computation on streams to that of oracle arguments [[@Ko91; @KC12]]{}? Algorithmic Cost and Representations for Higher Types {#ss:Hyper} ----------------------------------------------------- It seems counter-intuitive that the application functional should be as hard to compute as Example \[x:Max\] predicts.
To ‘mend’ this artefact, the Type-2 Machine model from Definition \[d:Type2\] — computing on ‘streams’ of infinite sequences of bits $\bar b=(b_0,b_1,\ldots b_n,\ldots)\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{\mathbb{N}}$ with sequential/linear-time access — is commonly modified for problems involving continuous subsets or functions as arguments to allow for ‘random’/logarithmic-time access [[@KC12]]{}: \[d:Type3\] 1. Abbreviate with ${\mathcal{C}}':=\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^*}$ the set of total finite string predicates $\varphi:\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^*\to\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$, equipped with the metric ${d_{{\mathcal{C}}'}}(\varphi,\psi)=2^{-\min\{|\vec u|:\varphi(\vec u)\neq\psi(\vec u)\}}$, where $|\vec u|\in{\mathbb{N}}$ denotes the length $n$ of $(u_0,\ldots u_{n-1})\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^*$. 2. A *Type-3 Machine* ${\mathcal{M}}^?$ is an ordinary oracle Turing machine with variable oracle. It *computes* the partial function ${\mathcal{F}}:\subseteq{\mathcal{C}}'\to{\mathcal{C}}'$ if, for every $\varphi\in{\operatorname{dom}}({\mathcal{F}})\subseteq{\mathcal{C}}'$, ${\mathcal{M}}^\varphi$ computes ${\mathcal{F}}(\varphi)\in{\mathcal{C}}'$ in that it accepts all inputs $\vec v\in\big({\mathcal{F}}(\varphi)\big)^{-1}[{\textup{\texttt{1}}\xspace}]$ and rejects all inputs $\vec v\in\big({\mathcal{F}}(\varphi)\big)^{-1}[{\textup{\texttt{0}}\xspace}]$; The behaviour of ${\mathcal{M}}^\varphi$ for $\varphi\not\in{\operatorname{dom}}({\mathcal{F}})$ may be arbitrary. 3. 
For $\psi\in{\mathcal{C}}'$, ${\mathcal{M}}^{\psi,?}$ denotes a Type-3 Machine with fixed oracle $\psi$ and additional variable oracle in that it operates, for every $\varphi\in{\operatorname{dom}}({\mathcal{F}})\subseteq{\mathcal{C}}'$, like ${\mathcal{M}}^{\psi\otimes\varphi}$, where $$\psi\otimes\varphi: ({\textup{\texttt{0}}\xspace}\,\vec u)\mapsto\psi(\vec u), \quad ({\textup{\texttt{1}}\xspace}\,\vec u)\mapsto\varphi(\vec u) \enspace .$$ 4. ${\mathcal{M}}^?$ computes ${\mathcal{F}}$ in *time* $t:{\mathbb{N}}\to{\mathbb{N}}$ if ${\mathcal{M}}^\varphi$ on input $\vec v\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^n$ stops after at most $t(n)$ steps regardless of $\varphi\in{\operatorname{dom}}({\mathcal{F}})$; similarly for ${\mathcal{M}}^{\psi,?}$. Encoding a continuous space using oracles $\psi\in{\mathcal{C}}'$ rather than sequences $\bar b\in{\mathcal{C}}$ gave rise to a different, new notion of representation [@KC12 §3.4]. Here we shall call it *hyper-*representation, and avoid confusion with the previous conception now referred to as *stream* representation. \[d:Hyper\] 1. A *hyper-*representation of a space $X$ is a partial surjective mapping $\Xi:\subseteq{\mathcal{C}}'\twoheadrightarrow X$. 2. For hyper-representations $\Xi:\subseteq{\mathcal{C}}'\twoheadrightarrow X$ and $\Upsilon:\subseteq{\mathcal{C}}'\twoheadrightarrow Y$, a $(\Xi,\Upsilon)$-*realizer* of a function $f:X\to Y$ is a partial function ${\mathcal{F}}:{\operatorname{dom}}(\Xi)\to{\operatorname{dom}}(\Upsilon)$ such that $f\circ\Xi=\Upsilon\circ{\mathcal{F}}$ holds. 3. A *reduction* from $\Xi:\subseteq{\mathcal{C}}'\twoheadrightarrow X$ to $\Xi':\subseteq{\mathcal{C}}'\twoheadrightarrow X$ is a $(\Xi,\Xi')$-realizer of the identity ${\operatorname{id}}:X\to X$. 4. $(\Xi,\Upsilon)$-*computing* $f$ means to compute some $(\Xi,\Upsilon)$-realizer ${\mathcal{F}}$ of $f$ in the sense of Definition \[d:Type3\]b). 5. 
The *product* hyper-representation of $\Xi:\subseteq{\mathcal{C}}'\twoheadrightarrow X$ and $\Upsilon:\subseteq{\mathcal{C}}'\twoheadrightarrow Y$ is $$\Xi\times\Upsilon:\subseteq{\mathcal{C}}' \;\ni\; \varphi\;\mapsto\; \Big(\Xi\big(\vec v\mapsto \varphi({\textup{\texttt{0}}\xspace}\,\vec v)\big),\Upsilon\big(\vec v\mapsto\varphi({\textup{\texttt{1}}\xspace}\,\vec v)\big)\Big) \;\in\; X\times Y$$ 6. Consider the hyper-representation (sic!) $\imath_{\mathcal{C}}: {\mathcal{C}}' \;\ni\; \varphi \;\mapsto\; \big( \varphi({\textup{\texttt{1}}\xspace}^n)_{_n}\big) \;\in\; {\mathcal{C}}$. For stream representation $\xi:\subseteq{\mathcal{C}}\twoheadrightarrow X$, let $\xi\circ\imath_{\mathcal{C}}:\subseteq{\mathcal{C}}'\twoheadrightarrow X$ denote its induced *unary* hyper-representation. 7. Abusing notation, consider the hyper-representation $${\mathrm{bin}}: {\mathcal{C}}' \;\ni\; \varphi \;\mapsto\; \Big(\varphi\big({\mathrm{bin}}(n)\big)_{_n}\Big) \;\in\; {\mathcal{C}}\enspace .$$ For stream representation $\xi:\subseteq{\mathcal{C}}\twoheadrightarrow X$, let $\xi\circ{\mathrm{bin}}\subseteq{\mathcal{C}}'\twoheadrightarrow X$ denote its induced *binary* hyper-representation. Indeed, according to [@KC12 §4.3], appropriate hyper-representations now allow to compute the application functional in time more reasonable than in Example \[x:Max\]b); cmp. Item d) of the following Proposition. Items a) to c) correspond to Fact \[f:Proper\]. \[p:Type3\] 1. ${\mathcal{C}}'$ is compact of entropy $\eta_{{\mathcal{C}}'}=2^{{\operatorname{id}}}-1$. If ${\mathcal{F}}:\subseteq{\mathcal{C}}'\to{\mathcal{C}}'$ is computed by ${\mathcal{M}}^{?}$ and if ${\operatorname{dom}}({\mathcal{F}})$ is compact, then this computation admits a time bound $t:{\mathbb{N}}\to{\mathbb{N}}$ in the sense of Definition \[d:Type3\]d); similarly for ${\mathcal{M}}^{\psi,?}$… 2. 
If ${\mathcal{M}}^{\psi,?}$ computes ${\mathcal{F}}:\subseteq{\mathcal{C}}'\to{\mathcal{C}}'$ in time $t(n)$, then ${\mathcal{F}}$ has modulus of continuity $t(n)$. 3. If ${\mathcal{F}}:\subseteq{\mathcal{C}}'\to{\mathcal{C}}'$ has modulus of continuity $\mu:{\mathbb{N}}\to{\mathbb{N}}$, then there exists an oracle $\psi\in{\mathcal{C}}'$ and a Type-3 Machine ${\mathcal{M}}^{\psi,?}$ computing ${\mathcal{F}}$ in time ${\mathcal{O}}\big(n+2^{\mu(n)}\big)$. 4. The compact space $[0;1]'_1$ from Example \[x:Max\]b) admits a hyper-representation $\Delta_1$ such that application $[0;1]'_1\times[0;1]\ni(f,r)\mapsto f(r)\in[0;1]$ is $(\Delta_1\times\tilde\delta,\delta\circ\imath_{\mathcal{C}})$-computable in polynomial time for the unary hyper-representation $\delta\circ\imath_{\mathcal{C}}$ induced by the aforementioned dyadic stream representation $\delta$ of $[0;1]$. 5. Hyper-representation (sic!) $\imath_{\mathcal{C}}:\subseteq{\mathcal{C}}'\twoheadrightarrow{\mathcal{C}}$ is an isometry. Stream representation $\xi:\subseteq{\mathcal{C}}\twoheadrightarrow X$ has modulus of continuity $\kappa$ iff induced unary hyper-representation $\xi\circ\imath_{\mathcal{C}}$ does. 6. For $(\xi,\upsilon)$-realizer $F:\subseteq{\mathcal{C}}\to{\mathcal{C}}$ of $f:X\to Y$, ${\mathcal{F}}:=\imath_{\mathcal{C}}^{-1}\circ F\circ\imath_{\mathcal{C}}:\subseteq{\mathcal{C}}'\to{\mathcal{C}}'$ is a $\big(\xi\circ\imath_{\mathcal{C}},\upsilon\circ\imath_{\mathcal{C}}\big)$-realizer of $f:X\to Y$. $F$ has modulus of continuity $\mu$ iff ${\mathcal{F}}$ does. 7. Hyper-representation ${\mathrm{bin}}:\subseteq{\mathcal{C}}'\twoheadrightarrow{\mathcal{C}}$ has logarithmic modulus of continuity. Its inverse, the stream representation ${\mathrm{bin}}:{\mathcal{C}}\twoheadrightarrow{\mathcal{C}}'$ has exponential modulus of continuity. Both are optimal.
Stream representation $\xi:\subseteq{\mathcal{C}}\twoheadrightarrow X$ has modulus of continuity ${\mathcal{O}}\big(2^\kappa\big)$ iff induced binary hyper-representation $\xi\circ{\mathrm{bin}}:\subseteq{\mathcal{C}}'\twoheadrightarrow X$ has modulus of continuity $\kappa$. Note the gap between Type-3 Proposition \[p:Type3\]b+c), absent in the Type-2 Fact \[f:Proper\]b+c). From a high-level perspective, the exponential lower complexity bound on the application functional in Example \[x:Max\]b) is due to $[0;1]'$ having exponential entropy while stream representations’ domain ${\mathcal{C}}$ has only linear entropy: see Example \[x:Entropy\]a+h). Proposition \[p:Type3\]d) avoids that information-theoretic bottleneck by proceeding to hyper-representations with domain ${\mathcal{C}}'$ also having exponential entropy. In fact [[@KC12 §4.3]]{} extends the representation $\Delta_1$ of $[0;1]'_1={\operatorname{Lip}}_1([0;1],[0;1])$ from Proposition \[p:Type3\]d) to the entire ${\mathcal{C}}\big([0;1],[0;1]\big)=\bigcup_{\mu} {\mathcal{C}}_\mu\big([0;1],[0;1]\big)$, where the union ranges over all strictly increasing $\mu:{\mathbb{N}}\to{\mathbb{N}}$. Lacking compactness, in view of Proposition \[p:Type3\]a) one cannot expect a time bound on the entire ${\mathcal{C}}\big([0;1],[0;1]\big)$ depending only on the output precision $n$. Instead [[@KC12 §3.2]]{} considers runtime *polynomial* if bounded by some term $P=P(n,\mu)$ in both the integer output precision parameter $n$ and a modulus of continuity $\mu:{\mathbb{N}}\to{\mathbb{N}}$ of the given function argument $f$: a higher-type parameter [@EikeFlorian]. The considerations in the present work suggest a natural generalization: \[r:SecondOrder\] According to Example \[x:Entropy\]h) ${\mathcal{C}}_\mu([0;1],[0;1])$ has entropy $\eta=\Theta\big(2^{\mu}\big)$ such that $\log\eta=\Theta(\mu)$ ‘recovers’ the modulus of continuity. Moreover our complexity-theoretic “Main” Theorem \[t:Main2\] confirms that even metrically well-behaved (e.g.
1-Lipschitz) functionals $\Lambda:X\to[0;1]$ can only have realizers with modulus of continuity/time complexity growing in $X$’s entropy $\eta$. This suggests generalizing second-order polynomial runtime bounds $P=P(n,\mu)$ from spaces of continuous real functions [[@KC12 §3.2]]{} to $P(n,\log\eta)$ for compact metric spaces $X$ beyond ${\mathcal{C}}_\mu([0;1],[0;1])$. The logarithm is consistent with the quantitative properties of hyper-representations expressed in Proposition \[p:Type3\]. 1. Compactness of ${\mathcal{C}}'$ follows from König’s Lemma: it is an infinite, finitely (albeit, as opposed to ${\mathcal{C}}$, increasingly) branching tree. Cover ${\mathcal{C}}'$ by $2^{2^n-1}$ closed balls $\displaystyle\big\{\varphi:\varphi\big|_{\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{<n}}=\psi\big\}$, $\psi:\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{<n}\to\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$ of radius $2^{-n}$: optimally.\ Consider the number $N:=t_{\mathcal{M}}(\varphi,\vec u)\in{\mathbb{N}}$ of steps ${\mathcal{M}}^{\varphi}$ makes on input $\vec u\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^n$ for $\varphi\in{\operatorname{dom}}({\mathcal{F}})\subseteq{\mathcal{C}}'$. During this execution, ${\mathcal{M}}^{\varphi}$ can construct and query oracle $\varphi$ only on strings $\vec v$ of length $|\vec v|<N$: Replacing $\varphi$ with some $\psi\in{\mathcal{C}}'$ of distance ${d_{{\mathcal{C}}'}}(\varphi,\psi)\leq2^{-N}$ will remain undetected, that is, ${\mathcal{M}}^{\psi}$ on input $\vec u$ will behave the same way, and in particular still terminate after $N$ steps. This establishes continuity of $t_{\mathcal{M}}(\cdot,\vec u)$.
By compactness of ${\operatorname{dom}}({\mathcal{F}})$, the following maxima thus exist: $$t_{\mathcal{M}}(\vec u) \;:=\; \max \big\{t_{\mathcal{M}}(\varphi,\vec u)\::\:\varphi\in{\operatorname{dom}}({\mathcal{F}})\big\} \qquad t_{\mathcal{M}}(n)\;:=\;\max\big\{ t_{\mathcal{M}}(\vec u)\::\: \vec u\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{n}\big\}$$ 2. As mentioned in the proof of (a), ${\mathcal{M}}^{\varphi}$ and ${\mathcal{M}}^{\psi}$ will behave identically on all inputs $\vec u\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^n$ for $\varphi,\psi\in{\operatorname{dom}}({\mathcal{F}})$ with ${d_{{\mathcal{C}}'}}(\varphi,\psi)\leq2^{-t(n)}$: Meaning ${\mathcal{F}}(\varphi)$ and ${\mathcal{F}}(\psi)$ have distance $\leq 2^{-n}$. 3. By hypothesis, ${\mathcal{F}}\big(\varphi\big)(\vec u)\in\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}$ depends only on the restriction $\displaystyle\varphi\big|_{\{{\textup{\texttt{0}}\xspace},{\textup{\texttt{1}}\xspace}\}^{<m}}$ for $m:=\mu(n)$ and $n:=|\vec u|$. Thus $$\psi\big(\vec u,\varphi(),\varphi({\textup{\texttt{0}}\xspace}),\varphi({\textup{\texttt{1}}\xspace}),\varphi({\textup{\texttt{0}}\xspace}{\textup{\texttt{0}}\xspace}),\ldots\varphi({\textup{\texttt{1}}\xspace}^{m-1})\big) \;:=\;{\mathcal{F}}\big(\varphi\big)(\vec u)$$ is a well-defined oracle. And, for given $\vec u$ and $\varphi$, making this query of length ${\mathcal{O}}(n+2^m)$ recovers the value ${\mathcal{F}}\big(\varphi\big)(\vec u)$. Representation Theory of Compact Metric Spaces {#ss:Future} ---------------------------------------------- Both stream and hyper representations translate (notions of computability and bit-complexity) to their co-domains from ‘universal’ [@Benyamini] compact structures (which are already naturally equipped with formal conceptions of computing, namely) ${\mathcal{C}}$ and ${\mathcal{C}}'$, respectively.
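The single long query constructed in the proof of (c) — all values $\varphi(\vec v)$ for $|\vec v|<m$ packed into one string of length $2^m-1$ — can be sketched as a toy Python model (the helpers `pack` and `lookup` are illustrative, not part of any formal machine model):

```python
def pack(phi, m):
    """Concatenate phi's bits on all binary strings of length < m,
    in length-lexicographic order: 2^m - 1 bits in total."""
    bits = []
    for length in range(m):
        for i in range(2 ** length):
            v = format(i, "b").zfill(length) if length else ""
            bits.append(str(phi(v)))
    return "".join(bits)

def lookup(table, v):
    """Recover phi(v) from the packed table: the strings of length l
    occupy offsets 2^l - 1, ..., 2^(l+1) - 2."""
    offset = 2 ** len(v) - 1 + (int(v, 2) if v else 0)
    return int(table[offset])

phi = lambda s: s.count("1") % 2          # a sample oracle
table = pack(phi, 4)                      # 15 bits
assert all(lookup(table, v) == phi(v)
           for v in ["", "0", "1", "01", "110"])
```

A machine given `table` as a single oracle answer can thus recover any $\varphi(\vec v)$ with $|\vec v|<m$ locally, which is exactly why the query length ${\mathcal{O}}(n+2^m)$ suffices.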
Matthias Schröder \[personal communication 2017\] has suggested a third candidate domain of generalized representations: the Hilbert Cube ${\mathcal{H}}=\prod_{j\geq0} [0;1]$ equipped with metric ${d_{{\mathcal{H}}}}(\bar x,\bar y)=\sup_j |x_j-y_j|/2^j$; recall Example \[x:Entropy\]a). This parallels earlier developments in continuous computability theory considering *equilogical* [@DBLP:journals/tcs/BauerBS04] and *quotients of countably-based topological* (=QCB) spaces [@DBLP:conf/cie/Schroder06]. And it suggests the following generalization: \[d:Representation\] Fix a compact metric space $({{\mathfrak{X}}},D)$ with entropy $\Theta$. 1. A *${{\mathfrak{X}}}$-representation* of another compact metric space $(X,d)$ is a surjective partial mapping $\xi:\subseteq{{\mathfrak{X}}}\twoheadrightarrow X$. 2. Let $\eta$ denote the entropy of $(X,d)$.\ Call $\xi$ *linearly admissible* if it has a modulus of continuity $\kappa$ such that\ (i) $\Theta\circ\kappa\leq{\mathcal{O}}(\eta)$, i.e., $\exists C\in{\mathbb{N}}\; \forall n\in{\mathbb{N}}: \; \big(\Theta\circ\kappa\big)(n)\leq C+C\cdot \eta(n+C)$\ and (ii) for every uniformly continuous partial surjection $\zeta:\subseteq{{\mathfrak{X}}}\twoheadrightarrow X$ it holds $\zeta{\preccurlyeq_{\rm O}}\xi$, meaning: There exists a map $F:\subseteq{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi)$ such that $\zeta=\xi\circ F$ and, for every modulus of continuity $\nu$ of $\zeta$, $F$ has a modulus of continuity $\mu$ satisfying $\mu\circ\kappa\leq{\mathcal{O}}(\nu)$. 3.
Call $\xi$ *polynomially admissible* if it has a modulus of continuity $\kappa$ such that\ (i) $\Theta\circ\kappa\leq{\mathcal{P}}(\eta)$, and\ (ii) for every uniformly continuous partial surjection $\zeta:\subseteq{{\mathfrak{X}}}\twoheadrightarrow X$ it holds $\zeta{\preccurlyeq_{\rm P}}\xi$, meaning: There exists a map $F:\subseteq{\operatorname{dom}}(\zeta)\to{\operatorname{dom}}(\xi)$ such that $\zeta=\xi\circ F$ and, for every modulus of continuity $\nu$ of $\zeta$, $F$ has a modulus of continuity $\mu$ satisfying $\mu\circ\kappa\leq{\mathcal{P}}(\nu)$. 4. $({{\mathfrak{X}}},D)$ is linearly/polynomially *universal* if every compact metric space $(X,d)$ admits a linearly/polynomially admissible ${{\mathfrak{X}}}$-representation. Note how (b) and (c) boil down to Definition \[d:Admissible\]d+e) in case $({{\mathfrak{X}}},D)=({\mathcal{C}},{d_{{\mathcal{C}}}})$ of linear entropy $\Theta={\operatorname{id}}$. And Theorem \[t:Linear\] now means that Cantor space is linearly universal. It seems worthwhile to identify and classify linearly/polynomially universal compact metric spaces. [^1]: Supported by the National Research Foundation of Korea (grant NRF-2017R1E1A1A03071032) and the International Research & Development Program of the Korean Ministry of Science and ICT (grant NRF-2016K1A3A7A03950702). We thank Florian Steinberg for helpful discussions. Example \[x:Gleb\] is due to Gleb Pogudin. [^2]: From the general logical perspective this constitutes a *Skolemization* as canonical quantitative refinement of qualitative continuity [@Koh08a]. We consider a modulus as an integer function due to its close connection with asymptotic computational cost: Fact \[f:Proper\]a+b). For the continuous conception of a modulus common in Analysis see Lemma \[l:Modulus\] below. [^3]: In our model, query tape and head are not altered by an oracle query.
[^4]: From the general logical perspective, entropy constitutes a *Skolemization* as canonical quantitative refinement of qualitative precompactness [@Koh08a]. [^5]: Well, almost: ${\mathrm{bin}}(a_n)$ has length between $1$ and $2n+1$ while here we make all strings in ${\operatorname{dom}}(\xi_n)$ have the same length $=\eta(n+1)$: Using strings of varying length $<\eta(n+1)$ would additionally require encoding delimiters. [^6]: Different choices lead to different representations $\xi'_1$ of $X'_1$.\[f:contingent\]
--- abstract: 'Jet radiation patterns are indispensable for the purpose of discriminating partons with different quantum numbers. However, they are also vulnerable to various contaminations from the underlying event, pileup, and radiation of adjacent jets. In order to maximize the discrimination power, it is essential to optimize the jet radius used when analyzing the radiation patterns. We introduce the concept of jet radiation radius which quantifies how the jet radiation is distributed around the jet axes. We study the color and momentum dependence of the jet radiation radius, and discuss two applications: quark-gluon discrimination and $W$ jet tagging. In both cases, smaller (sub)jet radii are preferred for jets with higher $p_T$’s, albeit due to different mechanisms: the running of the QCD coupling constant and the boost to a color singlet system. A shrinking cone $W$ jet tagging algorithm is proposed to achieve better discrimination than previous methods.' author: - Zhenyu Han title: Jet Radiation Radius --- Introduction {#sec:introduction} ============ It has often happened in high energy physics that previously discovered particles later became the “standard candles”, and led to the discoveries of new particles. For example, the $W$ and $Z$ gauge bosons were discovered by identifying the leptons they decay to; they (especially the $Z$ boson) in turn played important roles in the discovery of the Higgs boson at the Large Hadron Collider (LHC) [@higgs_discovery]. It is our hope that at the LHC and future colliders, we will be able to find new physics beyond the standard model (SM), again by first identifying particles that we are already familiar with. Therefore, it is crucial that we are able to efficiently identify all known particles, or, equivalently, their masses and quantum numbers. This is a difficult task when the particle is a QCD parton, namely a quark or a gluon, or when it decays to QCD partons.
In these cases, what we see in a collider detector is one or more energetic jets that contain multiple hadrons, and their identities are concealed. There are several handles one can utilize to distinguish jets from different origins: the jet mass or multi-jet invariant mass is determined by the original parton’s mass (for convenience, in this article we use the term “parton” to refer to either a QCD parton or a massive particle that decays to QCD partons); a massive hadronically decaying particle, such as a $W$ or $Z$ boson, results in more than one jet with similar momenta, which is different from a usual QCD jet originating from a single quark or gluon; finally, the QCD quantum number determines the amount of jet radiation and how it is distributed. The first two handles have been used routinely in high energy experiments, while the last, though it has received a significant amount of study recently[^1], has not been used as commonly. Part of the reason is that, compared with the hard kinematics, the radiation information of a jet is often associated with soft particles, making it more vulnerable to various contaminations, including the underlying event, pileup, as well as radiation from nearby jets. We will focus on tackling this difficulty in this article. Variables sensitive to the jet radiation are calculated from jet constituents. On the one hand, we would like to choose a larger jet radius such that more radiation from the initial parton is included to give us more color information; on the other hand, we would like to avoid introducing excessive contamination, and a smaller radius is preferred. Therefore, a careful choice of the jet radius is important. Similar considerations have been made in Refs. [@ua2; @snowmass; @Cacciari:2008gd; @Soyez:2010rg]. In Refs.
[@ua2; @snowmass], it is shown that the optimum jet radius is around $R=0.7$ for jet $p_T\sim O(100\gev)$ at a hadron collider, where the effects of the underlying event and hadronization are minimized, and it yields the optimum $W/Z$ mass measurement for the UA2 experiment. The study has been extended to the LHC in Refs. [@Cacciari:2008gd; @Soyez:2010rg] where the criterion for optimizing the jet radius is to obtain the best mass resolution for new massive hadronically decaying resonances. The purpose of those studies is to properly reconstruct the kinematics, [*i.e.*]{}, the momenta of the partons initiating the observed jets. Our goal is different: we want to choose the jet radius such that the discrimination power, including that from the jet radiation patterns, is maximized. For this purpose, it is important to study the distribution of the “intrinsic” radiation, [*i.e.*]{}, the radiation originating from the initial parton instead of the contaminations. In particular, it is helpful to know the amount of intrinsic radiation contained in a cone of a particular size around the jet axis. Nonetheless, there is not a unique definition of the “amount” of radiation, and the radiation distribution depends on various factors such as the color configuration, the boost to the system, [*etc.*]{}. In this article, we give a rigorous definition of the jet [*radiation radius*]{}, taking the simplest case as the starting point: a dijet color singlet system. Roughly speaking, the jet radiation radius, $R(x)$, is the jet radius one needs in order to include, on average, a fraction $x$ of the total radiation in the jet. A variable that quantifies the amount of radiation is needed in this definition, such as thrust, girth [@quark-gluon; @quark-gluon-2], or $N$-subjettiness [@nsubjettiness]. We study the dependence of $R(x)$ on the jet QCD quantum number and the jet momentum.
We examine two applications, quark-gluon discrimination and $W$ jet tagging, and show that knowledge of the jet radiation radius can help us achieve the optimum discrimination power. Interestingly, in both cases, the jet radiation radius, and thus the optimum (sub)jet radius decreases with increasing jet momentum, albeit for different reasons. For a quark or gluon jet produced from a color singlet system in its (nearly) rest frame, the jet momentum is roughly proportional to the center of mass energy, which determines the amount of radiation and how it is distributed. It is well known that the QCD coupling constant decreases when the energy scale increases, resulting in smaller radiation radius. We will see that the radiation radius approximately follows a power law dependence on the jet momentum, with the power being around $-0.5\sim -0.2$. Therefore, to reduce the contaminations, we are able to use a smaller cone size to evaluate jet radiation variables for higher jet momentum. For the case of a boosted $W$ boson (or other massive color singlet particles), the radiation scale is fixed to the $W$ mass, while the boost makes the radiated particles collimated. This results in a radiation radius inversely proportional to the jet momentum, or subjet momentum if the $W$ is identified as a single jet. In light of that, we propose a new $W$ tagging algorithm to deal with high pileup. In this algorithm, we start as usual with a fat jet that contains a $W$ boson or its QCD counterpart, and use a jet grooming method [@filtering; @pruning; @trimming] to reconstruct the kinematics of the $W$ decay. Then, when calculating a jet radiation variable, we use two much smaller cones around the two leading subjets’ axes, with the cone sizes inversely proportional to the subjets’ $p_T$’s. 
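As a rough sketch of this shrinking-cone selection: for a boosted $W$, the decay opening angle is of order $2m_W/p_T$, so the cone around each subjet can shrink accordingly. In the Python sketch below, the scaling constant, the clamping range, and the helper names are illustrative choices, not the tuned values of the analysis:

```python
import math

M_W = 80.4  # GeV

def shrinking_cone_radius(subjet_pt, c=1.0, r_min=0.1, r_max=0.4):
    """Hypothetical cone size ~ c * 2*m_W/pT, clamped to a sane range;
    c and the clamps are illustrative, not fitted values."""
    return min(r_max, max(r_min, c * 2.0 * M_W / subjet_pt))

def particles_in_cones(particles, axes, radii):
    """Keep particles lying within any of the per-axis cones;
    particles and axes are (pt, eta, phi) tuples."""
    kept = []
    for pt, eta, phi in particles:
        for (_, ax_eta, ax_phi), r in zip(axes, radii):
            dphi = math.pi - abs(math.pi - abs(phi - ax_phi) % (2 * math.pi))
            if math.hypot(eta - ax_eta, dphi) < r:
                kept.append((pt, eta, phi))
                break
    return kept
```

Radiation variables such as girth would then be computed from the kept particles rather than from all constituents of the fat jet, which is what suppresses the pileup contribution scaling with the jet area.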
With this “shrinking cone” algorithm, we find a 30% (60%) improvement in the statistical significance for jet $p_T=300\gev$ ($150\gev$), when the average number of pileup events is 60, compared with methods not using the shrinking cones. The rest of the article is organized as follows. In Section \[sec:definition\], we give the definition of the jet radiation radius. In Section \[sec:quark-gluon\], we discuss the difference in the radiation radius between a quark jet and a gluon jet in a dijet color singlet system, and demonstrate that the optimum jet radius for quark-gluon discrimination decreases for increasing jet $p_T$. Section \[sec:wtag\] is devoted to the discussion of the radiation radius in a boosted system, and in particular its role in $W$ jet tagging. Section \[sec:discussions\] contains some discussions. Definition {#sec:definition} ========== A jet has a finite radius precisely because of the QCD radiation and hadronization. Therefore, it seems redundant to have a definition of a [*radiation radius*]{}. However, the usual purpose of using a finite radius for jet clustering is to ensure most of the jet radiation is included in the jet such that one can infer the initial partons’ momenta correctly. For this purpose, the radiation gives us difficulties rather than providing us with useful information. Moreover, due to the collinear singularity, the momentum of a jet is concentrated in the “core”, that is, the center of the jet. Therefore, it is not essential to control precisely the jet radius, and multiple choices of jet radius coexist in high energy experiments. For the purpose of determining the quantum number of a jet, the information from the QCD radiation is essential and one should be more careful about choosing the optimum jet radius. As mentioned in the introduction, this is particularly important when the jet is contaminated by other hadronic activities in the event, including the underlying event, pileup and radiation from nearby jets.
Therefore, it is useful to define a jet radiation radius, which is a measure of how the radiation is distributed. Knowing the “intrinsic”, [*i.e.,*]{} uncontaminated, jet radiation radii corresponding to partons with different quantum numbers will allow us to have a generic understanding of how to choose a jet radius to better deal with the contaminations. As we will show later, it also allows us to engineer new algorithms that provide better discrimination powers. The first jet definition was given by Sterman and Weinberg [@sterman-weinberg]. In Ref. [@sterman-weinberg], a hadronic event from an $e^+e^-$ collision is classified as a two jet event if at least a fraction of $1-\epsilon$ of the event’s total energy is contained in two cones of opening half-angle $\delta$, where $\epsilon$ is a small fraction. Obviously, for fixed $\epsilon$, when we increase $\delta$, the fraction of hadronic events that are classified as two jets, $f_2$, will also increase. At next-to-leading order, $f_2$ is given by [@qcd-collider] $$f_2=1-8C_F\frac{\alpha_S}{2\pi}\left\{\ln\frac1\delta\left[\ln\left(\frac1{2\epsilon}-1\right)-\frac34+3\epsilon\right]+\frac{\pi^2}{12}-\frac{7}{16}-\epsilon+\frac{3}{2}\epsilon^2+O(\delta^2\ln\epsilon)\right\}. \label{eq:f2}$$ When $\epsilon$ is small, we keep only the leading term and find $$\delta_q\sim\text{exp}\left[-\frac{\pi(1-f_2)}{4C_F\alpha_S(s)\ln\frac{1}{2\epsilon}}\right]. \label{eq:deltaq}$$ In Eq. (\[eq:deltaq\]), we have added a subscript “$q$” because hadronic events in an $e^+e^-$ machine are dominated by quark-antiquark pairs. In such a machine, the event is not contaminated by the underlying event and initial state QCD radiation[^2], and two-jet events are dominantly initiated from a pair of back-to-back high energy quarks. Therefore, $\delta_q$ can be viewed as the “intrinsic” size of a quark jet[^3]. From Eq. (\[eq:deltaq\]), we see the definition of the size $\delta_q$ depends on two parameters, $\epsilon$ and $f_2$.
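To get a feeling for Eq. (\[eq:deltaq\]), one can evaluate it with a one-loop running coupling. The Python sketch below uses illustrative values of $\Lambda_{\rm QCD}$, $n_f$, $f_2$, and $\epsilon$ (not fitted numbers), keeping only the leading logarithm $\ln\frac{1}{2\epsilon}$; the exponentiated $1/\alpha_S$ turns the logarithmic running into an approximate power-law shrinkage of $\delta_q$ with energy:

```python
import math

C_F = 4.0 / 3.0

def alpha_s(q, lam=0.2, nf=5):
    """One-loop running coupling; Lambda_QCD (GeV) and nf are
    illustrative choices."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(q ** 2 / lam ** 2))

def delta_q(q, f2=0.9, eps=0.05):
    """Leading-order quark-jet angular size, with the small-epsilon
    leading log ln(1/(2 eps))."""
    return math.exp(-math.pi * (1.0 - f2) /
                    (4.0 * C_F * alpha_s(q) * math.log(1.0 / (2.0 * eps))))

# delta_q shrinks monotonically as the energy scale grows:
sizes = [delta_q(q) for q in (100.0, 200.0, 400.0, 800.0)]
assert all(a > b for a, b in zip(sizes, sizes[1:]))
```

Doubling the scale reduces $\alpha_S$ only logarithmically, but the $1/\alpha_S$ in the exponent amplifies this into the approximate power law discussed later.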
Similarly, if the system under study is initiated from two hard gluons (in the color singlet configuration), we obtain the angular size, $\delta_g$, of a gluon jet. At the leading order, $\delta_g\sim \delta_q^{C_F/C_A}$ [@qcd-collider]. Therefore, $\delta_q$ or $\delta_g$ is a measure of how the radiation is distributed, which is sensitive to the partons’ color structure. This motivates us to give a general definition of a jet [*radiation*]{} radius, by replacing the energy fraction $1-\epsilon$ with a measure of the amount of radiation. The definition of the amount of radiation is not unique, and suitable choices include thrust (denoted $T$), charged particle multiplicity (denoted $\nch$), $N$-subjettiness, [*etc*]{}. While a possible definition of the jet radiation radius could be given following the definition of the variable $\delta$, in practice it is perhaps more convenient to use the following definition. [*In a 2 (or $N$)-jet color singlet system in its center of mass frame, a jet radiation radius is defined as the jet radius such that the average amount of radiation within the 2 (or $N$) jets is a fraction $x$ of the average total radiation.*]{} In practice, the jet radiation radius can be calculated as follows. For a given ensemble of events, to get the fraction $x$, we first calculate the average “total” radiation: for an $e^+e^-$ machine, it may include all particles in the event; while for a hadron collider, we may use a set of jets clustered with a large radius as the starting point. Then we use a (smaller) radius $R$ to recluster the jets and find the amount of radiation within the leading 2 (or $N$) jets, also averaged over the ensemble. The fraction $x$ is the ratio of the two averages, and the corresponding $R$ is our jet radiation radius. We denote the radiation radius by $R_{var}(x)$, where $var=1-T, \nch, \tau_1, \ldots$ is a measure of the amount of radiation. A few comments on the above definition are in order.
First, there are multiple choices of the variables that can quantify the amount of radiation. To be a good candidate, the variable should satisfy: $a.$ it is (near) zero when there is no radiation. $b.$ it increases monotonically when the jet radius is increased, and approaches a finite value when the jet radius is taken to be infinity, [*i.e.*]{}, when all particles in the event (or the part of the event being studied) are included. We easily see that among the most commonly used variables, jet mass is a good radiation variable, while jet momentum is not. Other radiation variables include $1-T$ where $T$ is the thrust, charged particle multiplicity, $N$-subjettiness, [*etc*]{}. Second, the requirement of a color singlet system is to guarantee the jet radius is not affected by particles outside the system, otherwise the definition is ambiguous. Moreover, we have also required the system to be in the center of mass frame such that it is reasonable to use a common jet radius for all jets in the system. Even with this requirement, when the system contains more than 2 jets with different momenta, we still need a prescription on how to choose the jet radii – this is not a concern in this article because we will only discuss dijet systems. As we will see, when a dijet system is boosted, the two jets will have different momenta and it is desirable to use different radii for them. In this article, we will rely on Pythia 8 [@pythia8] simulations to calculate the jet radiation radius, leaving analytical calculations to future studies. Dijets: Quark and gluon discrimination {#sec:quark-gluon} ====================================== We start from the simplest case: dijet color-singlet systems. There are two possibilities for the initial partons, $q\bar q$ or $gg$.
Given the large number of variables that have been proposed in the literature, we will only consider a few of them that are particularly useful for our later discussions: charged particle multiplicity, girth and $N$-subjettiness. Charged particle multiplicity is simply the number of charged particles passing a certain $p_T$ threshold, or the number of tracks in the jet. Girth is defined as [@quark-gluon] $$g\equiv\sum_{i\in\text{jet}}\frac{p_T^i}{p_T^{\text{jet}}}r_i,$$ where $r_i$ is the distance in the $(y,\phi)$ plane between particle (or jet constituent) $i$ and the jet axis. Here, $y$ is the rapidity. Note the variable contains a normalization factor, $p_T^{\text{jet}}$, which is the jet $p_T$. As described in the previous section, we will need to calculate the “total radiation” starting from a fat jet (or, for the $e^+e^-$ case, starting from all particles in a hemisphere), as well as the radiation for a variety of smaller jet radii. It is then a question whether we should use the $p_T$ of the fat jet (or half of the center of mass energy for an $e^+e^-$ machine) as a common normalization factor for all jet radii, or use the jet $p_T$ of the corresponding radius. Both choices are valid, although there are subtle differences between them. We find the latter is more convenient and more commonly used, which we will stick to in the following discussions. $N$-subjettiness is derived from $N$-jettiness [@njettiness], and defined as follows [@nsubjettiness]. We define a distance measure between a particle $k$ and an axis $J$ as $\Delta R_{J,k}^\beta$, where $\Delta R=\sqrt{\Delta\eta^2+\Delta\phi^2}$, and $\beta$ is a constant. Then for $N$-axes we define $$\tilde\tau_N=\frac{1}{d_0}\sum_kp_{T,k}\min\left\{\Delta R_{1,k}^\beta,\Delta R_{2,k}^\beta,\cdots,\Delta R_{N,k}^\beta\right\}.$$ Namely, for each particle in the jet, we find the nearest axis and calculate the $p_T$ weighted distance.
Then $\tilde\tau_N$ is a sum of the distances over all particles in the jet, normalized by a factor $$d_0=\sum_ip_{T,i}(R_0)^\beta,$$ where $R_0$ is the jet radius. Finally, we find the $N$-axes such that $\tilde\tau_N$ is minimized, and the minimum value is called $N$-subjettiness and denoted $\tau_N$. When $N=1$ and $\beta=1$, we see that 1-subjettiness, $\tau_1$, is defined very similarly to girth, with slight differences in the distance measure (girth uses the rapidity $y$ while $\tau_1$ uses the pseudorapidity $\eta$) and the normalization factor. Therefore, the discussion on girth applies to $\tau_1$ as well. In general, these variables are examples of a set of variables called [*radial moments*]{} [@quark-gluon-2] that are all valid radiation variables: $$M_f=\sum_{i\in \text{jet}}\frac{p_T^i}{p_T^\text{jet}}f(r_i),$$ where $f(r)$ is a function of $r$, such as $r$, $r^2$, $r^3$…. Although it will be infrared/collinear unsafe, we may also generalize the definition to allow a non-linear dependence on $p_T^i$, for example, by summing over $p_{T,i}^\alpha r_i^\beta$. Then the particle multiplicity can be viewed as a special case where $\alpha=\beta=0$. As mentioned in the previous section, it is convenient to study events from $e^+e^-$ collisions, where we only have final state QCD radiation emitted from the partons being studied. Therefore, we study the processes $e^+e^-\rightarrow q\bar q$ and $e^+e^-\rightarrow gg$. We fix the pseudorapidity of the two outgoing partons to $\eta=0$, [*i.e.*]{}, perpendicular to the beam, and use Pythia 8 for showering and hadronization. For an $e^+e^-$ machine, it is better to use spherical coordinates. Nevertheless, we eventually would like to extend our results to a hadron collider, so we will use the $(\eta, \phi)$ coordinates for convenience. We then apply the anti-$k_t$ algorithm with various $R$’s for jet clustering, using FastJet [@fastjet].
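The radiation variables above are simple weighted sums over jet constituents. A minimal Python sketch of girth on toy constituents (with the jet $p_T$ approximated here by the scalar sum over constituents, a simplification of the exact vector-sum normalization):

```python
import math

def girth(constituents, jet_axis):
    """constituents: (pt, y, phi) tuples; jet_axis: (y, phi).
    g = sum_i (pT_i / pT_jet) * r_i, with r_i the (y, phi) distance;
    pT_jet is approximated by the scalar pT sum."""
    pt_jet = sum(pt for pt, _, _ in constituents)
    g = 0.0
    for pt, y, phi in constituents:
        dphi = math.pi - abs(math.pi - abs(phi - jet_axis[1]) % (2 * math.pi))
        g += (pt / pt_jet) * math.hypot(y - jet_axis[0], dphi)
    return g

# Toy jet: a hard core on the axis plus two soft, wide particles.
jet = [(50.0, 0.0, 0.0), (2.0, 0.3, 0.0), (2.0, 0.0, -0.3)]
g = girth(jet, (0.0, 0.0))  # dominated by the two soft constituents
```

Replacing $r_i$ by $f(r_i)$ in the accumulation line yields any of the radial moments $M_f$, and $\tau_1$ differs only in the coordinate choice and normalization.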
It is known that the anti-$k_t$ algorithm gives us circular jets, which mimic cones of size $R$ around the jet axes. Since all contaminations are proportional to the area of the jet, we draw the radiation variables as a function of $R^2$ in Fig. \[fig:vars\_ee\] for better illustration. It is clear from Fig. \[fig:vars\_ee\] that the radiation variables grow much faster at smaller $R^2$’s than larger $R^2$’s, and eventually saturate at very large $R^2$’s. This indicates that increasing $R$ beyond a certain value will only introduce more contaminations rather than provide more information, therefore, an optimum radius should exist. ![The average values of charged particle multiplicity ($\nch$) and girth ($g$) as a function of $R^2$.\[fig:vars\_ee\] ](nch_rsq_lep.pdf "fig:"){width="50.00000%"} ![The average values of charged particle multiplicity ($\nch$) and girth ($g$) as a function of $R^2$.\[fig:vars\_ee\] ](girth_rsq_lep.pdf "fig:"){width="50.00000%"} Given Fig. \[fig:vars\_ee\], it is easy to read the radiation radius $R(x)$ by locating the radius $R$ where $\langle \nch\rangle$ or $\langle g\rangle$ is $x$ times the saturation value. We then turn to the study of the $p_T$ dependence of the jet radiation radius.
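Reading off $R(x)$ from such a curve amounts to a simple interpolation. A Python sketch with a made-up $\langle g\rangle(R)$ table (the numbers are purely illustrative, not the simulated curves):

```python
def radiation_radius(var_vs_R, x):
    """var_vs_R: list of (R, <var>) pairs with increasing R, the last
    entry taken as the saturation value; returns R(x) by linear
    interpolation of the ratio <var>(R) / <var>_total."""
    total = var_vs_R[-1][1]
    prev_R, prev_f = var_vs_R[0][0], var_vs_R[0][1] / total
    for R, v in var_vs_R[1:]:
        f = v / total
        if f >= x:
            # interpolate between the two bracketing radii
            return prev_R + (R - prev_R) * (x - prev_f) / (f - prev_f)
        prev_R, prev_f = R, f
    return var_vs_R[-1][0]

# Illustrative <girth>(R) curve saturating at 0.10:
curve = [(0.2, 0.04), (0.4, 0.07), (0.6, 0.085), (0.8, 0.095), (1.2, 0.10)]
r70 = radiation_radius(curve, 0.7)  # radius enclosing 70% of the radiation
```

The same helper applies unchanged to $\langle\nch\rangle(R)$ or any other saturating radiation variable.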
It is well known that the jet angular size $\delta$ decreases with the jet momentum because of the RGE running of the strong coupling constant $g_S$. This can be easily seen from Eq. (\[eq:deltaq\]): the exponentiation combined with the logarithmic running of $\alpha_S$ results in a power law decrease in $p_T$ for $\delta_q$. Similarly, the jet radius as defined in Section \[sec:definition\] also scales with the jet momentum. In Fig. \[fig:rx\_pt\], we show the $p_T$ dependence of the jet radiation radius for $var=\nch$ and $var = girth$. ![The jet radiation radius $R_{\nch}(x)$ and $R_{g}(x)$ as a function of jet $p_T$, for x=0.6, 0.7, 0.8. \[fig:rx\_pt\]](rxtracks_pt.pdf "fig:"){width="50.00000%"} ![The jet radiation radius $R_{\nch}(x)$ and $R_{g}(x)$ as a function of jet $p_T$, for x=0.6, 0.7, 0.8. \[fig:rx\_pt\]](rxgirth_pt.pdf "fig:"){width="50.00000%"} We see that the radiation radius always decreases as the jet $p_T$ increases. The dependence on $p_T$ abides by an approximate power law with the power varying between -0.5 and -0.2 for the cases in Fig. \[fig:rx\_pt\]. The immediate conclusion is, in the presence of contamination, if we would like to obtain the best performance for quark-gluon discrimination, we should use a shrinking radius for increasing $p_T$.
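The quoted power can be extracted by a least-squares fit in log–log space. A Python sketch on synthetic data generated from a known power law (purely illustrative input, not the simulated points):

```python
import math

def power_law_exponent(pts, radii):
    """Least-squares slope of log(R) vs log(pT), i.e., k in R ~ pT^k."""
    xs = [math.log(p) for p in pts]
    ys = [math.log(r) for r in radii]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))

# Synthetic data obeying R = 2.0 * pT^{-0.3} (made up for illustration):
pts = [50.0, 100.0, 200.0, 400.0]
radii = [2.0 * p ** -0.3 for p in pts]
assert abs(power_law_exponent(pts, radii) + 0.3) < 1e-9
```

Applied to curves like those of Fig. \[fig:rx\_pt\], the fitted exponent would land in the quoted $-0.5$ to $-0.2$ range.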
At hadron colliders, this is true even without the contamination from pileup. To see this, we consider dijet events at the LHC[^4]. We use Pythia 8 to generate $pp\rightarrow qq$ and $pp\rightarrow gg$ events, with the underlying event and initial/final state radiation turned on. Pileup is not included. For each choice of the dijet $p_T$, we cluster the events with varying $R$ using the anti-$k_t$ algorithm and calculate the corresponding $\nch$ and girth for the two leading jets. Only tracks with $p_T>0.5\gev$ are counted in $\nch$. The two variables are used as two separate discriminators. From Fig. \[fig:vars\_ee\], we see that a gluon jet usually has more radiation than a quark jet; therefore, we can apply an upper cut on $\nch$ or girth to keep more quark jets than gluon jets. We fix the efficiency of the quark jets to 50%, and evaluate the corresponding $\varepsilon_q/\varepsilon_g$, where $\varepsilon_q$ ($\varepsilon_g$) is the efficiency of quark (gluon) jets. We vary the radius between 0.1 and 1.2 and find the one that maximizes $\varepsilon_q/\varepsilon_g$ for jet $p_T$ between 50 $\gev$ and 500 $\gev$, which is plotted in Fig. \[fig:bestR\_pt\].
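The cut placement at fixed quark efficiency can be sketched as follows; the Poisson multiplicities are crude stand-ins for the Pythia $\nch$ distributions (the means, roughly in the ratio $C_A/C_F$, are our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def eff_ratio_at_fixed_eq(nch_q, nch_g, eps_q=0.5):
    """Upper cut on nch chosen so that a fraction eps_q of quark jets
    passes; returns the resulting eps_q/eps_g. Because nch is discrete,
    the realized quark efficiency is only approximately eps_q."""
    cut = np.quantile(nch_q, eps_q)
    eq = np.mean(nch_q <= cut)
    eg = np.mean(nch_g <= cut)
    return eq / eg

# toy multiplicities: gluon jets radiate more than quark jets
nch_q = rng.poisson(8.0, 100_000)
nch_g = rng.poisson(18.0, 100_000)
print(eff_ratio_at_fixed_eq(nch_q, nch_g))
```

In the study described above, this ratio is then re-evaluated for each clustering radius, and the radius maximizing it is kept.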
![The optimum jet radius for quark-gluon discrimination at the LHC (blue solid) and the corresponding efficiency ratio, $\varepsilon_q/\varepsilon_g$ (red dashed). For comparison, we also draw the efficiency ratio for fixed radius, $R=0.5$ (pink dashed). The quark jet tagging efficiency is fixed to 50% for all cases. Left: $N_{ch}$; right: girth. \[fig:bestR\_pt\]](bestR_nch.pdf "fig:"){width="50.00000%"} ![The optimum jet radius for quark-gluon discrimination at the LHC (blue solid) and the corresponding efficiency ratio, $\varepsilon_q/\varepsilon_g$ (red dashed). For comparison, we also draw the efficiency ratio for fixed radius, $R=0.5$ (pink dashed). The quark jet tagging efficiency is fixed to 50% for all cases. Left: $N_{ch}$; right: girth. \[fig:bestR\_pt\]](bestR_girth.pdf "fig:"){width="50.00000%"}

As expected, we see in Fig. \[fig:bestR\_pt\] that the optimum jet radius decreases with increasing $p_T$. In Fig. \[fig:bestR\_pt\], we have also given $\varepsilon_q/\varepsilon_g$ for a fixed jet radius, $R=0.5$, which is not significantly lower than the optimum values. Nevertheless, we do learn that a smaller than usual ($< 0.4$) jet radius is preferred for a large range of jet $p_T$ when using girth as the discriminator. This may turn out to be essential once the contamination is larger than what we have assumed, for example, when pileup is included or when we are considering a busier event topology. Moreover, we also see that the optimum radii are quite different for $\nch$ and girth, which indicates that if we would like to combine the two variables to maximize the distinguishing power, we should use two (or more) jet radii. In the above discussion, we have included all charged particles (with a $p_T$ threshold) in the calculation of $\nch$, and all charged and neutral particles when calculating girth.
In a collider detector, the momenta of the charged particles can be measured with the tracking system, while those of the neutral particles have to be measured with the calorimeters. The former is less sensitive to pileup and also has a better resolution than the latter, and the two pieces of information are complementary. We have seen that even without pileup, we obtain a better discrimination power by choosing the optimum jet radius. As we will show, in the presence of pileup, a proper consideration of the radiation radius is more important, especially when we are using the information from the neutral constituents of the jet. In order to see the effects of pileup, instead of expanding upon the discussion on quark-gluon discrimination, in the next section, we consider another important application – $W$ jet tagging in a high pileup environment.

$W$ jet tagging {#sec:wtag}
===============

Jet radiation radius in a boosted system
----------------------------------------

In order to avoid ambiguities, we defined the jet radiation radius for a dijet system in its center of mass frame, such that it is natural to use a common radius for the two jets. When the system is boosted, we need a prescription to decide how to choose the jet radii for jets with different momenta. As we discussed, the radiation is mostly concentrated around the jet axis. Therefore, we see that the jet radiation radius roughly scales as $1/|\mathbf{p}|$ from the following argument. Consider two lightlike 4-vectors, $p_1$ and $p_2$ close to the jet axis, and therefore close to each other. They are boosted to $p_1'$ and $p_2'$ under a common boost, and their opening angle $\theta$ ($\ll1$) becomes $\theta'$ ($\ll1$).
Unless the boost is well aligned with the opposite direction of the jet momentum, it changes the magnitudes of the two momenta by roughly the same fraction, [*i.e.*]{}, $$|\mathbf{p}_1'|/|\mathbf{p}_1|\approx|\mathbf{p}_2'|/|\mathbf{p}_2|.$$ Then we have $$\begin{aligned} &&p_1\cdot p_2=p_1'\cdot p_2'\\ &\Rightarrow& |\mathbf{p}_1||\mathbf{p}_2|(1-\cos\theta)=|\mathbf{p}_1'||\mathbf{p}_2'|(1-\cos\theta')\\ &\Rightarrow& \theta/\theta'\approx|\mathbf{p}_1'|/|\mathbf{p}_1|.\end{aligned}$$ Therefore, the angle between the two lightlike vectors is inversely proportional to their momenta. We extend this observation to a cone with multiple lightlike particles and draw the conclusion that the cone size shrinks when the jet is boosted. Of course, the inverse proportionality is only approximate, especially for radiation at large angles and for soft hadrons whose masses cannot be ignored. However, as we will show, this does not prevent us from applying the scaling relation in a realistic situation such as the $W$ tagging, and engineer new algorithms to achieve improvement. For illustration, we consider the case of a hadronically decaying $W$. Again, we first consider an $e^+e^-$ machine to avoid contaminations. We fix the kinematics of $e^+e^-\rightarrow W^+W^-$ such that the $W$’s are produced at $\eta=0$, each of which subsequently decays to two quarks with equal energy, also at $\eta=0$. Choosing the $W$’s momentum to be $400\gev$, we have the two partons from the $W$ decay each carrying about $200\gev$ momentum. Note that due to the large boost, the $W$ decay products are collimated and often clustered as a single jet. In this case, the two jets from the $W$ decay become two subjets. We then repeat the showering and hadronization and jet clustering procedure as we did for $e^+e^-\rightarrow$ dijet in Section  \[sec:quark-gluon\]. 
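The scaling argument above is easy to verify numerically. In this sketch (helper names are ours) we boost two nearly collinear lightlike vectors along the jet axis and compare the change in the opening angle with the change in momentum:

```python
import numpy as np

def boost_x(p, beta):
    """Boost a four-vector (E, px, py, pz) along the x axis."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    E, px, py, pz = p
    return np.array([gamma * (E + beta * px), gamma * (px + beta * E), py, pz])

def opening_angle(p1, p2):
    v1, v2 = p1[1:], p2[1:]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def lightlike(theta):
    """Unit-energy massless vector at angle theta from the x axis."""
    return np.array([1.0, np.cos(theta), np.sin(theta), 0.0])

# two lightlike vectors with a small opening angle near the x axis,
# boosted along the jet (x) direction
p1, p2 = lightlike(0.0), lightlike(0.02)
beta = 0.98
q1, q2 = boost_x(p1, beta), boost_x(p2, beta)

theta_ratio = opening_angle(p1, p2) / opening_angle(q1, q2)
momentum_ratio = np.linalg.norm(q1[1:]) / np.linalg.norm(p1[1:])
print(theta_ratio, momentum_ratio)  # both close to 9.95
```

The two ratios agree at the percent level, confirming $\theta/\theta'\approx|\mathbf{p}'|/|\mathbf{p}|$ for small angles.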
Comparing with a $W$ decaying in its center of mass frame, we should be able to obtain the same amount of radiation in the leading two jets for the boosted case by using a (sub)jet radius that is 1/5 of the center-of-mass case. In Fig. \[fig:wdecay\], we plot $\langle\nch\rangle$ as a function of the (sub)jet radius for the two cases, which confirms our expectation to a good approximation. Thus, we can extend our definition of the jet radiation radius to a boosted color singlet system and conclude that the radiation radius is roughly inversely proportional to the boost. ![The average number of tracks as a function of the (sub)jet radius for a $W$ at rest and a boosted $W$. The $x$ axis is the jet radius for a $W$ at rest, and 5 times the subjet radius for a boosted $W$. We set the two subjets’ momenta to be equal in magnitude such that we can use the same subjet radius. The number shown is summed over the two leading (sub)jets.[]{data-label="fig:wdecay"}](wdecay.pdf){width="60.00000%"} In $W$ jet tagging, after a jet grooming algorithm, the remaining background will be QCD jets that kinematically resemble a boosted $W$ decay. Besides the case where two unrelated jets accidentally merge into one fat jet, most of the background jets come from relatively hard QCD splittings. For these QCD jets, the remaining handle is the difference in the radiation patterns. We have just shown that for a boosted $W$, a smaller jet radius is needed to include most of the radiation in the jet. We then need to know whether the same holds for a QCD jet with a similar kinematic configuration. For this purpose, we consider the process $e^+e^-\rightarrow \bar qqg$, with $q$ and $g$ having exactly the same kinematic configuration as the boosted $W$ decay discussed above. Therefore, if we use a large $R$ to cluster the event, the $qg$ pair will be clustered as a single fat jet with two hard subjets. We will call this jet a 2-prong QCD jet.
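The factor of 1/5 used above follows from simple kinematics: in the $W$ rest frame each quark carries $E = m_W/2 \approx 40\gev$, while in the boosted configuration each carries about $200\gev$, so opening angles (and hence cone sizes) shrink by roughly that ratio. A quick check of the arithmetic:

```python
M_W = 80.4          # W mass in GeV
e_rest = M_W / 2.0  # quark energy in the W rest frame
e_boost = 200.0     # quark momentum quoted above for a 400 GeV W

print(e_boost / e_rest)  # ~5: the cone-size reduction factor
```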
Again, we shower and hadronize the event, and use various smaller $R$’s to cluster the events. We count the number of charged particles in the two subjets that correspond to the $qg$ pair, which is compared to the $W$ jet (Fig. \[fig:nch\_w\_qcd\]). From Fig. \[fig:nch\_w\_qcd\], we clearly see that a 2-prong QCD jet tends to have more radiation than a $W$ jet. Also, we see that the number of tracks in the QCD jet does not fully saturate, but it comes close to saturation at $R=0.35$, where the $W$ jet is almost saturated. It is conceivable that we do not gain more discrimination power by going to larger $R$’s, because of the increasing contaminations. While it is as expected that the radiation is mostly contained in small cones of $R\sim 0.3$ for a boosted $W$, it is to some extent counter-intuitive for a 2-prong QCD jet – we see in Fig. \[fig:vars\_ee\] that $\nch$ does not saturate for a generic QCD jet even at $R\sim1.0$. Qualitatively, this can be understood using the dipole language: the QCD splitting $q\rightarrow qg$ creates a color singlet dipole. Since we have fixed the kinematics to be the same as a boosted $W$, this dipole also has a small energy scale and it behaves very similarly to a boosted $W$. Therefore, the radiation from this dipole is also confined in small cones. Another color dipole exists from the initial $e^+e^-\rightarrow \bar q q$ production, which connects the 2-prong QCD jet to the jet in the opposite hemisphere. The radiation from this dipole is not confined, but it only contributes a small fraction of the radiation of the 2-prong jet. Thus, we find that most of the radiation is contained in two small cones. Of course, this explanation is crude, and theoretically it is very interesting to study the jet radiation distribution for special kinematic configurations. In the above example, we have taken the two partons from the boosted $W$ decay to have the same momenta, which allows us to use the same cone size for jet clustering.
It is of course not the case for a generic $W$ decay, for which we should use two different cone sizes for the two subjets, inversely proportional to their $p_T$’s. As we will show, this motivates us to design a new, improved $W$ tagging method, using different cone sizes when evaluating the radiation variables. ![The average number of tracks for a boosted $W$ and a 2-prong QCD jet as a function of the subjet radius, for fixed kinematic configurations (see text). The number shown is the sum of the two leading subjets.\[fig:nch\_w\_qcd\]](nch_w_qcd.pdf){width="60.00000%"}

$W$ tagging with pileup
-----------------------

Besides the fact that the $W$ has a fixed mass, a boosted $W$ differs from a QCD (quark or gluon) jet in two other aspects. First, a $W$ jet contains two hard subjets with balanced momenta, while a QCD jet more often has only one hard subjet. Second, the $W$ boson is a color singlet particle which has a different radiation pattern from a QCD jet. A jet grooming method [@filtering; @pruning; @trimming] is efficient for exploiting the first difference, while after grooming we can further study variables sensitive to the radiation patterns. In Refs. [@wtag; @tracking], we showed that the two pieces of information should be combined to obtain the optimum discrimination power. In those studies, the contamination from initial state radiation and the underlying event is included and does not significantly affect the discrimination power. On the other hand, pileup, by which we mean multiple collisions in a beam crossing, may become the main obstacle to $W$ tagging. In particular, it has a significant impact on the efficiency of the radiation variables. We illustrate this by considering two variables: the jet mass after the filtering/mass drop (MD) procedure, $m_\filt$, defined in Ref. [@filtering], and the $N$-subjettiness ratio, $\tau_{21}\equiv\tau_2/\tau_1$ [@nsubjettiness]. As shown in Ref.
[@tracking], after the filtering/MD procedure, the variable $\tau_{21}$ becomes an efficient variable for measuring the amount of radiation, and it has small correlation with $m_\filt$. Therefore, we adopt a two step cut-and-count method for $W$-tagging, cutting on $m_\filt$ first and then on $\tau_{21}$. The signal ($W$-jets) efficiencies and background (QCD jets) mistag rates in the two steps are denoted ${\varepsilon_S}(m_\filt), {\varepsilon_B}(m_\filt)$ and ${\varepsilon_S}(\tau_{21}), {\varepsilon_B}(\tau_{21})$. The final efficiencies are the products of those in the two steps, for example, ${\varepsilon_S}(\text{final})= {\varepsilon_S}(m_\filt)\cdot{\varepsilon_S}(\tau_{21})$. Given the efficiencies, we can quantify the change in the significance by ${\varepsilon_S/\sqrt{\varepsilon_B}}$, [*i.e.*]{}, we achieve an improvement when ${\varepsilon_S/\sqrt{\varepsilon_B}}> 1$. We use Pythia 8 to simulate all-hadronic $WW$’s as our signal events and QCD dijets as the background. To simulate pileup, we turn on all soft QCD processes in Pythia 8, and add them on top of each signal/background event. The number of pileup events follows a Poisson distribution with an expectation value of ${\langle N_{\text{pu}}\rangle}$. We then find jets using FastJet [@fastjet] with the anti-$k_t$ algorithm ($R=1.0$). The two leading jets in each event are included in the following analysis. In Fig. \[fig:pileup\], we show the effect of pileup events when ${\langle N_{\text{pu}}\rangle}= 60$, for jet $p_T=300\gev$[^5]. We see that without pileup, the filtering/MD method is efficient to reconstruct the $W$ mass. We apply a mass window cut $(60,100)\gev$ [^6] on $m_{\filt}$ and obtain a gain in the significance, ${\varepsilon_S}(m_\filt)/\sqrt{{\varepsilon_B}(m_\filt)}=1.78$ (see Table \[tab:significance\]). The $\tau_{21}$ distribution after the $m_\filt$ cut is shown in Fig. \[fig:pileup\]. 
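The two-step bookkeeping is simple enough to spell out. Plugging in the zero-pileup numbers quoted in the text (mass window $(60,100)\gev$, then the $\tau_{21}$ cut at $\varepsilon_S=0.5$) reproduces the total significance gain of 2.92 listed in Table \[tab:significance\]; the function name is ours:

```python
def combined_significance(eps_s_mass, eps_b_mass, eps_s_tau, eps_b_tau):
    """Total efficiencies are products of the per-step ones; the gain in
    S/sqrt(B) is eps_S(total) / sqrt(eps_B(total))."""
    eps_s = eps_s_mass * eps_s_tau
    eps_b = eps_b_mass * eps_b_tau
    return eps_s / eps_b ** 0.5

# <Npu>=0 numbers: mass step (0.64, 0.12), tau21 step (0.50, 0.10)
print(combined_significance(0.64, 0.12, 0.50, 0.10))  # ~2.92
```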
Since all jets passing the filtering/MD procedure are required to have two hard subjets, we see that Fig. \[fig:pileup\] confirms our observation in the previous subsection that a 2-prong QCD jet tends to have more radiation than a $W$ jet. We then impose an upper cut on $\tau_{21}$, and obtain $\varepsilon_B(\tau_{21})=0.10$ at $\varepsilon_S(\tau_{21})=0.5$ – an improvement in the significance of 1.58. This number is very close to the maximum ${\varepsilon_S}(\tau_{21})/\sqrt{{\varepsilon_B}(\tau_{21})}$ we can get, 1.61, by scanning the $\tau_{21}$ cut, which occurs at ${\varepsilon_S}(\tau_{21})=0.38$ and ${\varepsilon_B}(\tau_{21})=0.055$.

![The variables $m_{\text{filt}}$ and $\tau_{21}$ before (top) and after (bottom) including the pileup events.\[fig:pileup\]](mfilt_pileup0.pdf "fig:"){width="50.00000%"} ![The variables $m_{\text{filt}}$ and $\tau_{21}$ before (top) and after (bottom) including the pileup events.\[fig:pileup\]](tau21_pileup0.pdf "fig:"){width="50.00000%"} ![The variables $m_{\text{filt}}$ and $\tau_{21}$ before (top) and after (bottom) including the pileup events.\[fig:pileup\]](mfilt_pileup60.pdf "fig:"){width="50.00000%"} ![The variables $m_{\text{filt}}$ and $\tau_{21}$ before (top) and after (bottom) including the pileup events.\[fig:pileup\]](tau21_pileup60.pdf "fig:"){width="50.00000%"}

After including the pileup, we see that the filtered mass is shifted to larger values and the mass peak is broader. Choosing the mass window to be $(80, 120)$, we obtain a signal efficiency of 0.48 and an increase in $S/\sqrt{B}$ of 1.47, which is smaller than in the case without pileup. Moreover, we obtain almost no improvement by further using $\tau_{21}$: scanning the cut on $\tau_{21}$, the biggest significance improvement is 1.02, for ${\varepsilon_S}(\tau_{21})=0.89$ and ${\varepsilon_B}(\tau_{21})=0.76$. In Ref. [@subtraction], a method for pileup subtraction is proposed. In this method, one first obtains the pileup $p_T$ and mass densities for a given event by dividing the event into patches and taking the medians. Then, for a generic infrared and collinear safe jet shape variable, one determines its sensitivity to pileup and extrapolates to its zero-pileup value using the obtained densities. Applying the method to $m_\filt$ for the $W$ jets and the QCD dijets, we find that the mass peak is largely restored to its original position (Fig. \[fig:subtraction\]). Using the mass window $(60, 100)\gev$, we increase the significance by a factor of 1.51. On the other hand, applying the same subtraction to $\tau_{21}$, we see limited improvement. The best ${\varepsilon_S}(\tau_{21}^{\text{subtr}})/\sqrt{{\varepsilon_B}(\tau_{21}^{\text{subtr}})}$ we can get is 1.08. This shows that, compared with the kinematics, the radiation information is much more vulnerable to soft contaminations. Therefore, a careful consideration of the jet radiation radius is essential, which is the subject of the next subsection.
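The full method of Ref. [@subtraction] extrapolates each jet shape to its zero-pileup value using event-by-event densities; the sketch below shows only the first ingredient, a median-based density estimate, together with the simplest area-based $p_T$ correction (function names and numbers are ours, for illustration):

```python
import numpy as np

def pileup_pt_density(patch_pts, patch_area):
    """Estimate the pileup pT density rho from equal-area patches;
    the median is robust against the few patches containing hard jets."""
    return np.median(patch_pts) / patch_area

def subtract_jet_pt(jet_pt, jet_area, rho):
    """Area-based correction: subtract the pileup expected in the jet."""
    return jet_pt - rho * jet_area

# one patch (50 GeV) contains a hard jet; the median ignores it
rho = pileup_pt_density([10.0, 12.0, 11.0, 50.0, 9.0], 0.4)
print(rho, subtract_jet_pt(300.0, 3.14, rho))
```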
![The variables $m_{\text{filt}}$ and $\tau_{21}$ after the pileup subtraction method described in Ref. [@subtraction].\[fig:subtraction\]](mfiltsubtr.pdf "fig:"){width="50.00000%"} ![The variables $m_{\text{filt}}$ and $\tau_{21}$ after the pileup subtraction method described in Ref. [@subtraction].\[fig:subtraction\]](tau21subtr.pdf "fig:"){width="50.00000%"}

The shrinking cone algorithm for $W$ jet tagging
------------------------------------------------

In Section \[sec:quark-gluon\], we found that to optimize quark-gluon discrimination, we should use a smaller radius for a larger $p_T$. There, the reason for using a shrinking radius is the decrease of $\alpha_S$ with increasing momentum, and the dependence on $p_T$ follows a mild power law. A boosted $W$ is similar, except that the radiation radius is $\propto p_T^{-1}$, [*i.e.*]{}, a much stronger dependence on $p_T$. Moreover, $W$ tagging is slightly more complicated because two partons with usually different momenta are produced from the $W$ decay: first, we need to use two different radii for the two (sub)jets from the $W$ decay.
Second, in practice, in order to cluster as many $W$’s as possible into single jets, we often start with a large $R$ and later apply the filtering/MD procedure to identify the two subjets. This may be unnecessary because one can always use a smaller $R$ from the beginning, but it does provide us with a universal and convenient procedure for a large range of $p_T$’s. For these reasons, we are motivated to adopt the following procedure:

1. Use a large jet radius to cluster as many $W$’s as possible into single jets.

2. Apply the filtering/MD procedure to identify the two leading subjets.

3. Collect jet constituents that are within two cones, each centered on one of the two subjet axes. The cone size is determined by the subjet’s $p_T$, $$R_{\text{sub}}= R_{\text{ref}}(100\gev)\frac{100\gev}{p_{T,\text{sub}}}$$ where $R_{\text{ref}}(100\gev)$ is a reference radius at $p_T=100\gev$. In order to avoid excessively large cone sizes when one of the subjets has a small $p_T$, $R_{\text{sub}}$ is capped at 0.7.

4. Use the jet constituents obtained in Step 3 to calculate jet radiation variables, and combine them with the (subtracted) filtered mass to get better discrimination.

Since the key ingredient in this procedure is in Step 3, we will call this method the “shrinking cone” (SC) algorithm. The $N$-subjettiness ratio, $\tau_2/\tau_1$, calculated using this procedure is denoted $\tau_{21}^\sc$. Choosing $R_{\text{ref}}(100\gev)=0.2$, we show the $\tau_{21}^\sc$ distributions for our $W$ jets and QCD jets in Fig. \[fig:cone\]. Similar to the previous subsection, we have applied a fixed mass window cut, $(60, 100)\gev$, on the subtracted filtered mass, and only included jets passing this cut in Fig. \[fig:cone\]. Comparing with the $\tau_{21}^{\text{subtr}}$ distributions in Fig. \[fig:subtraction\], we see a better distinction between $W$ jets and QCD jets.
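Step 3 can be sketched as follows. This is a minimal illustration in $(\eta,\phi)$ coordinates with names of our own choosing, not the actual analysis code:

```python
import numpy as np

R_CAP = 0.7  # avoid excessively large cones when a subjet is soft

def shrinking_cone_radius(pt_sub, r_ref_100=0.2):
    """Cone size inversely proportional to the subjet pT, normalized to
    R_ref at pT = 100 GeV and capped at R_CAP."""
    return min(r_ref_100 * 100.0 / pt_sub, R_CAP)

def collect_constituents(constituents, subjet_axes, subjet_pts, r_ref_100=0.2):
    """Keep jet constituents within a shrinking cone around either
    subjet axis. `constituents` and `subjet_axes` are (eta, phi) pairs;
    the phi difference is wrapped into (-pi, pi]."""
    radii = [shrinking_cone_radius(pt, r_ref_100) for pt in subjet_pts]
    kept = []
    for eta, phi in constituents:
        for (aeta, aphi), r in zip(subjet_axes, radii):
            dphi = (phi - aphi + np.pi) % (2.0 * np.pi) - np.pi
            if (eta - aeta) ** 2 + dphi ** 2 <= r ** 2:
                kept.append((eta, phi))
                break
    return kept
```

The surviving constituents are then fed into the radiation variables ($\tau_{21}$, $\nch$, girth) in Step 4.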
Applying a cut on $\tau_{21}^\sc$ such that ${\varepsilon_S}(\tau_{21}^\sc)=0.5$, we obtain ${\varepsilon_B}(\tau_{21}^\sc)=0.10$ and ${\varepsilon_S}(\tau_{21}^\sc)/\sqrt{{\varepsilon_B}(\tau_{21}^\sc)}=1.45$. As expected, we are unable to achieve the original performance of this variable in the zero pileup case. However, it does contribute to the discrimination power almost as much as the filtering/MD procedure. One may also wonder whether applying the pileup subtraction procedure will improve the performance further. We have tested it and found no improvement. ![The variable $\tau_{21}$ using the shrinking cone algorithm, $R_{\text{ref}}(100\gev) = 0.2$.\[fig:cone\]](tau21cone.pdf){width="60.00000%"} The signal and background efficiencies for various combinations of mass variables and $N$-subjettiness variables are shown in Table \[tab:significance\].

|                                        | ${\langle N_{\text{pu}}\rangle}=0$ | ${\langle N_{\text{pu}}\rangle}=60$ | ${\langle N_{\text{pu}}\rangle}=60$ | ${\langle N_{\text{pu}}\rangle}=60$ |
|----------------------------------------|------------------------------------|-------------------------------------|-------------------------------------|-------------------------------------|
| filtering/MD                           | ${m_{\text{filt}}}\in (60,100)$    | ${m_{\text{filt}}}\in(80,120)$      | ${m_{\text{filt}}}^{{\text{subtr}}}\in(60,100)$ | ${m_{\text{filt}}}^{\text{subtr}}\in(60,100)$ |
| ${\varepsilon_S}$                      | 0.64                               | 0.48                                | 0.60                                | 0.60                                |
| ${\varepsilon_B}$                      | 0.12                               | 0.11                                | 0.16                                | 0.16                                |
| ${\varepsilon_S/\sqrt{\varepsilon_B}}$ | 1.84                               | 1.47                                | 1.51                                | 1.51                                |
| $N$-subjettiness                       | $\tau_{21}$                        | $\tau_{21}$                         | $\tau_{21}^{{\text{subtr}}}$        | $\tau_{21}^{\sc}$ ($R_{\text{ref}} = 0.2$) |
| ${\varepsilon_S}$                      | 0.50 (0.38)                        | 0.50 (0.89)                         | 0.50 (0.66)                         | 0.50 (0.47)                         |
| ${\varepsilon_B}$                      | 0.10 (0.054)                       | 0.25 (0.76)                         | 0.23 (0.37)                         | 0.12 (0.10)                         |
| ${\varepsilon_S/\sqrt{\varepsilon_B}}$ | 1.58 (1.61)                        | 1.00 (1.02)                         | 1.04 (1.08)                         | 1.45 (1.45)                         |
| Total: ${\varepsilon_S}$               | 0.32 (0.24)                        | 0.24 (0.43)                         | 0.30 (0.39)                         | 0.30 (0.28)                         |
| ${\varepsilon_B}$                      | 0.012 (0.0067)                     | 0.027 (0.082)                       | 0.037 (0.058)                       | 0.019 (0.016)                       |
| ${\varepsilon_S/\sqrt{\varepsilon_B}}$ | 2.92 (2.93)                        | 1.46 (1.49)                         | 1.57 (1.63)                         | 2.18 (2.18)                         |

: Signal and background efficiencies and the
improvement in $S/\sqrt{B}$. In the first step, “filtering/MD”, we use fixed mass window cuts as shown in the second row. In the second step, we further cut on the $N$-subjettiness ratio for events within the mass window, and present the results for two choices of $\varepsilon_S$: 1. fixed $\varepsilon_S=0.5$; 2. (in the parentheses) $\varepsilon_S$ maximizing $\varepsilon_S/\sqrt{\varepsilon_B}$. The last group of numbers, denoted “Total”, are the products of the two steps, for example, ${\varepsilon_S}(\text{total})= {\varepsilon_S}(m_\filt)\cdot{\varepsilon_S}(\tau_{21})$.\[tab:significance\]

A complete comparison between $\tau_{21}^\sc$ and $\tau_{21}^{\text{subtr}}$ is given in Fig. \[fig:roc\_300\], where we plot the background fake rate as a function of the signal efficiency. When making the plots, we have again fixed the mass window cut as $60\gev<{m_{\text{filt}}}^{{\text{subtr}}}<100\gev$, which gives us $(\varepsilon_S,\varepsilon_B)=(0.60,0.16)$ as the maximum values at the top-right corner. Then we scan the cut on $\tau_{21}^\sc$ and $\tau_{21}^{\text{subtr}}$ to produce the curves. It is seen that $\tau_{21}^\sc$ is a better variable for all $\varepsilon_S$’s. In Fig. \[fig:roc\_300\], we have also compared the performances of different choices of the reference radius: $\rref(100\gev)=0.1, 0.2, 0.3, 0.4$. It turns out that for most of the signal efficiencies, $\rref(100\gev)=0.2$ is preferred. Nonetheless, as long as $\rref$ is not too large, the performance does not degrade significantly. This leaves room for practical cases where a very small radius is not viable, for example, when information from the hadronic calorimeter alone is used. It is also interesting to see whether shrinking cones are better than cones of a fixed size. To see that, after obtaining the two subjets from the filtering/MD procedure, we use two cones of the same size to evaluate $\tau_{21}$, independent of the subjets’ $p_T$.
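The cut scan that produces such efficiency curves is straightforward. Below is a minimal sketch with toy $\tau_{21}$ shapes; the beta distributions are our stand-ins for the simulated signal and background samples:

```python
import numpy as np

def roc_points(tau_sig, tau_bkg, n_points=50):
    """Scan an upper cut on tau21 (smaller is more W-like) and return
    the signal and background efficiencies at each cut value."""
    cuts = np.linspace(0.0, 1.0, n_points)
    eps_s = np.array([np.mean(tau_sig <= c) for c in cuts])
    eps_b = np.array([np.mean(tau_bkg <= c) for c in cuts])
    return eps_s, eps_b

rng = np.random.default_rng(1)
tau_sig = rng.beta(2.0, 5.0, 50_000)  # toy W jets: tau21 peaks low
tau_bkg = rng.beta(5.0, 2.0, 50_000)  # toy QCD jets: tau21 peaks high
eps_s, eps_b = roc_points(tau_sig, tau_bkg)
```

Plotting `eps_b` against `eps_s` gives the fake rate versus signal efficiency curve; lower curves indicate a better discriminator.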
It turns out that by using a cone size of 0.4 (0.7), we get ${\varepsilon_S}(\tau_{21})/\sqrt{{\varepsilon_B}(\tau_{21})}=1.39$ (1.21) at ${\varepsilon_S}(\tau_{21})=0.5$. Comparing with the number from shrinking cones, 1.45, we see that $R=0.4$ is almost as good, while the performance degrades significantly for $R=0.7$. If the pileup level is higher than that assumed in this article, the preferred jet radius will fall below those adopted by the LHC collaborations. ![Signal efficiency versus background fake rate for jet $p_T=300\gev$.\[fig:roc\_300\]](roc_pt300.pdf){width="70.00000%"} In the above discussions, we have used $R=1.0$ to obtain the initial fat jet. If the $W$ $p_T$ is smaller, we will need a larger jet radius to cluster the $W$ decay products into a single jet[^7]. The contamination is even bigger because the jet area scales as $R^2$. In the following, we consider $W$’s with $p_T=150\gev$, clustered with $R=1.5$. The filtered mass after subtraction is given in the left panel of Fig. \[fig:pt150\], for both $W$ jets and QCD jets. We see that, due to the larger jet radius, the filtering/MD procedure leaves a much larger portion of the background jets in the $W$ mass window: by choosing $60\gev<m_\filt^{\text{subtr}}<100\gev$, we obtain $\varepsilon_S(m_\filt^{\text{subtr}})=0.54$ and $\varepsilon_B(m_\filt^{\text{subtr}})=0.31$, which gives us no increase in $S/\sqrt{B}$. Similarly, applying the pileup subtraction method to $\tau_{21}^{\text{subtr}}$ does not provide any improvement either. On the other hand, the $\tau_{21}^{\sc}$ variable is still efficient for separating the signal from the background, as shown in the right panel of Fig. \[fig:pt150\]. Choosing $R_\text{ref}(100\gev)=0.2$, we have $\varepsilon_B(\tau_{21}^\sc)=0.096$ at $\varepsilon_S(\tau_{21}^\sc)=0.5$, yielding ${\varepsilon_S}(\tau_{21}^\sc)/\sqrt{{\varepsilon_B}(\tau_{21}^\sc)}=1.62$. This makes jet radiation patterns the most important handle for tagging semi-boosted $W$’s. Similar to Fig.
\[fig:roc\_300\], we plot the $\varepsilon_S$ versus $\varepsilon_B$ curves for several choices of $R_\text{ref}$ in Fig. \[fig:roc\_150\], for jet $p_T=150\gev$. It turns out that $R_\text{ref}(100\gev)=0.2$ is still the best choice among the values being considered. The difference between $p_T=150\gev$ and $p_T=300\gev$ is that the performance for $p_T=150\gev$ degrades faster as we increase $R_\text{ref}$. This is because for the same $R_\text{ref}$, the actual cone size is larger for lower $p_T$, and thus more contamination from pileup is included.

![The filtered mass after subtraction and $\tau_{21}$ using the shrinking cone algorithm, for jet $p_T= 150\gev$. \[fig:pt150\]](mfiltsubtr_pt150.pdf "fig:"){width="50.00000%"} ![The filtered mass after subtraction and $\tau_{21}$ using the shrinking cone algorithm, for jet $p_T= 150\gev$. \[fig:pt150\]](tau21cone_pt150.pdf "fig:"){width="50.00000%"}

![Signal efficiency versus background fake rate for jet $p_T=150\gev$.\[fig:roc\_150\]](roc_pt150.pdf){width="70.00000%"}

Discussions {#sec:discussions}
===========

In this article, we have given a definition of the jet radiation radius, which quantifies the size of a jet due to its QCD radiation.
This definition is closely related to the jet shape (also known as jet profile) variable, which measures, on average, the fraction of momentum that is included in a cone of size $R$ around the jet axis. For the purpose of studying the jet radiation distribution, momentum is not a good measure, because a large (small) momentum does not correspond to a large (small) amount of radiation. Therefore, in our definition, we have replaced it with variables that directly measure the amount of radiation. Moreover, we have emphasized the nature of the radiation radius being [*intrinsic*]{}, [*i.e.*]{}, it is a characteristic of a parton with definite QCD quantum numbers, and should not change across different production processes and experimental setups. In particular, it is defined before various contaminations are included. It is our hope that by “factorizing” the contributions to jet shape variables into intrinsic and environmental ones, we can simplify jet substructure studies and use jet radiation patterns more efficiently to distinguish jets with different quantum numbers. A key observation in this article is that, in order to efficiently use jet radiation variables, a smaller than usual jet radius is often preferred. This is particularly true for the $W$ jet tagging method we proposed, where shrinking cones are used when calculating jet radiation variables. In a high energy experiment, either a simpler or a more complicated method may be adopted. On the one hand, due to the limitation in granularities, especially those of the hadronic calorimeter, one may not be able to use a jet radius smaller than $O(0.1)$, and/or may not be able to use continuous jet radii in the calculation. In that case, we may choose to simplify the method by choosing a few typical, but small, cone sizes. It is shown in Fig. \[fig:roc\_300\] and Fig. \[fig:roc\_150\] that increasing the jet radius within a sizable region will not significantly hurt the discrimination power.
On the other hand, one does see the advantages of using small cones; for example, $R_\text{sub}=0.2$ is preferred for subjet $p_T\sim100\gev$ when the average number of pileup events is 60. An even smaller radius might be preferred if the pileup level is higher. Therefore, ideally we would want to use the finest granularity for jet constituents, including the information from the electromagnetic calorimeter and the tracking system, as in the particle flow approach [@pf]. Moreover, we have stuck to a single choice of (sub)jet radius for each (sub)jet in this article. Similar to Ref. [@wtag], one may benefit from using two or more radii for each (sub)jet, which not only gives us the information of how much radiation there is, but also captures how it grows with increasing $R$. The shrinking cone algorithm we proposed for $W$ tagging is parallel and complementary to other pileup reduction methods and may be combined with them to obtain optimal results. We have already used it with the pileup subtraction method proposed in Ref. [@subtraction], where we see that the subtraction method is convenient for extracting the kinematic information while the shrinking cone method is more useful for obtaining the radiation information. Another set of useful techniques utilizes the fact that a charged particle from a pileup event leaves a track not originating from the primary vertex, so it can be subtracted from the jet. These methods include charged hadron subtraction [@chs], using a jet vertex fraction [@jvf] cut, and jet cleansing [@cleansing]. One may even use the charged particles from the primary vertex alone when calculating radiation variables, which still provides a lot of information about the color structure [@tracking]. To improve on these methods, one may simply apply them to cones with sizes determined by the (sub)jet $p_T$’s, as discussed in this article.
Here, we emphasize that even if we only use tracks from the primary vertex to avoid most of the contamination from pileup events, it is still useful to optimize the jet cone sizes. We have seen that this is the case for quark-gluon discrimination when pileup is turned off. We expect this consideration to be more important when dealing with events with many hard partons, where jets can be easily contaminated by nearby radiation and a large jet radius should be avoided. This happens in, for example, SUSY cascades with long decay chains. In conclusion, we have shown that knowledge of the intrinsic jet radiation radius will lead us toward optimal discrimination of jets with different quantum numbers. The author thanks Dave Soper for numerous useful discussions and comments on the manuscript. The author is in part supported by the US Department of Energy under grant numbers DE-FG02-96ER40969 and DE-FG02-13ER41986. [99]{} S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**716**]{}, 30 (2012) \[arXiv:1207.7235 \[hep-ex\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**716**]{}, 1 (2012) \[arXiv:1207.7214 \[hep-ex\]\]. J. Shelton, arXiv:1302.0260 \[hep-ph\]. J. Alitti [*et al.*]{} \[UA2 Collaboration\], Z. Phys. C [**49**]{}, 17 (1991). J. Huth [*et al.*]{} in E. L. Berger, “Research directions for the decade. Proceedings, 1990 Summer Study on High-Energy Physics, Snowmass, USA, June 25 - July 13, 1990,” Singapore, Singapore: World Scientific (1992). M. Cacciari, J. Rojo, G. P. Salam and G. Soyez, JHEP [**0812**]{}, 032 (2008) \[arXiv:0810.1304 \[hep-ph\]\]. G. Soyez, JHEP [**1007**]{}, 075 (2010) \[arXiv:1006.3634 \[hep-ph\]\]. J. Gallicchio and M. D. Schwartz, JHEP [**1110**]{}, 103 (2011) \[arXiv:1104.1175 \[hep-ph\]\]; J. Gallicchio and M. D. Schwartz, Phys. Rev. Lett.  [**107**]{}, 172001 (2011) \[arXiv:1106.3076 \[hep-ph\]\]; J. Gallicchio and M. D. Schwartz, JHEP [**1304**]{}, 090 (2013) \[arXiv:1211.7038 \[hep-ph\]\]. J.
Thaler and K. Van Tilburg, arXiv:1011.2268 \[hep-ph\]; J. Thaler and K. Van Tilburg, arXiv:1108.2701 \[hep-ph\]. J. M. Butterworth, A. R. Davison, M. Rubin and G. P. Salam, Phys. Rev. Lett.  [**100**]{}, 242001 (2008) \[arXiv:0802.2470 \[hep-ph\]\]. S. D. Ellis, C. K. Vermilion and J. R. Walsh, Phys. Rev.  D [**80**]{}, 051501 (2009) \[arXiv:0903.5081 \[hep-ph\]\]; S. D. Ellis, C. K. Vermilion and J. R. Walsh, Phys. Rev.  D [**81**]{}, 094023 (2010) \[arXiv:0912.0033 \[hep-ph\]\]. D. Krohn, J. Thaler and L. T. Wang, JHEP [**1002**]{}, 084 (2010) \[arXiv:0912.1342 \[hep-ph\]\]. G. F. Sterman and S. Weinberg, Phys. Rev. Lett.  [**39**]{}, 1436 (1977). R. K. Ellis, W. J. Stirling and B. R. Webber, Camb. Monogr. Part. Phys. Nucl. Phys. Cosmol.  [**8**]{}, 1 (1996). T. Sjostrand, S. Mrenna and P. Z. Skands, Comput. Phys. Commun.  [**178**]{}, 852 (2008) \[arXiv:0710.3820 \[hep-ph\]\]. I. W. Stewart, F. J. Tackmann and W. J. Waalewijn, Phys. Rev. Lett.  [**105**]{}, 092002 (2010) \[arXiv:1004.2489 \[hep-ph\]\]. M. Cacciari and G. P. Salam, Phys. Lett.  B [**641**]{}, 57 (2006) \[arXiv:hep-ph/0512210\]. Y. Cui, Z. Han and M. D. Schwartz, Phys. Rev. D [**83**]{}, 074023 (2011) \[arXiv:1012.2077 \[hep-ph\]\]. Z. Han, Phys. Rev. D [**86**]{}, 014026 (2012) \[arXiv:1112.3378 \[hep-ph\]\]. G. Soyez, G. P. Salam, J. Kim, S. Dutta and M. Cacciari, Phys. Rev. Lett.  [**110**]{}, 162001 (2013) \[arXiv:1211.2811 \[hep-ph\]\]. \[CMS Collaboration\], CMS-PAS-PFT-09-001. For example, A. Perloff \[CMS Collaboration\], J. Phys. Conf. Ser.  [**404**]{}, 012045 (2012). For example, The ATLAS collaboration, ATLAS-CONF-2013-083. D. Krohn, M. Low, M. D. Schwartz and L. -T. Wang, arXiv:1309.4777 \[hep-ph\]. [^1]: See Ref. [@tasi] for a review and more references. [^2]: It is still possible to have initial state electroweak radiation. [^3]: Of course, this is incorrect for special configurations. 
For example, a hard gluon, containing about half of the total energy, may be occasionally emitted at a large angle from both of the two quarks and classified as one of the two jets. However, in practice, this rarely happens and does not affect our studies of the properties of the quark jet. [^4]: We refer readers to Refs. [@quark-gluon; @quark-gluon-2] for a comprehensive study of quark-gluon discrimination. In this article, we focus on studying how the discrimination power depends on the jet size. [^5]: We apply a cut $p_T>300\gev$ at parton level in Pythia 8. Due to PDF suppressions, the events are dominated by jets with $p_T$ close to $300\gev$. We take the leading two jets after jet clustering and do not further apply any $p_T$ cut at the jet level. [^6]: This is not the best mass window based on Fig. \[fig:pileup\]. We have used a relatively wider mass window to take into account possible experimental smearing to the reconstructed mass. [^7]: Alternatively, one may start with slim jets and find $W$’s by pairing jets with invariant masses close to the $W$ mass. Our method can be easily adapted accordingly.
--- author: - | Vaibhav Srivastava$^1$, Amit Surana$^2$, Miguel P. Eckstein$^3$, and Francesco Bullo$^4$\ [$^1$Department of Mechanical and Aerospace Engineering, Princeton University]{}\ [$^2$United Technologies Research Center, Hartford]{}\ [$^3$Department of Psychology and Brain Sciences, UCSB]{}\ [$^4$Center for Control, Dynamical Systems and Computation, UCSB]{} title: | Mixed Human-Robot Team Surveillance\ *Integrating Cognitive Modeling with Engineering Design* --- The emergence of mobile and fixed sensor networks operating with different modalities, mobility, and coverage has enabled access to an unprecedented amount of information. In a variety of complex and information-rich systems, this information is processed by a human operator [@WMB:09; @CD:10]. The inherent inability of humans to handle the plethora of available information has detrimental effects on their performance and may lead to dire consequences [@TS-MR:11]. To alleviate this loss in performance of the human operator, the recent National Robotic Initiative [@EG:11] emphasizes collaboration of humans with robotic partners, and envisions a symbiotic co-robot that facilitates an efficient interaction of the human operator with the automaton. An efficient co-robotic partner will enable a better interaction between the automaton and the operator by exploiting the operator’s strengths while taking into account their inefficiencies, such as erroneous decisions, fatigue, and loss of situational awareness. The design of such co-robots requires algorithms that enable the co-robot to aid the human partner in focusing their attention on the pertinent information and to direct the automaton to efficiently collect the information. In this paper, we focus on the design of mixed human-robot teams. The purpose of mixed teams is to exploit human cognitive abilities in complex missions, and therefore, an effective model of human cognitive performance is fundamental to the team design.
It is imperative that such a model take into account the operator’s decision-making mechanisms as well as exogenous factors affecting these mechanisms, including fatigue, situational awareness, memory, and boredom. The human operator’s decisions may be erroneous, and accordingly, the quality of the operator’s decisions should be ascertained and incorporated in the mixed team design. In this paper, we demonstrate the design principles for mixed human-robot teams using the context of a persistent surveillance problem. The persistent surveillance problem involves continuous search of target regions with a team of fixed and mobile sensors. The evidence collected by the sensor network is then processed by a human operator. An efficient persistent surveillance policy has multiple objectives, including minimizing the time between subsequent visits to a region, minimizing the detection delay at each spatial region, and maximizing visits to regions with a high likelihood of an anomaly. The fundamental trade-off in persistent surveillance is between the amount of evidence collected from the visited region and the resulting delay in evidence collection from other regions. We address this trade-off by designing efficient surveillance policies that, with high probability, collect evidence from regions that are highly likely to be anomalous. We utilize cognitive models of human decision-making to ascertain the accuracy of human decisions and thus determine the likelihood of a region being anomalous from the operator’s decisions. A second key problem is how to manage the attention of an operator in charge of analyzing the evidence collected by robotic partners. In particular, depending on the importance and the processing difficulty of the collected evidence, how much time should the operator allocate to each piece of collected evidence so that her overall performance is optimal?
We model the attention allocation problem for human operators as a Markov decision process and utilize a certainty-equivalent receding horizon control framework to determine efficient attention allocation policies. [**State of the art:**]{} As a consequence of the growing interest in mixed teams, a significant effort has focused on modeling human cognitive performance and its integration with the automaton. Broadly speaking, there have been two approaches to the design of mixed teams in the context of surveillance. In the first approach, the human operator is allowed to take their time for decision-making on each task and the automaton is adaptively controlled to cater to the human operator’s cognitive requirements. In the second approach, both the human operator and the automaton are controlled. For instance, the human operator is told the duration they should spend on each task, and their decision is utilized to adaptively control the automaton. In the first approach, an operator’s performance is modeled through her reaction time on each task. The fundamental research questions under this approach include (i) optimal scheduling of the tasks to be processed by the operator [@CN-BM-JWC-MLC:08; @LFB-NP-MLC:10; @LFB-NWMB-MLC:10; @KS-CN-TT-EF:08; @KS-TT-EF:08; @JWC-etal:11; @CEN:09]; (ii) enabling shorter operator reaction times by controlling the fraction of the total time during which the operator is busy [@KS-EF:10a; @KS-EF:10b]; and (iii) efficient work-shift design to counter fatigue effects on the operator [@NDP-KAM:12]. In the second approach, an operator’s performance is modeled by the probability of making the correct decision conditioned on the duration the operator spends on the task.
The fundamental research questions under this approach include (i) optimal duration allocation to each task [@VB-RC-CL-FB:11zc; @VS-FB:12n; @MM-RR:13]; (ii) controlling operator utilization to enable better performance [@VS-AS-FB:11z]; and (iii) controlling the automaton to collect the relevant information [@VS-FP-FB:11za; @VS-KP-FB:08p; @VS-AS-FB:11z]. In this paper, we focus on the latter approach, although most of the concepts can be easily extended to the former approach. [**Contributions:** ]{}The objective of this paper is to illustrate the use of systems theory to design mixed human-robot teams. We survey models of human cognition and we present a methodology to incorporate these models into the design of mixed teams. We elucidate the design principles for mixed teams engaged in surveillance missions. The major contributions of this work are fourfold. First, we present models for several aspects of human cognition and unify them to develop a single coherent model that captures relevant features of human cognition, namely, decision-making, situational awareness, fatigue, and memory. The proposed model forms the backbone of our mixed human-robot team design. Second, we present a certainty-equivalent receding-horizon control framework to design mixed human-robot teams. Certainty-equivalent receding-horizon control is a technique to compute approximate solutions of dynamic optimization problems. We survey the fundamentals of certainty-equivalent receding-horizon control and apply them to design mixed team surveillance missions. Third, we survey robotic persistent surveillance schemes and extend them to mixed team scenarios. In the context of mixed teams, the surveillance scheme is a component of a closed-loop system in which the robotic surveillance policy affects the human performance and the human performance in turn affects the robotic surveillance policy.
We present an approach to simultaneously design robotic surveillance and operator attention allocation policies. Finally, we survey non-Bayesian quickest change detection algorithms. We demonstrate that human cognition models can be used to estimate the probability of a human decision being correct. We (i) adopt the non-Bayesian framework, and (ii) treat human decisions as binary random variables, running a change detection algorithm on these decisions to ascertain the occurrence of an anomaly within a desired accuracy. The proposed methodology can be extended to non-binary human decisions.

Human Operator Performance Modeling
===================================

Two-alternative choice task and drift-diffusion model
-----------------------------------------------------

A two-alternative choice task models a situation in which an operator has to decide among one of two alternative hypotheses. Models for two-alternative choice tasks within continuous sensory information acquisition scenarios rely on three assumptions: (a) evidence is collected over time in favor of each alternative; (b) the evidence collection process is random; and (c) a decision is made when the collected evidence is sufficient to choose one alternative over the other. Several models for two-alternative choice tasks have been proposed; see [@RB-EB-etal:06] for detailed descriptions of such models. The model considered in this paper is the drift diffusion model (DDM) [@RR:78; @RR-GM:08]. We choose the DDM because: (i) it is a simple and well characterized model; (ii) it captures a large body of behavioral and neuroscientific data; and (iii) the optimal choice of parameters in other models for two-alternative choice tasks reduces them to the DDM [@RB-EB-etal:06].
In the DDM, the evidence aggregation in favor of an alternative is modeled as: $$\label{eq:ddm} dx(t) = \mu d t + \sigma d W(t), \quad x(0)=x_0,$$ where $\mu\in \real$ is the drift rate, $\sigma \in \real_{>0}$ is the diffusion rate, $W(\cdot)$ is the standard Wiener process, and $x_0\in \real$ is the initial evidence. For an unbiased operator, the initial evidence is $x_0=0$, while for a biased operator $x_0$ captures the log-odds of the prior probability that the first alternative hypothesis is true; in particular, $x_0= \sigma^2\log \big(\pi/(1-\pi)\big)/(2\mu)$, where $\pi$ is the prior probability of the first alternative being true. For this information aggregation model, human decision-making is studied in two paradigms, namely, *free response* and *interrogation* [@RB-EB-etal:06]. In the free response paradigm, the operator takes her own time to decide on an alternative, while in the interrogation paradigm, the operator works under time pressure and needs to decide within a given time. In this section, we focus on the interrogation paradigm. A typical evolution of the DDM under the interrogation paradigm is shown in Figure \[fig:interrogation-ddm\]. The interrogation paradigm relies upon a single threshold: for a given deadline $T\in \real_{>0}$, the operator decides in favor of the first (second) alternative if the evidence collected until time $T$, i.e., $x(T)$, is greater (smaller) than a threshold $\nu \in \real$. If the two alternatives are equally likely, then the threshold $\nu$ is chosen to be zero. According to the model above, the evidence collected until time $T$ is a Gaussian random variable with mean $\mu T + x_0$ and variance $\sigma^2 T$. Thus, the probability to decide in favor of the first alternative is $$\prob (x(T)>\nu) = 1- \prob (x(T)<\nu) = 1- \Phi\Big(\frac{\nu - \mu T-x_0}{\sigma \sqrt{T}}\Big),$$ where $\Phi(\cdot)$ is the standard normal cumulative distribution function.
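As an illustration, the closed-form interrogation probability above can be checked against a direct simulation of the DDM. The following minimal sketch (the function names are ours, not from the paper) evaluates the formula via the standard normal CDF and draws Euler-Maruyama sample paths of $dx = \mu\,dt + \sigma\,dW$:

```python
import math
import random

def prob_first_alternative(mu, sigma, x0, nu, T):
    """Closed form: x(T) ~ N(mu*T + x0, sigma^2 * T), so
    P(x(T) > nu) = 1 - Phi((nu - mu*T - x0) / (sigma * sqrt(T)))."""
    z = (nu - mu * T - x0) / (sigma * math.sqrt(T))
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_interrogation(mu, sigma, x0, nu, T, dt=1e-3, rng=None):
    """Euler-Maruyama simulation of dx = mu dt + sigma dW up to the
    deadline T; returns 1 if x(T) > nu (decide the first alternative),
    and 0 otherwise."""
    rng = rng or random.Random(0)
    x = x0
    sqdt = math.sqrt(dt)
    for _ in range(int(round(T / dt))):
        x += mu * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
    return 1 if x > nu else 0
```

Because the drift and diffusion coefficients are constant, each Euler step is exactly Gaussian, so the Monte Carlo decision frequency converges to the closed-form probability for any step size.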
In this paper, we use the accuracy of the decisions made by the operator as a measure of her performance. Accordingly, we pick the probability of making the correct decision as the performance metric. We assume the drift rates to be symmetric, i.e., the drift rates are $\pm \mu$ when alternatives $0$ and $1$ are true, respectively. Consequently, the performance function when alternative $0$ is true is ${f^0: \real_{\ge 0}\times [0,1] \rightarrow [0,1)}$ defined by $$f^0(t, \pi)=1- \Phi\Big(\frac{\nu - \mu t-x_0}{\sigma \sqrt{t}}\Big).$$ Similarly, the performance function when alternative $1$ is true is ${f^1: \real_{\ge 0}\times [0,1] \rightarrow [0,1)}$ defined by $$f^1(t, \pi)= \Phi\Big(\frac{\nu + \mu t-x_0}{\sigma \sqrt{t}}\Big).$$ Given the prior probability $\pi$ of the first alternative being true, the performance function ${f: \real_{\ge 0} \times [0,1] \rightarrow [0,1)}$ is defined by $$\label{eq:performance-func} f(t, \pi ) = \pi f^0(t, \pi) + (1-\pi) f^1(t, \pi).$$ A typical evolution of the performance function $f$, for different values of $\pi$, is shown in Figure \[fig:performance\]. Note that the performance function is a sigmoid function of time.

Mixed Human-Robot Team Surveillance Setup
=========================================

We study a human-in-the-loop persistent surveillance problem. The objective of the surveillance mission is to detect within a prescribed accuracy any anomaly in a set of regions. Our setup for human-in-the-loop surveillance is shown in Figure \[fig:problem-setup\]. The setup comprises three components: (i) the autonomous system, (ii) the cognitive system, and (iii) the cognition and autonomy management system (CAMS). The autonomous system comprises a UAV that surveys a set of regions according to some routing policy. During each visit, the UAV collects evidence from the surveyed region and sends it to the CAMS, which eventually forwards it to the cognitive component.
For simplicity, we consider only one vehicle, but our approach extends easily to multiple vehicles. The cognitive component comprises a human operator who examines the evidence and decides on the absence or presence of an anomaly at the associated region. The performance of the operator depends on several variables, including the nature of the task, her situational awareness, her fatigue, her prior knowledge about the task, and her boredom. The operator sends her decision to the CAMS. The CAMS comprises three elements: (i) the decision support system, (ii) the anomaly detection algorithm, and (iii) the vehicle routing algorithm. Based on the performance metrics of the operator, the decision support system suggests the optimal duration to be allocated to a given task. The decisions made by the human operator may be erroneous. The anomaly detection algorithm is a sequential statistical algorithm that treats the operator’s decision as a binary random variable and ascertains the desired accuracy of the anomaly detection. The anomaly detection algorithm also provides the likelihood of an anomaly at each region. The vehicle routing algorithm utilizes the likelihood of each region being anomalous to determine an efficient vehicle routing policy. ![image](setup){width="\linewidth"} We denote the $k$-th region by $\mc R_k, k \in {\{1,\dots, m\}}$. We model the surveillance mission as a sequence of two-alternative choice tasks and accordingly model the operator performance by the performance function defined above. The two alternatives in this setting are the presence of an anomaly and the absence of an anomaly, respectively. We denote the performance of the operator at region $\mc R_k$ by ${f_k: \real_{\ge 0} \times (0,1) \rightarrow [0,1)}$. We study the human-robot team surveillance under the following assumptions: 1. the evidence collected in different regions is mutually independent; 2. the travel time between regions $\mc R_i$ and $\mc R_j$ is $d_{ij}, i,j\in {\{1,\dots, m\}}$; and 3.
the UAV takes time $T_k, k\in {\{1,\dots, m\}}$ to collect a sample from region $\mc R_k$. Queuing theory has emerged as a popular paradigm to model single-operator-multiple-vehicles systems [@CN-BM-JWC-MLC:08; @KS-CN-TT-EF:08; @KS-TT-EF:08; @KS-EF:10b; @KS-EF:10a; @NDP-KAM:12; @VB-RC-CL-FB:11zc]. In the same vein, we model the evidence aggregated across regions over time as a queue of decision-making tasks. The decision-making tasks arrive according to some stochastic process and are stacked in a queue. We assume that each decision-making task arrives with a processing deadline and incorporate the deadline as a soft constraint, namely, a latency penalty. The latency penalty is the penalty incurred due to the delay in processing a given task. The operator receives a reward for a correct decision on the task. The performance function of the operator is a measure of the expected reward she obtains for allocating a given duration to a given task. The decision support system suggests to the operator duration allocations to tasks such that her overall benefit per unit task, i.e., the reward per unit task minus the latency penalty per unit task, is maximized. We adopt the cumulative sum (CUSUM) algorithm [@HVP-OH:08] as our anomaly detection algorithm. The details of the CUSUM algorithm are presented in the insert entitled “Quickest Change Detection.” In particular, we run $m$ parallel CUSUM algorithms (one for each region) to detect an anomaly at any of the regions. We refer to such a set of parallel CUSUM algorithms as the *ensemble CUSUM algorithm*. We consider a simple routing policy that directs the vehicle to a randomly chosen region during each visit. The probability of selecting a region is chosen proportional to the likelihood of the region being anomalous.
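For concreteness, the following is a minimal sketch of Page's CUSUM statistic run on a stream of binary operator decisions. It assumes a simplified symmetric accuracy model, in which each decision is correct with a known probability under either hypothesis; the interface and names are our illustrative choices, not the paper's:

```python
import math

def cusum_binary(decisions, accuracies, threshold):
    """Page's CUSUM test on binary operator decisions.

    decisions[i] in {0, 1} (1 = "anomaly"); accuracies[i] is the
    estimated probability that decision i is correct (e.g., taken from
    the operator performance model). The statistic accumulates the
    log-likelihood ratio of 'anomalous' vs 'nominal', clamped at zero;
    an alarm is raised once it crosses the threshold.
    Returns the index at which the alarm fires, or None.
    """
    s = 0.0
    for i, (d, a) in enumerate(zip(decisions, accuracies)):
        # P(d | anomalous) / P(d | nominal) under the symmetric model
        if d == 1:
            llr = math.log(a / (1.0 - a))
        else:
            llr = math.log((1.0 - a) / a)
        s = max(0.0, s + llr)
        if s >= threshold:
            return i
    return None
```

An ensemble version would simply maintain one such statistic per region, updated whenever a decision about that region arrives.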
More sophisticated stochastic vehicle routing policies for surveillance are presented in the insert entitled “Stochastic Persistent Surveillance.”

Design of the Decision Support System
=====================================

We first consider the design of the decision support system. In this section, we ignore the exogenous factors and their effects on human performance. We design the decision support system under the following assumptions: 1. the performance functions of the operator on a task from region $\mc R_k$ in the absence and presence of an anomaly are ${f^0_k: \real_{\ge 0} \times (0,1) \rightarrow \real_{\ge 0}}$ and ${f^1_k: \real_{\ge 0} \times (0,1) \rightarrow \real_{\ge 0}}$, respectively; 2. each task from region $\mc R_k$ comes with a processing deadline ${T^{\textup{ddln}}}_k \in \real_{\ge 0}$; 3. based on the importance of the region, a weight $w_k \in \real_{>0}$ is assigned to each task collected from region $\mc R_k$; 4. tasks arriving to the queue while the $\ell$-th task is served are sampled from a probability distribution that assigns a probability $q_k^\ell \in (0,1)$ to region $\mc R_k$. As before, the average performance function ${f_k: \real_{\ge 0} \times (0,1) \rightarrow \real_{\ge 0}}$ at region $\mc R_k$ is defined by $$f_k(t, \pi)= (1-\pi)f_k^0(t, \pi) + \pi f_k^1 (t, \pi).$$ Under the aforementioned assumptions, each task from region $\mc R_k$ is characterized by the triplet $(f_k, w_k, {T^{\textup{ddln}}}_k)$. Decision-making tasks collected by the vehicle are stacked in a queue and are processed by the operator in a first-come-first-serve discipline. The task arrival process for this queue depends on the vehicle routing policy. Let the $\ell$-th task in the queue be from region $\mc R_{k_\ell}$ and let $n_\ell$ be the queue length when the processing of the $\ell$-th task is initiated. Let the belief of the operator about region $\mc R_{k}$ being anomalous before processing the $\ell$-th task be $\pi_{k}^{\ell-1}$.
Without loss of generality, we assume that initially the operator is unbiased about each region being anomalous, i.e., $\pi_{k}^{0}=0.5$, for each $k\in {\{1,\dots, m\}}$. Given a duration allocation $t_\ell \in \real_{>0}$ to the $\ell$-th task in the queue, the operator’s belief after processing the $\ell$-th task can be estimated using the Bayes rule as follows: $$\bar \pi_{j}^{\ell} = \begin{cases} \frac{ \pi_{j}^{\ell-1} \prob({\texttt{dec}}_\ell| H^1_{k_\ell}, t_\ell)}{(1- \pi_{j}^{\ell-1}) \prob({\texttt{dec}}_\ell| H^0_{k_\ell}, t_\ell)+ \pi_{j}^{\ell-1} \prob({\texttt{dec}}_\ell| H^1_{k_\ell}, t_\ell)}, & \text{if } j=k_{\ell}, \\ \pi_{j}^{\ell-1}, & \text{otherwise,} \end{cases}$$ where $H^0_k$ and $H^1_k$ denote the hypotheses that region $\mc R_k$ is non-anomalous and anomalous, respectively, ${\texttt{dec}}_\ell \in \{0,1\}$ is the operator’s decision, and $\prob({\texttt{dec}}_\ell|\cdot, t_\ell)$ is determined from the performance function of the operator, i.e., $$\prob({\texttt{dec}}_\ell=1| H^1_{k_\ell}, t_\ell) = f^1_{k_\ell} (t_\ell, \pi_{k_\ell}^{\ell-1}).$$ In a surveillance mission, the event that a region becomes anomalous corresponds to a change in the characteristic environment of the region, and this change may happen at an arbitrary time. It is evident from the neuroscience literature [@RR-JG-MEN:03] that in a sequential change detection task, if the belief of the operator about a region being anomalous falls below a threshold, then the operator resets her belief to the threshold value. Without loss of generality, we choose this threshold as $0.5$.
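The belief update above can be sketched as follows, with the likelihoods supplied by the performance-model probabilities $f^0$ (probability of a correct decision given no anomaly) and $f^1$ (given an anomaly); the reset of low beliefs to the unbiased value $0.5$, motivated in the text, is included as the final step. The function name and interface are illustrative:

```python
def update_belief(pi, decision, f0, f1):
    """Bayes update of the belief pi that a region is anomalous, given
    a binary operator decision and the performance-model probabilities
    f0 = P(correct | no anomaly), f1 = P(correct | anomaly)."""
    if decision == 1:
        like_h1, like_h0 = f1, 1.0 - f0   # P(dec=1 | H1), P(dec=1 | H0)
    else:
        like_h1, like_h0 = 1.0 - f1, f0   # P(dec=0 | H1), P(dec=0 | H0)
    post = pi * like_h1 / ((1.0 - pi) * like_h0 + pi * like_h1)
    # Beliefs below the unbiased value are reset, as described in the text.
    return max(0.5, post)
```

Beliefs of regions other than the one just inspected are left unchanged, matching the "otherwise" branch of the update rule.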
Consequently, if the belief of the operator about a region being anomalous is below $0.5$, then she resets it to $0.5$, i.e., the belief of the operator at region $\mc R_k$ after processing the $\ell$-th task is $$\pi^\ell_k =\max\{0.5,\bar \pi_{k}^{\ell} \}.$$ We incorporate the deadline on a task as a soft constraint (latency penalty) and assume that a task from region $\mc R_k$ loses value while waiting in the queue. Let the task from region $\mc R_k$ lose value at a rate $c_k$ per unit delay in its processing. The latency penalty rate $c_k$ is a function of the task parameters $(f_k, w_k, {T^{\textup{ddln}}}_k)$. For simplicity of notation, we drop the arguments of $c_k$. The performance function of the operator depends on her belief about regions being anomalous, which varies over the course of the surveillance mission. Consequently, the performance function of the operator is time-varying and hence the latency penalty rate $c_k$ is also time-varying. We denote the latency penalty rate at region $\mc R_k$ when the $\ell$-th task is processed by $c_k^{\ell}$. The decision support system suggests to the operator the duration allocation to each task. To this end, the decision support system maximizes the infinite-horizon average reward for the human operator. The reward ${r: {\{1,\dots, m\}}^{n'_\ell}\times (0,1)^m\times \real_{\ge 0} \rightarrow \real}$ obtained by allocating duration $t$ to the $\ell$-th task is defined by $$\begin{gathered} r (\bs k_\ell, \bs \pi^{\ell-1}, t) = w_{k_\ell}f_{k_\ell}(t, \pi_{k_\ell}^{\ell-1}) \\ - \frac{1}{2}\Big( \sum_{i=\ell}^{\ell+n_\ell-1} c_{k_i}^{\ell} + \sum_{j=\ell}^{\ell+n'_\ell-1} c_{k_j}^{\ell} \Big) t, \end{gathered}$$ where $\boldsymbol k_\ell \in {\{1,\dots, m\}}^{n'_\ell}$ is the vector of region indices of the tasks in the queue, $\bs \pi^{\ell-1}$ is the vector of the operator’s beliefs about each region being anomalous, and $n'_\ell$ is the queue length just before the end of processing the $\ell$-th task.
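A minimal sketch of this reward computation, with the performance term evaluated externally and the latency penalty averaged between the queue contents at the start and at the end of processing (the names are our illustrative choices):

```python
def stage_reward(w, f_val, penalty_rates_start, penalty_rates_end, t):
    """Reward for allocating duration t to the current task: weighted
    decision accuracy w * f_val minus the latency penalty accrued by
    the queued tasks, with the penalty rates averaged between the
    queue contents at the start and the end of processing."""
    avg_penalty = 0.5 * (sum(penalty_rates_start) + sum(penalty_rates_end))
    return w * f_val - avg_penalty * t
```

Here `f_val` would be the value $f_{k_\ell}(t, \pi_{k_\ell}^{\ell-1})$ of the performance function, and the two rate lists collect the $c_{k_i}^\ell$ of the tasks in the queue at the two instants.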
Note that the queue length while a task is processed may not be constant, therefore, the latency penalty is computed as the average of the latency penalty for the tasks present at the start of processing the task and the latency penalty for the tasks present at the end of processing the task. Let the decision support system suggest to the operator to allocate duration $t_\ell$ to the $\ell$-th task in the queue. Accordingly, the objective of the decision support system is to maximize the infinite-horizon average reward defined by $$V_{\text{avg}} = \liminf_{N\to+\infty} \frac{1}{N} \sum_{\ell=1}^N \expt[r (\boldsymbol k_\ell, \bs \pi^{\ell-1}, t_\ell)].$$ We adopt the certainty-equivalent receding-horizon control [@DB:05; @HSC-SIM:03; @JM-YW-SB:11] as the algorithm for the decision support system to approximately compute the policy that maximizes the infinite-horizon average reward, and hence, obtain efficient allocations for the operator. Fundamentals of certainty-equivalent receding-horizon control are presented in the insert entitled “Certainty-Equivalent Receding-Horizon Control.” Under the certainty-equivalent approximation, the future uncertainties of the system are approximated by their expected values [@DB:05]. The expected value of the belief over the prediction horizon is equal to the current belief, i.e., $\expt[\bar \pi_k^{\ell}|\pi_k^{\ell-1}] =\pi_k^{\ell-1}$. Accordingly, the travel time and evidence aggregation time under the certainty-equivalent approximation while the $\ell$-th task is processed is ${\bs q^\ell} {^{\top}}D {\bs q^\ell} + {\bs q^\ell}{^{\top}}\bs T$. 
Furthermore, the evolution of the queue length under the certainty-equivalent approximation while the $\ell$-th task is processed is $$\bar{n}_{\ell+j+1} = \max \{ 1, \bar{n}_{\ell+j} -1 + {\lambda_\ell \bar t_{j}}\},$$ where $\bar{n}_{\ell+j}$ represents predicted queue length at the start of processing the $(\ell+j)$-th task, $\bar t_{j}$ is the duration allocation to the $(\ell+j)$-th task, $\bar n_\ell = n_\ell$, and $\lambda_\ell =1 /{({\bs q^\ell} {^{\top}}D {\bs q^\ell} + {\bs q^\ell}{^{\top}}\bs T)}$. Moreover, under the certainty-equivalent approximation, the parameters of the tasks that have not yet arrived are replaced by their expected values. While the $\ell$-th task is processed, the expected value of the performance function ${\bar{f}_\ell: \real_{\ge 0}\times [0,1] \rightarrow {[0,1]}}$, the expected importance $\bar w^\ell$ and the expected latency penalty rate $\bar{c}^\ell$ are defined by $$\begin{aligned} \fav_\ell(t, \pi_{k}^{\ell-1})& = \frac{\sum_{k=1}^m q_k^\ell w_k f_k(t, \pi_{k}^{\ell-1})}{\sum_{k=1}^m q_k^\ell w_k}, \\ \bar{w}^\ell & = \sum_{k=1}^m q_k^\ell w_k, \quad \text{and} \quad \bar{c}^\ell = \sum_{k=1}^m q_k^\ell c_k^\ell,\end{aligned}$$ respectively. In the receding-horizon framework [@JM-YW-SB:11], a finite-horizon optimization problem at each iteration is solved. Consider a horizon length $N$, the current queue length $n_\ell$, realizations of the sigmoid functions associated with the tasks in the queue $f_{k_{\ell+j-1}}, j\in {\{1,\dots, n_\ell\}}$, the associated latency penalties $c_{k_{\ell+j-1}}^\ell, j\in {\{1,\dots, n_\ell\}}$ and weights $w_{k_{\ell+j-1}}, j\in {\{1,\dots, n_\ell\}}$. 
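The certainty-equivalent queue recursion above can be sketched directly (illustrative names; `lam` stands for the expected arrival rate $\lambda_\ell$):

```python
def predict_queue(n0, allocations, lam):
    """Certainty-equivalent queue-length prediction over a horizon:
    n_{j+1} = max(1, n_j - 1 + lam * t_j), starting from n_0 = n0.
    Returns the predicted queue lengths, including the initial one."""
    ns = [n0]
    for t in allocations:
        ns.append(max(1.0, ns[-1] - 1.0 + lam * t))
    return ns
```

Each processed task removes one item from the queue while `lam * t` expected tasks arrive during its processing, and the clamp at one reflects that the task currently being processed is counted in the queue.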
If $n_\ell < N$, then we define the reward ${r_j: \real_{\ge 0} \rightarrow \real}$, $j\in{\{1,\dots, N\}}$, associated with the $(\ell+j-1)$-th task by $$ r_j(t_j) =\begin{cases} r^{\text{rlzd}}_j(t_j), \quad & \text{if }1\le j \le n_\ell,\\ r^{\text{exp}}_j(t_j), & \text{if } n_\ell+1 \le j \le N, \end{cases}$$ where $ r^{\text{rlzd}}_j$ is the reward associated with a task whose parameters have been realized and is defined by $$\begin{gathered} r^{\text{rlzd}}_j(\bar t_j) =w_{k_{\ell+j-1}} f_{k_{\ell+j-1}}(\bar t_j) - \frac{1}{2}\bar{c}^\ell \lambda_\ell \bar t_j^2 \\ - \Big(\sum_{i=j}^{n_\ell} c_{k_{\ell+i-1}}^\ell +(\bar n_{\ell+j-1} - n_\ell -j+1) \bar{c} ^\ell \Big) \bar t_j , \end{gathered}$$ and $r^{\text{exp}}_j$ is the reward associated with a future task whose parameters are approximated by their expected value and is defined by $$r^{\text{exp}}_j(\bar t_j) = \bar{w}^\ell \fav_\ell (\bar t_j) -\bar{c}^\ell \bar n_{\ell+j-1} \bar t_j - \frac{1}{2}\bar{c}^\ell\lambda_\ell \bar t_j^2.$$ Similarly, if $n_\ell\ge N$, then the parameters associated with each task in the prediction horizon have been realized and consequently, the reward is defined by $r_j= r^{\text{rlzd}}_j$, for each $j\in{\{1,\dots, N\}}$. It should be noted that under the certainty-equivalent approximation, the performance functions and latency penalties are time-invariant over the prediction horizon. In particular, the time-varying component of the performance function is the belief of the operator and the expected value of the belief over the prediction horizon is equal to the current belief, i.e., $\expt[\bar \pi_k^{\ell}|\pi_k^{\ell-1}] =\pi_k^{\ell-1}$. Similarly, the latency penalty associated with each region is also a constant over the prediction horizon. 
The certainty-equivalent receding-horizon policy solves the following finite-horizon optimization problem to determine the allocation to the $\ell$-th task: $$\label{eq:maximize-receding-horizon-real-time} \begin{split} \underset{\bar{\t} \succeq0}{{\text{maximize}}}& \quad \frac{1}{N}\sum_{j=1}^{N} r_j (\bar t_j) \\ {\text{subject to}}&\quad \bar n_{\ell+ j+1}= \max \{ 1, \bar{n}_{\ell+ j} -1 + \lambda_\ell \bar t_{j}\}, \end{split}$$ where $\bar{n}_\ell=n_\ell$ and $\bar{\t}=\{\bar t_1,\dots, \bar t_N\}$ is the duration allocation vector. In particular, if $\{\bar t_1^*,\dots, \bar t_N^*\}$ is the solution to , then the certainty-equivalent receding-horizon policy allocates a duration $t_\ell = \bar t_1^*$ to the $\ell$-th task. The optimization problem  is a finite-horizon dynamic program with univariate state and control variables. Moreover, the reward function and the state evolution function in optimization problem  are Lipschitz continuous. The control variable on a task from region $\mc R_k$ is upper bounded by the deadline of the task. The certainty-equivalent queue length can be bounded above by a large constant (see [@VB-RC-CL-FB:11zc] for a formal argument). These properties imply that the control and the state space in problem  can be efficiently discretized to determine a solution that yields a value within $\epsilon$ of the optimal value [@DPB:75]. Furthermore, such a discretized dynamic program can be solved using the backward recursion algorithm [@DPB:01a] in $O(N/\epsilon^2)$ time. More details on discretizing a dynamic program are presented in the insert entitled “Discretization of the Action and the State Space.” The above methodology for designing the decision support system assumes that latency penalty is known as a function of task parameters and the region selection policy. 
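A minimal sketch of the backward recursion on a discretized state and action space for a problem of this form is given below; the sigmoid performance function, weight, penalty rate, arrival rate, and grid resolutions are assumed placeholder values, not the values used in the case studies.

```python
import math

# Minimal backward-recursion sketch for the discretized finite-horizon
# problem. All numeric parameters below are illustrative assumptions.
N = 5                      # prediction horizon
lam = 0.05                 # certainty-equivalent arrival rate
w, c = 1.0, 0.01           # task weight and latency penalty rate
t_grid = [0.5 * i for i in range(81)]    # discretized durations [0, 40]
n_grid = list(range(1, 21))              # discretized queue lengths

def sigmoid(t, t_infl=10.0, a=0.5):
    # stand-in sigmoid performance function with inflection point t_infl
    return 1.0 / (1.0 + math.exp(-a * (t - t_infl)))

def reward(n, t):
    # realized-task reward with latency penalty, cf. r_j in the text
    return w * sigmoid(t) - c * n * t - 0.5 * c * lam * t * t

def step(n, t):
    # certainty-equivalent queue evolution, clipped to the state grid
    return min(max(1, round(n - 1 + lam * t)), n_grid[-1])

# Backward recursion: V[j][n] is the value-to-go from stage j in state n.
V = [{n: 0.0 for n in n_grid} for _ in range(N + 1)]
policy = [{} for _ in range(N)]
for j in reversed(range(N)):
    for n in n_grid:
        best_t, best_v = max(
            ((t, reward(n, t) + V[j + 1][step(n, t)]) for t in t_grid),
            key=lambda p: p[1])
        V[j][n], policy[j][n] = best_v, best_t

t_first = policy[0][3]  # allocation suggested for the current task, n_ell = 3
```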
If the deadline on a task is greater than the inflection point of the associated sigmoid performance function, then one choice of such a latency penalty is $c_k^\ell=w_k f'_k({T^{\textup{ddln}}}_k, \pi_k^{\ell-1})$, where $f'$ represents the derivative of $f$ with respect to $t$. The above procedure for the design of the support system yields a non-zero allocation to each task only if the latency penalty is sufficiently small. At large latency penalties, the loss in the value of the task is more than the reward obtained by processing it, and consequently, the optimal policy allocates zero duration to each task. The latency penalty should be small enough such that if only one task is present, it is allocated a non-zero duration. This can be achieved by choosing a large enough deadline on the task. In particular, the latency penalty at region $\mc R_k$ should be small enough such that $$\text{argmax} {\{w_k f_k(t, \pi_k^{\ell-1}) - c_k^\ell t -\bar{c}^\ell \lambda_\ell t^2/2 \; | \; t \in \real_{\ge 0}\}} >0.$$ Unfortunately, for large values of $\pi_k^{\ell-1}$, the reward rate is very low and consequently, the admissible penalty rates are very low. Such low penalty rates require the deadlines to be very large, which is undesirable. In the following, we will assume that the deadline is large enough for a moderately large critical value of the belief, e.g., $\pi_k^{\ell-1}\le 0.8$; once the operator’s belief is above this critical value, a duration equal to the deadline on the task is allocated to the task. Such an allocation will bring the operator’s belief below the critical value by either (i) obtaining evidence that diminishes the belief, or (ii) crossing a threshold and resetting the operator’s belief (see the anomaly detection algorithm for more details). The formulation  of the certainty-equivalent finite-horizon problem ensures that the decision-making queue is stable provided the latency penalty rate for each task $c_k$ is positive. 
If there is no deadline on any task, then the latency penalty on each task is zero and the formulation  does not ensure the stability of the decision-making queue. However, in the case of no deadlines and for a policy that allocates a duration ${t^{\textup{reg}}}_k\in \real_{\ge 0}$ to a task from region $\mc R_k$, the average value function under the certainty-equivalent approximation while the $\ell$-th task is processed is $\sum_{k=1}^m q_k^{\ell} w_k f_k({t^{\textup{reg}}}_k, \pi^{\ell-1}_k)$. Thus, the allocation to the $\ell$-th task under the certainty-equivalent receding-horizon policy is determined by the solution of the following optimization problem: $$\begin{aligned} \label{eq:knapsack-sigmoid} \begin{split} {\text{maximize}}& \quad \sum_{k=1}^m q_k^{\ell} w_k f_k({t^{\textup{reg}}}_k, \pi^{\ell-1}_k) \\ {\text{subject to}}& \quad \sum_{k=1}^m q_k^{\ell} {t^{\textup{reg}}}_k \le \frac{1}{\lambda_\ell}\\ & \quad {t^{\textup{reg}}}_k \ge 0, \text{ for each } k\in{\{1,\dots, m\}}, \end{split}\end{aligned}$$ where the first constraint ensures the stability of the queue. In particular, a duration ${{t^{\textup{reg}}}_{k_\ell}}^*$ is allocated to the $\ell$-th task, where $({{t^{\textup{reg}}}_{1}}^*,\ldots, {{t^{\textup{reg}}}_{m}}^*)$ is the solution to . The optimization problem  is a knapsack problem with sigmoid utilities. This problem is NP-hard and a procedure to compute a $2$-factor solution is presented in the insert entitled “Knapsack Problem with Sigmoid Utilities.” Design of the Control and Autonomy Management System ==================================================== In the previous section, we considered the design of the decision support system. We now focus on the other two components of the CAMS, namely, the anomaly detection algorithm and the vehicle routing algorithm. The anomaly detection algorithm uses the decisions made by the operator to detect an anomalous region and to compute the likelihood of the region being anomalous. 
This likelihood is utilized by the vehicle routing algorithm to send the vehicle to an anomalous region with a higher probability. Anomaly Detection Algorithm --------------------------- The decisions made by the human operator about a region being anomalous may be erroneous. The purpose of the anomaly detection algorithm is to reliably decide on a region being anomalous. To this end, we employ statistical quickest change detection algorithms. The quickest change detection algorithm of interest in this paper is the *ensemble CUSUM algorithm* [@VS-FP-FB:11za]. The ensemble CUSUM algorithm comprises a set of $m$ parallel CUSUM algorithms (one for each region). We treat the binary decisions by the operator as Bernoulli random variables whose distribution is dictated by the performance function of the operator and run the CUSUM algorithm on these decisions to decide reliably on a region being anomalous. The standard CUSUM algorithm requires the observations from each region to be independent and identically distributed. However, the decisions made by the operator do not satisfy these requirements. Therefore, instead of the standard CUSUM algorithm, we resort to the CUSUM-like algorithm for dependent observations proposed in [@BC-PW:00]. We describe the ensemble CUSUM algorithm in Algorithm \[algo:ensemble-CUSUM\]. 
Algorithm \[algo:ensemble-CUSUM\] proceeds as follows:

1. Initialize $\ell:=1$ and $\Lambda_k:=0$, for each $k\in {\{1,\dots, m\}}$.

2. If ${\texttt{dec}}_\ell==1 \text{ and } t_\ell>0$, then update $$\Lambda_{k_\ell} := \max \Big\{0, \Lambda_{k_\ell} + \log \frac{f^1_{k_{\ell}}(t_\ell, \pi^{\ell-1}_{k_\ell}) }{1- f^0_{k_{\ell}}(t_\ell, \pi^{\ell-1}_{k_\ell})} \Big\}.$$

3. If ${\texttt{dec}}_\ell==0 \text{ and } t_\ell>0$, then update $$\Lambda_{k_\ell} := \max \Big\{0, \Lambda_{k_\ell} + \log \frac{1- f^1_{k_{\ell}}(t_\ell, \pi^{\ell-1}_{k_\ell}) }{f^0_{k_{\ell}}(t_\ell, \pi^{\ell-1}_{k_\ell})} \Big\}.$$

4. If $\Lambda_{k_\ell} \ge {\Lambda_{\textup{thresh}}}$ (a threshold has been crossed), then declare an anomaly at region $k_\ell$ and set $\Lambda_{k_\ell}:=0$.

5. Set $\ell:=\ell+1$ and go to step 2.

The ensemble CUSUM algorithm maintains a statistic $\Lambda_k$ for each region $\mc R_k$, $k\in {\{1,\dots, m\}}$. The statistic at region $\mc R_k$ is updated using the binary decision of the operator whenever a task from region $\mc R_k$ is processed. If the statistic associated with a region crosses a threshold ${\Lambda_{\textup{thresh}}}$, then the region is declared to be anomalous. The choice of this threshold dictates the accuracy of the detection [@HVP-OH:08]. We assume that once an anomaly has been detected, it is removed, and consequently, the operator’s belief about the region being anomalous resets to the default value. Vehicle Routing Algorithm ------------------------- We employ a simple vehicle routing algorithm that sends the vehicle to a region with a probability proportional to the likelihood of that region being anomalous. In particular, the probability to visit region $\mc R_k$ is initialized to $q_k^0=1/m$ and after processing each task, the probability to visit region $\mc R_k$ is chosen proportional to $e^{\Lambda_k}/(1+ e^{\Lambda_k})$. This simple strategy ensures that a region with a high likelihood of being anomalous is visited with a high probability. 
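The update rule of the ensemble CUSUM algorithm and the routing probabilities can be sketched as follows; the logistic stand-ins for the operator's performance functions $f^1$ (probability of deciding "anomalous" when the region is anomalous) and $f^0$ (probability of deciding "non-anomalous" when it is not) are assumptions.

```python
import math

# Sketch of the ensemble CUSUM update and the routing probabilities.
# f1_perf and f0_perf are assumed logistic stand-ins for the operator's
# performance functions (symmetric, as for drift rates -/+ 0.3).
m = 4
Lambda = [0.0] * m          # one CUSUM statistic per region
Lambda_thresh = 5.0

def f1_perf(t):
    # P(dec = 1 | anomalous), illustrative
    return 1.0 / (1.0 + math.exp(-(0.3 * t - 2.0)))

def f0_perf(t):
    # P(dec = 0 | non-anomalous), illustrative
    return 1.0 / (1.0 + math.exp(-(0.3 * t - 2.0)))

def cusum_update(k, dec, t):
    """Update region k's statistic with the operator's decision dec."""
    if t <= 0:
        return False
    if dec == 1:
        llr = math.log(f1_perf(t) / (1.0 - f0_perf(t)))
    else:
        llr = math.log((1.0 - f1_perf(t)) / f0_perf(t))
    Lambda[k] = max(0.0, Lambda[k] + llr)
    if Lambda[k] >= Lambda_thresh:
        Lambda[k] = 0.0        # anomaly declared; reset the statistic
        return True
    return False

def routing_probs():
    # visit region k with probability proportional to e^{L_k}/(1 + e^{L_k})
    s = [math.exp(L) / (1.0 + math.exp(L)) for L in Lambda]
    z = sum(s)
    return [x / z for x in s]
```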
Moreover, it ensures that each region is visited with a non-zero probability at all times and consequently, an anomalous region is detected in finite time. It is noteworthy that such a simple vehicle routing algorithm does not take into account the geographic location or the difficulty of detection at each region. In fact, in the spirit of [@VS-FP-FB:11za], these factors can be incorporated into the vehicle routing algorithm; however, for simplicity of the presentation, we do not consider these factors here. A Case Study without Exogenous Factors ====================================== We consider a surveillance mission involving four regions. The matrix of travel times between the regions is $$D= \begin{bmatrix} 0& 22.1422 & 34.4786 & 8.9541 \\ 22.1422 & 0 & 19.3171 & 14.6245 \\ 34.4786 & 19.3171 & 0 &25.5756 \\ 8.9541 & 14.6245 & 25.5756 & 0 \end{bmatrix}.$$ The time to collect information at each region is $10$ units. The performance of the operator is the same at each region. Let the drift rate in the DDM associated with the operator be $\mu = \mp 0.3$ for a non-anomalous and an anomalous region, respectively. Let the diffusion rate for the DDM associated with the operator be $\sigma=1$. Let the deadline on each task be $40$ units, and let the importance of each region be the same. Suppose regions $\mc R_1, \mc R_2, \mc R_3$, and $\mc R_4$ become anomalous at time instants $20, 80, 140$, and $200$ units, respectively. The optimization problem  with a horizon length $N=5$ is solved before processing each task to determine the optimal allocations for the human operator. A sample evolution of the CAMS is shown in Figure \[fig:CAMS-sample\]. As the belief of the operator about the presence of an anomaly increases, it becomes more difficult for her to extract more information, and consequently, her reward rate becomes low. In this situation, the allocation policy in Figure \[fig:allocation\] allocates a duration equal to the deadline on the task. 
The queue length in Figure \[fig:queue\] increases substantially during this phase and once an anomaly is detected, the allocation policy drops pending tasks in the queue until only one task is present in the queue. The threshold for the CUSUM algorithm is chosen equal to $5$ and once an anomaly is detected the CUSUM statistic in Figure \[fig:stat\] resets to zero. The region selection policy in Figure \[fig:select\] sends the UAV with a high probability to a region with a high likelihood of being anomalous. Exogenous Human Factors ======================= In the previous sections, the human performance model only captured the effect of evidence aggregation in decision-making. There are several exogenous factors, including situational awareness, memory retention, fatigue, and boredom, that also affect the decision-making performance. In this section, we survey these factors and present a unified model for human decision-making. The effect of these exogenous factors is typically studied in the free response paradigm for the human operator. Accordingly, we first present the free response paradigm for the DDM. ### Drift-diffusion model and the free response paradigm In the free response paradigm, the operator takes her own time to decide on an alternative. For the DDM , the free response paradigm is modeled via two thresholds (positive and negative) and the operator decides in favor of the first (second) alternative if the positive (negative) threshold is crossed from below (above). A typical evolution of the DDM under the free response paradigm is shown in Figure \[fig:free-response-ddm\]. If the two alternatives are equally likely, then the two thresholds are chosen symmetrically. Let $\pm \eta \in \real$ be symmetrically chosen thresholds. 
The expected decision time (${T_{\textup{decision}}}$) under the free response paradigm is $${T_{\textup{decision}}}= \frac{\eta}{\mu} \tanh \frac{\mu \eta}{\sigma^2} + \frac{2 \eta (1 - e^{-2x_0 \mu/\sigma^2})}{ \mu(e^{2\eta \mu /\sigma^2}-e^{-2\eta \mu /\sigma^2}) } -\frac{x_0}{\mu}.$$ The reaction time on a task is ${T_{\textup{decision}}}+{T_{\textup{motor}}}$, where ${T_{\textup{motor}}} \in \real_{>0}$ is the time taken by sensory and motor processes unrelated to the decision process. For simplicity, we treat ${T_{\textup{motor}}}$ as a deterministic quantity in this paper. The choice of the threshold is dictated by the speed-accuracy trade-off. There are two particular criteria to capture the speed-accuracy trade-off: (i) the Bayes risk and (ii) the reward rate [@RB-EB-etal:06]. We focus on the Bayes risk criterion in this paper. The Bayes risk is defined by $\texttt{BR}= \xi_1 {T_{\textup{decision}}}+ \xi_2 {\prob_{\textup{error}}}$, where $\xi_1, \xi_2$ are the cost per unit delay in decision and the cost of error, respectively, and ${\prob_{\textup{error}}}$ is the error rate. For the DDM, the minimization of the Bayes risk yields the following transcendental equation for the threshold [@RB-EB-etal:06]: $$\frac{\xi_2}{\xi_1} \frac{2 \mu^2}{\sigma^2} -\frac{4 \mu \eta}{\sigma^2} +e^{-(2\mu \eta/\sigma^2)}-e^{(2\mu \eta/\sigma^2)}=0.$$ This transcendental equation can be solved numerically, but for the purpose of this paper, we treat the limiting solution of this equation as the true solution. In particular, if $\mu$ is low or $\sigma$ is high, then the threshold is $\eta = \mu \xi_2/4\xi_1$. It is noteworthy that $\xi_1$ and $\xi_2$ can be estimated from the empirical data (see [@RB-EB-etal:06] and references therein). ### Yerkes-Dodson law and situational awareness The Yerkes-Dodson law [@RMY-JDD:08; @CDW-JGH:00] is a classical model that captures the performance of an operator as a unimodal function of their arousal level. 
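The Bayes-risk threshold equation above can also be solved numerically, e.g., by bisection, since its left-hand side is strictly decreasing in $\eta$ and positive at $\eta=0$; the following sketch compares the numerical root with the limiting solution $\eta = \mu \xi_2/4\xi_1$ (the parameter values are illustrative assumptions).

```python
import math

# Numerically solve the Bayes-risk threshold equation and compare with
# the limiting solution eta = mu * xi2 / (4 * xi1). Parameter values
# are illustrative.
mu, sigma = 0.3, 1.0
xi1, xi2 = 1.0, 10.0

def g(eta):
    # left-hand side of the transcendental threshold equation
    a = 2.0 * mu * eta / sigma**2
    return (2.0 * mu**2 * xi2 / (xi1 * sigma**2)
            - 2.0 * a + math.exp(-a) - math.exp(a))

# Bisection: g is decreasing in eta with g(0) > 0.
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
eta_star = 0.5 * (lo + hi)
eta_lim = mu * xi2 / (4.0 * xi1)   # limiting (small-drift) solution
```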
The arousal level of an operator is a measure of her stress. The Yerkes-Dodson law suggests that there exists an optimal level of arousal at which the operator’s performance is the best. Moreover, the optimal arousal level decreases with the difficulty of the task. Situational awareness of an operator captures the level of her spatial and temporal perception of the environment. The Yerkes-Dodson law is used to model the situational awareness of the operator as a function of her workload [@KS-CN-TT-EF:08; @KS-EF:10b; @CN-BM-JWC-MLC:08]. In particular, the workload of the operator is chosen as a measure of the arousal level, and the situational awareness is modeled as a unimodal function of the workload. The reaction time of the operator decreases with increasing situational awareness and accordingly, the expected reaction time is modeled as a convex function of the workload [@KS-EF:10b]. It has been argued in [@MC-CN-etal:07] that the lack of situational awareness results in a larger waiting time for the tasks, i.e., the operator takes more time to start working on a task. This suggests that the lack of situational awareness does not affect the decision-making process and only increases the sensory and motor time ${T_{\textup{motor}}}$. Accordingly, we model the sensory and motor time as a convex function of the workload and treat the expected decision time as a constant function of the workload. The workload is modeled as the utilization ratio (the fraction of recent history during which the operator was busy) and the utilization ratio $u$ is captured through the following differential equation $$\dot u(t) = \frac{b(t)-u(t)}{\tau}, \quad u(0)=u_0,$$ where ${b: \real_{>0} \rightarrow \{0,1\}}$ is the binary function of time $t$ indicating whether the operator is busy, $\tau\in \real_{>0}$ is the sensitivity of the operator, and $u_0 \in [0,1]$ is the initial utilization ratio of the operator [@KS-EF:10b]. 
We denote the sensory and motor time as a function of the utilization ratio by ${{T_{\textup{motor}}}: [0,1] \rightarrow \real_{\ge 0}}$. The expected reaction time (${T_{\textup{decision}}}+{T_{\textup{motor}}}$) is also a convex function of the utilization ratio and such a relation has been empirically validated in [@KS-EF:10b]. Moreover, it has been noted in [@CN-BM-JWC-MLC:08] that these utilization-ratio-based models only capture the effect of the workload and do not capture training and fatigue effects. ### Fatigue, sleep cycle and SAFTE model Fatigue is the feeling of tiredness and bodily discomfort after prolonged activity [@GM-DRD-etal:00]. Fatigue is known to have detrimental effects on an operator’s performance. Several models for fatigue have been proposed; a summary is presented in [@MMM-SM-etal:04]. In this paper, we adopt the Sleep, Activity, Fatigue, and Task Effectiveness (SAFTE) model proposed in [@SRH-DPR-etal:04]. This model considers three important processes, namely, homeostatic regulation of wakefulness, circadian process, and sleep inertia to model cognitive performance as a function of sleep deprivation. The SAFTE model assumes that a fully rested operator has a finite maximum cognitive capacity called the reservoir capacity $R_c$. While the operator is awake, the cognitive reservoir is depleted linearly with time. The reservoir replenishes when the operator sleeps and the replenishment depends on the circadian process (time of the day) and the current reservoir level. We consider the same form of the SAFTE model as in [@NDP-KAM:12]. 
The SAFTE model determines the task effectiveness as $$\begin{gathered} \texttt{TE}= 100 \frac{R_c-60K T_a}{R_c} + \Big( a_1 +a_2 \frac{60 K T_a}{R_c}\Big) \\ \Big[ \cos \Big( \frac{2\pi}{24}(T_d-p) \Big)+\beta \cos \Big(\frac{4\pi}{24}(T_d-p-p')\Big)\Big],\end{gathered}$$ where $T_a$ is the number of hours the operator has been awake, $T_d$ is the time of the day in hours, $K$ is reservoir drain rate due to wakefulness, $a_1, a_2, \beta \in \real$ are constants, $p$ is the time of the peak in the $24$h circadian rhythm and $p'$ is the relative time of the $12$h peak. The default values of the parameters in [@SRH-DPR-etal:04] are $R_c= 2880$ units, $K= 0.5$ units per minute, $a_1=7$, $a_2=5$, $\beta=0.5$, $p=18$ hours, and $p'=3$ hours. The task effectiveness is a measure of the efficiency of the operator. In particular, if the reaction time of a fully rested operator is `RT`, then the reaction time of the fatigued operator is $\texttt{RT}/ \texttt{TE}$. ### Forgetting/Retention curve The forgetting/retention curve determines the fraction of newly acquired information the operator remembers over time. The modeling of the forgetting curve has been a debated topic. Traditionally, the forgetting curve has been modeled as an exponential decay [@HE:13]. Anderson et al. [@JRA-LJS:91] argue that the forgetting curve should be modeled by a power law function. Rubin et al. [@DCR-SH-AW:99] model the forgetting curve as a sum of two exponential functions and a constant function. They claim that one of the exponential terms describes the working memory, and the two remaining terms describe the long-term memory. For the purpose of this paper, the functional form of the forgetting curve is not important. We assume that the fraction of memory retained is known as a function of time and we denote it by ${\texttt{rem}: \real_{\ge 0} \rightarrow [0,1]}$. 
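The SAFTE task effectiveness above can be evaluated directly with the default parameters quoted in the text; for example, a fully rested operator ($T_a=0$) at the circadian peak $T_d=p=18$ hours has $\texttt{TE}=107$.

```python
import math

# Task effectiveness from the SAFTE model with the default parameters
# quoted in the text.
Rc, K = 2880.0, 0.5
a1, a2, beta = 7.0, 5.0, 0.5
p, p2 = 18.0, 3.0

def task_effectiveness(Ta, Td):
    """Ta: hours awake, Td: time of day in hours."""
    depletion = 60.0 * K * Ta / Rc
    circadian = (math.cos(2.0 * math.pi / 24.0 * (Td - p))
                 + beta * math.cos(4.0 * math.pi / 24.0 * (Td - p - p2)))
    return 100.0 * (1.0 - depletion) + (a1 + a2 * depletion) * circadian
```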
### Motivation and boredom curve Boredom is an important factor in human-in-the-loop systems; it is mostly neglected in design [@CDF:93] and leads to the operator’s lack of interest in her activity. To the best of our knowledge, there has not been significant work on the mathematical modeling of boredom. A heuristic model for the decay in performance due to boredom has been developed in [@NA-SZ-ML:10]. The authors argue that the lack of motivation can be thought of as boredom, and hypothesize an exponential decay model based on the classical motivation theory. Although the analytic approach in this paper can easily incorporate boredom models, due to the lack of empirical evidence validating them, we do not consider boredom models in the remainder of the paper. A Unified Operator Model ------------------------ We now develop a unified operator model that blends the aforementioned models and determines the performance of the operator as a function of the duration allocation, her utilization ratio, her sleep schedule, and her forgetfulness. Without loss of generality, we assume that the parameters for the decision dynamics of the operator were determined when the operator was fully rested. Let the estimated drift and diffusion rates for the DDM associated with the operator be $\mu$ and $\sigma$, respectively. It follows from the definition of the task effectiveness that the expected decision time of the fatigued operator is $$\frac{1}{\texttt{TE}} \frac{\eta}{\mu} \tanh\frac{\mu \eta}{\sigma^2},$$ where $\texttt{TE}$ is the task effectiveness obtained using the SAFTE model. We assume that operator fatigue affects the drift rate while the diffusion rate remains the same. Let ${\mu^{\textup{eff}}}$ be the drift rate of the DDM associated with the fatigued operator and let ${\eta^{\textup{eff}}}$ be the associated threshold. It is known that under the limiting approximation ${\eta^{\textup{eff}}} = {\mu^{\textup{eff}}} \xi_2/4\xi_1$. 
It follows that $$\label{eq:modified-drift} \tanh \frac{{{\mu^{\textup{eff}}}}^2 \xi_2}{4 \xi_1 \sigma^2} =\frac{1}{\texttt{TE}} \tanh\frac{\mu \eta}{\sigma^2}.$$ Equation  yields the following expression for the drift rate of a fatigued operator: $${\mu^{\textup{eff}}}= \sqrt{ \frac{2 \xi_1 \sigma^2}{\xi_2} \log \Big( \frac{\texttt{TE}+ \tanh(\mu^2 \xi_2/ 4 \xi_1 \sigma^2)}{\texttt{TE}- \tanh (\mu^2 \xi_2/ 4 \xi_1 \sigma^2)}\Big)}.$$ Similarly, the sensory and motor time ${T_{\textup{motor}}}$ for a fatigued operator would be ${T_{\textup{motor}}}/\texttt{TE}$. We now consider the retention model. Consider the current decision-making task that the operator processes. Suppose a decision-making task from the same class as the current task was processed a time ${T_{\textup{last}}}$ earlier, and let the belief after that processing be ${\pi_{\textup{last}}}$. Thus, according to the forgetting model, the initial condition for the DDM associated with the current task is ${x_{\textup{init}}} = (\sigma^2 \log ({\pi_{\textup{last}}}/(1-{\pi_{\textup{last}}}))/2{\mu^{\textup{eff}}} ) \texttt{rem}({T_{\textup{last}}})$. The decision support system suggests that the operator allocate a given duration to the current task. Such a situation corresponds to the interrogation paradigm of decision-making. A unified performance function for the operator can be determined by using the effective drift rate, the effective motor time, and the effective initial condition for the associated DDM. 
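The effective drift rate can be evaluated directly from the expression above; the sketch below uses illustrative cost parameters and normalizes $\texttt{TE}$ so that $\texttt{TE}=1$ corresponds to a fully rested operator (an assumption of this sketch). Note that the expression requires $\texttt{TE} > \tanh(\mu^2\xi_2/4\xi_1\sigma^2)$ for the logarithm to be defined, and it reduces to ${\mu^{\textup{eff}}}=\mu$ when $\texttt{TE}=1$.

```python
import math

# Effective drift rate of a fatigued operator, per the expression above.
# TE is normalized so that TE = 1 is a fully rested operator; the cost
# parameters xi1, xi2 and sigma are illustrative assumptions.
sigma, xi1, xi2 = 1.0, 1.0, 10.0

def mu_eff(mu, TE):
    z = math.tanh(mu**2 * xi2 / (4.0 * xi1 * sigma**2))
    # requires TE > z for the logarithm to be defined
    return math.sqrt(2.0 * xi1 * sigma**2 / xi2
                     * math.log((TE + z) / (TE - z)))
```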
Thus, the performance function ${f^1: \real_{\ge 0} \times [0,1]\times [0,1] \times \real_{\ge 0} \times [0,1] \rightarrow [0,1]}$ of the operator on a task from an anomalous region is $$\label{eq:performance-function} f^1(t, u, \texttt{TE}, {T_{\textup{last}}}, {\pi_{\textup{last}}}) = 1- \Phi\Big( \frac{\nu -{\mu_{\textup{eff}}}(t-{T^{\textup{wait}}})^+ -{x_{\textup{init}}}}{\sigma \sqrt{(t-{T^{\textup{wait}}})^+}}\Big),$$ where ${T^{\textup{wait}}}= {T_{\textup{motor}}}(u)/\texttt{TE}$ and $(\cdot)^+ := \max\{0, \cdot\}$. The performance function of the operator on a task from a non-anomalous region can be defined similarly. The performance function in equation  assumes that no evidence is collected during the sensory and motor time. In summary, the above performance function can be interpreted in the following way. The belief of the operator about a region being anomalous renders an initial condition to the associated DDM. This belief was acquired when the operator last processed a task from the same region. Moreover, over the course of time the operator forgets the acquired belief. Accordingly, the belief acquired after a task from the current region was last processed is discounted using the forgetting model, and the discounted belief is used as the initial condition for the DDM associated with the current task. Furthermore, the operator fatigue is captured by modifying the drift rates of the DDMs associated with different tasks to match the first moment of the reaction times suggested by the SAFTE model. Design of the CAMS using unified operator model ----------------------------------------------- We now design the CAMS using the unified operator model . The design proceeds analogously to the design with the simplified operator . In this section we focus only on the design of the decision support system. We study the problem using the certainty-equivalent receding-horizon framework. 
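A numerical sketch of the performance function $f^1$ above is given below, with the limit at $t \le {T^{\textup{wait}}}$ (no evidence aggregated yet) handled explicitly; the threshold name $\nu$ and the parameter values in the test are illustrative assumptions.

```python
import math

def Phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def perf_anomalous(t, T_wait, mu_eff, sigma, x_init, nu):
    """Sketch of f^1; nu is the evidence threshold of the interrogation
    paradigm (illustrative name)."""
    s = max(0.0, t - T_wait)
    if s == 0.0:
        # no evidence aggregated yet: limit of the expression below
        return 0.5 if nu == x_init else float(nu < x_init)
    return 1.0 - Phi((nu - mu_eff * s - x_init) / (sigma * math.sqrt(s)))
```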
We assume that the parameters of the operator remain constant while a given task is processed. The first exogenous factor we consider is the utilization ratio. The utilization ratio of the operator affects their performance and can be controlled by introducing an idling time for the operator. The idling time reduces the fraction of time the operator is busy and hence reduces the utilization ratio of the operator. Let the utilization ratio of the operator be $u_\ell$ before processing the $\ell$-th task. If the operator processes the $\ell$-th task for a duration $t_\ell$ and remains idle for a duration $\delta_\ell$ after processing the task, then the utilization ratio after processing the $\ell$-th task is $$u_{\ell+1} = \Big(1-e^{-{t_\ell}/ {\tau}} + u_{\ell} e^{-{t_\ell}/ {\tau}}\Big) e^{-{\delta_\ell}/ {\tau}}.$$ With the incorporation of the utilization ratio dynamics, the problem has two state variables, namely, the queue length and the utilization ratio. There are two control variables as well, namely, the duration allocation to a task, and the rest time that follows the processing of the task. The duration allocation controls the decision-making performance and the rest time controls the utilization ratio of the operator. To capture the memory effects, the optimization problem also needs to keep track of the time when a task from a region was last processed and the time when a task from a region will be next processed. Thus, the number of state variables increases further. The increased dimension of the state space and action space would make the computation of the solution to the finite-horizon problem associated with the certainty-equivalent receding-horizon control intractable. However, for this finite-horizon optimization problem some of the variables can be assumed to remain constant, and this reduces the computational complexity. The utilization ratio affects the sensory and motor time of the operator. 
The sensory and motor time is high only at extreme values of the utilization ratio. Therefore, if we control the utilization ratio of the operator such that it always lies within an interval around its optimal value, then it would result in efficient sensory and motor times for the operator. In particular, a threshold ${u_{\textup{th}}}$ can be chosen such that the sensory and motor time does not vary much in the interval $[{u_{\textup{opt}}},{u_{\textup{th}}}]$, where ${u_{\textup{opt}}}$ is the utilization ratio at which the minimum sensory and motor time is achieved. Moreover, after processing the $\ell$-th task, if the utilization ratio $\bar u_{\ell}$ is above a threshold value ${u_{\textup{th}}}$, then an idling time $\delta_\ell = \tau \log (\bar u_{\ell} /{u_{\textup{opt}}})$ is suggested to the operator such that the utilization ratio of the operator goes down to its optimal value ${u_{\textup{opt}}}$. Such a threshold-based control of the utilization ratio ensures that we can treat the motor time as a constant over the finite prediction horizon and consequently, we do not need to consider the idling time as a decision variable in the finite-horizon problem. The fatigue and sleep cycle models are slow timescale models, i.e., the performance changes significantly only after hours have elapsed, while the evidence aggregation process in the decision-making performance is of the order of seconds. Accordingly, for the finite-horizon problem associated with the receding-horizon policy, we treat the task effectiveness `TE` as a constant. Similarly, the memory models are of the order of minutes and accordingly, the operator’s belief can be assumed to be a constant over the finite prediction horizon. Thus, the fatigue and the memory effects need not be considered in the finite-horizon problem associated with the certainty-equivalent receding-horizon control. 
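The closed-form utilization update and the threshold-based rest-time rule can be sketched as follows; the numeric values of $\tau$, ${u_{\textup{opt}}}$, and ${u_{\textup{th}}}$ are illustrative.

```python
import math

# Threshold-based control of the utilization ratio: closed-form update
# while busy/idle, and the rest time that returns u to u_opt. The
# numeric values are illustrative assumptions.
tau = 100.0
u_opt, u_th = 0.7, 0.85

def update_utilization(u, t_busy, t_idle):
    # busy phase drives u toward 1, idle phase decays u toward 0
    u = 1.0 - math.exp(-t_busy / tau) + u * math.exp(-t_busy / tau)
    return u * math.exp(-t_idle / tau)

def rest_time(u):
    # idle duration that brings the utilization ratio down to u_opt,
    # suggested only when u exceeds the threshold u_th
    return tau * math.log(u / u_opt) if u > u_th else 0.0
```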
Let $ {T^{\textup{last}}}_{k}$ be the time since a task from region $\mc R_{k}$ was last processed and let ${\pi^{\textup{last}}}_{k}$ be the belief of the operator about region $\mc R_{k}$ being anomalous after a task from region $\mc R_k$ was last processed. In view of the above discussion, for the finite-horizon problem to be solved when the $\ell$-th task in the queue is processed, the performance function ${f_{k}^{\ell}: \real_{\ge 0} \rightarrow \real}$ associated with a task from region $\mc R_k$ is defined by $$\begin{gathered} \label{eq:certain-performance} f_{k}^\ell (t) = (1- \pi^{\ell-1}_{k}) f_{k}^0 (t, u_{\ell-1}, \texttt{TE}_\ell, {T^{\textup{last}}}_k, {\pi^{\textup{last}}}_k) \\ + \pi^{\ell-1}_{k} f_{k}^1 (t, u_{\ell-1}, \texttt{TE}_\ell, {T^{\textup{last}}}_k, {\pi_k^{\textup{last}}}),\end{gathered}$$ where $\texttt{TE}_\ell$ is the task effectiveness while processing the $\ell$-th task, and $f_{k}^0$ and $f_{k}^1$ are computed as in equation . After processing the $\ell$-th task, the utilization ratio and the task effectiveness are updated using associated models, while the belief at region $\mc R_j$ is updated using Bayes rule as $$\bar \pi_{j}^{\ell} = \begin{cases} \frac{{\pi_{j}^{\textup{rem}}} \prob({\texttt{dec}}_\ell| H^1_{k_\ell})}{(1- {\pi_{j}^{\textup{rem}}}) \prob({\texttt{dec}}_\ell| H^0_{k_\ell})+ {\pi_{j}^{\textup{rem}}} \prob({\texttt{dec}}_\ell| H^1_{k_\ell})}, & \text{if } j=k_{\ell}, \\ {\pi_{j}^{\textup{rem}}}, & \text{otherwise,} \end{cases}$$ where $H^0_k$ and $H^1_k$ denote the hypothesis that region $\mc R_k$ is non-anomalous and anomalous, respectively, and ${\texttt{dec}}_\ell \in \{0,1\}$ is the operator’s decision, ${\pi_{k}^{\textup{rem}}}$ is the operator’s belief about region $\mc R_k$ being anomalous after accounting for the memory retention effects and is defined by $${\pi_{k}^{\textup{rem}}} = \frac{ \exp \big \{\log\big( \frac{{\pi_{k}^{\textup{last}}}}{1- {\pi_{k}^{\textup{last}}}}\big)
\texttt{rem}({T^{\textup{last}}}_{k} )\big \}} {1+ \exp \big \{\log\big( \frac{{\pi_{k}^{\textup{last}}}}{1- {\pi_{k}^{\textup{last}}}}\big) \texttt{rem}({T^{\textup{last}}}_{k} )\big \}},$$ and $\prob({\texttt{dec}}_\ell|\cdot)$ is determined from the performance function of the operator, e.g., $$\prob({\texttt{dec}}_\ell=1| H^1_{k_\ell}) = f^1_{k_\ell} (t_\ell, u_{\ell-1}, \texttt{TE}_\ell, {T_{k_\ell}^{\textup{last}}}, {\pi_{k_\ell}^{\textup{last}}}).$$ As in the simplified operator model case, the operator resets her belief to a threshold value if it falls below that threshold. Accordingly, her belief about region $\mc R_j$ being anomalous after processing the $\ell$-th task is $$\pi_{j}^{\ell} = \max \{0.5, \bar \pi_{j}^{\ell}\}.$$ Further, after processing the $\ell$-th task, ${\pi^{\textup{last}}}_{k_\ell}$ is updated to $\pi_{k_\ell}^{\ell}$. The receding-horizon duration allocation policy now follows similarly to the simplified operator model case. The duration allocation to the $\ell$-th task is determined by solving a problem of the form  in which the performance function for each task is of the form . Again, the finite-horizon problem is a dynamic program with univariate state and control variables, and an efficient solution can be determined using the backward induction algorithm on the discretized state and action space. The design of the anomaly detection algorithm and the vehicle routing policy follow similarly to the simplified operator model case.

A Case Study with Exogenous Factors
===================================

We now present a case study on the design of the CAMS with exogenous factors. We choose the same parameters and the same time instances for the appearance of anomalies as in the case without exogenous factors. Additionally, the deadline on each task is $60$ units, the operator sensitivity is $100$, and the motor time is ${T^{\textup{motor}}}(u)=54 -155u + 132u^2 -9$.
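The belief dynamics used in this case study combine the memory-retention weighting with the Bayes update and the $0.5$ floor described above. The following is a minimal sketch under stated assumptions: the function names are hypothetical, and the exponential retention curve used for illustration is an assumption, not the fitted retention function from the case study.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

def retained_belief(pi_last, t_last, rem):
    """Belief after memory decay: the log-odds of the last belief are
    scaled by the retention factor rem(t_last), then mapped back to a
    probability. As rem -> 0 the retained belief tends to 0.5."""
    return logistic(logit(pi_last) * rem(t_last))

def updated_belief(pi_rem, p_dec_h0, p_dec_h1):
    """Bayes update of the retained belief given the likelihoods of the
    operator's decision under H0 and H1, followed by the 0.5 floor
    (the operator resets her belief if it falls below the threshold)."""
    pi_bar = pi_rem * p_dec_h1 / ((1.0 - pi_rem) * p_dec_h0 + pi_rem * p_dec_h1)
    return max(0.5, pi_bar)

# Hypothetical exponential retention curve (assumption, for illustration):
rem = lambda t: math.exp(-t / 30.0)
pi_rem = retained_belief(0.9, 15.0, rem)   # decays toward 0.5 as t grows
pi_new = updated_belief(pi_rem, p_dec_h0=0.2, p_dec_h1=0.8)
```

Note that as $\texttt{rem}(t) \to 0$ the retained belief tends to $0.5$, which is consistent with the observation below that the belief about a region drifts to $0.5$ if that region is not visited for a long time.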
We choose a retention function of the form in [@DCR-SH-AW:99], given by $\texttt{rem}(t)=\min(1, a_1 \exp(-10t/1.15)+ a_2 \exp(-10t/27.55)+a_3)$, where $a_1= 4.6$, $a_2=1.5$, and $a_3=0.1$. The optimization problem  with a horizon length $N=5$ and the modified performance function in  is solved before processing each task to determine efficient allocations for the human operator. A sample evolution of the CAMS is shown in Figure \[fig:CAMS-full\]. The allocation policy, the queue length, the CUSUM statistics, and the region selection policy evolve similarly to the case without exogenous factors and are shown in Figures \[fig:allocation-full\], \[fig:queue-full\], \[fig:stat-full\], and \[fig:select-full\], respectively. The evolution of the operator utilization, the associated motor times, and the rest times suggested to bring the operator utilization to an optimal value are shown in Figures \[fig:utilization\], \[fig:motor\], and \[fig:rest\], respectively. The value of ${u_{\textup{th}}}$ is set to $0.85$; once the operator utilization crosses ${u_{\textup{th}}}$, an appropriate rest time is allocated to the operator such that her utilization drops to ${u_{\textup{opt}}}=0.7$. The evolution of the retained belief of the operator is shown in Figure \[fig:retained-belief\]. It can be seen that the memory effects take the operator's belief about the presence of an anomaly at a region to $0.5$ if that region is not visited for a long time.

Conclusions
===========

In this paper, we surveyed models for human decision-making performance and presented a framework for the design of mixed human-robot teams. We demonstrated the framework in the context of a surveillance mission. Along with the fundamental human decision-making performance, we incorporated several other human factors, including situational awareness, fatigue, and memory retention, into the human performance model and characterized the accuracy of human decisions in a two-alternative choice task.
We utilized these models to design a mixed human-robot team surveillance mission. In particular, we jointly designed vehicle routing policies for the vehicles, attention allocation policies for the operator, and anomaly detection algorithms that utilize potentially erroneous decisions by the operator.

Open Directions
===============

There are several open directions of research in mixed human-robot teams. Indeed, an obvious open direction is the experimental implementation and testing of the proposed methodology. However, in the following, we focus on the theoretical challenges in the mixed team design.

#### Exploration-exploitation trade-off

The designed system relies heavily on the estimated values of the parameters. In this setting, real-time estimation of the parameters becomes critical. One naive strategy for such an estimation is to introduce some “control” tasks. For instance, at each stage of the queue, each task may be a control task with small probability. The responses to such control tasks may be used to update the model parameters in a Bayesian setting. Moreover, the probability that the current task is a control task may be adapted depending on the deviation of the operator's performance from the predicted performance.

#### Multiple anomalies at each region

In this paper, we assumed fixed drift and diffusion rates for the DDM associated with anomalous as well as non-anomalous regions. This corresponds to a situation in which only one type of anomaly may appear at a region. In general, there may be a set of anomalies that may appear at a region. Multiple anomalies can be modeled using a DDM in which the drift and diffusion rates are random variables. Different values of these drift and diffusion rates correspond to different types and numbers of anomalies that may be present at the region.
As a task is processed, the distributions of the drift and diffusion rates can be adapted according to the operator's performance, and the GLR algorithm [@MB-IVN:93] can be used as an anomaly detection algorithm.

#### Scheduling problems under the free response regime for the operator

The policies designed in this paper consider the interrogation paradigm of human decision-making. Another important scenario is when the operator makes decisions under the free response paradigm. In such a setting, the task scheduling problem becomes important. The processing time associated with each task is a random variable whose distribution depends on several factors, including the situational awareness, the fatigue, and the forgetfulness of the operator. The scheduling problem involving tasks with stochastic processing times is computationally hard, and the addition of other dynamic elements makes it harder. It is of interest to develop approximation algorithms for such scheduling problems.

#### Capturing operator-automaton trust

In this paper, we assumed that the operator trusts the automaton and allocates the duration suggested by the automaton. In general, the operator may not completely trust the automaton and, with some probability, may process the task using the free response paradigm. In this setting, the duration allocation to each task is equal to the random processing time associated with the free response paradigm with some probability, and is equal to the deterministic time suggested by the decision support system otherwise. It is of interest to determine optimal policies in this partial trust regime.

#### Incorporating richer models for human decision-making

In this paper, we relied on models for human decision-making in two-alternative choice tasks.
While the two-alternative choice task is an appropriate model for the human activity in a surveillance mission, other missions may require models for human decision-making in a broader set of tasks, e.g., the multi-armed bandit task [@PR-VS-NEL:13d]. It is of interest to incorporate models for human decision-making in such tasks into the mixed team design.

#### Other queueing disciplines and tasks with long reaction times

In this paper we considered the *first-come-first-served* processing discipline for the queue of decision-making tasks. It is of interest to study optimal policies for other queueing disciplines, e.g., preemptive priority queues, and the scenario in which tasks in the queue are reordered before the next task is processed. In this paper we assumed that the task duration is small and that the operator parameters remain constant over the duration of the task. It is of interest to relax this assumption.

#### Real-time parameter update using observable human actions and states

An ambitious open direction is to update the parameters of the operator model using her observable actions. For instance, the pupil movement of the operator may be monitored and used to estimate the parameters of the cognitive model of the operator. A plausible approach to this end may involve hidden-variable estimation and Bayesian methods to update the parameters of the operator model in real time.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work has been supported in part by AFOSR MURI Award FA9550-07-1-0528.

W. M. Bulkeley. Chicago’s camera network is everywhere. The Wall Street Journal, November 17, 2009.
C. Drew. Military taps social networking skills. The New York Times, June 7, 2010.
T. Shanker and M. Richtel. In new military, data overload can be deadly. The New York Times, January 16, 2011.
E. Guizzo. Obama commanding robot revolution announces major robotics initiative. IEEE Spectrum, June 2011.
C. Nehme, B. Mekdeci, J. W. Crandall, and M.
L. Cummings. The impact of heterogeneity on operator performance in futuristic unmanned vehicle systems. , 2(2):1–30, 2008.
L. F. Bertuccelli, N. Pellegrino, and M. L. Cummings. Choice modeling of relook tasks for UAV search missions. In American Control Conference, pages 2410–2415, Baltimore, MD, USA, June 2010.
L. F. Bertuccelli, N. W. M. Beckers, and M. L. Cummings. Developing operator models for UAV search scheduling. In AIAA Conf. on Guidance, Navigation and Control, Toronto, Canada, August 2010.
K. Savla, C. Nehme, T. Temple, and E. Frazzoli. On efficient cooperative strategies between UAVs and humans in a dynamic environment. In AIAA Conf. on Guidance, Navigation and Control, Honolulu, HI, USA, 2008.
K. Savla, T. Temple, and E. Frazzoli. Human-in-the-loop vehicle routing policies for dynamic environments. In IEEE Conf. on Decision and Control, pages 1145–1150, Cancún, México, December 2008.
J. W. Crandall, M. L. Cummings, M. Della Penna, and P. M. A. de Jong. Computing the effects of operator attention allocation in human control of multiple robots. , 41(3):385–397, 2011.
C. E. Nehme. . PhD thesis, Department of Aeronautics and Astronautics, MIT, February 2009.
K. Savla and E. Frazzoli. Maximally stabilizing task release control policy for a dynamical queue. , 55(11):2655–2660, 2010.
K. Savla and E. Frazzoli. A dynamical queue approach to intelligent task management for human operators. , 100(3):672–686, 2012.
N. D. Powel and K. A. Morgansen. Multiserver queueing for supervisory control of autonomous vehicles. In American Control Conference, pages 3179–3185, Montréal, Canada, June 2012.
V. Srivastava, R. Carli, C. Langbort, and F. Bullo. Attention allocation for decision making queues. , February 2012. To appear.
V. Srivastava and F. Bullo. Knapsack problems with sigmoid utility: Approximation algorithms via hybrid optimization. , October 2012. Submitted.
M. Majji and R.
Rai. Autonomous task assignment of multiple operators for human robot interaction. In American Control Conference, pages 6454–6459, Washington, DC, June 2013.
V. Srivastava, A. Surana, and F. Bullo. Adaptive attention allocation in human-robot systems. In American Control Conference, pages 2767–2774, Montréal, Canada, June 2012.
V. Srivastava, F. Pasqualetti, and F. Bullo. Stochastic surveillance strategies for spatial quickest detection. , 32(12):1438–1458, 2013.
V. Srivastava, K. Plarre, and F. Bullo. Randomized sensor selection in sequential hypothesis testing. , 59(5):2342–2354, 2011.
R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J. D. Cohen. The physics of optimal decision making: A formal analysis of performance in two-alternative forced choice tasks. , 113(4):700–765, 2006.
R. Ratcliff. A theory of memory retrieval. , 85(2):59–108, 1978.
R. Ratcliff and G. McKoon. The diffusion decision model: Theory and data for two-choice decision tasks. , 20(4):873–922, 2008.
H. V. Poor and O. Hadjiliadis. . Cambridge University Press, 2008.
D. Siegmund. . Springer, 1985.
R. Ratnam, J. Goense, and M. E. Nelson. Change-point detection in neuronal spike train activity. , 52:849–855, 2003.
B. Chen and P. Willett. Detection of hidden Markov model transient signals. , 36(4):1253–1268, 2000.
L. Wasserman. . Springer, 2004.
S. Boyd, P. Diaconis, and L. Xiao. Fastest mixing Markov chain on a graph. , 46(4):667–689, 2004.
J. Grace and J. Baillieul. Stochastic strategies for autonomous robotic surveillance. In IEEE Conf. on Decision and Control and European Control Conference, pages 2200–2205, Seville, Spain, December 2005.
K. Srivastava, D. M. Stipanovic, and M. W. Spong. On a stochastic robotic surveillance problem. In IEEE Conf. on Decision and Control, pages 8567–8574, Shanghai, China, December 2009.
D. Bertsekas. Dynamic programming and suboptimal control: A survey from ADP to MPC.
, 11(4-5):310–334, 2005.
H. S. Chang and S. I. Marcus. Approximate receding horizon approach for Markov decision processes: Average reward case. , 286(2):636–651, 2003.
J. Mattingley, Y. Wang, and S. Boyd. Receding horizon control: Automatic generation of high-speed solvers. , 31(3):52–65, 2011.
D. P. Bertsekas. Convergence of discretization procedures in dynamic programming. , 20(6):415–419, 1975.
D. P. Bertsekas. . Athena Scientific, 2nd edition, 2001.
R. M. Yerkes and J. D. Dodson. The relation of strength of stimulus to rapidity of habit-formation. , 18(5):459–482, 1908.
C. D. Wickens and J. G. Hollands. . Prentice Hall, 3rd edition, 2000.
M. Cummings, C. Nehme, J. Crandall, and P. Mitchell. . In J. Chahl, L. Jain, A. Mizutani, and M. Sato-Ilic, editors, Innovations in Intelligent Machines, volume 70 of Studies in Computational Intelligence, pages 11–37. Springer, 2007.
G. Matthews, D. R. Davies, S. J. Westerman, and R. B. Stammers. . Psychology Press, 2000.
M. M. Mallis, S. Mejdal, T. T. Nguyen, and D. F. Dinges. Summary of the key features of seven biomathematical models of human fatigue and performance. , 75(3, Suppl.):A4–A14, 2004.
S. R. Hursh, D. P. Redmond, M. L. Johnson, D. R. Thorne, G. Belenky, T. J. Balkin, W. F. Storm, J. C. Miller, and D. R. Eddy. Fatigue models for applied research in warfighting. , 75(3, Suppl.):A44–A53, 2004.
H. Ebbinghaus. . Teachers College, Columbia University, 1913.
J. R. Anderson and L. J. Schooler. Reflections of the environment in memory. , 2(6):396–408, 1991.
D. C. Rubin, S. Hinton, and A. Wenzel. The precise time course of retention. , 25(5):1161–1176, 1999.
C. D. Fisher. Boredom at work: A neglected concept. , 46(3):395–417, 1993.
N. Azizi, S. Zolfaghari, and M. Liang. Modeling job rotation in manufacturing systems: The study of employee’s boredom and skill variations. , 123(1):69–85, 2010.
M. Basseville and I. V. Nikiforov. . Prentice Hall, 1993.
P. Reverdy, V. Srivastava, and N. E.
Leonard. Modeling human decision-making in generalized Gaussian multi-armed bandits, July 2013. Available at `http://arxiv.org/abs/1307.6134`.